What is MCP?
Model Context Protocol (MCP) is emerging as the industry standard for LLM interactions with external tools and data sources. Just as LLM APIs converged around the OpenAI specification, we're seeing a similar consolidation around MCP as the unified protocol for LLM applications. You can read more about MCP here.
At Future AGI, we're thrilled about the possibilities that MCP brings to our ecosystem. It opens up powerful new ways for our customers to interact with our platform and create more sophisticated AI applications.
Why Does Future AGI MCP Matter?
LLMs are powerful—but to truly make them useful in real-world workflows, they need access to tools, data, and evaluations. That’s what the Model Context Protocol (MCP) enables.
With Future AGI’s MCP Server, you can do the following using natural language:
Run automatic evaluations — Evaluate single inputs or batches against the evaluation metrics available in Future AGI, on both local data points and large datasets
Prototype and observe your agents — Add observability and evaluations, using natural language, while prototyping your agents and while deploying them to production
Manage datasets — Upload, evaluate, and download datasets, and surface insights with natural language
Add protection rules — Apply toxicity detection, prompt-injection protection, and other guardrails to your applications automatically through chat
Generate synthetic data — Generate synthetic data by describing the dataset and your objective

Image 1: Future AGI MCP Server's features
How to Set Up Future AGI MCP?
Create an account at http://app.futureagi.com/ and obtain your API key and Secret key from the dashboard. These credentials are required for the Future AGI MCP Server to authenticate with our API.
To run the server locally, configure your MCP clients (Cursor or Claude Desktop) with the server's launch command and your API credentials.
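A typical client configuration looks something like the following. Note that the command, package name, and environment variable names below are illustrative assumptions — check the Future AGI documentation for the exact values to use:

```json
{
  "mcpServers": {
    "futureagi": {
      "command": "npx",
      "args": ["-y", "futureagi-mcp-server"],
      "env": {
        "FI_API_KEY": "your-api-key",
        "FI_SECRET_KEY": "your-secret-key"
      }
    }
  }
}
```

In Cursor this JSON lives in the MCP settings file; Claude Desktop uses the same `mcpServers` shape in its own configuration file.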
You can also add the Future AGI docs MCP to your clients by running the command below in your terminal. It will prompt you to choose among the MCP clients detected on your local system; choosing all will add the configuration to every detected client.
npx @mintlify/mcp@latest add futureagi

Image 2: Future AGI MCP with tools like evaluate, protect, upload_dataset, and command configurations in Cursor IDE.
Exploring the Future AGI Features Using MCP
4.1 Evaluating single and batch inputs
You can provide a prompt like "Evaluate the following inputs for tone and toxicity" along with your input. The AI assistant will automatically fetch all available evaluators from the Future AGI platform, run the evaluation on the data, and return the output in a structured format.
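Under the hood, the assistant invokes the server's `evaluate` tool (one of the tools visible in Image 2) through MCP's JSON-RPC `tools/call` method. Here is a minimal sketch of what such a request looks like; the argument names (`inputs`, `metrics`) are hypothetical, since the real parameter names are defined by the server's tool schema:

```python
import json

def make_tool_call(request_id, tool_name, arguments):
    """Build an MCP tools/call request (MCP is based on JSON-RPC 2.0)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical arguments for the `evaluate` tool -- the actual schema
# is advertised by the Future AGI MCP Server via tools/list.
request = make_tool_call(
    request_id=1,
    tool_name="evaluate",
    arguments={
        "inputs": ["You are such an idiot!", "Thanks, that was helpful."],
        "metrics": ["tone", "toxicity"],
    },
)

print(json.dumps(request, indent=2))
```

Your MCP client builds and sends this message for you — the point is that a plain-English prompt is translated into a structured, schema-validated tool call.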

Image 3: MCP tool evaluation detecting toxicity in text

Image 4: MCP deterministic evaluation validating an image of Asian and Indian men
4.2 A Powerful Use Case: Conversing with Your Data Using Cursor and Future AGI
Let’s explore an exciting example of how you can leverage Cursor as a natural interface to interact with your organization's data—powered by Future AGI.
Imagine being able to evaluate your data through a simple conversation, without ever touching a graphical UI. Thanks to our Future AGI MCP server, this is now a reality.
Suppose you ask Cursor:
"Can you find rag_chat.csv, upload it to Future AGI, suggest three evaluations, and add them to the dataset?"
Here's what happens behind the scenes:
File Discovery & Upload: Cursor searches for rag_chat.csv locally and uploads it to the Future AGI platform.
Evaluator Selection: It automatically fetches all available evaluators and selects three appropriate ones.
Evaluation Assignment: These selected evaluations are then attached to the dataset.
If a tool call fails, Cursor also tries to correct itself based on the error the tool returns, as shown below.

Image 5: Cursor interface using MCP to upload rag_chat.csv

Image 6: Future AGI MCP adding three LLM evaluations—context relevance, factual accuracy, and groundedness
Insight Delivery: Once evaluations are complete, you can ask Cursor to download the evaluated dataset and present the insights—again, all through natural language.

Image 7: Evaluation insights from Future AGI MCP showing context relevance, factual accuracy, and groundedness scores
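In MCP terms, the workflow above maps to a short sequence of tool calls. The `upload_dataset` tool name appears in Image 2; the other tool names and argument shapes below are assumptions for illustration:

```python
# Hypothetical orchestration: each step the assistant performs is one
# MCP tool call issued by Cursor against the Future AGI MCP Server.
workflow = [
    # Step 1: upload the locally discovered file
    ("upload_dataset", {"path": "rag_chat.csv"}),
    # Step 2: fetch available evaluators (assumed tool name)
    ("list_evaluators", {}),
    # Step 3: attach the three chosen evaluations (assumed tool name)
    ("add_evaluations", {
        "dataset": "rag_chat.csv",
        "evaluations": ["context_relevance", "factual_accuracy", "groundedness"],
    }),
]

for tool_name, arguments in workflow:
    print(f"calling {tool_name} with {arguments}")
```

The assistant chains these calls itself, feeding each tool's result (or error) into the next step — which is what makes the single conversational prompt sufficient.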
4.3 Synthetic Data Generation
You can now ask your assistant to generate synthetic data for a specific use case. It decides on the dataset columns and their types and sends them to Future AGI. Data generation then starts in the background; after some time, ask the assistant to download the data, and it will be saved to your local folder. Be specific about the dataset for best results.
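To illustrate what "deciding on columns and their types" can look like, here is a sketch of the kind of schema payload an assistant might assemble before handing generation off to the platform. The field names and structure are assumptions for illustration, not the actual Future AGI request format; the use case mirrors the e-commerce customer-support example in Image 8:

```python
# Hypothetical schema derived from a prompt like:
# "generate synthetic e-commerce customer support data"
synthetic_request = {
    "objective": "e-commerce customer support conversations",
    "num_rows": 500,
    "columns": [
        {"name": "customer_query", "type": "text"},
        {"name": "agent_response", "type": "text"},
        {"name": "issue_category", "type": "category",
         "choices": ["shipping", "refund", "product_info"]},
        {"name": "resolved", "type": "boolean"},
    ],
}

# Sanity-check the schema before it is sent for generation.
assert synthetic_request["num_rows"] > 0
assert all("name" in col and "type" in col
           for col in synthetic_request["columns"])
```

Being explicit about the objective and the fields you need gives the assistant a much better chance of producing a schema like this on the first try.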

Image 8: Cursor interface showing synthetic dataset generation and insights for e-commerce customer support
4.4 Seamlessly Add Code Observability and Prototype Using Natural Language
A simple prompt like:
“Can you search for crew_ai instrumentation in the Future AGI docs and suggest the code changes using the custom trace_provider?”
…is all it takes.
Cursor will take care of the rest—searching the documentation, understanding the relevant instrumentation steps, and generating the necessary code changes. No need to manually comb through pages of docs. Just ask, and it delivers.

Image 9: Cursor editor displaying Crew AI instrumentation setup using Future AGI’s trace_provider integration
Through natural conversations with tools like Claude, team members can prototype ideas, run evaluations, and analyze performance metrics or edge cases—right from their seats.
For the more technical members of your team, modern AI-powered IDEs like Cursor seamlessly integrate into the workflow, accelerating development, instrumentation, and iteration with minimal friction.
Conclusion
The Future AGI MCP Server isn’t just a tool—it’s a developer-first platform that redefines how you work with LLMs. By bringing together the power of Model Context Protocol, LLM evaluation, and natural language interfaces, it allows any team to build, test, and monitor AI applications with unmatched simplicity and speed.
Whether you're evaluating batch data, generating synthetic samples, or hardening your model against security risks, the MCP Server makes it easier than ever to move from idea to deployment.
Ready to experience the future of conversational AI and evaluation? Start here.
FAQs
