What Is Pydantic AI?

Pydantic AI is a Python agent framework for building type-safe LLM agents with tools, dependency injection, and structured outputs validated through Pydantic schemas. It is an agent framework, not a model provider. In production, Pydantic AI shows up as agent runs, tool-call spans, output-validation attempts, retries, and token usage inside a trace. FutureAGI connects that surface through traceAI:pydantic-ai, so teams can evaluate typed agent behavior rather than trusting schemas alone.

Why Pydantic AI matters in production LLM and agent systems

Pydantic AI reduces a common agent failure class: untyped tool calls and malformed structured outputs crossing into production services. The risk does not disappear, though. A Pydantic schema can reject an invalid payload, but it cannot prove the model chose the right tool, picked the right customer record, or completed the business task. If teams treat validation as reliability, they still ship agents that are well-typed and wrong.
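The gap between a valid object and a correct one can be shown with plain Pydantic. A minimal sketch, using a hypothetical RefundAction schema: the first payload references the wrong customer yet validates cleanly, while only the structurally malformed one is rejected.

```python
from pydantic import BaseModel, ValidationError

# Hypothetical output schema for a refund tool call.
class RefundAction(BaseModel):
    customer_id: str
    amount_cents: int

# Well-typed but wrong: the agent resolved the wrong customer record.
# The schema has no way to know the conversation was about cust_123.
action = RefundAction(customer_id="cust_999", amount_cents=500)

# Malformed payload: the only failure mode the schema itself can catch.
try:
    RefundAction(customer_id="cust_123", amount_cents="a lot")
    schema_caught = False
except ValidationError:
    schema_caught = True
```

Domain checks such as "is this the customer in the conversation" have to come from evaluators over the trace, not from the type layer.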

Developers feel this first as confusing exception patterns: output validation errors, retry spikes, tool arguments that pass type checks but fail domain rules, and logs where the final AgentRunResult.output looks correct while earlier tool calls drifted. SREs see p99 latency and token-cost-per-trace rise when the agent retries after schema failures. Product teams see inconsistent task completion across cohorts. Compliance reviewers ask whether a typed tool call was authorized, not only whether it parsed.

The issue is sharper in 2026 multi-step pipelines because Pydantic AI agents often sit between MCP servers, databases, retrieval systems, user state, and payment or support tools. Unlike LangGraph, which makes workflow state transitions explicit as graph nodes, Pydantic AI emphasizes typed Python functions, dependency injection, and output schemas. That makes it pleasant for Python engineers, but it also means runtime behavior must be traced step by step. The reliability question is not “did Pydantic validate the object?” It is “did the agent take the correct path to a safe, useful result?”
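"Did the agent take the correct path" can be made concrete as a check over recorded steps. A minimal sketch, where each step is a simplified dict standing in for a real trace span (the field names here are illustrative, not the instrumentation's actual schema):

```python
# Expected tool path for a wire-transfer support question (illustrative).
expected_path = ["lookup_account", "lookup_transaction", "final_output"]

def path_matches(steps: list[dict], expected: list[str]) -> bool:
    """Compare the ordered tool names in a run against the expected path."""
    taken = [step["tool"] for step in steps]
    return taken == expected

run = [
    {"tool": "lookup_account", "ok": True},
    {"tool": "lookup_transaction", "ok": True},
    {"tool": "final_output", "ok": True},
]

complete_run_ok = path_matches(run, expected_path)       # full path taken
truncated_run_ok = path_matches(run[:2], expected_path)  # run stopped early
```

A run can satisfy every output schema and still fail this trajectory check, which is exactly the gap trace-level evaluation is meant to close.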

How FutureAGI handles Pydantic AI

FutureAGI’s approach is to treat Pydantic AI’s type safety as a starting condition, then verify each agent run with trace-level evidence and evaluator scores. The anchor traceAI:pydantic-ai maps to the pydantic-ai traceAI integration in the FutureAGI inventory. It captures a Pydantic AI run as an OpenTelemetry trace with agent steps, model calls, tool names, tool arguments, output validation outcomes, latency, and token fields such as llm.token_count.prompt when emitted by the instrumentation layer.

Example: a banking support agent uses Pydantic AI with a SupportDependencies object for account services, a lookup_transaction tool, and a Pydantic output type for the final resolution. A customer asks why a wire transfer failed. FutureAGI records each agent.trajectory.step, the selected tool through gen_ai.tool.name, the raw arguments through gen_ai.tool.call.arguments, and the final typed output. JSONValidation checks whether the structured output matches the schema. FunctionCallAccuracy and ToolSelectionAccuracy check whether the lookup tool and arguments match the user intent. TaskCompletion checks whether the agent actually resolved the support goal.
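The span attributes named above can be read back out of a trace programmatically. A sketch with a hand-built span dict standing in for the real OpenTelemetry data, plus a domain rule that shape-level validation alone cannot enforce:

```python
import json

# Simplified stand-in for one tool-call span from the recorded trace.
span = {
    "attributes": {
        "gen_ai.tool.name": "lookup_transaction",
        "gen_ai.tool.call.arguments": json.dumps({"transfer_id": "wire_42"}),
    }
}

tool_name = span["attributes"]["gen_ai.tool.name"]
tool_args = json.loads(span["attributes"]["gen_ai.tool.call.arguments"])

# Domain rule: the argument must reference the transfer the customer
# actually asked about (wire_42 here), not merely be a well-typed string.
argument_matches_intent = tool_args.get("transfer_id") == "wire_42"
```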

The engineer then turns failures into release criteria. If JSONValidation stays high but ToolSelectionAccuracy drops after a prompt change, the schema is not the problem; the agent is calling the wrong function. The next action might be a stricter tool description, a smaller tool registry, a regression eval on failed traces, or an Agent Command Center model fallback for high-risk routes. The point is to evaluate the Pydantic AI runtime as a trajectory, not just a validated return type.
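That release decision can be encoded as a simple gate over evaluator scores. A sketch with illustrative score names and thresholds: the candidate's schema score holds, but tool selection regressed past the allowed drop, so the gate flags it.

```python
def release_gate(scores: dict[str, float], baseline: dict[str, float],
                 max_drop: float = 0.05) -> list[str]:
    """Return the evaluators that regressed more than max_drop vs baseline."""
    return [name for name, score in scores.items()
            if score < baseline.get(name, 0.0) - max_drop]

baseline = {"JSONValidation": 0.99, "ToolSelectionAccuracy": 0.94, "TaskCompletion": 0.90}
candidate = {"JSONValidation": 0.99, "ToolSelectionAccuracy": 0.81, "TaskCompletion": 0.88}

failures = release_gate(candidate, baseline)
# JSONValidation held steady, so the schema is not the problem;
# ToolSelectionAccuracy fell 0.13, which blocks the release.
```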

How to measure or detect Pydantic AI

Measure Pydantic AI agents across schema, tool, trajectory, and production outcomes:

  • JSONValidation: checks whether the final structured output conforms to the expected JSON Schema.
  • FunctionCallAccuracy: scores function name and argument correctness against expected calls.
  • ToolSelectionAccuracy: scores whether the Pydantic AI agent chose the right tool for the intent.
  • TaskCompletion: checks whether the run completed the assigned user or workflow goal.
  • Trace signals: repeated agent.trajectory.step, rising llm.token_count.prompt, schema-fail rate, retry count, tool-timeout rate, and p99 latency.
  • User proxies: thumbs-down rate, reopened-ticket rate, escalation rate, and human-review rate by agent version.
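The trace signals above can be aggregated from raw spans. A minimal sketch with simplified span dicts and a tail-latency percentile picked by index from the sorted latencies (real instrumentation carries far more fields):

```python
# Simplified span records; field names are illustrative stand-ins.
spans = [
    {"trace_id": "t1", "retry": False, "schema_ok": True,  "latency_ms": 420},
    {"trace_id": "t1", "retry": True,  "schema_ok": False, "latency_ms": 610},
    {"trace_id": "t2", "retry": False, "schema_ok": True,  "latency_ms": 380},
]

# Schema-fail rate: share of spans whose output failed validation.
schema_fail_rate = sum(not s["schema_ok"] for s in spans) / len(spans)

# Retry count: validation-triggered retries across all traces.
retry_count = sum(s["retry"] for s in spans)

# Tail latency: value at the 99th-percentile index of sorted latencies.
latencies = sorted(s["latency_ms"] for s in spans)
p99_latency = latencies[min(len(latencies) - 1, int(0.99 * len(latencies)))]
```

Tracking these per release makes a retry spike after a schema change visible as a number, not just a latency anecdote.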

Minimal Python (trace_spans, agent_output, and the SupportOutput Pydantic model are assumed to be defined earlier in the pipeline):

from fi.evals import ToolSelectionAccuracy, JSONValidation

# Score whether the run selected the expected tool for this intent.
tool_eval = ToolSelectionAccuracy()
# Check the final structured output against the Pydantic-derived JSON Schema.
schema_eval = JSONValidation()

tool_result = tool_eval.evaluate(trajectory=trace_spans, expected_tool="lookup_transaction")
schema_result = schema_eval.evaluate(output=agent_output, schema=SupportOutput.model_json_schema())
print(tool_result.score, schema_result.passed)

Common mistakes

  • Treating Pydantic validation as agent correctness. A valid object can still contain the wrong account, action, or recommendation.
  • Skipping tool-level evaluation. Typed tool arguments only prove shape; FunctionCallAccuracy checks whether the name and values match intent.
  • Hiding retries inside the framework. Retries after validation errors should appear in traces and dashboards, not only application logs.
  • Overloading one dependency object. Broad dependencies make tools harder to test and make authorization boundaries harder to audit.
  • Comparing only final answers with LangGraph or CrewAI. Frameworks fail differently; compare trajectories, tool calls, and task completion.

Frequently Asked Questions

What is Pydantic AI?

Pydantic AI is a Python framework for building typed LLM agents with tools, dependency injection, and structured outputs. FutureAGI traces its runs through traceAI:pydantic-ai so each tool call and validation step can be evaluated.

How is Pydantic AI different from LangGraph?

LangGraph models workflows as explicit graph and state transitions. Pydantic AI starts from typed Python agents, dependencies, tools, and output schemas, so validation and static typing are central to the runtime.

How do you measure Pydantic AI?

FutureAGI instruments the pydantic-ai traceAI integration and scores spans with ToolSelectionAccuracy, FunctionCallAccuracy, JSONValidation, and TaskCompletion. Monitor agent.trajectory.step, tool arguments, schema-fail rate, and eval-fail-rate-by-release.