What Is DSPy?

DSPy is a Python framework for programming and optimizing LLM workflows with signatures, modules, metrics, and compilation. It is an agent framework and prompt-optimization layer: engineers define what a module should do, then DSPy searches prompt, example, and pipeline variants against a metric. In production, it appears in model traces as module calls, compiled prompts, retrieval steps, and evaluator outputs. FutureAGI connects those runs through traceAI:dspy so DSPy programs can be traced, scored, and regression-tested.
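
A minimal sketch of that programming model, assuming an OpenAI-backed dspy.LM, a toy exact-match metric, and a trainset of dspy.Example objects (all illustrative, not part of this page):

import dspy

# Point every module at one LM (the model string here is an assumption)
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

# A signature declares what the module should do, not how to prompt for it
class AnswerQuestion(dspy.Signature):
    """Answer the question using the provided context."""
    context: str = dspy.InputField()
    question: str = dspy.InputField()
    answer: str = dspy.OutputField()

qa = dspy.ChainOfThought(AnswerQuestion)

# A metric turns "better" into something the optimizer can search against
def exact_match(example, prediction, trace=None):
    return example.answer.lower() == prediction.answer.lower()

# Compilation searches prompt and few-shot variants of qa against the metric
compiled_qa = dspy.BootstrapFewShot(metric=exact_match).compile(qa, trainset=trainset)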

Why DSPy Matters in Production LLM and Agent Systems

Copying a DSPy notebook into a service without eval boundaries creates a specific failure: the compiled prompt can look better on a small development set while degrading on a production cohort. A RAG module may overfit to short gold answers, skip evidence, or settle on a brittle few-shot pattern that breaks when tools or documents change. The visible symptom is not just a bad answer. It is a trace where the retriever returned useful context, the module transformed it poorly, and the final response still passed a loose string metric.

Developers debug optimizer output as if it were hand-authored prompt code. SREs see p99 latency or token-cost-per-trace jump when a compiled module adds hidden retries or longer prompts. Product teams see task-completion rate drift by customer cohort. Compliance reviewers ask why the optimized program cited unsupported context, but the service only logged the final text.

DSPy matters more in 2026 agentic pipelines because a compiled module often sits inside a larger planner, tool, retrieval, or memory workflow. One module can decide which context reaches the planner, which answer format reaches a downstream tool, or which rationale another agent receives. When that module changes, the blast radius is larger than a single prompt edit. Logs should show module name, optimizer version, dataset slice, metric score, prompt length, and parent agent.trajectory.step; otherwise, teams cannot tell whether the model, retriever, optimizer, or orchestration layer caused the regression.
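
A sketch of what that log record can look like as span attributes, using the OpenTelemetry Python API; agent.trajectory.step and llm.token_count.prompt come from this page, while the dspy.* attribute names and the variables are illustrative:

from opentelemetry import trace

tracer = trace.get_tracer("dspy-service")

with tracer.start_as_current_span("dspy.AnswerWithEvidence") as span:
    span.set_attribute("dspy.module", "AnswerWithEvidence")
    span.set_attribute("dspy.optimizer_version", "bootstrap-fewshot-v3")  # assumed tag
    span.set_attribute("dspy.dataset_slice", "enterprise-contract")
    span.set_attribute("dspy.metric_score", 0.91)
    span.set_attribute("llm.token_count.prompt", prompt_token_count)  # captured upstream
    span.set_attribute("agent.trajectory.step", parent_step_id)  # parent agent step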

How FutureAGI Handles DSPy

FutureAGI’s approach is to treat DSPy as a programmable optimization surface whose compiled behavior must stay visible after deployment. The traceAI:dspy integration is the specific traceAI surface for Python DSPy programs. It attaches DSPy module runs to the surrounding trace, so engineers can inspect the module call, prompt inputs, output, latency, error state, and parent agent.trajectory.step. Token fields such as llm.token_count.prompt help catch optimizer changes that improve accuracy while inflating cost.
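
A minimal wiring sketch, assuming the traceai-dspy package exposes a DSPyInstrumentor and fi_instrumentation exposes a register helper; check the traceAI:dspy docs for the exact names in your SDK version:

from fi_instrumentation import register
from traceai_dspy import DSPyInstrumentor

# Register a tracer provider for the project, then instrument DSPy so every
# module call, compiled prompt, and retrieval step lands in the same trace
trace_provider = register(project_name="support-automation")  # assumed signature
DSPyInstrumentor().instrument(tracer_provider=trace_provider)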

Example: a support automation team builds a DSPy AnswerWithEvidence module that retrieves policy snippets, drafts an answer, and passes the result to an agent that may open a refund tool. FutureAGI records the DSPy step inside the same trace as the retriever and tool decision. Groundedness checks whether the answer is supported by retrieved context, TaskCompletion checks whether the support goal was solved, and ToolSelectionAccuracy checks whether the refund tool was selected only when policy allowed it.
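
In DSPy terms, that module might look like the sketch below; the class name comes from the example above, while the retriever callable and field names are assumptions:

import dspy

class AnswerWithEvidence(dspy.Module):
    """Retrieve policy snippets, then draft an answer tied to that evidence."""

    def __init__(self, retriever):
        super().__init__()
        self.retriever = retriever  # any callable returning policy passages (assumed)
        self.draft = dspy.ChainOfThought("question, context -> answer, evidence")

    def forward(self, question):
        passages = self.retriever(question)
        return self.draft(question=question, context="\n".join(passages))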

Unlike a DSPy compile score or a LangSmith trace review alone, this workflow ties optimization results to release gates. If Groundedness drops below 0.85 for enterprise-contract questions after a compile, the engineer creates a regression dataset from failed traces, tightens the module metric, and blocks the prompt artifact from release. If task completion improves but llm.token_count.prompt doubles, the team can set a cost alert, simplify signatures, or route expensive cases through an Agent Command Center model fallback policy before widening traffic.
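
One way to express that gate in CI, reusing the fi.evals API from the section below; the harness, threshold constant, and regression-set shape are a sketch, not a prescribed interface:

from fi.evals import Groundedness

GROUNDEDNESS_GATE = 0.85  # threshold from the example above

def gate_compiled_prompt(regression_set):
    """Fail the release if mean grounding regresses on the failed-trace dataset."""
    evaluator = Groundedness()
    scores = [
        evaluator.evaluate(response=case["answer"], context=case["context"]).score
        for case in regression_set
    ]
    mean_score = sum(scores) / len(scores)
    if mean_score < GROUNDEDNESS_GATE:
        raise SystemExit(f"Blocked compile: Groundedness {mean_score:.2f} < {GROUNDEDNESS_GATE}")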

How to Measure or Detect DSPy Reliability

Measure DSPy by separating module quality, agent outcome, and production cost.

  • TaskCompletion evaluates whether the full DSPy-backed workflow completed the assigned user goal.
  • Groundedness evaluates whether the response is supported by the provided context, which matters for DSPy RAG modules.
  • ToolSelectionAccuracy evaluates whether an agent selected the right tool after receiving DSPy-generated reasoning or output.
  • Trace fields such as agent.trajectory.step, module tags, optimizer version, and llm.token_count.prompt expose where compiled behavior changed.
  • Dashboard signals include eval-fail-rate-by-module, p99 latency, token-cost-per-trace, prompt-length delta after compile, and regression-fail-rate by dataset cohort.
  • User proxies include thumbs-down rate, escalation rate, reopened-ticket rate, and manual-review rate for requests served by DSPy modules.

Minimal Python, assuming user_goal, answer, and retrieved_context were already captured from the DSPy run:

from fi.evals import TaskCompletion, Groundedness

# Did the DSPy-backed workflow complete the user's goal?
completion = TaskCompletion().evaluate(input=user_goal, output=answer)
# Is the answer supported by the context the DSPy module received?
grounding = Groundedness().evaluate(response=answer, context=retrieved_context)

print(completion.score, grounding.score)

Common DSPy Mistakes

  • Optimizing on a tiny development set. A compile can memorize local examples and fail on longer, noisier production traces.
  • Treating compiled prompts as static source. Store the optimizer version, dataset version, metric, and prompt artifact together, as in the sketch after this list.
  • Using one metric for every module. A retrieval module, tool-routing module, and answer module need different evaluator gates.
  • Hiding DSPy inside a generic framework trace. Preserve traceAI:dspy spans so module-level regressions are visible.
  • Shipping optimizer wins without cohort checks. Compare enterprise, self-serve, and edge-case slices before widening traffic.
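
A sketch of that artifact bundle, assuming DSPy's module save() plus a hypothetical manifest sidecar:

import json
import dspy

def save_prompt_artifact(compiled_module: dspy.Module, run_meta: dict, out_dir: str):
    """Persist compiled prompts next to the metadata needed to reproduce them."""
    compiled_module.save(f"{out_dir}/compiled.json")  # DSPy writes demos and prompts
    manifest = {
        "optimizer_version": run_meta["optimizer_version"],
        "dataset_version": run_meta["dataset_version"],
        "metric": run_meta["metric"],
        "metric_score": run_meta["metric_score"],
    }
    with open(f"{out_dir}/manifest.json", "w") as f:
        json.dump(manifest, f, indent=2)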

Frequently Asked Questions

What is DSPy?

DSPy is a Python framework for programming LLM workflows as typed modules, signatures, optimizers, and evaluation-driven compilation steps. FutureAGI traces DSPy runs with traceAI:dspy so teams can evaluate compiled behavior in production.

How is DSPy different from LangChain?

LangChain is a broad orchestration framework for chains, agents, tools, and retrievers. DSPy focuses on declaring LLM program structure and compiling prompts or examples against measurable metrics.

How do you measure DSPy reliability?

FutureAGI measures DSPy with traceAI:dspy spans plus evaluators such as TaskCompletion, ToolSelectionAccuracy, Groundedness, and ReasoningQuality across the agent trajectory.