Optimize LangGraph
Use Future AGI's agent-opt SDK to rewrite your LangGraph prompts with measurable improvement on the metrics that matter to you.
Recipes for LangGraph
Prerequisites
Before you start
- A working LangGraph app, local or already in production.
- A free Future AGI account with `FI_API_KEY` and `FI_SECRET_KEY`.
- Python 3.9+ / Node 18+ / Java 17+, depending on which SDK you're installing.
- A dataset of ≥50 examples; Future AGI auto-builds these from your trace history.
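Before wiring up the SDK, it helps to fail fast if the two credentials from the prerequisites aren't set. A minimal sketch (the helper function is illustrative, not part of the SDK; only the `FI_API_KEY` and `FI_SECRET_KEY` names come from the docs):

```python
import os


def load_future_agi_credentials() -> tuple[str, str]:
    """Read Future AGI credentials from the environment.

    Raises a clear error up front instead of failing mid-optimisation.
    """
    try:
        api_key = os.environ["FI_API_KEY"]
        secret_key = os.environ["FI_SECRET_KEY"]
    except KeyError as missing:
        raise RuntimeError(
            f"Set {missing.args[0]} before running the optimizer"
        ) from None
    return api_key, secret_key
```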
Install
```bash
pip install traceAI-langchain
```

Optimize recipe
```python
from agent_opt import GEPAOptimizer
from fi.evals.templates import Groundedness, PromptAdherence

optimizer = GEPAOptimizer(
    seed_prompt="<your current LangGraph system prompt>",
    objectives=[Groundedness(), PromptAdherence()],
    rounds=8,
)

best_prompt, score = optimizer.run(dataset_id="langgraph_eval_set_v1")
print(f"+{score.delta}% on grounded answers")
```

What Future AGI captures
Optimize fields you'll see in the dashboard
- Use a Future AGI dataset of failed LangGraph traces as the optimisation target.
- GEPA, ProTeGi, PromptWizard, MetaPrompt, Bayesian, and Random optimisers share the same interface.
- Each optimiser run produces a new prompt version with a measured score delta.
- Push the best prompt back to your prompt registry and replay it through the same eval suite.
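Because the optimisers share one interface, you can sweep several against the same dataset and keep the prompt version with the best measured delta. A minimal sketch of that pattern; the `Optimizer` protocol and `RunResult` shape here are illustrative stand-ins, not the SDK's actual types:

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class RunResult:
    prompt: str
    delta: float  # measured score improvement over the seed, in percent


class Optimizer(Protocol):
    """Any optimiser that can run against a dataset and report a result."""

    def run(self, dataset_id: str) -> RunResult: ...


def best_of(optimizers: list[Optimizer], dataset_id: str) -> RunResult:
    """Run each optimiser on the same dataset; keep the largest score delta."""
    results = [opt.run(dataset_id) for opt in optimizers]
    return max(results, key=lambda r: r.delta)
```

Each `RunResult` maps to one prompt version in the dashboard, so the winner is the version you push back to your registry.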
Common gotchas
Read these before you ship
1. Seed prompt must include the placeholder format your dataset uses (`{{question}}`, `{input}`, etc.).
2. GEPA needs ≥50 examples to converge; for smaller sets prefer ProTeGi or PromptWizard.
3. Set a hard `rounds` cap; optimisers will keep running past your budget if you let them.
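The placeholder mismatch in gotcha 01 is cheap to catch before a run: check that the seed prompt actually contains the placeholder style your dataset uses. A small pre-flight helper (illustrative, not part of the SDK):

```python
import re

# Two placeholder styles commonly seen in prompt datasets.
PLACEHOLDER_PATTERNS = {
    "mustache": re.compile(r"\{\{\s*\w+\s*\}\}"),    # {{question}}
    "fstring": re.compile(r"(?<!\{)\{\w+\}(?!\})"),  # {input}
}


def detect_placeholders(seed_prompt: str) -> set[str]:
    """Return which placeholder styles appear in the seed prompt."""
    return {
        name for name, pattern in PLACEHOLDER_PATTERNS.items()
        if pattern.search(seed_prompt)
    }


def assert_placeholder(seed_prompt: str, expected_style: str) -> None:
    """Fail early if the seed prompt lacks the dataset's placeholder style."""
    found = detect_placeholders(seed_prompt)
    if expected_style not in found:
        raise ValueError(
            f"Seed prompt contains {sorted(found) or 'no placeholders'}, "
            f"but the dataset expects {expected_style!r}"
        )
```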
Next: chain it with the other recipes
Optimize is the first step. Most teams add an evaluator the same week and start optimising or simulating once they have a baseline. Each recipe takes minutes to wire up.
Adjacent integrations
More integrations like LangGraph
- LangChain: chains, agents, and LCEL pipelines with auto-traced spans for every step.
- LangChain4j: Java-native chains, agents, and RAG.
- DSPy: declarative LLM programs with optimisable signatures and modules.
- CrewAI: role-based multi-agent crews with task plans and tool use.