What Is Synthetic Data (for LLM Eval)?

Machine-generated test data — personas, scenarios, queries, conversations — used to evaluate and stress-test LLM and agent systems before production.

What Is Synthetic Data?

Synthetic data, in the LLM and agent context, is machine-generated test data — personas, scenarios, queries, conversations, and tool-calling traces — used to evaluate, stress-test, and regression-check AI systems before they hit production. Unlike traditional ML data augmentation aimed at training-set growth, the goal here is realistic eval coverage: cohorts, edge cases, and failure modes that production logs do not yet contain. By 2026, persona-driven scenario generation is the dominant way teams build eval cohorts for agent and voice-agent workflows, and FutureAGI’s simulate-sdk is built around that pattern.

Why It Matters in Production LLM and Agent Systems

The first eval set every team builds comes from production logs. That works until you ship to a new region, support a new language, add a new tool, or onboard a new customer segment — at which point the production-log eval set has zero coverage of what just changed. Before you ship, you need test cases, in volume, for cohorts that have never touched your product. Synthetic eval data is how you get them.

The pain is direct. A voice-agent team rolls out to a new market and discovers, only in production, that the ASR mishears 20% of region-specific names. A support-agent team adds a new “billing dispute” tool and ships without an eval cohort that exercises it — three weeks later a customer complaint reveals a tool-selection bug. A compliance owner cannot prove EU AI Act conformity because they have no test set covering protected-characteristic queries. A product lead cannot regression-test a model swap from gpt-4o to claude-sonnet-4 because the eval cohort is too thin to power statistical comparison.

In 2026 agent stacks, synthetic data is also how you generate trajectory-shaped tests. Single-turn Q&A datasets do not stress the agent loop, the planner, the tool-call sequence, or memory. Persona-based scenarios do — and they let you scale from “we tested 12 cases by hand” to “we tested 10,000 personas overnight in CI” without proportional human cost. That is what lets voice-agent teams reach production with confidence instead of treating launch as a controlled experiment on real users.

How FutureAGI Handles Synthetic Data

FutureAGI’s approach is to make synthetic eval data first-class inside the simulate-sdk, anchored to two surfaces: Persona and Scenario. A Persona is a structured test case with persona attributes (background, voice, attitude), a situation, and a desired outcome — it is the basic unit of an agent test. A Scenario is a named collection of personas that runs as one batch. The ScenarioGenerator takes a topic and a count and produces realistic personas using a generator model, optionally seeded with domain context. Scenarios can also be loaded from CSV or JSON via Scenario.load_dataset for hand-curated cases.
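
A minimal sketch of both ways to build a scenario; the domain_context keyword and the load_dataset file path are illustrative assumptions, not a confirmed API surface:

from fi.simulate.simulation import Scenario, ScenarioGenerator

# Generate personas from a topic and a count; domain_context is an assumed
# keyword for seeding the generator with real product terminology.
generated = ScenarioGenerator(
    topic="chargeback dispute support",
    count=50,
    domain_context="Card-issuer disputes; amounts range from $5 to $5,000.",
).generate()

# Or load hand-curated cases from CSV/JSON (the path is just an example).
curated = Scenario.load_dataset("disputes_handpicked.csv")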

Once a scenario is built, two engines run it. CloudEngine orchestrates text-based simulations that call the user’s local agent callback, capturing transcripts and scoring them with attached evaluators. LiveKitEngine runs full voice simulations over LiveKit with transcript and audio capture, returning a TestReport with a TestCaseResult per persona. Evaluations run via simulate.evaluation.ai_eval, which attaches any fi.evals metric — TaskCompletion, Faithfulness, ASRAccuracy, ToolSelectionAccuracy, or custom metrics — to each transcript.
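
A hedged sketch of the engine-plus-evaluator pattern described above; the import paths, the agent-callback signature, and the run/report field names are assumptions rather than a confirmed API:

from fi.simulate.simulation import ScenarioGenerator
from fi.simulate.engine import CloudEngine        # import path is an assumption
from fi.simulate.evaluation import ai_eval        # import path is an assumption
from fi.evals import TaskCompletion, ToolSelectionAccuracy

def my_agent(message, history):
    # The local agent callback that CloudEngine drives turn by turn;
    # replace with a real agent invocation. Signature is an assumption.
    return "I've opened a dispute for that charge."

scenario = ScenarioGenerator(topic="chargeback dispute support", count=25).generate()

# Constructor and run signatures below are assumptions about the SDK surface.
engine = CloudEngine(agent=my_agent)
report = engine.run(
    scenario,
    evaluators=[ai_eval(TaskCompletion()), ai_eval(ToolSelectionAccuracy())],
)
for result in report.results:   # one TestCaseResult per persona (field name assumed)
    print(result.persona, result.scores)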

Concretely: a fintech team building a chargeback-dispute agent generates a 500-persona scenario via ScenarioGenerator covering disputed-charge variations, customer tones (frustrated, polite, confused), and amount ranges. They run the scenario through CloudEngine against the agent, score with TaskCompletion and ToolSelectionAccuracy, and dashboard the per-persona-attribute breakdown. A regression in “frustrated, high-amount” cohorts is caught in CI before any real customer hits it. Unlike Ragas-style synthetic-question generation, which only produces single-turn QA pairs, this gives full agent trajectories evaluated end-to-end.
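
The per-persona-attribute breakdown in that workflow reduces to grouping scores by attribute; a sketch using a hypothetical flat export of per-persona results (in practice these would come from the TestReport):

from collections import defaultdict
from statistics import mean

# Hypothetical per-persona export; field names are illustrative only.
results = [
    {"tone": "frustrated", "amount": "high", "task_completion": 0.62},
    {"tone": "frustrated", "amount": "low",  "task_completion": 0.88},
    {"tone": "polite",     "amount": "high", "task_completion": 0.91},
    {"tone": "confused",   "amount": "low",  "task_completion": 0.79},
]

by_tone = defaultdict(list)
for r in results:
    by_tone[r["tone"]].append(r["task_completion"])

for tone, scores in sorted(by_tone.items()):
    print(f"{tone:>10}: n={len(scores)}  mean TaskCompletion={mean(scores):.2f}")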

How to Measure or Detect It

Synthetic data quality is not measured by realism alone — it is measured by eval-set utility:

  • Coverage rate: percent of production-log cohorts (intent, tone, amount range, locale) represented in the synthetic scenario.
  • Per-persona TaskCompletion: distribution of task-success scores across the scenario; tight clusters mean the scenario is not stressing the agent.
  • Eval-set discrimination: do score distributions differ between two model versions on this scenario? If not, the scenario is not informative.
  • ScenarioGenerator diversity: vocabulary and intent diversity across generated personas; a narrow, repetitive distribution is a quality red flag.
  • Synthetic vs. production score gap: when a scenario consistently over- or under-estimates production performance, recalibrate the persona distribution.

Minimal Python:

from fi.simulate.simulation import ScenarioGenerator
from fi.evals import TaskCompletion

# Generate 500 personas around the topic; the generator model fills in
# persona attributes, situations, and desired outcomes.
scenario = ScenarioGenerator(
    topic="chargeback dispute support",
    count=500,
).generate()

# my_agent is the user's local agent callback, defined elsewhere.
report = scenario.run(
    agent=my_agent,
    evaluators=[TaskCompletion()],
)
print(report.summary())
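
Two of the checks above, coverage rate and eval-set discrimination, reduce to plain Python once cohort labels and per-persona scores are exported; a minimal sketch with hypothetical inputs:

from statistics import mean

# Hypothetical cohort labels seen in production vs. those represented
# in the synthetic scenario (intent, tone).
production_cohorts = {("refund", "frustrated"), ("refund", "polite"),
                      ("billing", "confused"), ("billing", "frustrated")}
synthetic_cohorts = {("refund", "frustrated"), ("billing", "confused")}

coverage_rate = len(production_cohorts & synthetic_cohorts) / len(production_cohorts)
print(f"coverage rate: {coverage_rate:.0%}")

# Eval-set discrimination: does the scenario separate two model versions?
# Common-language effect size: P(score_a > score_b) over all score pairs.
scores_a = [0.81, 0.74, 0.90, 0.66, 0.88]   # model A per-persona TaskCompletion
scores_b = [0.70, 0.72, 0.69, 0.65, 0.71]   # model B per-persona TaskCompletion
wins = sum(a > b for a in scores_a for b in scores_b)
effect = wins / (len(scores_a) * len(scores_b))
print(f"mean A={mean(scores_a):.2f}  mean B={mean(scores_b):.2f}  P(A>B)={effect:.2f}")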

Common Mistakes

  • Treating synthetic data as a replacement for production logs. Synthetic covers what you have not seen; production covers what you have. You need both.
  • Generating personas without domain context. Generic personas miss your terminology, SKUs, and edge cases — seed ScenarioGenerator with real domain prompts.
  • No diversity audit. A 1,000-persona scenario where 800 are variants of one archetype gives a false sense of coverage; audit attribute distributions (a minimal audit is sketched after this list).
  • Scoring synthetic without scoring production on the same metrics. If the same FutureAGI evaluators score synthetic and production traffic very differently, the persona distribution needs recalibrating.
  • Confusing synthetic eval data with synthetic training data. Mixing them risks training-test contamination — keep eval scenarios out of any fine-tuning corpus.
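
A minimal diversity audit over generated personas; the attribute field is an assumption about how persona attributes are exposed, and the threshold is a judgment call, not an SDK default:

from collections import Counter

# Hypothetical persona attribute export; in practice pulled from the Scenario.
personas = [
    {"archetype": "frustrated_high_amount"}, {"archetype": "frustrated_high_amount"},
    {"archetype": "polite_low_amount"},      {"archetype": "confused_refund"},
]

counts = Counter(p["archetype"] for p in personas)
top_archetype, top_count = counts.most_common(1)[0]
share = top_count / len(personas)
print(counts)
if share > 0.4:
    print(f"warning: {top_archetype} is {share:.0%} of the scenario; coverage is narrower than it looks")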

Frequently Asked Questions

What is synthetic data for LLM evaluation?

Synthetic data for LLM evaluation is machine-generated test data — personas, scenarios, queries, conversations, tool traces — used to evaluate and stress-test agents and LLM applications before they hit production.

How is synthetic eval data different from synthetic training data?

Synthetic training data exists to expand a model's training set. Synthetic eval data exists to expand coverage of test cohorts, edge cases, and failure modes. They are produced by similar techniques but optimized for different objectives: coverage and diversity for eval data, label fidelity and scale for training data.

How do you generate synthetic eval data with FutureAGI?

The simulate-sdk's ScenarioGenerator generates a Scenario containing N Persona test cases from a topic and a target count, optionally seeded with domain context. You then run those personas through your agent via CloudEngine or LiveKitEngine and score the resulting transcripts with fi.evals evaluators.