Self-improving evals.
Not just LLM-as-judge.

A unified evaluation agent that runs local heuristics first - free and sub-second - then augments with an LLM only when confidence is low. Submit corrections when the judge is wrong, and it learns. One evaluate() call for faithfulness, RAG, safety, agents, streaming, and more.

Evaluation Run: billing-qa-v3 · 1,247 rows · Passed
Total Cost: $0.41 (vs $4.12 LLM-only · 90% saved)
Pass Rate: 94.2%
Feedback Used: 12 corrections
Metric | Type | Score | Pass Rate | Cost | Feedback
Faithfulness (NLI · Hallucination) | Local | 0.92 | 96.1% | $0.00 | -
Toxicity (Safety · Moderation) | Local | 0.97 | 99.4% | $0.00 | -
Context Relevance (RAG · Retrieval) | Augmented | 0.88 | 91.7% | $0.18 | 4 used
Instruction Adherence (Quality · Compliance) | Augmented | 0.84 | 87.3% | $0.21 | 8 used
Answer Relevancy (Embedding · Similarity) | Local | 0.91 | 93.8% | $0.00 | -
Function Call Accuracy (Agent · Tool Use) | Local | 0.95 | 98.2% | $0.00 | -

Feedback loop active - 8 corrections retrieved:
- Correction #47 (similarity: 0.94, submitted 3 days ago): "Response includes pricing ($49/mo)" was scored 0.31, corrected to 0.85. Reason: pricing info was included because the user explicitly asked about cost; the instruction constraint only applies to unsolicited pricing.
- Correction #51 (similarity: 0.91, submitted 2 days ago): "847 tokens vs 200 limit" was scored 0.42, corrected to 0.78. Reason: summary sections are exempt from length constraints.
4 local $0.00 | 2 augmented $0.39 | 12 corrections applied
Core Features

An eval agent that learns -
not a static metric library

Hybrid Cost Router (optimizing)
[Diagram: hybrid evaluation with cost routing. An input trace is scored by a local heuristic ($0 · 47ms) when confidence is high, or augmented with an LLM ($0.02 · 1.2s) when confidence is low. Local metrics: NLI, string match, embedding, regex, schema. Cost comparison: this framework $0.41 vs $4.12 LLM-only - 90% saved.]
Feedback Loop (learning)
[Diagram: self-improving evaluation. When a score is wrong, submit a correction; it is stored in a ChromaDB vector store, and similar corrections are retrieved and injected as few-shot examples. Accuracy improves over time. Example: score 0.31 corrected to 0.85 at similarity 0.94; 47 corrections total.]
Unified API (single endpoint)
[Diagram: one evaluate(metrics=[...], data=rows) call covers string checks, NLI faithfulness, RAG quality, LLM-as-judge, agent trajectory, function calls, guardrails, streaming safety, audio, and image metrics. Replaces Ragas, DeepEval, and custom scripts; YAML export. 50+ metrics · 10 categories · 1 API call · deterministic + LLM-backed.]
Config Propagation (synchronized)
[Diagram: a single eval config (metrics · thresholds · weights) propagates to datasets, simulations, experiments, playground, and production (Observe) at 100% sampling, so results stay comparable. 1 config · 5 surfaces · consistent scoring · no drift.]

Most eval frameworks send every check through an LLM - expensive and slow. Our hybrid approach runs local heuristics first (NLI models, string matching, embedding similarity, regex, schema validation) at zero cost and sub-second latency. Only when confidence is low does the system augment with an LLM (Gemini, GPT, Claude, or our fine-tuned Turing models). You get the accuracy of LLM-as-judge at a fraction of the cost.

See the hybrid architecture
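The routing idea can be sketched in a few lines of plain Python. This is an illustrative stand-in, not the shipped API: the heuristic, the confidence rule, the `llm_judge` stub, and the costs are all assumptions made for the example.

```python
def local_heuristic(row: dict) -> tuple[float, float]:
    """Toy local scorer: returns (score, confidence).

    Stands in for the zero-cost checks (NLI, string match, embedding,
    regex, schema); here score is just word overlap with the source.
    """
    out_tokens = set(row["output"].split())
    src_tokens = set(row["source"].split())
    score = len(out_tokens & src_tokens) / len(out_tokens) if out_tokens else 0.0
    # Confidence is high when the heuristic is far from the 0.5 boundary.
    confidence = abs(score - 0.5) * 2
    return score, confidence

def llm_judge(row: dict) -> float:
    """Stub for the paid LLM call (Gemini / GPT / Claude)."""
    return 0.9  # placeholder verdict

def evaluate_row(row: dict, confidence_floor: float = 0.6) -> dict:
    score, confidence = local_heuristic(row)
    if confidence >= confidence_floor:
        return {"score": score, "route": "local", "cost": 0.0}
    # Low confidence: escalate to the LLM judge.
    return {"score": llm_judge(row), "route": "augmented", "cost": 0.02}
```

Because most rows clear the confidence floor, the expensive path runs only on the ambiguous minority - that is where the cost savings come from.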

When the judge gets a case wrong, submit a correction. It's stored in a vector database and retrieved as a few-shot example for future evaluations on similar inputs. Your eval accuracy improves with every correction - the system learns your domain's edge cases over time. No retraining, no prompt engineering. Just correct and move on.

How feedback works
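The storage-and-retrieval mechanism can be illustrated with an in-memory stand-in for the vector store (the product uses ChromaDB; the class name, bag-of-words embedding, and record schema below are assumptions for the sketch):

```python
import math

class CorrectionStore:
    """In-memory stand-in for the vector store of judge corrections."""

    def __init__(self):
        self.corrections = []  # list of (vector, correction_record)

    @staticmethod
    def _embed(text: str) -> dict:
        # Tiny bag-of-words vector; real deployments use a text embedding model.
        vec = {}
        for tok in text.lower().split():
            vec[tok] = vec.get(tok, 0) + 1
        return vec

    @staticmethod
    def _cosine(a: dict, b: dict) -> float:
        dot = sum(a[t] * b.get(t, 0) for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def submit(self, text: str, wrong: float, corrected: float, reason: str):
        self.corrections.append((self._embed(text), {
            "text": text, "wrong": wrong, "corrected": corrected, "reason": reason,
        }))

    def retrieve(self, text: str, k: int = 2, floor: float = 0.5):
        """Return the k most similar past corrections above a similarity floor."""
        q = self._embed(text)
        scored = [(self._cosine(q, v), rec) for v, rec in self.corrections]
        scored.sort(key=lambda p: p[0], reverse=True)
        return [rec for sim, rec in scored[:k] if sim >= floor]
```

Retrieved records are then injected as few-shot examples before the judge scores a similar input - no retraining step anywhere in the loop.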

String checks, NLI-based faithfulness, RAG retrieval quality, LLM-as-judge, agent trajectory scoring, function call validation, guardrails, streaming safety, audio, image - all through a single evaluate() API. Batch multiple metrics in one call. Run locally or in cloud. Export pipelines as YAML for CI/CD. One framework replaces Ragas + DeepEval + guardrail libraries + custom scripts.

Explore the SDK
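The single-entry-point shape can be sketched as a metric registry behind one function. The metric names, their toy implementations, and the result schema here are illustrative, not the SDK's actual interface:

```python
def conciseness(row: dict) -> float:
    """Deterministic string check: penalize answers over 50 words."""
    return 1.0 if len(row["output"].split()) <= 50 else 0.2

def keyword_recall(row: dict) -> float:
    """Toy RAG check: how much of the retrieved context the answer uses."""
    ctx = set(row["context"].lower().split())
    out = set(row["output"].lower().split())
    return len(ctx & out) / len(ctx) if ctx else 0.0

METRICS = {"conciseness": conciseness, "keyword_recall": keyword_recall}

def evaluate(metrics: list, data: list, threshold: float = 0.5) -> list:
    """Single entry point: batch any mix of metrics over all rows."""
    results = []
    for row in data:
        scores = {m: METRICS[m](row) for m in metrics}
        results.append({
            "scores": scores,
            "passed": all(v >= threshold for v in scores.values()),
        })
    return results
```

The design point is that adding a new metric means registering one callable - the call site, batching, and pass/fail logic never change.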

Define an eval config once and attach it to dataset runs, simulation tests, experiments, playground sessions, or production traces in Observe. Run a historic batch for retroactive scoring or a continuous run for real-time evaluation. Configurable sampling rates, span limits, and attribute-based filters. Results stay comparable across every context.

See eval configs
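The "single source of truth" property amounts to every surface reading the same immutable config object. A minimal sketch (field names are assumptions, not the documented schema):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EvalConfig:
    """Define once, attach everywhere; frozen so thresholds cannot drift."""
    metrics: tuple = ("faithfulness", "toxicity")
    pass_threshold: float = 0.85
    sampling_rate: float = 1.0  # fraction of production spans to score

config = EvalConfig()

# The same object attaches to every surface, so a dataset run and a
# production trace are always scored under identical settings.
surfaces = {name: config for name in
            ("datasets", "simulations", "experiments", "playground", "observe")}
```

Because the dataclass is frozen, a surface cannot quietly mutate a threshold - changing scoring means publishing a new config, which keeps historic results comparable.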
Use Cases

One framework for every
AI evaluation pattern

[Diagram: local NLI entailment check at zero LLM cost ($0.00). The LLM output "The product was launched in 2019 and sold 1M units" is checked for entailment against the source ("...2019...") by a local NLI model.]

Catch hallucinations without LLM cost

Local NLI models detect contradictions and unsupported claims at zero API cost. Faithfulness, claim support, factual consistency, and contradiction detection run locally in sub-second. Augment with an LLM only for ambiguous paraphrases the heuristic can't resolve.

Local NLI Faithfulness Zero Cost
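The shape of such a check can be shown with a deliberately crude lexical stand-in - a real implementation runs a small NLI model, and the stop-word list, support floor, and function names below are assumptions for the sketch:

```python
def claim_supported(claim: str, source: str, floor: float = 0.6) -> bool:
    """Crude lexical stand-in for an NLI entailment check.

    Support is approximated by how much of the claim's content
    vocabulary appears in the source document.
    """
    stop = {"the", "a", "an", "and", "was", "in", "it", "of"}
    claim_toks = {t.strip(".,").lower() for t in claim.split()} - stop
    source_toks = {t.strip(".,").lower() for t in source.split()} - stop
    if not claim_toks:
        return True
    return len(claim_toks & source_toks) / len(claim_toks) >= floor

def faithfulness(output_sentences: list, source: str) -> float:
    """Fraction of output sentences supported by the source document."""
    supported = [s for s in output_sentences if claim_supported(s, source)]
    return len(supported) / len(output_sentences)
```

On the example from the diagram, "launched in 2019" is supported by the source while "sold 1M units" is not, so faithfulness comes out at 0.5 - all without an API call.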
[Diagram: a correction (score 0.31 → 0.85) is stored in the vector DB and retrieved as context on the next similar evaluation - corrections stored, then retrieved for future evals.]

Build domain-specific evals that learn

Medical shorthand, legal jargon, internal terminology - generic LLM judges misclassify these constantly. Submit corrections when the judge is wrong. ChromaDB stores them and surfaces similar corrections as few-shot examples. Your eval adapts to your domain without prompt engineering.

Feedback Loop ChromaDB Few-Shot
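The few-shot injection step can be sketched as simple prompt assembly - the template below is illustrative, not the framework's actual judge prompt:

```python
def build_judge_prompt(task: str, candidate: str, corrections: list) -> str:
    """Inject retrieved corrections as few-shot examples for the LLM judge.

    Past human corrections steer the judge on similar inputs, with no
    retraining and no manual prompt engineering.
    """
    lines = [f"Score the response for: {task}"]
    if corrections:
        lines.append("Calibration examples from past human corrections:")
        for c in corrections:
            lines.append(
                f'- "{c["text"]}" was scored {c["wrong"]}, '
                f'corrected to {c["corrected"]}: {c["reason"]}'
            )
    lines.append(f"Response to score: {candidate}")
    return "\n".join(lines)
```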
[Diagram: CI pipeline gate. Code → build → eval gate (pass threshold 0.85, score 0.91) → deploy to prod, driven by a YAML config; deploys below threshold are blocked.]

Gate deployments in CI/CD

Export eval configs as YAML. Run in GitHub Actions, GitLab CI, or any pipeline. If hallucination rate exceeds your threshold or safety score drops, the deployment fails. Same config runs in staging and production - no threshold drift.

CI/CD YAML Export AutoEval
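An exported gate config might look like the following - the field names and layout are a sketch, not the documented export format:

```yaml
# eval-config.yaml - gate a deploy on evaluation scores
# (schema shown here is illustrative, not the documented format)
metrics:
  - name: faithfulness
    type: local          # NLI-based, zero API cost
    threshold: 0.85
  - name: toxicity
    type: local
    threshold: 0.95
fail_on_breach: true     # non-zero exit code fails the CI job
```

A CI step runs the evaluation against this file and fails the job on breach, so staging and production are gated by the exact same thresholds.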
[Diagram: evaluate every layer of your AI stack - RAG context chunks, agent trajectory steps, and tool-use parameters such as fn(query="...", k=5, filter=true).]

Evaluate RAG, agents, and function calls

Context relevance, chunk attribution, recall for RAG. Task completion, tool selection accuracy, trajectory scoring for agents. Function name match and parameter validation for tool use. One framework covers every AI pattern - not separate libraries for each.

RAG Agents Tool Use
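Function-call scoring in particular reduces to a name match plus per-parameter checks. A minimal sketch (the weighting and schema are assumptions; real scoring would also handle type coercion and optional arguments):

```python
def validate_function_call(call: dict, expected: dict) -> dict:
    """Score a tool call: exact name match plus per-parameter agreement."""
    name_ok = call["name"] == expected["name"]
    exp_params = expected["params"]
    got_params = call.get("params", {})
    # Fraction of expected parameters the call got exactly right.
    matched = sum(1 for k, v in exp_params.items() if got_params.get(k) == v)
    param_score = matched / len(exp_params) if exp_params else 1.0
    return {
        "name_match": name_ok,
        "param_score": param_score,
        # Equal weight on picking the right tool and filling it correctly.
        "score": (1.0 if name_ok else 0.0) * 0.5 + param_score * 0.5,
    }
```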
[Diagram: real-time token-stream scan. When the toxicity score (0.72) crosses the threshold mid-generation, the stream is cut immediately and halted.]

Kill toxic streams mid-generation

StreamingEvaluator monitors LLM output token-by-token. When toxicity, PII, or safety violations spike above your threshold, the stream is cut immediately - not after the full response is generated. Early-stop policies let you define exactly when to intervene.

Streaming Early-Stop Safety
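The early-stop mechanism can be sketched as a generator that wraps the token stream - the rolling window, the threshold, and the `toy_toxicity` classifier are illustrative stand-ins, not the StreamingEvaluator API:

```python
def moderate_stream(tokens, score_fn, threshold: float = 0.7, window: int = 5):
    """Yield tokens until a rolling safety score breaches the threshold.

    `score_fn` stands in for a fast toxicity/PII classifier scoring the
    last `window` tokens; on breach the generator stops mid-stream
    instead of waiting for the full response.
    """
    buffer = []
    for tok in tokens:
        buffer.append(tok)
        if score_fn(" ".join(buffer[-window:])) >= threshold:
            return  # early stop: the rest of the stream is never emitted
        yield tok

def toy_toxicity(text: str) -> float:
    """Stand-in classifier: flags one known bad token (illustrative only)."""
    return 0.9 if "badword" in text else 0.1
```

Because the check runs before each token is yielded, the offending token never reaches the client - matching the "kill stream on detection" behavior described above.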
[Diagram: trace waterfall with per-span quality scores - POST /chat/completions 0.94, retriever.search 0.88, llm.generate 0.91, guardrail.check 1.0 - alerts on, via OpenTelemetry.]

Monitor production with zero config

Attach eval configs to Observe projects. The continuous run type scores new spans as they arrive, with configurable sampling. Quality scores attach to traces via OpenTelemetry - search for bad responses in Jaeger, Datadog, or Grafana. Alerts fire when scores cross thresholds.

Observe OpenTelemetry Alerts
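Configurable sampling for continuous evaluation is typically a deterministic decision per trace. One common scheme, sketched here as an assumption rather than the framework's actual implementation, hashes the trace ID into a bucket:

```python
import hashlib

def should_score(trace_id: str, sampling_rate: float) -> bool:
    """Deterministic hash-based sampling for continuous evaluation.

    The same trace_id always gets the same decision, so re-running the
    evaluator never flips which production spans are scored.
    """
    bucket = int(hashlib.sha256(trace_id.encode()).hexdigest(), 16) % 10_000
    return bucket < sampling_rate * 10_000
```

A stable hash keeps the sampled population consistent across evaluator restarts, which matters when comparing score trends over time.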
How It Works

Local first, augment when needed,
improve with every correction

01

Pick metrics or describe your app

Choose from 50+ local metrics and LLM-augmented evaluators, or use AutoEval - describe your app in natural language and get a pre-configured pipeline. Faithfulness, RAG quality, safety, agent trajectory, function calling, and more. Export as YAML for CI/CD.

Pick Metrics
[UI: metric selection. Choose from Faithfulness, Toxicity, Context Relevance, Instruction Adherence, Answer Relevancy, Function Call Accuracy, Coherence, Readability, Conciseness, Bias Detection, and Hallucination - or describe your app ("A customer support chatbot that handles refund requests") and let AutoEval pick the right metrics. Export as YAML; 6 metrics selected.]
02

Run locally, augment when needed

Heuristics run first at zero cost. Set augment=True to refine with an LLM only when confidence is low. Pass a feedback store and previous corrections become few-shot examples - your eval improves with every correction.

Execution Pipeline
Running
[Diagram: evaluation pipeline over 1,247 rows. Local heuristics run first (faithfulness, coherence, readability, conciseness - 4 deterministic metrics at $0); hallucination and context relevance are LLM-judged with feedback corrections injected; per-row scores returned for all 1,247 rows.]
03

Attach to any context, compare everything

Same eval config attaches to datasets, simulations, experiments, or production traces. Results include scores, pass/fail, and explanations per row. Compare across runs, track trends, gate deployments. Feed results into Prompt Optimizer and Fix My Agent.

Multi-Context Deployment
[Diagram: one eval config (6 metrics, YAML: faithfulness, hallucination, ...) attaches to a dataset batch (avg 0.87), an A/B experiment (avg 0.91), a simulation (avg 0.82), and continuous production (avg 0.89); compare across runs, e.g. +4.2% from v1.0 to v3.2.]

Powering teams from
prototype to production

From ambitious startups to global enterprises, teams trust Future AGI to ship AI agents confidently.