AI Features & Data Privacy

How AI-powered features in Future AGI handle your data.

Overview

Future AGI uses AI models in several platform features to provide automated analysis, evaluation, and protection. This page explains how each AI-powered feature handles your data, what controls are available, and our commitment to never training models on customer data.

Core principle: Future AGI does not use customer data to train, fine-tune, or improve any AI models. All AI processing is ephemeral and performed solely to deliver the requested feature.

Falcon AI

Falcon AI provides automated error analysis, operational insights, and auto-tagging across your traces and evaluations.

  • How it works: Falcon AI analyzes trace data, evaluation results, and guardrail outputs to surface patterns, anomalies, and actionable recommendations.
  • Data handling: Processing occurs within Future AGI’s infrastructure in your selected data region. Results are stored as part of your project data.
  • Opt-in: Falcon AI features are enabled at the project level. You can disable them at any time from project settings.

Turing Evaluation Models (LLM-as-Judge)

Future AGI’s built-in evaluation framework uses proprietary Turing models to score LLM outputs for correctness, relevance, safety, and other quality dimensions.

  • How it works: When you run an evaluation using Future AGI’s Turing models, the trace data and evaluation criteria are processed by our hosted models.
  • Data handling: All processing is ephemeral with zero retention. Input data is discarded immediately after the evaluation result is generated. No customer data is persisted in model infrastructure.
  • No training: Turing models are never trained or fine-tuned on customer evaluation data.

Bring Your Own Key (BYOK) Evaluations

For teams that prefer to use their own LLM provider for evaluations, Future AGI supports BYOK at $0 platform cost for evaluation compute.

  • How it works: You provide your own API key (OpenAI, Anthropic, Google, etc.), and evaluation prompts are sent directly to your model provider.
  • Data handling: When using BYOK, trace data flows directly from Future AGI to your model provider using your API key. Future AGI does not store or cache model responses beyond the evaluation result. Your provider’s data policies apply to the model processing step.
  • Control: You retain full control over which provider processes your data and under what terms.
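The BYOK routing described above can be sketched in a few lines. This is a minimal illustration of the data flow only, assuming hypothetical names (`EvalRequest`, `run_evaluation`, and the two provider callables are illustrative, not Future AGI's actual SDK):

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class EvalRequest:
    trace_text: str                      # the LLM output being evaluated
    criteria: str                        # the evaluation rubric
    byok_api_key: Optional[str] = None   # customer-supplied provider key

def run_evaluation(
    req: EvalRequest,
    call_customer_provider: Callable[[str, str], str],
    call_turing_model: Callable[[str], str],
) -> dict:
    """Route the evaluation: with a BYOK key, the prompt goes directly
    to the customer's provider; otherwise the hosted Turing models score it."""
    prompt = f"Criteria: {req.criteria}\nOutput: {req.trace_text}"
    if req.byok_api_key:
        # BYOK: the prompt is sent to the customer's own provider,
        # under that provider's data policies.
        score = call_customer_provider(req.byok_api_key, prompt)
        route = "byok"
    else:
        # Hosted path: ephemeral processing by the Turing models.
        score = call_turing_model(prompt)
        route = "turing"
    # Only the evaluation result is persisted, never the raw prompt.
    return {"route": route, "score": score}
```

In practice the two callables would wrap real provider clients (OpenAI, Anthropic, Google, etc.); stubbing them makes the routing logic easy to see and test.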

AI-Powered Prompt Improvement

Future AGI offers AI-assisted prompt refinement to help optimize evaluation prompts and guardrail configurations.

  • Data handling: Only the prompt text and optional sample inputs are processed. Production trace data is not sent to prompt improvement models unless you explicitly include it.
  • Opt-in: This feature is invoked manually and never runs automatically on your data.

Protect ML Guardrails (Gemma 3n)

The Protect guardrail layer uses Gemma 3n models to perform real-time content classification, toxicity detection, and policy enforcement.

  • How it works: When guardrails are enabled on a gateway route, inbound and outbound messages are classified by the Gemma 3n model before being passed through.
  • Data handling: Classification is performed in real time with no data retention in the model layer. Only the guardrail decision (pass/block/flag) and metadata are stored in your trace data.
  • Self-hosted option: Enterprise customers can deploy Protect models within their own infrastructure for full data isolation.
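The guardrail flow above, where only the decision and metadata reach trace storage, can be sketched as follows. The classifier stub and record fields are hypothetical stand-ins, not the actual Protect implementation:

```python
import hashlib
import time
from typing import Optional

def classify(message: str) -> str:
    """Stand-in for the Gemma 3n classifier: returns pass/block/flag."""
    blocked_terms = {"ssn", "credit card"}
    if any(term in message.lower() for term in blocked_terms):
        return "block"
    return "pass"

def guard(message: str, trace: list) -> Optional[str]:
    """Classify a message, record only the decision and metadata in the
    trace, and pass the message through (or not) based on the verdict."""
    decision = classify(message)
    trace.append({
        "decision": decision,  # pass / block / flag
        "message_sha256": hashlib.sha256(message.encode()).hexdigest(),
        "timestamp": time.time(),
        # Note: the message content itself is never written to the trace.
    })
    return message if decision == "pass" else None
```

The key point the sketch illustrates: the trace record carries a verdict and a content hash for auditability, while the message text only flows through the gateway transiently.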

Questions?

Reach out to our security team at security@futureagi.com.

Request documents

SOC 2 report, DPA, and penetration test summary are available on request.
