
Future AGI x Portkey Integration: Unified LLM Observability

Last Updated: Jun 25, 2025
By: NVJK Kartik
Time to read: 13 mins

  1. Introduction

LLM orchestration across multiple providers creates operational blind spots. Teams route calls through different APIs but lack unified visibility into performance, costs, and quality metrics. This fragmentation makes debugging slow responses or poor outputs a manual investigation across disconnected systems.

The Future AGI x Portkey integration solves this by combining operational control with quality evaluation in a single trace. Every LLM request now carries complete visibility, from routing and costs through Portkey's gateway to quality scoring via Future AGI's evaluation engine. This integration transforms fragmented AI operations into a unified, observable system.


  2. Solving LLM Observability: The Integration Breakdown

AI providers release models frequently, creating integration challenges. Teams manage multiple APIs, authentication keys, and failover mechanisms without unified visibility into performance. The Future AGI x Portkey integration provides consolidated monitoring across all LLM providers to address this gap.

2.1 About Portkey

Portkey AI is a comprehensive platform designed to streamline and enhance AI integration for developers and organizations. It serves as a unified interface for interacting with over 250 AI models, offering advanced tools for control, visibility, and security in your Generative AI apps.

2.2 About Future AGI

Future AGI is an AI lifecycle platform designed to support enterprises throughout their AI journey. It combines rapid prototyping, rigorous evaluation, continuous observability, and reliable deployment to help build, monitor, optimize, and secure generative AI applications.

Together, they close the loop between operations (Portkey) and outcomes (Future AGI), turning your generative AI stack into a measurable, debuggable, and reliable system.

2.3 Why Unified Tracing Matters

LLM operations often lack visibility into performance issues. Debugging slow, expensive, or incorrect responses means checking multiple systems to isolate the root cause: prompt quality, model performance, or provider issues.

The integration combines Portkey's gateway operations with Future AGI's evaluation engine to unify monitoring data.

  • Before: Operational metrics (cost, latency, retries) and quality analysis existed in separate systems. Correlating performance spikes with response quality required manual investigation across platforms.

  • After: Each request generates a unified trace containing the complete request lifecycle: prompt, model selection, operational metrics from Portkey, and quality scores from Future AGI, all in a single view.

This provides granular debugging capabilities and actionable metrics for every request.

2.4 Features and Capabilities of this Integration

The integration combines the capabilities of both platforms into a unified workflow.

  • End-to-End Request Tracing: Track complete API call lifecycles from application through Portkey gateway to LLM provider. Identify bottlenecks and errors immediately.

  • Automated Quality Evaluation: Future AGI scores every response against custom criteria across text, audio, and image outputs, going beyond basic pass/fail metrics.

  • Unified Cost & Latency Analytics: Portkey operational data integrates directly into Future AGI. View exact costs and latency for each call across OpenAI, Anthropic, Groq, or Vertex AI providers. Compare provider performance directly.

  • Seamless Fallback & Retry Logging: Portkey's automatic retries and provider fallbacks appear in Future AGI traces with exact timing and triggers, providing complete application reliability visibility.
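To make the fallback and retry behavior concrete, here is a minimal sketch of a gateway config passed to the Portkey client. It follows the general shape of Portkey's config format, and the virtual key names are placeholders for illustration; treat it as a sketch rather than a definitive reference.

from portkey_ai import Portkey

# Minimal sketch: fall back across providers, retrying each attempt.
# The virtual key names below are placeholders for illustration.
config = {
    "strategy": {"mode": "fallback"},  # try targets in order until one succeeds
    "targets": [
        {"virtual_key": "openai-virtual-key"},     # primary provider
        {"virtual_key": "anthropic-virtual-key"},  # fallback provider
    ],
    "retry": {"attempts": 3},  # retry budget before moving on
}

client = Portkey(config=config)

Once the instrumentor described in section 2.5 is enabled, any retry or fallback triggered by a config like this shows up in the corresponding Future AGI trace.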

2.5 Implementation and Setup

The integration uses the standard OpenTelemetry SDK and a lightweight Portkey instrumentation library, keeping setup simple.

  1. Portkey Manages the Call: Applications use the Portkey client for LLM requests, accessing virtual keys, caching, and retry features.

  2. The Instrumentor Listens: The traceai-portkey library automatically detects these calls.

  3. Future AGI Traces & Evaluates: The instrumentor packages request, response, and Portkey metadata into traces sent to Future AGI's evaluation engine for quality analysis.

Configuration requires only a few lines of code at application startup without modifying existing business logic.


  3. Quick Setup Guide

Ready to see it in action? The integration can be configured and running within minutes.

Step 1: Get Your Keys: Sign up for Future AGI and Portkey if you haven't already, then put the API keys from each dashboard into a .env file in your project root.
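For reference, a minimal .env file for this setup might look like the sketch below. The variable names here are assumptions for illustration; check each platform's documentation for the exact names it expects.

# Hypothetical .env file; variable names are illustrative assumptions
FI_API_KEY=your-future-agi-api-key
FI_SECRET_KEY=your-future-agi-secret-key
PORTKEY_API_KEY=your-portkey-api-key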

Step 2: Install the Packages:

pip install portkey-ai fi-instrumentation traceai-portkey python-dotenv

Step 3: Configure and Run: Add the following snippet to the start of your Python application.

from dotenv import load_dotenv
from portkey_ai import Portkey
from traceai_portkey import PortkeyInstrumentor
from fi_instrumentation import register
from fi_instrumentation.fi_types import ProjectType, EvalTag, EvalTagType, EvalSpanKind, EvalName

# Load API keys from .env file
load_dotenv()

# --- Configure Future AGI tracing once ---
tracer_provider = register(
    project_type=ProjectType.OBSERVE,
    project_name="My-AI-App",
    eval_tags=[
        # Automatically score every LLM span for context adherence
        EvalTag(
            type=EvalTagType.OBSERVATION_SPAN,
            value=EvalSpanKind.LLM,
            eval_name=EvalName.CONTEXT_ADHERENCE,
            custom_eval_name="Response_Quality"
        )
    ]
)

# Instrument the Portkey client
PortkeyInstrumentor().instrument(tracer_provider=tracer_provider)

# --- Your application logic remains the same! ---
# The Portkey API key is picked up from the PORTKEY_API_KEY environment variable.
client = Portkey(virtual_key="your-portkey-virtual-key")

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a 6-word story about a robot who discovers music."}]
)

print(completion.choices[0].message.content)

That’s it! Your application is now fully instrumented. Head to your Future AGI and Portkey dashboards to see the data flow in.

Image 1: A sample Future AGI dashboard showing incoming traces with automatic evaluations of the AI-generated outputs

The Portkey dashboard displays operational logs for all API calls, providing unified logs across different providers with cost and latency metrics.

Image 2: Portkey dashboard for monitoring operational metrics like latency, costs, and tokens used

By combining Portkey’s Gateway API with Future AGI’s observability and evaluation stack, you can effortlessly build and monitor complex agentic workflows with full visibility, control, and performance insights.

Image 3: A complex workflow for an e-commerce assistant using Portkey's LLM Gateway

Ready to dive deeper?

Complete documentation is available in our docs.

This integration advances production-grade AI application development by combining Portkey's operational control with Future AGI's quality insights. The unified visibility enables confident AI development and deployment.


Conclusion

Managing multiple LLM providers without unified observability creates operational blind spots that hinder AI application reliability. The Future AGI x Portkey integration addresses this by consolidating operational metrics and quality evaluation into a single trace.

This unified approach transforms fragmented AI monitoring into a cohesive system, enabling teams to build production-ready applications with complete visibility. The integration establishes a foundation for reliable AI development with actionable insights across the entire request lifecycle.

FAQs

Are accounts required for both Future AGI and Portkey to access this integration?

Does the integration itself incur additional charges?

Does this integration introduce notable latency or performance impact on applications?

Which programming languages and frameworks does this support?

Kartik is an AI researcher specializing in machine learning, NLP, and computer vision, with work recognized in IEEE TALE 2024 and T4E 2024. He focuses on efficient deep learning models and predictive intelligence, with research spanning speaker diarization, multimodal learning, and sentiment analysis.

