Trace, evaluate, optimise, and simulate any AI stack
Future AGI ships open-source traceAI instrumentation for every framework, model provider, vector database, and voice stack you'd use in production. One .instrument() call and every LLM call, tool use, retrieval, and chain step becomes an OpenTelemetry span, with evaluators, optimisers, and simulations attached to the same trace.
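To make that one-call model concrete, here is a minimal sketch of what an instrumentor does mechanically: it patches the client method so every call is transparently wrapped in a span. The tracer and client below are toys invented for illustration; the real traceAI packages emit proper OpenTelemetry spans rather than dicts.

```python
# Toy sketch of one-call instrumentation: patch a client method so each
# call is recorded as a "span". Illustrative only -- real instrumentors
# emit OpenTelemetry spans and export them via OTLP.
import functools
import time

SPANS = []  # collected toy spans; a real exporter would ship these out


def instrument(cls, method_name, span_name):
    """Wrap cls.method_name so every call appends a timing record."""
    original = getattr(cls, method_name)

    @functools.wraps(original)
    def wrapper(self, *args, **kwargs):
        start = time.perf_counter()
        result = original(self, *args, **kwargs)
        SPANS.append({"name": span_name,
                      "duration_s": time.perf_counter() - start})
        return result

    setattr(cls, method_name, wrapper)


class FakeLLMClient:  # stand-in for a provider SDK client
    def complete(self, prompt):
        return f"echo: {prompt}"


instrument(FakeLLMClient, "complete", "llm.completion")
print(FakeLLMClient().complete("hi"))  # the call itself is unchanged
print(SPANS[0]["name"])                # llm.completion
```

The caller's code never changes; only the one instrument() call at startup does, which is what lets tracing cover chains, tools, and retrieval steps without touching application logic.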
- Total: 79 integrations
- Python: 69 packages
- TypeScript: 62 packages
- Java: 23 packages
▸ Most popular
Featured integrations
The frameworks most teams instrument first.
OpenAI
GPT-4o, GPT-5, o-series, and the OpenAI Responses API.
Anthropic
Claude Opus, Sonnet, and Haiku via the Anthropic Messages API.
Google GenAI
Gemini 2.x via the Google GenAI SDK (Vertex + AI Studio).
Vertex AI
Google Cloud's hosted Gemini, Anthropic, and Llama endpoints.
AWS Bedrock
Amazon Bedrock invocation across Claude, Llama, Mistral, Nova, and Titan.
Azure OpenAI
Microsoft Azure's regulated OpenAI deployments and assistants.
LangChain
Chains, agents, and LCEL pipelines with auto-traced spans for every step.
LlamaIndex
Data ingestion, indexing, retrievers, and query engines for RAG.
CrewAI
Role-based multi-agent crews with task plans and tool use.
OpenAI Agents SDK
OpenAI's official agent runtime with handoffs, guardrails, and tracing.
Model Context Protocol
Anthropic's MCP for tool, resource, and prompt servers.
LiveKit Agents
WebRTC voice agents with STT, LLM, and TTS pipelines.
LangGraph
Stateful multi-actor orchestration on top of LangChain.
▸ LLM Providers
Frontier and open-weight model APIs you can call directly.
OpenAI
GPT-4o, GPT-5, o-series, and the OpenAI Responses API.
Anthropic
Claude Opus, Sonnet, and Haiku via the Anthropic Messages API.
Google GenAI
Gemini 2.x via the Google GenAI SDK (Vertex + AI Studio).
Cohere
Command, Embed, and Rerank via the Cohere API.
Mistral
Mistral Large, Codestral, and open-weight Mistral / Mixtral.
Groq
LPU inference for Llama, Mixtral, and Qwen at sub-second latency.
Together AI
Hosted open-weight LLMs with serverless and dedicated endpoints.
Fireworks AI
FireFunction, Llama, and Mixtral with function-calling.
DeepSeek
DeepSeek V3 and R1 reasoning models via the OpenAI-compatible API.
xAI Grok
Grok 2/3 models via the xAI OpenAI-compatible API.
Cerebras
Wafer-scale inference for Llama and Qwen models.
Ollama
Local Llama, Mistral, and Qwen via the Ollama runtime.
Hugging Face
Hugging Face Inference and Inference Endpoints across thousands of models.
vLLM
High-throughput open-source LLM serving with PagedAttention.
Perplexity
Perplexity Sonar online + offline models with citations.
DeepInfra
Serverless inference for Llama, Mistral, and Qwen at GPU prices.
Anyscale
Anyscale Endpoints for hosted open-source models.
Hyperbolic
Decentralized GPU inference for Llama, Qwen, and DeepSeek.
Novita AI
Serverless inference for Llama, Mistral, and Qwen models.
Nebius AI Studio
European GPU inference platform for open-source LLMs.
SambaNova
Reconfigurable Dataflow Units for high-throughput Llama inference.
Lambda Inference
Lambda's serverless inference API for open-weight LLMs.
GitHub Models
Free-tier model playground from GitHub for prototyping.
Moonshot AI (Kimi)
Moonshot Kimi K1.5 / K2 long-context reasoning models.
ZhipuAI / Z.AI
GLM-4 frontier models from Zhipu / Z.AI.
MiniMax
MiniMax abab and Hailuo models for chat and reasoning.
Voyage AI
Domain-specific embeddings and rerankers from Voyage AI.
Jina AI
Jina embeddings, reranker, and reader API.
▸ Agent Frameworks
Multi-step orchestration libraries for tool-using agents.
LangChain
Chains, agents, and LCEL pipelines with auto-traced spans for every step.
LangChain4j
Java-native chains, agents, and RAG with LangChain4j.
DSPy
Declarative LLM programs with optimisable signatures and modules.
CrewAI
Role-based multi-agent crews with task plans and tool use.
AutoGen
Microsoft's multi-agent conversation framework.
Agno
Lightweight Python agent framework with built-in memory and tools.
smolagents
Hugging Face's tiny agent framework with code agents and tool calling.
Pydantic AI
Type-safe agents with Pydantic models for inputs, outputs, and tools.
OpenAI Agents SDK
OpenAI's official agent runtime with handoffs, guardrails, and tracing.
Google ADK
Google Agent Development Kit for Vertex-hosted multi-agent systems.
Claude Agent SDK
Anthropic's Agent SDK for Claude with code execution and computer use.
Strands Agents
AWS Strands SDK for Bedrock-backed agents and workflows.
BeeAI
IBM BeeAI framework for tool-using agents with streaming and memory.
Instructor
Structured outputs from LLMs using Pydantic and Zod schemas.
Mastra
TypeScript-first agent framework with workflows, memory, and RAG.
Vercel AI SDK
Vercel AI SDK for streaming chat, generation, and tools in Next.js.
Spring AI
Spring framework integration for chat models, embeddings, and RAG in Java.
Spring Boot Starter
Auto-configuration starter for tracing Spring Boot AI apps in seconds.
Semantic Kernel
Microsoft's SDK for plugins, planners, and skill orchestration.
LangGraph
Stateful multi-actor orchestration on top of LangChain.
▸ RAG Frameworks
Retrieval-augmented generation pipelines and indexing tools.
▸ Vector Databases
Storage backends for embeddings, retrieval, and hybrid search.
Pinecone
Managed vector database with hybrid search and metadata filtering.
Weaviate
Open-source vector database with built-in vectorizers and modules.
Qdrant
Vector search engine with payload filtering and quantisation.
Chroma
Embeddings database for AI applications with first-class collections.
Milvus
Distributed vector database for billion-scale similarity search.
LanceDB
Embedded multimodal vector database built on Apache Arrow.
pgvector
Postgres extension for vector similarity search.
Redis Vector
RediSearch with vector similarity for fast in-memory retrieval.
MongoDB Atlas Vector
MongoDB Atlas Vector Search for hybrid retrieval at scale.
Elasticsearch
Elasticsearch dense_vector search for hybrid lexical + vector retrieval.
Azure AI Search
Azure AI Search with vector and hybrid retrieval.
▸ Voice & Realtime
Realtime streaming, telephony, and voice-agent stacks.
LiveKit Agents
WebRTC voice agents with STT, LLM, and TTS pipelines.
Pipecat
Open-source voice and multimodal AI orchestration framework.
Vapi
Vapi voice agents — capture STT, LLM, and TTS spans end-to-end.
Retell
Retell voice agents — call analytics + per-turn evals.
▸ Protocols
Cross-vendor agent and tool protocols (MCP, A2A, OpenAI Agents).
Model Context Protocol
Anthropic's MCP for tool, resource, and prompt servers.
A2A (Agent-to-Agent)
Google's Agent-to-Agent protocol for cross-vendor agent interop.
n8n
Future AGI nodes for n8n workflows — eval, log, and guard from any node.
▸ Gateways & Routers
Model routers, proxies, and unified APIs.
▸ Guardrails & Safety
Input/output validation libraries that wrap LLM calls.
▸ Cloud Platforms
Hosted ML infrastructure with private model endpoints.
Vertex AI
Google Cloud's hosted Gemini, Anthropic, and Llama endpoints.
AWS Bedrock
Amazon Bedrock invocation across Claude, Llama, Mistral, Nova, and Titan.
Azure OpenAI
Microsoft Azure's regulated OpenAI deployments and assistants.
IBM watsonx
IBM watsonx.ai foundation models for regulated workloads.
Replicate
Run open-source AI models on Replicate's serverless GPUs.
Nvidia NIM
Nvidia Inference Microservices on enterprise GPUs.
Cloudflare Workers AI
Edge LLM inference on Cloudflare's global GPU network.
▸ On the roadmap
Voting open
Integrations our community is asking for. Vote with a +1 on the GitHub issue and we'll prioritise the highest-voted ones in the next sprint.
ElevenLabs
Voice TTS · also shipped by LiteLLM
Deepgram
Voice STT · also shipped by LiteLLM
AssemblyAI
Voice STT · also shipped by —
Stability AI
Image generation · also shipped by Portkey, LiteLLM
Black Forest Labs (Flux)
Image generation · also shipped by LiteLLM
Databricks Foundation Model API
Cloud platform · also shipped by LiteLLM
Snowflake Cortex
Cloud platform · also shipped by Portkey
AI21
LLM provider · also shipped by Portkey, LiteLLM
Aleph Alpha
LLM provider · also shipped by Portkey, Traceloop
Marqo
Vector DB · also shipped by Traceloop
Dify
No-code workflow · also shipped by Helicone, Opik
Langflow
No-code workflow · also shipped by Opik
Flowise
No-code workflow · also shipped by Opik
Microsoft Agent Framework
Agent framework · also shipped by Opik
Temporal AI
Orchestration · also shipped by Opik
WRITER
LLM provider · also shipped by Traceloop
Don't see what you need? Open a new issue ▸
▸ Mission control
Don't see your stack?
traceAI is OpenTelemetry-native, so any OTel exporter works today. For first-class support, vote on an open GitHub issue or open a new one — most integrations land in under a week.
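"Any OTel exporter works" comes down to one small interface: a span exporter receives batches of ended spans and ships them somewhere. The class below mirrors the shape of opentelemetry-sdk's SpanExporter without taking the dependency; with the real SDK installed you would subclass opentelemetry.sdk.trace.export.SpanExporter instead, and the string return value stands in for SpanExportResult.SUCCESS.

```python
# Sketch of the exporter contract that makes traceAI backend-agnostic.
# Mirrors opentelemetry-sdk's SpanExporter interface shape; illustrative,
# not the real SDK class.

class ListExporter:
    """Collects finished spans; a real exporter would POST them to a backend."""

    def __init__(self):
        self.exported = []

    def export(self, spans):
        # Called by the SDK's span processor with batches of ended spans.
        self.exported.extend(spans)
        return "SUCCESS"  # the real SDK returns SpanExportResult.SUCCESS

    def shutdown(self):
        # A real exporter would flush buffers and close connections here.
        pass


exporter = ListExporter()
exporter.export([{"name": "llm.completion", "duration_ms": 840}])
print(len(exporter.exported))  # 1
```

Because the instrumentation side only ever talks to this interface, swapping backends is a configuration change, not a code change.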