See every step
your agent takes

End-to-end request tracing for AI agents. Follow every request through retrieval, generation, tool calls, and guards - with timing, tokens, and cost at every step. Powered by traceAI, our open-source library with 30+ framework integrations built on OpenTelemetry.

QA-Chatbot · trace-8f0a3b91
OK · 2.34s · 4,832 tokens · $0.14
Waterfall
QA-Chatbot 2340ms
├ handle-message 2100ms
├ retrieve-context 340ms
├ search_docs 230ms
├ ai.streamText 1420ms
└ guard-check 45ms
Recent Traces
Trace           Trace ID        Model          Latency  Tokens  Cost   Eval
QA-Chatbot      8f0a3b91-4c...  gpt-4o         2.34s    4,832   $0.14  0.92
VectorSearch    a2c8f914-7d...  claude-sonnet  1.82s    3,210   $0.09  0.88
SQLQueryEngine  d4f2e831-9a...  gpt-4o-mini    0.94s    1,847   $0.03  0.41
DocRetrieve     b7e1c042-3f...  gpt-4o         3.12s    6,421   $0.19  0.73
Core Features

Full observability
for your AI pipeline

Trace Tree - QA-Chatbot
QA-Chatbot 2340ms
handle-message chain 2100ms
retrieve-context retriever 340ms
search_docs tool 230ms
ai.streamText llm 1420ms
guard-check guard 45ms

Every request produces a trace tree with full span hierarchy - from the root agent call through LLM generation, tool invocations, retrieval, chain steps, and guard checks. Each span captures input, output, latency, token counts (prompt + completion), cost, model name, provider, status, and custom attributes.
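The span record described above can be sketched as a small data structure. This is a hypothetical, in-memory illustration of the kind of tree each trace produces, not the product's actual schema; the prompt/completion split for the LLM span is illustrative (only the 3,847 total appears in the example trace).

```python
from dataclasses import dataclass, field

@dataclass
class Span:
    """Hypothetical sketch of one span's captured attributes."""
    name: str
    kind: str                  # agent | chain | retriever | tool | llm | guard
    latency_ms: int
    prompt_tokens: int = 0
    completion_tokens: int = 0
    cost_usd: float = 0.0
    model: str = ""
    status: str = "OK"
    children: list["Span"] = field(default_factory=list)

    def total_tokens(self) -> int:
        """Prompt + completion tokens for this span and all descendants."""
        return (self.prompt_tokens + self.completion_tokens
                + sum(c.total_tokens() for c in self.children))

# Rebuild the QA-Chatbot trace tree from the example above.
root = Span("QA-Chatbot", "agent", 2340, children=[
    Span("handle-message", "chain", 2100, children=[
        Span("retrieve-context", "retriever", 340, children=[
            Span("search_docs", "tool", 230)]),
        Span("ai.streamText", "llm", 1420,
             prompt_tokens=3000, completion_tokens=847,  # illustrative split
             cost_usd=0.14, model="gpt-4o"),
        Span("guard-check", "guard", 45)]),
])

print(root.total_tokens())  # → 3847, rolled up from the LLM span
```

Rolling token counts and cost up the tree is how a per-trace total like "4,832 tokens · $0.14" gets computed from individual spans.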

See tracing in action

Visualize execution as a nested waterfall timeline showing parallel and sequential operations. Click any span to see its full detail - input/output payloads, token breakdown, latency, evaluation scores, and annotations. Trace trees show the parent-child relationship between agent, LLM, chain, tool, and embedding spans.

Explore timeline view

Filter traces by trace name, trace ID, user, session, model, provider, status, span kind (agent, LLM, tool, chain, embedding), latency range, token count, cost, tags, prompt name/version, or any custom span attribute. Combine multiple filters with AND logic. Results update in real-time.
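AND-combined filtering amounts to predicate composition. A minimal in-memory sketch, using hypothetical trace dicts whose fields mirror the Recent Traces table (the real product applies these filters server-side):

```python
# Hypothetical trace records, mirroring the Recent Traces table above.
traces = [
    {"name": "QA-Chatbot",     "model": "gpt-4o",        "latency_s": 2.34, "cost": 0.14, "eval": 0.92},
    {"name": "VectorSearch",   "model": "claude-sonnet", "latency_s": 1.82, "cost": 0.09, "eval": 0.88},
    {"name": "SQLQueryEngine", "model": "gpt-4o-mini",   "latency_s": 0.94, "cost": 0.03, "eval": 0.41},
    {"name": "DocRetrieve",    "model": "gpt-4o",        "latency_s": 3.12, "cost": 0.19, "eval": 0.73},
]

def matches(trace, filters):
    """AND logic: the trace must satisfy every predicate."""
    return all(pred(trace) for pred in filters)

# Example: gpt-4o traces slower than 2 seconds.
filters = [
    lambda t: t["model"] == "gpt-4o",
    lambda t: t["latency_s"] > 2.0,
]
hits = [t["name"] for t in traces if matches(t, filters)]
print(hits)  # → ['QA-Chatbot', 'DocRetrieve']
```

Each additional filter narrows the result set, which is why combining model, latency, and cost filters quickly isolates the traces worth inspecting.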

Learn about search

traceAI is our open-source instrumentation library built on OpenTelemetry. Install a framework-specific package (pip install traceAI-openai, traceAI-langchain, traceAI-anthropic...), call .instrument(), and every LLM call is traced automatically. 30+ integrations - OpenAI, Anthropic, Bedrock, Vertex AI, LangChain, LlamaIndex, CrewAI, AutoGen, DSPy, Haystack, MCP, Pipecat, VAPI, LiveKit, and more. Python and TypeScript. Vendor-neutral - works with any OTel-compatible backend.

View on GitHub
How It Works

From blind to
full visibility in minutes

Install traceAI Python
pip install traceAI-openai

from fi_instrumentation import register
from traceai_openai import OpenAIInstrumentor

provider = register()
OpenAIInstrumentor().instrument(tracer_provider=provider)

Install traceAI for your framework

pip install traceAI-openai (or traceAI-langchain, traceAI-anthropic, traceAI-crewai...). Call .instrument() and every LLM call, tool use, retrieval, and chain step is auto-traced. 30+ framework packages. Built on OpenTelemetry.

Live Traces Streaming
QA-Chatbot 2.3s
VectorSearch 1.8s
SQLQueryEngine 0.9s

Traces flow in real-time

Every request produces a trace with full span hierarchy, timing, tokens, cost, and input/output at each step. Search by any attribute - user, session, model, status, latency, or custom tags.

Span Analysis ai.streamText
Latency 1,420ms
Tokens 3,847
Eval Score 0.92

Debug and optimize

Use the waterfall timeline to find bottlenecks. Click any span for full detail. Attach evaluation scores to traces. Feed insights into experiments to continuously improve your agent.
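Finding the bottleneck is, at its simplest, picking the longest span. A hypothetical sketch over the flattened spans of the example trace (the waterfall view shows the same thing visually):

```python
# (name, latency_ms) pairs from the QA-Chatbot waterfall above.
spans = [
    ("handle-message",   2100),  # root chain: wraps the work below
    ("retrieve-context",  340),
    ("search_docs",       230),
    ("ai.streamText",    1420),
    ("guard-check",        45),
]

# Skip the root chain span (its latency is the sum of its children's work)
# and rank the leaf spans by latency to find the first optimization target.
name, ms = max(spans[1:], key=lambda s: s[1])
print(name, ms)  # → ai.streamText 1420
```

Here the LLM call dominates the trace, so prompt size, model choice, or streaming setup is where optimization effort pays off first.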

Powering teams from
prototype to production

From ambitious startups to global enterprises, teams trust Future AGI to ship AI agents confidently.