Trace AWS Bedrock
Auto-instrument AWS Bedrock with traceAI in under 3 minutes. Every LLM call, tool use, retrieval, and chain step becomes an OpenTelemetry span you can search, replay, and debug.
Recipes for AWS Bedrock
Prerequisites
Before you start
- A working AWS Bedrock app, local or already in production.
- A free Future AGI account with `FI_API_KEY` and `FI_SECRET_KEY`.
- Python 3.9+ / Node 18+ / Java 17+ depending on which SDK you're installing.
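The two keys should be exported before your app starts. A minimal sketch, with placeholder values (substitute the real keys from your Future AGI account):

```shell
# Placeholder values shown; copy the real keys from your Future AGI account.
export FI_API_KEY="fi-xxxxxxxx"
export FI_SECRET_KEY="fi-secret-xxxxxxxx"
```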
Install
```shell
pip install traceAI-bedrock
```

Trace recipe
```python
from fi_instrumentation import register
from fi_instrumentation.fi_types import ProjectType
from traceai_bedrock import BedrockInstrumentor

trace_provider = register(
    project_type=ProjectType.OBSERVE,
    project_name="BEDROCK_APP",
)
BedrockInstrumentor().instrument(tracer_provider=trace_provider)

# Your existing AWS Bedrock code runs unchanged from here.
# Every call is now an OpenTelemetry span in Future AGI.
```

What Future AGI captures
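Once the instrumentor is active, ordinary Bedrock calls are traced with no further changes. A hedged sketch of such a call follows; the model ID, region, and payload shape are assumptions for Anthropic Claude on Bedrock, and the actual `boto3` call is commented out because it needs live AWS credentials:

```python
import json

# Assumed payload shape for Anthropic Claude on Bedrock; adjust to the
# model family you actually call.
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Summarise our Q3 numbers."}],
})

# With BedrockInstrumentor active, this call would be captured as a span:
# import boto3
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# response = client.invoke_model(
#     modelId="anthropic.claude-3-haiku-20240307-v1:0",
#     body=body,
# )
```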
Trace fields you'll see in the dashboard
- Spans for every AWS Bedrock call: input, output, latency, tokens, cost, model name, errors
- Trace tree across LLM, tool, retrieval, embedding, and chain spans
- Custom attributes via `using_attributes` (session_id, user_id, prompt_template, tags, custom dicts)
- Streaming-safe: partial chunks aggregated into a single span
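The streaming behaviour above can be pictured with a plain-Python sketch. This is illustrative only, not the instrumentor's actual code: chunks yielded by the SDK iterator are joined and recorded as one output on a single span.

```python
def aggregate_stream(chunks):
    """Join streamed text chunks into the single output a span would record."""
    parts = []
    for chunk in chunks:  # in practice: the official SDK's streaming iterator
        parts.append(chunk)
    return "".join(parts)

# Three partial chunks become one span-level output string.
stream = iter(["The answer", " is", " 42."])
span_output = aggregate_stream(stream)
# span_output == "The answer is 42."
```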
Common gotchas
Read these before you ship
1. Set `FI_API_KEY` and `FI_SECRET_KEY` in your environment before calling `register()`; otherwise it falls back silently.
2. Async frameworks: instantiate the instrumentor *before* you create the client, not after.
3. Streaming responses are aggregated into a single span only when you consume them through the official SDK iterator.
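A small stdlib guard catches gotcha 01 early instead of relying on the silent fallback. The helper name is hypothetical; call it at startup, before `register()`:

```python
import os

REQUIRED_KEYS = ("FI_API_KEY", "FI_SECRET_KEY")

def check_fi_env():
    """Fail fast if the Future AGI keys are missing from the environment."""
    missing = [name for name in REQUIRED_KEYS if not os.environ.get(name)]
    if missing:
        raise RuntimeError(
            f"Set {', '.join(missing)} before calling register()"
        )

# Usage, at the top of your entrypoint:
# check_fi_env()
# trace_provider = register(...)
```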
Next: chain it with the other recipes
Trace is the first step. Most teams add an evaluator in the same week and start optimising or simulating once they have a baseline. Each recipe takes minutes to wire up.
More integrations like AWS Bedrock
Vertex AI
Google Cloud's hosted Gemini, Anthropic, and Llama endpoints.
Azure OpenAI
Microsoft Azure's regulated OpenAI deployments and assistants.
IBM watsonx
IBM watsonx.ai foundation models for regulated workloads.
Replicate
Run open-source AI models on Replicate's serverless GPUs.