Trace · IBM watsonx

Auto-instrument IBM watsonx with traceAI in under 3 minutes. Every LLM call, tool use, retrieval, and chain step becomes an OpenTelemetry span you can search, replay, and debug.

Prerequisites

Before you start

  • A working IBM watsonx app — local or already in production.
  • A free Future AGI account with FI_API_KEY and FI_SECRET_KEY.
  • Python 3.9+ / Node 18+ / Java 17+ depending on which SDK you're installing.

Install

<dependency>
  <groupId>ai.futureagi</groupId>
  <artifactId>traceai-java-watsonx</artifactId>
  <version>LATEST</version>
</dependency>
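If you build with Gradle rather than Maven, the same coordinates translate to the following (a sketch; substitute a concrete release for the placeholder):

```groovy
dependencies {
    // Same groupId/artifactId as the Maven snippet above;
    // replace <version> with a pinned release.
    implementation 'ai.futureagi:traceai-java-watsonx:<version>'
}
```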

Trace recipe

import ai.futureagi.fi.instrumentation.TraceProvider;
import ai.futureagi.traceai.watsonx.WatsonxInstrumentor;

TraceProvider provider = TraceProvider.builder()
    .projectName("watsonx_app")
    .projectType("observe")
    .build();

new WatsonxInstrumentor().instrument(provider);

// Your existing IBM watsonx code runs unchanged.
// Every call is now an OpenTelemetry span in Future AGI.
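The provider reads your Future AGI credentials from the environment, so export them before the JVM starts. A minimal setup, with placeholder values:

```shell
# Placeholder values — substitute the keys from your Future AGI account
export FI_API_KEY="your-api-key"
export FI_SECRET_KEY="your-secret-key"
```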

What Future AGI captures

Trace fields you'll see in the dashboard

  • Spans for every IBM watsonx call: input, output, latency, tokens, cost, model name, errors

  • Trace tree across LLM, tool, retrieval, embedding, and chain spans

  • Custom attributes via `using_attributes` (session_id, user_id, prompt_template, tags, custom dicts)

  • Streaming-safe — partial chunks aggregated into a single span
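The `using_attributes` helper appears in the Python SDK; in Java, a rough equivalent (a sketch, assuming the custom fields should land on the active OpenTelemetry span around your watsonx calls, with illustrative attribute keys) is:

```java
import io.opentelemetry.api.trace.Span;

public class SessionTagging {
    // Attach custom metadata to whatever span is currently active,
    // e.g. inside a request handler wrapping an instrumented watsonx call.
    static void tagCurrentSpan(String sessionId, String userId) {
        Span span = Span.current();
        span.setAttribute("session_id", sessionId);
        span.setAttribute("user_id", userId);
    }
}
```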

Common gotchas

Read these before you ship

  1. Set `FI_API_KEY` and `FI_SECRET_KEY` in env before calling `register()` (or building the `TraceProvider`); otherwise the SDK falls back silently and nothing is exported.

  2. Async frameworks: instantiate the instrumentor *before* you create the client, not after.

  3. Streaming responses are aggregated into a single span only when you use the official SDK iterator.

Next: chain it with the other recipes

Trace is the first step. Most teams add an evaluator the same week and start optimising or simulating once they have a baseline. Each recipe takes minutes to wire up.