Baai Bge M3

Novita AI embedding

Baai Bge M3 is a Novita AI embedding model. It supports an 8,192-token context window with up to 96,000 output tokens. Input is priced at $0.0100/M tokens and output at $0.0100/M tokens. Route Baai Bge M3 via Future AGI's Agent Command Center for unified observability, caching, and 15 routing strategies, including cost-optimized fallback.

Pricing source: litellm · Last verified: May 12, 2026 · View source ↗
Cost calculator

Estimate Baai Bge M3 spend

Pick a workload, fine-tune the sliders, and see the monthly bill.

Default workload: ~3K in / ~400 out · 5K req/day (3,000 input tokens and 400 output tokens per request, 5,000 requests/day)

  • Per request: $0.000034 (in $0.000030 · out $0.000004)
  • Per day: $0.1700 (5,000 requests)
  • Per month: $5.17 (152,188 requests)

Estimate uses $0.0100/M input · $0.0100/M output. Provider pricing changes. Production costs vary with retries, streaming overhead, and tool-call rounds.
Want this for free? Cache + route via Agent Command Center — first 100K requests and 100K cache hits free every month.
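
The estimate reduces to simple arithmetic; here is a quick sanity check in Python (the 30.4375 days-per-month average is an assumption about how the calculator annualizes, inferred from the 152,188 requests/month figure):

# Reproduce the calculator's numbers from the flat $0.0100/M rate.
PRICE_PER_M = 0.01                  # USD per 1M tokens, input and output alike
in_tokens, out_tokens = 3_000, 400  # tokens per request
requests_per_day = 5_000
days_per_month = 30.4375            # average month length; assumption

per_request = (in_tokens + out_tokens) * PRICE_PER_M / 1_000_000
per_day = per_request * requests_per_day
per_month = per_day * days_per_month

print(f"per request: ${per_request:.6f}")  # $0.000034
print(f"per day:     ${per_day:.4f}")      # $0.1700
print(f"per month:   ${per_month:.2f}")    # $5.17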

Pricing

Per-token rates, expressed in USD per 1M tokens. Verified May 12, 2026.

Input $0.0100/M
Output $0.0100/M

Limits

Context window
8,192 tokens
Max input
8,192 tokens
Max output
96,000 tokens
Modalities
embedding, text

Capabilities

  • Function calling — not advertised
  • Parallel tool calls — not advertised
  • Vision input — not advertised
  • Audio input — not advertised
  • Audio output — not advertised
  • PDF input — not advertised
  • Streaming ✓ supported
  • Structured output — not advertised
  • Prompt caching — not advertised
  • Reasoning — not advertised

Where it's strong

  • Long-form generation: 96,000-token max output (top 0% of peers)

Watch out for

  • Small context window (under 16K tokens); see the chunking sketch below
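
The usual mitigation for the 8,192-token ceiling is to chunk long documents before embedding. A minimal sketch, using whitespace splitting as a crude token proxy (bge-m3's actual tokenizer counts differently, so the default leaves headroom):

# Rough chunker: keep each piece safely under the 8,192-token context window.
# Whitespace words approximate tokens here; use a real tokenizer for accuracy.
def chunk_text(text: str, max_tokens: int = 7_000, overlap: int = 200) -> list[str]:
    words = text.split()
    step = max_tokens - overlap
    return [
        " ".join(words[start:start + max_tokens])
        for start in range(0, len(words), step)
    ]

sample = "word " * 20_000  # ~20K whitespace "tokens", well over the window
print([len(c.split()) for c in chunk_text(sample)])  # [7000, 7000, 6400]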

Benchmarks pending

We haven't logged public benchmark scores for Baai Bge M3 yet. Have one to contribute? Submit a source — citations help us prioritise.

Try it

Call Baai Bge M3 via Agent Command Center

One OpenAI-compatible endpoint. Routing, fallback, semantic caching, guardrails, and cost tracking come along for the ride. First 100K requests + 100K cache hits free every month.

SDK
Native Future AGI client (agentcc / @agentcc/client). Per-call metadata — provider, cost, latency, cache hit, request id — is returned on x-agentcc-* response headers, so any HTTP client can read it.
# Baai Bge M3 via the Agent Command Center Python SDK
# pip install agentcc
import os
from agentcc import AgentCC

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],   # from app.futureagi.com → Settings → API Keys
    base_url="https://gateway.futureagi.com/v1",
)

resp = client.chat.completions.create(
    model="novita-ai/baai-bge-m3",
    messages=[{"role": "user", "content": "Hello, Baai Bge M3!"}],
)

print(resp.choices[0].message.content)
print(f"Tokens: {resp.usage.total_tokens}")

# Per-call gateway metadata is returned on x-agentcc-* response headers.
# When you need it programmatically, use .with_raw_response to get them:
raw = client.chat.completions.with_raw_response.create(
    model="novita-ai/baai-bge-m3",
    messages=[{"role": "user", "content": "Same call, but I want the headers."}],
)
print("Provider:", raw.headers.get("x-agentcc-provider"))
print("Latency:", raw.headers.get("x-agentcc-latency-ms"), "ms")
print("Cost:   ", raw.headers.get("x-agentcc-cost"), "USD")
print("Cache:  ", raw.headers.get("x-agentcc-cache"))
Set AGENTCC_API_KEY with a key from app.futureagi.com. Gateway docs ↗
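
Since Baai Bge M3 is an embedding model, you will usually want the embeddings endpoint rather than chat completions. A minimal sketch using the stock OpenAI Python client against the same gateway; that the gateway proxies /v1/embeddings for this model is an assumption based on the OpenAI-compatible claim above, so verify against the gateway docs:

# Hypothetical embeddings call through the same OpenAI-compatible gateway.
# pip install openai
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["AGENTCC_API_KEY"],
    base_url="https://gateway.futureagi.com/v1",
)

resp = client.embeddings.create(
    model="novita-ai/baai-bge-m3",
    input=["Hello, Baai Bge M3!", "A second passage to embed."],
)

print(len(resp.data))               # number of vectors returned (2)
print(len(resp.data[0].embedding))  # embedding dimensionality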

Compare with similar models

Baai Bge M3 doesn't have a public Arena ELO score yet, so we group by provider only — quality-tier comparisons need a benchmark.

FAQ

How much does Baai Bge M3 cost?

Input is priced at $0.0100 per 1M tokens and output at $0.0100 per 1M tokens (Novita AI, last verified May 12, 2026).

What is the context window of Baai Bge M3?

Baai Bge M3 supports an 8,192-token context window with up to 96,000 output tokens.

Does Baai Bge M3 support function calling?

Baai Bge M3 does not currently advertise function-calling support. For agentic workloads, prefer a tool-calling-capable model and route via Agent Command Center for fallback.

Is Baai Bge M3 good for production?

Baai Bge M3 is well-suited for long-form generation: 96,000-token max output (top 0% of peers). Consider alternatives if its small context window (under 16K tokens) is a limitation for your workload.

How can I route to Baai Bge M3 with fallback?

Use Agent Command Center: a single OpenAI-compatible endpoint that supports cost-optimized routing, latency-aware retries, model fallback, and shadow traffic. Configure once, swap models without app changes.
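
Gateway-side fallback requires no client changes, but a belt-and-braces guard in application code is easy to add. A minimal client-side sketch; the backup model id is a hypothetical placeholder, not a recommendation:

# Client-side fallback: try each model in order until one succeeds.
# "example-provider/backup-embedding-model" is a hypothetical placeholder.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["AGENTCC_API_KEY"],
    base_url="https://gateway.futureagi.com/v1",
)

def embed_with_fallback(texts: list[str], models: list[str]):
    last_err: Exception | None = None
    for model in models:
        try:
            return client.embeddings.create(model=model, input=texts)
        except Exception as err:  # narrow to API/timeout errors in production
            last_err = err
    raise last_err or ValueError("models list was empty")

resp = embed_with_fallback(
    ["text to embed"],
    ["novita-ai/baai-bge-m3", "example-provider/backup-embedding-model"],
)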

Useful links for Baai Bge M3

Official sources, independent benchmarks, and pricing aggregators — no random search-engine guesses.