Mistralai Mistral Small 3.1 24B Instruct 2503
Mistralai Mistral Small 3.1 24B Instruct 2503 is an IBM watsonx chat model. It supports a 32,000-token context window with up to 32,000 output tokens. Input is priced at $0.1000/M tokens and output at $0.3000/M tokens. Capabilities include function calling. Route Mistralai Mistral Small 3.1 24B Instruct 2503 via Future AGI's Agent Command Center for unified observability, caching, and 15 routing strategies including cost-optimized fallback.
Estimate Mistralai Mistral Small 3.1 24B Instruct 2503 spend
Pick a workload, fine-tune the sliders, and see the monthly bill.
Estimate uses $0.1000/M input · $0.3000/M output. Provider pricing changes. Production costs vary with retries, streaming overhead, and tool-call rounds.
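If you'd rather skip the sliders, the bill is linear in tokens. A minimal sketch of the same arithmetic (the request volume and per-request token counts below are placeholder assumptions; plug in your own):

```python
# Back-of-envelope monthly cost at $0.1000/M input and $0.3000/M output.
# Workload numbers are placeholder assumptions, not recommendations.
INPUT_PRICE = 0.10 / 1_000_000   # USD per input token
OUTPUT_PRICE = 0.30 / 1_000_000  # USD per output token

requests_per_month = 500_000
input_tokens_per_request = 1_200
output_tokens_per_request = 300

input_cost = requests_per_month * input_tokens_per_request * INPUT_PRICE
output_cost = requests_per_month * output_tokens_per_request * OUTPUT_PRICE
print(f"Input:  ${input_cost:,.2f}")   # $60.00
print(f"Output: ${output_cost:,.2f}")  # $45.00
print(f"Total:  ${input_cost + output_cost:,.2f}/month")
```

Remember the caveat above: retries, streaming overhead, and tool-call rounds all add tokens this sketch doesn't model.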
Want this for free? Cache + route via Agent Command Center — first 100K requests and 100K cache hits free every month.
Pricing
Per-token rates, expressed in USD per 1M tokens. Verified May 12, 2026.
| Rate | USD per 1M tokens |
| --- | --- |
| Input | $0.1000/M |
| Output | $0.3000/M |
Limits
- Context window: 32,000 tokens
- Max input: 32,000 tokens
- Max output: 32,000 tokens
- Modalities: text
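If you're packing prompts near the limit, a rough pre-flight check can save a failed call. A minimal sketch, assuming input and output share the 32,000-token window and using a crude ~4-characters-per-token heuristic (use the model's real tokenizer for anything precise):

```python
# Rough pre-flight budget check. Assumptions: input and output share the
# 32,000-token window, and ~4 chars/token approximates the tokenizer.
CONTEXT_WINDOW = 32_000
RESERVED_OUTPUT = 4_096  # tokens to leave for the reply; tune per workload

def approx_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # crude heuristic, not the real tokenizer

def fits(prompt: str) -> bool:
    return approx_tokens(prompt) + RESERVED_OUTPUT <= CONTEXT_WINDOW

print(fits("a reasonably sized prompt"))  # True while well under the window
```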
Capabilities
- Function calling ✓ supported
- Parallel tool calls ✓ supported
- Vision input — not advertised
- Audio input — not advertised
- Audio output — not advertised
- PDF input — not advertised
- Streaming ✓ supported
- Structured output — not advertised
- Prompt caching — not advertised
- Reasoning — not advertised
Where it's strong
- +parallel tool calls — only 21% of chat models on Future AGI advertise this
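A minimal tool-calling sketch through the gateway, assuming the agentcc client mirrors the OpenAI chat-completions surface (as in the quickstart below); the get_weather tool and its schema are invented for illustration:

```python
import os
from agentcc import AgentCC

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],
    base_url="https://gateway.futureagi.com/v1",
)

# Invented example tool; swap in your own function schema.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="watsonx/mistralai-mistral-small-3-1-24b-instruct-2503",
    messages=[{"role": "user", "content": "Weather in Paris and Oslo?"}],
    tools=tools,
)

# With parallel tool calls, one assistant turn may return several calls.
for call in resp.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```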
Watch out for
- !limited context — 32,000-token window is in the bottom quartile; not ideal for long documents or large RAG
- !no strict structured output: JSON-schema enforcement isn't advertised, so expect client-side retry loops when you need guaranteed JSON (see the sketch below)
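The usual client-side workaround is a validate-and-retry loop. A minimal sketch, assuming the agentcc client from the quickstart below; the bare json.loads check is the simplest possible validator, and the retry count is illustrative:

```python
# Hedged sketch: validate-and-retry on the client, since JSON-schema
# enforcement isn't advertised for this model.
import json

def get_json(client, prompt: str, max_retries: int = 3) -> dict:
    messages = [{"role": "user", "content": prompt + "\nReply with JSON only."}]
    for _ in range(max_retries):
        resp = client.chat.completions.create(
            model="watsonx/mistralai-mistral-small-3-1-24b-instruct-2503",
            messages=messages,
        )
        text = resp.choices[0].message.content
        try:
            return json.loads(text)
        except json.JSONDecodeError:
            # Feed the bad output back and ask for a correction.
            messages += [
                {"role": "assistant", "content": text},
                {"role": "user", "content": "That wasn't valid JSON. JSON only."},
            ]
    raise ValueError("model never produced valid JSON")
```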
Benchmarks pending
We haven't logged public benchmark scores for Mistralai Mistral Small 3.1 24B Instruct 2503 yet. Have one to contribute? Submit a source — citations help us prioritise.
Call Mistralai Mistral Small 3.1 24B Instruct 2503 via Agent Command Center
One OpenAI-compatible endpoint. Routing, fallback, semantic caching, guardrails, and cost tracking come along for the ride. First 100K requests + 100K cache hits free every month.
SDKs are available for Python (agentcc) and JavaScript/TypeScript (@agentcc/client). Per-call metadata (provider, cost, latency, cache hit, request id) is returned on x-agentcc-* response headers, so any HTTP client can read it.

```python
# Mistralai Mistral Small 3.1 24B Instruct 2503 via the Agent Command Center Python SDK
# pip install agentcc
import os
from agentcc import AgentCC
client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],  # from app.futureagi.com → Settings → API Keys
    base_url="https://gateway.futureagi.com/v1",
)
resp = client.chat.completions.create(
    model="watsonx/mistralai-mistral-small-3-1-24b-instruct-2503",
    messages=[{"role": "user", "content": "Hello, Mistralai Mistral Small 3.1 24B Instruct 2503!"}],
)
print(resp.choices[0].message.content)
print(f"Tokens: {resp.usage.total_tokens}")
# Per-call gateway metadata is returned on x-agentcc-* response headers.
# When you need it programmatically, use .with_raw_response to get them:
raw = client.chat.completions.with_raw_response.create(
    model="watsonx/mistralai-mistral-small-3-1-24b-instruct-2503",
    messages=[{"role": "user", "content": "Same call, but I want the headers."}],
)
print("Provider:", raw.headers.get("x-agentcc-provider"))
print("Latency:", raw.headers.get("x-agentcc-latency-ms"), "ms")
print("Cost: ", raw.headers.get("x-agentcc-cost"), "USD")
print("Cache: ", raw.headers.get("x-agentcc-cache"))AGENTCC_API_KEY with a key fromapp.futureagi.com.Gateway docs ↗Compare with similar models
Compare with similar models
Mistralai Mistral Small 3.1 24B Instruct 2503 doesn't have a public Arena ELO score yet, so we group by provider only — quality-tier comparisons need a benchmark.
FAQ
How much does Mistralai Mistral Small 3.1 24B Instruct 2503 cost?
Input is priced at $0.1000 per 1M tokens and output at $0.3000 per 1M tokens (IBM watsonx, last verified May 12, 2026).
What is the context window of Mistralai Mistral Small 3.1 24B Instruct 2503?
Mistralai Mistral Small 3.1 24B Instruct 2503 supports a 32,000-token context window with up to 32,000 output tokens.
Does Mistralai Mistral Small 3.1 24B Instruct 2503 support function calling?
Yes — Mistralai Mistral Small 3.1 24B Instruct 2503 supports function (tool) calling, including parallel tool calls.
Is Mistralai Mistral Small 3.1 24B Instruct 2503 good for production?
Mistralai Mistral Small 3.1 24B Instruct 2503 is well-suited for parallel tool calls; only 21% of chat models on Future AGI advertise this. Consider alternatives if you need long context, since the 32,000-token window is in the bottom quartile and not ideal for long documents or large RAG.
How can I route to Mistralai Mistral Small 3.1 24B Instruct 2503 with fallback?
Use Agent Command Center: a single OpenAI-compatible endpoint that supports cost-optimized routing, latency-aware retries, model fallback, and shadow traffic. Configure once, swap models without app changes.
Useful links for Mistralai Mistral Small 3.1 24B Instruct 2503
Official sources, independent benchmarks, and pricing aggregators — no random search-engine guesses.
Third-party evals — verify the marketing.
Cross-check our number against the rest of the ecosystem.