Mistral Large latest vs Mistral Small latest

Mistral Small latest is on average 97% cheaper than Mistral Large latest. This comparison pits Mistral Large latest served via Google Vertex AI (128,000-token context, tool calls) against Mistral Small latest served via Mistral AI (131,072-token context, tool calls). Use Agent Command Center to A/B both in shadow mode and pick the winner per workload.

Side-by-side cost

Live workload comparison

Same workload run through both models; the cheaper one is highlighted.

Workload: 3,000 input tokens and 400 output tokens per request, at 5,000 requests per day.
Mistral Large latest (Google Vertex AI): $1,278/mo
Input $2.00/M · Output $6.00/M

Mistral Small latest (Mistral AI): $38.35/mo
Input $0.0600/M · Output $0.180/M
At this workload, Mistral Small latest is 97% cheaper than Mistral Large latest — a savings of $1,240/month ($14,880/year).
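The dollar figures above can be reproduced from the published per-token prices. A minimal sketch, assuming the calculator's 5,000 requests are per day, averaged over ~30.44 days per month:

```python
# Reproduces the page's cost math from its per-token prices.
# Workload (3,000 in / 400 out tokens per request, 5,000 requests/day)
# comes from the calculator above; 30.44 days/month is an assumption.
PRICES = {  # $ per million tokens
    "mistral-large-latest": {"input": 2.00, "output": 6.00},
    "mistral-small-latest": {"input": 0.06, "output": 0.18},
}

def monthly_cost(model, in_tok, out_tok, req_per_day, days=30.44):
    p = PRICES[model]
    per_request = (in_tok * p["input"] + out_tok * p["output"]) / 1_000_000
    return per_request * req_per_day * days

large = monthly_cost("mistral-large-latest", 3_000, 400, 5_000)
small = monthly_cost("mistral-small-latest", 3_000, 400, 5_000)
print(f"Large: ${large:,.0f}/mo  Small: ${small:,.0f}/mo  "
      f"savings: {1 - small / large:.0%}")
```

Note that the ratio depends only on the prices, not the request volume: the ~97% savings holds at any scale of this input/output mix.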
Production recipe — Agent Command Center
strategy: cost-optimized
primary:
  model: mistral-small-latest
  provider: mistral
fallback:
  model: mistral-large-latest
  provider: vertex-ai
shadow: { sample_rate: 0.05 }   # mirror 5% of traffic to compare quality live
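The recipe above is declarative; as a rough illustration of the routing behavior it encodes (cheap primary, fallback on failure, 5% shadow sampling), here is a minimal Python sketch. `call_model` is a hypothetical client hook, not a real Agent Command Center API:

```python
import random

def route(prompt, call_model, sample_rate=0.05, rng=random.random):
    """Cost-optimized routing sketch.

    `call_model(provider, model, prompt)` is a hypothetical stand-in
    for a real provider client. Returns (answer, shadow_answer).
    """
    try:
        answer = call_model("mistral", "mistral-small-latest", prompt)
    except Exception:
        # Primary failed: serve the request from the fallback instead.
        return call_model("vertex-ai", "mistral-large-latest", prompt), None
    shadow = None
    if rng() < sample_rate:
        # Shadow call: mirrored for offline quality comparison,
        # never shown to the user.
        shadow = call_model("vertex-ai", "mistral-large-latest", prompt)
    return answer, shadow
```

The shadow responses let you score the cheap model against the expensive one on live traffic before committing fully.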
                    Mistral Large latest    Mistral Small latest
Input price         $2.00/M                 $0.0600/M
Output price        $6.00/M                 $0.180/M
Context window      128,000 tokens          131,072 tokens
Max output          8,191 tokens            131,072 tokens
Pricing verified    May 12, 2026            May 12, 2026

Capabilities compared: Function calling · Vision · Audio input · Reasoning · Prompt caching · Structured output
Mistral Small latest at a glance:
Cheaper option: ~97% cheaper than Mistral Large latest
Larger context: 131,072 tokens
More capabilities: 3 of 6 capability flags advertised