Mistral Large 2411 vs Mistral Small latest

Mistral Small latest is about 97% cheaper on average. This page compares Mistral Large 2411 served via Google Vertex AI (128,000-token context, tool calling) with Mistral Small latest from Mistral AI (131,072-token context, tool calling). Use Agent Command Center to A/B both in shadow mode and pick the winner per workload.

Side-by-side cost

Live workload comparison

Same workload run through both models. The cheaper one is highlighted.

Workload: 3,000 input tokens per request · 400 output tokens per request · 5,000 requests per day
Google Vertex AI (Mistral Large 2411): $1,278/mo at input $2.00/M · output $6.00/M
Mistral AI (Mistral Small latest): $38.35/mo at input $0.0600/M · output $0.180/M
At this workload, Mistral Small latest is 97% cheaper than Mistral Large 2411 — a savings of $1,240/month ($14,880/year).
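The savings figure above follows directly from the per-token prices. A minimal sketch of the arithmetic, assuming the workload shown (3,000 input + 400 output tokens per request, 5,000 requests/day, averaged over a 365/12-day month):

```python
# Reproduce the monthly cost comparison from the per-token prices.
DAYS_PER_MONTH = 365 / 12  # ~30.44, the averaging assumed here

def monthly_cost(input_price_per_m: float, output_price_per_m: float,
                 input_tokens: int, output_tokens: int,
                 requests_per_day: int) -> float:
    """Monthly cost in dollars for a fixed per-request token profile."""
    per_request = (input_tokens * input_price_per_m
                   + output_tokens * output_price_per_m) / 1_000_000
    return per_request * requests_per_day * DAYS_PER_MONTH

large = monthly_cost(2.00, 6.00, 3000, 400, 5000)   # Mistral Large 2411
small = monthly_cost(0.06, 0.18, 3000, 400, 5000)   # Mistral Small latest

print(f"Large: ${large:,.0f}/mo, Small: ${small:,.2f}/mo")
print(f"Savings: {1 - small / large:.0%}")
```

Plugging in your own token counts and request volume gives the break-even for any other workload; the 97% ratio is volume-independent because both models are priced purely per token.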
Production recipe — Agent Command Center
strategy: cost-optimized
primary:
  model: mistral-small-latest
  provider: mistral
fallback:
  model: mistral-large-2411
  provider: vertex-ai
shadow: { sample_rate: 0.05 }   # mirror 5% of traffic to compare quality live
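What `sample_rate: 0.05` means in practice: every request is answered by the primary model, and roughly 5% are also mirrored to the fallback so the two responses can be compared offline. A minimal sketch of that routing logic; the function names here are hypothetical, not Agent Command Center APIs:

```python
import random

SAMPLE_RATE = 0.05  # fraction of traffic mirrored to the shadow model

def handle(request, call_primary, call_fallback, log_pair):
    """Serve the primary response; shadow a sample to the fallback."""
    answer = call_primary(request)        # the user always gets this
    if random.random() < SAMPLE_RATE:     # mirror ~5% of requests
        shadow = call_fallback(request)   # never served to the user
        log_pair(request, answer, shadow) # stored for quality review
    return answer
```

Because the shadow call never reaches the user, it adds cost but no latency risk, which is why a low sample rate is enough to build a quality comparison before switching primaries.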
                  Mistral Large 2411    Mistral Small latest
Input price       $2.00/M               $0.0600/M
Output price      $6.00/M               $0.180/M
Context window    128,000 tokens        131,072 tokens
Max output        8,191 tokens          131,072 tokens
Capability flags compared: function calling, vision, audio input, reasoning, prompt caching, structured output.
Pricing verified May 12, 2026 (both models).
Cheaper option: Mistral Small latest, ~97% cheaper than Mistral Large 2411
Larger context: 131,072 tokens
More capabilities: 3 of 6 capability flags advertised

Benchmark comparison

Side-by-side public benchmark scores.

HumanEval (code): Mistral Large 2411 92.0%; Mistral Small latest score not listed
MMLU (general): Mistral Large 2411 84.0%; Mistral Small latest score not listed
MATH (math): Mistral Large 2411 71.5%; Mistral Small latest score not listed
GPQA (reasoning): Mistral Large 2411 40.9%; Mistral Small latest score not listed