Mistral Large 2411 vs Mistral Medium 2505

Mistral Medium 2505 is on average about 80% cheaper than Mistral Large 2411. This comparison pits Mistral Large 2411 served via Google Vertex AI (128,000-token context, tool calling) against Mistral Medium 2505 served via Mistral AI (131,072-token context, tool calling). Use Agent Command Center to A/B both in shadow mode and pick the winner per workload.

Side-by-side cost

Live workload comparison

Same workload run through both models. The cheaper one is highlighted.

Workload: 3,000 input tokens and 400 output tokens per request, at 5,000 requests per day.

Google Vertex AI (Mistral Large 2411): $1,278/mo · Input $2.00/M · Output $6.00/M
Mistral AI (Mistral Medium 2505): $304/mo · Input $0.400/M · Output $2.00/M (cheaper)
At this workload, Mistral Medium 2505 is 76% cheaper than Mistral Large 2411 — a savings of $974/month ($11,688/year).
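The arithmetic behind those figures is easy to reproduce. A minimal sketch, assuming the workload above (3,000 input and 400 output tokens per request, 5,000 requests per day) and an average month of 365/12 days:

```python
# Per-million-token prices taken from the comparison above.
PRICES = {
    "mistral-large-2411":  {"input": 2.00, "output": 6.00},
    "mistral-medium-2505": {"input": 0.40, "output": 2.00},
}

def monthly_cost(model, input_tokens, output_tokens, requests_per_day):
    """Estimated monthly bill in dollars (month = 365/12 days)."""
    requests_per_month = requests_per_day * 365 / 12
    p = PRICES[model]
    per_request = input_tokens * p["input"] + output_tokens * p["output"]
    return per_request * requests_per_month / 1_000_000

large = monthly_cost("mistral-large-2411", 3000, 400, 5000)
medium = monthly_cost("mistral-medium-2505", 3000, 400, 5000)
print(f"Large: ${large:,.0f}/mo, Medium: ${medium:,.0f}/mo")
# → Large: $1,278/mo, Medium: $304/mo
print(f"Medium is {1 - medium / large:.0%} cheaper")
# → Medium is 76% cheaper
```

The 76% figure at this specific workload differs from the ~80% headline number because the input/output token mix shifts the blended rate: output tokens are discounted 3x on Medium while input tokens are discounted 5x.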
Production recipe — Agent Command Center
strategy: cost-optimized
primary:
  model: mistral-medium-2505
  provider: mistral
fallback:
  model: mistral-large-2411
  provider: vertex-ai
shadow: { sample_rate: 0.05 }   # mirror 5% of traffic to compare quality live
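Agent Command Center applies this recipe for you, but the routing behavior it describes can be sketched in plain Python. Everything below is illustrative, not a real SDK: `call_model` and `log_pair_for_review` are placeholder hooks standing in for actual provider calls and quality logging.

```python
import random

SHADOW_SAMPLE_RATE = 0.05  # mirror 5% of traffic, per the recipe above

def call_model(provider, model, prompt):
    """Placeholder for a real provider call; may raise on outage or rate limit."""
    return f"[{provider}/{model}] response to: {prompt}"

def log_pair_for_review(primary, shadow):
    """Hypothetical hook: store both answers for offline quality comparison."""
    pass

def route(prompt):
    # Primary: the cheaper model; fall back to the larger one on failure.
    try:
        answer = call_model("mistral", "mistral-medium-2505", prompt)
    except Exception:
        return call_model("vertex-ai", "mistral-large-2411", prompt)

    # Shadow mode: on a small sample, also run the fallback model and
    # record both answers so quality can be compared on live traffic.
    if random.random() < SHADOW_SAMPLE_RATE:
        shadow = call_model("vertex-ai", "mistral-large-2411", prompt)
        log_pair_for_review(answer, shadow)
    return answer
```

The key design point the recipe encodes: the shadow call never affects the user-facing answer, so you can compare quality on real traffic before committing to the cheaper model.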
                     Mistral Large 2411    Mistral Medium 2505
Input price          $2.00/M               $0.400/M
Output price         $6.00/M               $2.00/M
Context window       128,000 tokens        131,072 tokens
Max output           8,191 tokens          8,191 tokens
Function calling     yes                   yes
Vision
Audio input
Reasoning
Prompt caching
Structured output
Pricing verified     May 12, 2026          May 12, 2026
Cheaper option: Mistral Medium 2505, ~80% cheaper than Mistral Large 2411
Larger context: Mistral Medium 2505, 131,072 tokens
More capabilities: 2 of 6 capability flags advertised

Benchmark comparison

Side-by-side public benchmark scores.

HumanEval (code) · Mistral Large 2411: 92.0%
MMLU (general) · Mistral Large 2411: 84.0%
MATH (math) · Mistral Large 2411: 71.5%
GPQA (reasoning) · Mistral Large 2411: 40.9%