Magistral Medium 2509 vs Mistral Large 2411

Magistral Medium 2509 vs Mistral Large 2411: Magistral Medium 2509 is cheaper on average, with a roughly 17% lower output price ($5.00/M vs $6.00/M; input is $2.00/M for both). This page compares Magistral Medium 2509 from Mistral AI (40,000-token context, reasoning, tool calls) against Mistral Large 2411 served via Google Vertex AI (128,000-token context, tool calls). Use Agent Command Center to A/B both in shadow mode and pick the winner per workload.

Side-by-side cost

Live workload comparison

Same workload run through both models. The cheaper one is highlighted.

Workload: 3,000 input tokens per request · 400 output tokens per request · 5,000 requests per day
Magistral Medium 2509 (Mistral AI): $1,218/mo — Input $2.00/M · Output $5.00/M
Mistral Large 2411 (Google Vertex AI): $1,278/mo — Input $2.00/M · Output $6.00/M
At this workload, Magistral Medium 2509 is 5% cheaper than Mistral Large 2411 — a savings of $60.88/month ($731/year).
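The arithmetic behind these figures can be sketched in a few lines of Python (assuming the workload above and an average month of 365.25/12 ≈ 30.44 days, which reproduces the quoted totals):

```python
DAYS_PER_MONTH = 365.25 / 12  # ~30.44 days in an average month

def monthly_cost(input_tokens_per_req, output_tokens_per_req, reqs_per_day,
                 input_price_per_m, output_price_per_m):
    """Monthly spend in dollars for a token-metered model."""
    reqs_per_month = reqs_per_day * DAYS_PER_MONTH
    input_tokens = reqs_per_month * input_tokens_per_req
    output_tokens = reqs_per_month * output_tokens_per_req
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# Workload: 3,000 in / 400 out per request, 5,000 requests/day.
magistral = monthly_cost(3_000, 400, 5_000, 2.00, 5.00)
large = monthly_cost(3_000, 400, 5_000, 2.00, 6.00)

print(f"Magistral Medium 2509: ${magistral:,.0f}/mo")  # ≈ $1,218
print(f"Mistral Large 2411:    ${large:,.0f}/mo")      # ≈ $1,278
print(f"Savings: ${large - magistral:.2f}/mo")         # ≈ $60.88
```

Because both models charge $2.00/M for input, the entire gap comes from output tokens: about 60.9M output tokens per month at a $1.00/M price difference yields the $60.88/month savings.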
Production recipe — Agent Command Center
strategy: cost-optimized
primary:
  model: magistral-medium-2509
  provider: mistral
fallback:
  model: mistral-large-2411
  provider: vertex-ai
shadow: { sample_rate: 0.05 }   # mirror 5% of traffic to compare quality live
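The shadow-sampling idea in the recipe above can be illustrated with a minimal sketch. Agent Command Center's internals are not public, so `call_model` and `log_comparison` here are hypothetical stand-ins for a model client and a logging sink:

```python
import random

SAMPLE_RATE = 0.05  # mirror 5% of traffic, matching the recipe above

def log_comparison(request, primary_resp, shadow_resp):
    """Hypothetical sink: record both answers for offline quality review."""
    print({"request": request, "primary": primary_resp, "shadow": shadow_resp})

def route(request, call_model):
    """Serve every request from the primary model; occasionally mirror
    the same request to the fallback for comparison.

    `call_model(model_id, request)` is a hypothetical client function.
    The shadow call never affects the user-facing response.
    """
    primary_resp = call_model("magistral-medium-2509", request)
    if random.random() < SAMPLE_RATE:
        shadow_resp = call_model("mistral-large-2411", request)
        log_comparison(request, primary_resp, shadow_resp)
    return primary_resp
```

The key property is that the primary response is returned unconditionally; the shadowed fallback call only generates comparison data, so quality can be evaluated on live traffic without risking user-facing regressions.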
Feature            Magistral Medium 2509   Mistral Large 2411
Input price        $2.00/M                 $2.00/M
Output price       $5.00/M                 $6.00/M
Context window     40,000 tokens           128,000 tokens
Max output         40,000 tokens           8,191 tokens
Function calling   Yes                     Yes
Vision
Audio input
Reasoning          Yes                     No
Prompt caching
Structured output
Pricing verified   May 12, 2026            May 12, 2026
Cheaper option: Magistral Medium 2509 — ~17% lower output price than Mistral Large 2411
Larger context: Mistral Large 2411 — 128,000 tokens
More capabilities: Magistral Medium 2509 — 3 of 6 capability flags advertised

Benchmark comparison

Side-by-side public benchmark scores.

Benchmark (category)   Magistral Medium 2509   Mistral Large 2411
HumanEval (code)       —                       92.0%
MMLU (general)         —                       84.0%
MATH (math)            —                       71.5%
GPQA (reasoning)       —                       40.9%