Mistral Large 2411 vs Mistral Large latest

Mistral Large 2411 vs Mistral Large latest: the two are priced identically on average, so the choice comes down to capabilities and benchmark coverage. Mistral Large 2411 from Google Vertex AI (128,000-token context, tool calls) vs. Mistral Large latest from Google Vertex AI (128,000-token context, tool calls). Use Agent Command Center to A/B both in shadow mode and pick the winner per workload.

Side-by-side cost

Live workload comparison

The same workload run through both models.

Assumed workload: 3,000 input tokens per request (of a 128,000 max), 400 output tokens per request (of an 8,191 max), 5,000 requests/day.
Mistral Large 2411 · Google Vertex AI: $1,278/mo (Input $2.00/M · Output $6.00/M)
Mistral Large latest · Google Vertex AI: $1,278/mo (Input $2.00/M · Output $6.00/M)
At this workload the two models cost exactly the same: $1,278/month each, a difference of $0.00/month ($0.00/year).
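The monthly figure above can be reproduced from the per-token prices. A minimal sketch, assuming the calculator's workload (3,000 input and 400 output tokens per request, 5,000 requests/day) and an average month of roughly 30.4 days; the exact averaging the site uses is an assumption:

```python
INPUT_PRICE_PER_M = 2.00   # USD per million input tokens (both models)
OUTPUT_PRICE_PER_M = 6.00  # USD per million output tokens (both models)

def monthly_cost(input_tokens, output_tokens, requests_per_day, days=30.44):
    """Estimate monthly spend for a fixed per-request token profile."""
    per_request = (input_tokens * INPUT_PRICE_PER_M
                   + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000
    return per_request * requests_per_day * days

print(round(monthly_cost(3_000, 400, 5_000)))  # ≈ 1278
```

Because both listings share the same input and output prices, the function returns the same number for either model at any workload.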
Production recipe — Agent Command Center
strategy: cost-optimized
primary:
  model: mistral-large-latest
  provider: vertex-ai
fallback:
  model: mistral-large-2411
  provider: vertex-ai
shadow: { sample_rate: 0.05 }   # mirror 5% of traffic to compare quality live
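The shadow block mirrors a sample of live traffic to the fallback model without affecting user-facing responses. A minimal sketch of that routing logic, with `call_primary`, `call_shadow`, and `record` as hypothetical stand-ins for the real client and logging calls (Agent Command Center's actual API is not shown here):

```python
import random

SHADOW_SAMPLE_RATE = 0.05  # matches sample_rate: 0.05 in the recipe above

def route(prompt, call_primary, call_shadow, record):
    """Serve every request from the primary; mirror ~5% to the shadow model."""
    response = call_primary(prompt)            # the user always gets the primary's answer
    if random.random() < SHADOW_SAMPLE_RATE:
        shadow_response = call_shadow(prompt)  # duplicate call; result is only logged
        record(prompt, response, shadow_response)
    return response
```

The key property is that the shadow call happens after the primary response is already in hand, so comparison quality data accrues with no user-visible latency risk beyond the sampled duplicate request.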
                    Mistral Large 2411   Mistral Large latest
Input price         $2.00/M              $2.00/M
Output price        $6.00/M              $6.00/M
Context window      128,000 tokens       128,000 tokens
Max output          8,191 tokens         8,191 tokens
Function calling    Yes                  Yes
Vision              No                   No
Audio input         No                   No
Reasoning           No                   No
Prompt caching      No                   No
Structured output   No                   No
Pricing verified    May 12, 2026         May 12, 2026
Cheaper option: tie (identical pricing)
Larger context: tie (128,000 tokens each)
More capabilities: tie (1 of 6 capability flags advertised by each)

Benchmark comparison

Side-by-side public benchmark scores, where published.

HumanEval (code): Mistral Large 2411 92.0% · Mistral Large latest: no published score listed
MMLU (general): Mistral Large 2411 84.0% · Mistral Large latest: no published score listed
MATH (math): Mistral Large 2411 71.5% · Mistral Large latest: no published score listed
GPQA (reasoning): Mistral Large 2411 40.9% · Mistral Large latest: no published score listed