DeepSeek R1 vs DeepSeek V3

DeepSeek R1 vs DeepSeek V3: at the workload modeled below, DeepSeek V3 is about 16% cheaper on average. Both models are served from Azure AI Foundry with a 128,000-token context window; R1 adds reasoning. Use Agent Command Center to A/B both in shadow mode and pick the winner per workload.

Side-by-side cost

Live workload comparison

The same workload is run through both models; the cheaper option is noted below.

Workload: 3,000 input tokens and 400 output tokens per request, at 5,000 requests per day.

DeepSeek R1 (Azure AI Foundry): $945/mo (input $1.35/M, output $5.40/M)
DeepSeek V3 (Azure AI Foundry): $798/mo (input $1.14/M, output $4.56/M)
At this workload, DeepSeek V3 is 16% cheaper than DeepSeek R1 — a savings of $147/month ($1,764/year).
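The monthly figures follow directly from the per-token prices. A minimal sketch of the arithmetic, assuming the 5,000-requests-per-day workload above and roughly 30.44 billing days per month (365.25 / 12, an assumption chosen to reproduce the rounded totals):

```python
def monthly_cost(input_tokens, output_tokens, requests_per_day,
                 input_price, output_price, days_per_month=30.44):
    """USD per month; input_price and output_price are per million tokens."""
    total_in = input_tokens * requests_per_day * days_per_month
    total_out = output_tokens * requests_per_day * days_per_month
    return total_in / 1e6 * input_price + total_out / 1e6 * output_price

r1 = monthly_cost(3000, 400, 5000, 1.35, 5.40)
v3 = monthly_cost(3000, 400, 5000, 1.14, 4.56)
print(f"R1 ${r1:,.0f}/mo, V3 ${v3:,.0f}/mo, savings ${r1 - v3:,.0f}/mo")
# → R1 $945/mo, V3 $798/mo, savings $147/mo
```

Because both prices scale linearly with token volume, the ~16% gap holds at any request rate; only the absolute dollar savings change.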
Production recipe — Agent Command Center
strategy: cost-optimized
primary:
  model: deepseek-v3
  provider: azure-ai-foundry
fallback:
  model: deepseek-r1
  provider: azure-ai-foundry
shadow: { sample_rate: 0.05 }   # mirror 5% of traffic to compare quality live
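The recipe above can be sketched as plain routing logic. This is an illustration of the strategy, not Agent Command Center's actual implementation; `call_model` and `log_shadow` are hypothetical stand-ins for whatever client and logger your stack provides:

```python
import random

SHADOW_SAMPLE_RATE = 0.05  # mirror 5% of traffic, matching the recipe above


def route(prompt, call_model, log_shadow):
    """Serve from the cheap primary, fall back on error, shadow-sample the other model."""
    try:
        reply = call_model("deepseek-v3", prompt)   # primary: cheaper per token
        shadow_model = "deepseek-r1"
    except Exception:
        reply = call_model("deepseek-r1", prompt)   # fallback: reasoning model
        shadow_model = "deepseek-v3"
    if random.random() < SHADOW_SAMPLE_RATE:
        # Shadow call: the response is only logged for offline quality
        # comparison and is never returned to the caller.
        log_shadow(prompt, shadow_model, call_model(shadow_model, prompt))
    return reply
```

Shadow sampling keeps the quality comparison live without doubling spend: only 5% of requests pay for a second call.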
                     DeepSeek R1     DeepSeek V3
Input price          $1.35/M         $1.14/M
Output price         $5.40/M         $4.56/M
Context window       128,000         128,000
Max output           8,192           8,192
Function calling     —               —
Vision               —               —
Audio input          —               —
Reasoning            Yes             No
Prompt caching       —               —
Structured output    —               —
Pricing verified     May 12, 2026    May 12, 2026

Cheaper option: DeepSeek V3, ~16% cheaper than DeepSeek R1
Larger context: tie, both at 128,000 tokens
More capabilities: DeepSeek R1, 1 of 6 capability flags advertised (reasoning)

Benchmark comparison

Side-by-side public benchmark scores; the higher score wins each row.

Benchmark            Category    DeepSeek R1   DeepSeek V3
Chatbot Arena ELO    general     1,361         1,310
MATH-500             math        97.3%         —
MMLU                 general     90.8%         88.5%
MATH                 math        —             90.2%
HumanEval            code        89.7%         82.6%
MMLU-Pro             reasoning   84.0%         75.9%
AIME 2024            math        79.8%         39.6%
GPQA Diamond         reasoning   71.5%         59.1%
LiveCodeBench        code        65.9%         40.5%
Aider Polyglot       code        57.0%         —
SWE-bench Verified   agent       49.2%         42.0%
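Tallying the head-to-head record from the scores above (dropping MATH-500, MATH, and Aider Polyglot, where only one model reports a number) makes the quality gap concrete:

```python
# (R1, V3) score pairs for the benchmarks where both models report a result.
scores = {
    "Chatbot Arena ELO": (1361, 1310),
    "MMLU": (90.8, 88.5),
    "HumanEval": (89.7, 82.6),
    "MMLU-Pro": (84.0, 75.9),
    "AIME 2024": (79.8, 39.6),
    "GPQA Diamond": (71.5, 59.1),
    "LiveCodeBench": (65.9, 40.5),
    "SWE-bench Verified": (49.2, 42.0),
}

r1_wins = sum(r1 > v3 for r1, v3 in scores.values())
print(f"DeepSeek R1 wins {r1_wins} of {len(scores)} shared benchmarks")
# → DeepSeek R1 wins 8 of 8 shared benchmarks
```

R1 leads on every benchmark both models report, which is why the cost-optimized recipe above still shadow-samples R1: the 16% savings from V3 should be weighed against a measurable quality gap on your own traffic.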