DeepSeek models & pricing
DeepSeek hosts 6 models (all 6 with public pricing) across one modality: chat. The lineup includes the DeepSeek-R1 reasoning model and the DeepSeek-V3 Mixture-of-Experts model, both open-weight and frontier-class. Input pricing starts at $0.140 per 1M tokens and tops out at $0.550/M; output pricing runs from $0.280/M to $2.19/M. Use Future AGI's Agent Command Center to route any DeepSeek model with cost-optimized fallback and unified observability.
Chat models (6)
| Model | Input / 1M tokens | Output / 1M tokens | Context (tokens) | Capabilities |
|---|---|---|---|---|
| DeepSeek Coder | $0.140/M | $0.280/M | 128,000 | tools · cache |
| DeepSeek V3 | $0.270/M | $1.10/M | 65,536 | tools · cache |
| DeepSeek Chat | $0.280/M | $0.420/M | 131,072 | tools · cache |
| DeepSeek Reasoner | $0.280/M | $0.420/M | 131,072 | reasoning · cache |
| DeepSeek v3.2 | $0.280/M | $0.400/M | 163,840 | tools · reasoning · cache |
| DeepSeek R1 | $0.550/M | $2.19/M | 65,536 | tools · reasoning · cache |
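To make the per-token arithmetic concrete, here is a minimal sketch of a cost estimator using rates from the table above. The function name and call pattern are illustrative, not part of any provider API.

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_per_m: float, output_per_m: float) -> float:
    """Estimate the dollar cost of one request from per-1M-token rates."""
    return (input_tokens / 1_000_000) * input_per_m \
         + (output_tokens / 1_000_000) * output_per_m

# DeepSeek Chat: $0.280 input / $0.420 output per 1M tokens
cost = request_cost(200_000, 50_000, 0.280, 0.420)
print(f"${cost:.3f}")  # 0.2 * 0.28 + 0.05 * 0.42 = $0.077
```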
FAQ
How many DeepSeek models are there?
6 DeepSeek models are listed on this page, all in a single modality (chat). All 6 have public per-token pricing.
How is DeepSeek pricing verified?
Pricing is aggregated from BerriAI/litellm, models.dev, and OpenRouter and refreshed weekly. Each row shows a per-model "verified" date. If a price is wrong, click the row to open the model page and use the inline "suggest edit" link — submissions go into a public review queue.
Which DeepSeek model is cheapest?
Input pricing on DeepSeek starts at $0.140 per 1M tokens. Sort the table by price (or use the in-page filter at the top) to find the cheapest model that matches your capability requirements.
Can I route to DeepSeek via an OpenAI-compatible API?
Yes. Point your OpenAI client at Future AGI's Agent Command Center, configure a DeepSeek target, and call DeepSeek models through the standard /v1/chat/completions surface. The same gateway can route to other providers as fallbacks. It is free for the first 100K requests/month.
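The wire format above is the standard OpenAI chat-completions shape, so any OpenAI-compatible client works. A minimal stdlib sketch of building such a request, assuming a placeholder gateway URL, API key, and model name (none of these are verified endpoints):

```python
# Sketch: an OpenAI-compatible /v1/chat/completions request through a gateway.
# GATEWAY_BASE and API_KEY are placeholders, not real credentials or endpoints.
import json
import urllib.request

GATEWAY_BASE = "https://gateway.example.com/v1"  # placeholder gateway base URL
API_KEY = "YOUR_API_KEY"                         # placeholder credential

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) a standard chat-completions HTTP request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{GATEWAY_BASE}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )

req = build_chat_request("deepseek-chat", "Say hello in five words.")
# urllib.request.urlopen(req) would send it; an OpenAI SDK pointed at
# GATEWAY_BASE works the same way, since the request shape is identical.
```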