DeepSeek R1

SambaNova chat

DeepSeek R1 is a SambaNova chat model. It supports a 32,768-token context window with up to 32,768 output tokens. Input is priced at $5.00/M tokens and output at $7.00/M tokens. Route DeepSeek R1 via Future AGI's Agent Command Center for unified observability, caching, and 15 routing strategies including cost-optimized fallback.

Pricing source: litellm · Last verified: May 12, 2026 · View source ↗
Cost calculator

Estimate DeepSeek R1 spend

Example workload: ~3,000 input tokens and ~400 output tokens per request, at 5,000 requests/day.

Per request: $0.0178 ($0.0150 input · $0.0028 output)
Per day: $89.00 (5,000 requests)
Per month: $2,709 (~152,188 requests)

Estimate uses $5.00/M input · $7.00/M output. Provider pricing changes. Production costs vary with retries, streaming overhead, and tool-call rounds.
Want this for free? Cache + route via Agent Command Center — first 100K requests and 100K cache hits free every month.
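
If you want to reproduce the estimate yourself, the arithmetic is just tokens times rate. A minimal sketch using the $5.00/M input and $7.00/M output rates from this page; the 30.44 days/month figure matches the ~152K monthly requests shown above:
# Rough cost estimate for DeepSeek R1 on SambaNova at the rates listed on this page.
INPUT_RATE = 5.00 / 1_000_000   # USD per input token
OUTPUT_RATE = 7.00 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int,
                  requests_per_day: int, days_per_month: float = 30.44) -> dict:
    """Return per-request, per-day, and per-month USD estimates."""
    per_request = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
    per_day = per_request * requests_per_day
    per_month = per_day * days_per_month
    return {"per_request": per_request, "per_day": per_day, "per_month": per_month}

# Example workload from above: ~3K in / ~400 out, 5K requests/day.
print(estimate_cost(3_000, 400, 5_000))
# -> roughly {'per_request': 0.0178, 'per_day': 89.0, 'per_month': 2709.16}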

Pricing

Per-token rates, expressed in USD per 1M tokens. Verified May 12, 2026.

Input $5.00/M
Output $7.00/M

Limits

Context window: 32,768 tokens
Max input: 32,768 tokens
Max output: 32,768 tokens
Modalities: text
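
Because the 32,768-token window is shared between prompt and completion, it helps to clamp the output budget to whatever headroom the prompt leaves. A minimal sketch; the 4-characters-per-token ratio is a rough heuristic, not DeepSeek R1's actual tokenizer, so treat the numbers as approximate:
# Clamp the completion budget so prompt + output stay inside the 32,768-token window.
CONTEXT_WINDOW = 32_768

def clamp_max_tokens(prompt: str, desired_output_tokens: int,
                     chars_per_token: float = 4.0, safety_margin: int = 256) -> int:
    """Approximate the prompt's token count and cap the output budget accordingly."""
    est_prompt_tokens = int(len(prompt) / chars_per_token)
    headroom = CONTEXT_WINDOW - est_prompt_tokens - safety_margin
    return max(0, min(desired_output_tokens, headroom))

# e.g. a ~100K-character prompt leaves roughly 7.5K tokens of output headroom.
print(clamp_max_tokens("x" * 100_000, desired_output_tokens=8_192))  # -> 7512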

Capabilities

  • Function calling — not advertised
  • Parallel tool calls — not advertised
  • Vision input — not advertised
  • Audio input — not advertised
  • Audio output — not advertised
  • PDF input — not advertised
  • Streaming ✓ supported
  • Structured output — not advertised
  • Prompt caching — not advertised
  • Reasoning — not advertised

Where it's strong

  • +pricing — cheaper than 79% of priced chat models on Future AGI

Watch out for

  • !limited context — 32,768-token window is in the bottom quartile; not ideal for long documents or large RAG
  • !agentic workflows — no advertised function-calling; use a tool-capable model and route via Agent Command Center for fallback
  • !strict structured output — no JSON-schema enforcement; expect retry loops (a minimal validate-and-retry sketch follows this list)
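
If you need machine-readable output despite that, a client-side validate-and-retry loop is the usual workaround. A minimal sketch, assuming the OpenAI-compatible agentcc client from the Try it section below; the simple key check stands in for whatever schema validation you actually need:
# Ask for JSON, validate locally, and retry on malformed output.
# Assumes `client` is the agentcc client from the "Try it" section below.
import json

def get_json(client, prompt: str, required_keys: set[str], max_attempts: int = 3) -> dict:
    """Request JSON output and retry until it parses and contains the expected keys."""
    for attempt in range(max_attempts):
        resp = client.chat.completions.create(
            model="sambanova/deepseek-r1",
            messages=[
                {"role": "system", "content": "Reply with a single JSON object only."},
                {"role": "user", "content": prompt},
            ],
        )
        text = resp.choices[0].message.content
        try:
            data = json.loads(text)
            if required_keys <= data.keys():
                return data
        except (json.JSONDecodeError, AttributeError):
            pass  # malformed or non-object output; fall through and retry
    raise ValueError(f"No valid JSON after {max_attempts} attempts")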

Benchmark scores

Reported public benchmark numbers, shown as the delta against a 6-peer average. All scores captured May 12, 2026.

  • MATH-500 (math, 0-shot): ↓2% vs peers
  • MMLU (general, 5-shot): no peer delta reported
  • HumanEval (code, 0-shot): ↓7% vs peers
  • MMLU-Pro (reasoning, 5-shot CoT): ↓6% vs peers
  • Chatbot Arena ELO (general, overall): ↓6% vs peers
  • AIME 2024 (math, 0-shot): ↓19% vs peers
  • GPQA Diamond (reasoning, 0-shot): ↓18% vs peers
  • LiveCodeBench (code, pass@1): ↓27% vs peers
  • Aider Polyglot (code, pass@1): ↓35% vs peers
  • SWE-bench Verified (agent, agentic): ↓35% vs peers

Try it

Call DeepSeek R1 via Agent Command Center

One OpenAI-compatible endpoint. Routing, fallback, semantic caching, guardrails, and cost tracking come along for the ride. First 100K requests + 100K cache hits free every month.

SDK
Native Future AGI client (agentcc / @agentcc/client). Per-call metadata — provider, cost, latency, cache hit, request id — is returned on x-agentcc-* response headers, so any HTTP client can read it.
# DeepSeek R1 via the Agent Command Center Python SDK
# pip install agentcc
import os
from agentcc import AgentCC

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],   # from app.futureagi.com → Settings → API Keys
    base_url="https://gateway.futureagi.com/v1",
)

resp = client.chat.completions.create(
    model="sambanova/deepseek-r1",
    messages=[{"role": "user", "content": "Hello, DeepSeek R1!"}],
)

print(resp.choices[0].message.content)
print(f"Tokens: {resp.usage.total_tokens}")

# Per-call gateway metadata is returned on x-agentcc-* response headers.
# When you need it programmatically, use .with_raw_response to get them:
raw = client.chat.completions.with_raw_response.create(
    model="sambanova/deepseek-r1",
    messages=[{"role": "user", "content": "Same call, but I want the headers."}],
)
print("Provider:", raw.headers.get("x-agentcc-provider"))
print("Latency:", raw.headers.get("x-agentcc-latency-ms"), "ms")
print("Cost:   ", raw.headers.get("x-agentcc-cost"), "USD")
print("Cache:  ", raw.headers.get("x-agentcc-cache"))
Set AGENTCC_API_KEY with a key from app.futureagi.com. Gateway docs ↗
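
Because the gateway is OpenAI-compatible and the metadata rides on plain response headers, the SDK is optional. A minimal sketch with requests, assuming the standard /chat/completions path on the base URL used above:
# DeepSeek R1 via the gateway with a plain HTTP client (no SDK).
# Reads the same x-agentcc-* metadata headers shown in the SDK example above.
import os
import requests

resp = requests.post(
    "https://gateway.futureagi.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['AGENTCC_API_KEY']}"},
    json={
        "model": "sambanova/deepseek-r1",
        "messages": [{"role": "user", "content": "Hello, DeepSeek R1!"}],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
print("Provider:", resp.headers.get("x-agentcc-provider"))
print("Cost:", resp.headers.get("x-agentcc-cost"), "USD")
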
Advanced: fallback + cache config (YAML)
strategy: cost-optimized
targets:
  - model: deepseek-r1
    provider: sambanova
    weight: 80
fallbacks:
  - model: grok-3
    provider: xai
  - model: gpt-5-mini
    provider: openai
guardrails: [pii, prompt-injection, secrets]
cache: { exact: true, semantic: true }
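
The YAML keeps fallback on the gateway side. If you would rather hold the same ordering in application code, a minimal client-side sketch reusing the chat call from the SDK example; the provider-prefixed ids for the fallbacks (xai/grok-3, openai/gpt-5-mini) are an assumption based on the sambanova/deepseek-r1 naming pattern:
# Client-side fallback mirroring the YAML above: try the primary, then each fallback in order.
# Assumes `client` is the agentcc client from the SDK example; fallback model ids are assumptions.
FALLBACK_CHAIN = ["sambanova/deepseek-r1", "xai/grok-3", "openai/gpt-5-mini"]

def chat_with_fallback(client, messages, models=FALLBACK_CHAIN):
    """Try each model in order; return the first successful completion."""
    last_error = None
    for model in models:
        try:
            return client.chat.completions.create(model=model, messages=messages)
        except Exception as exc:  # rate limits, provider outages, network errors, ...
            last_error = exc
    raise RuntimeError("All models in the fallback chain failed") from last_error

resp = chat_with_fallback(client, [{"role": "user", "content": "Hello with fallback!"}])
print(resp.choices[0].message.content)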

Same model on other providers

deepseek-r1 is also available via 3 other routes. Pricing, regions, and capabilities can differ — compare before routing production traffic.

  • Azure AI Foundry · $1.35/M input · $5.40/M output · verified May 12, 2026
  • DeepSeek · $0.550/M input · $2.19/M output · verified May 12, 2026
  • Snowflake Cortex · pricing not listed · verified May 12, 2026
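
To make the gap concrete, here is the example workload from the calculator (~3,000 input / ~400 output tokens) priced per request on each route that lists rates; a quick sketch using the numbers above:
# Per-request cost of a ~3K-in / ~400-out request on each priced route for deepseek-r1.
ROUTES = {
    "SambaNova":        (5.00, 7.00),   # USD per 1M input / output tokens
    "Azure AI Foundry": (1.35, 5.40),
    "DeepSeek":         (0.55, 2.19),
}

for provider, (in_rate, out_rate) in ROUTES.items():
    cost = 3_000 * in_rate / 1e6 + 400 * out_rate / 1e6
    print(f"{provider}: ${cost:.4f} per request")
# SambaNova: $0.0178 · Azure AI Foundry: $0.0062 · DeepSeek: $0.0025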

Compare with similar models

Grouped by Chatbot Arena tier (DeepSeek R1 sits at 1361 ELO).

FAQ

How much does DeepSeek R1 cost?

Input is priced at $5.00 per 1M tokens and output at $7.00 per 1M tokens (SambaNova, last verified May 12, 2026).

What is the context window of DeepSeek R1?

DeepSeek R1 supports a 32,768-token context window with up to 32,768 output tokens.

Does DeepSeek R1 support function calling?

DeepSeek R1 does not currently advertise function-calling support. For agentic workloads, prefer a tool-calling-capable model and route via Agent Command Center for fallback.

Is DeepSeek R1 good for production?

DeepSeek R1 is a strong pick on price: it is cheaper than 79% of priced chat models on Future AGI. Consider alternatives if you need long context, since its 32,768-token window is in the bottom quartile and not ideal for long documents or large RAG.

How can I route to DeepSeek R1 with fallback?

Use Agent Command Center: a single OpenAI-compatible endpoint that supports cost-optimized routing, latency-aware retries, model fallback, and shadow traffic. Configure once, swap models without app changes.

Useful links for DeepSeek R1

Official sources, independent benchmarks, and pricing aggregators — no random search-engine guesses.