DeepSeek R1

Snowflake Cortex chat

DeepSeek R1 is a Snowflake Cortex chat model. It supports a 32,768-token context window with up to 8,192 output tokens. Capabilities include reasoning. Route DeepSeek R1 via Future AGI's Agent Command Center for unified observability, caching, and 15 routing strategies including cost-optimized fallback.

Pricing source: unknown · Last verified: May 12, 2026
Pricing not yet public

We don't have verified per-token pricing for DeepSeek R1 yet. If you have a source from Snowflake Cortex's documentation, help us add it — your submission gets reviewed within 48 hours.

Pricing

Per-token rates, expressed in USD per 1M tokens. Not yet published for this Snowflake Cortex route (last checked May 12, 2026).

Input: not published
Output: not published

Limits

Context window
32,768 tokens
Max input
32,768 tokens
Max output
8,192 tokens
Modalities
text
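
A quick way to budget a request against these limits is sketched below; it assumes output tokens are drawn from the same 32,768-token window, which may not match Snowflake Cortex's exact accounting.

# Illustrative token budgeting against the published limits.
# Assumption: input and output tokens share the 32,768-token window.
CONTEXT_WINDOW = 32_768
MAX_OUTPUT = 8_192

def available_output(input_tokens: int) -> int:
    """Largest max_tokens value that should still fit."""
    return max(0, min(MAX_OUTPUT, CONTEXT_WINDOW - input_tokens))

print(available_output(10_000))   # 8192 -> the output cap binds
print(available_output(28_000))   # 4768 -> the remaining window binds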

Capabilities

  • Function calling — not advertised
  • Parallel tool calls — not advertised
  • Vision input — not advertised
  • Audio input — not advertised
  • Audio output — not advertised
  • PDF input — not advertised
  • Streaming ✓ supported (see the streaming sketch after this list)
  • Structured output — not advertised
  • Prompt caching — not advertised
  • Reasoning ✓ supported
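
Streaming is the one delivery capability advertised above. The sketch below assumes the agentcc client accepts an OpenAI-style stream=True flag and yields delta chunks, which is an assumption rather than documented behavior.

# Streaming sketch: assumes agentcc mirrors the OpenAI streaming interface
# (stream=True yields chunks with .choices[0].delta).
import os
from agentcc import AgentCC

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],
    base_url="https://gateway.futureagi.com/v1",
)

stream = client.chat.completions.create(
    model="snowflake-cortex/deepseek-r1",
    messages=[{"role": "user", "content": "Walk through 17 * 24 step by step."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta
    if delta and getattr(delta, "content", None):
        print(delta.content, end="", flush=True)
print()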

Where it's strong

  • Multi-step reasoning and analysis tasks

Watch out for

  • Limited context: the 32,768-token window is in the bottom quartile, so it is not ideal for long documents or large RAG corpora
  • Agentic workflows: no advertised function calling; use a tool-capable model and route via Agent Command Center for fallback
  • Strict structured output: no JSON-schema enforcement, so expect to validate and retry (see the sketch after this list)
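
Because no JSON-schema enforcement is advertised, a common mitigation is to validate the reply yourself and retry on failure. A minimal sketch, reusing the agentcc client from the Try it section and a hypothetical ask_json helper you would own:

import json
import os
from agentcc import AgentCC

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],
    base_url="https://gateway.futureagi.com/v1",
)

def ask_json(prompt: str, retries: int = 3) -> dict:
    """Ask for JSON and retry until the reply parses."""
    messages = [{"role": "user", "content": prompt + "\nRespond with JSON only."}]
    for _ in range(retries):
        resp = client.chat.completions.create(
            model="snowflake-cortex/deepseek-r1",
            messages=messages,
        )
        text = resp.choices[0].message.content
        try:
            return json.loads(text)
        except json.JSONDecodeError:
            # Feed the invalid reply back and ask again.
            messages += [
                {"role": "assistant", "content": text},
                {"role": "user", "content": "That was not valid JSON. Reply with valid JSON only."},
            ]
    raise ValueError("Model never returned valid JSON")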

Benchmark scores

Publicly reported benchmarks, with each benchmark's category, evaluation setting, and the delta against a 6-peer average. All entries captured May 12, 2026.

Benchmark             Category    Setting       vs 6-peer avg
MATH-500              math        0-shot        n/a
MMLU                  general     5-shot        ↑7%
HumanEval             code        0-shot        n/a
MMLU-Pro              reasoning   5-shot CoT    n/a
Chatbot Arena ELO     general     overall       ↓6%
AIME 2024             math        0-shot        ↑117%
GPQA Diamond          reasoning   0-shot        n/a
LiveCodeBench         code        pass@1        n/a
Aider Polyglot        code        pass@1        n/a
SWE-bench Verified    agent       agentic       ↑29%

Try it

Call DeepSeek R1 via Agent Command Center

One OpenAI-compatible endpoint. Routing, fallback, semantic caching, guardrails, and cost tracking come along for the ride. First 100K requests + 100K cache hits free every month.

SDK
Native Future AGI client (agentcc / @agentcc/client). Per-call metadata — provider, cost, latency, cache hit, request id — is returned on x-agentcc-* response headers, so any HTTP client can read it.
# DeepSeek R1 via the Agent Command Center Python SDK
# pip install agentcc
import os
from agentcc import AgentCC

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],   # from app.futureagi.com → Settings → API Keys
    base_url="https://gateway.futureagi.com/v1",
)

resp = client.chat.completions.create(
    model="snowflake-cortex/deepseek-r1",
    messages=[{"role": "user", "content": "Hello, DeepSeek R1!"}],
)

print(resp.choices[0].message.content)
print(f"Tokens: {resp.usage.total_tokens}")

# Per-call gateway metadata is returned on x-agentcc-* response headers.
# When you need it programmatically, use .with_raw_response to get them:
raw = client.chat.completions.with_raw_response.create(
    model="snowflake-cortex/deepseek-r1",
    messages=[{"role": "user", "content": "Same call, but I want the headers."}],
)
print("Provider:", raw.headers.get("x-agentcc-provider"))
print("Latency:", raw.headers.get("x-agentcc-latency-ms"), "ms")
print("Cost:   ", raw.headers.get("x-agentcc-cost"), "USD")
print("Cache:  ", raw.headers.get("x-agentcc-cache"))
Set AGENTCC_API_KEY with a key from app.futureagi.com. Gateway docs ↗
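
Since the gateway is OpenAI-compatible, the standard openai Python client should also work when pointed at the same base URL. This is a minimal sketch assuming the same model slug and API key as above.

# Same call through the official openai client (pip install openai),
# assuming the gateway accepts the identical model slug and key.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["AGENTCC_API_KEY"],
    base_url="https://gateway.futureagi.com/v1",
)

resp = client.chat.completions.create(
    model="snowflake-cortex/deepseek-r1",
    messages=[{"role": "user", "content": "Summarize the trade-offs of a 32k context window."}],
)
print(resp.choices[0].message.content)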
Advanced: fallback + cache config (YAML)
strategy: cost-optimized
targets:
  - model: deepseek-r1
    provider: snowflake-cortex
    weight: 80
fallbacks:
  - model: grok-3
    provider: xai
  - model: gpt-5-mini
    provider: openai
guardrails: [pii, prompt-injection, secrets]
cache: { exact: true, semantic: true }

Same model on other providers

deepseek-r1 is also available via 3 other routes. Pricing, regions, and capabilities can differ — compare before routing production traffic.

Provider            Input / 1M   Output / 1M   Verified
Azure AI Foundry    $1.35        $5.40         May 12, 2026
DeepSeek            $0.550       $2.19         May 12, 2026
SambaNova           $5.00        $7.00         May 12, 2026
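
To see how the per-1M-token rates above translate into a request cost, here is a rough arithmetic sketch; the rates come from the table, while the token counts are made up for illustration.

# Rough cost estimate from the per-1M-token rates in the table above.
# Token counts below are illustrative, not measured.
RATES = {
    "Azure AI Foundry": (1.35, 5.40),   # (input $/1M, output $/1M)
    "DeepSeek":         (0.55, 2.19),
    "SambaNova":        (5.00, 7.00),
}

input_tokens, output_tokens = 12_000, 2_000  # example request

for provider, (in_rate, out_rate) in RATES.items():
    cost = input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate
    print(f"{provider}: ${cost:.4f}")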

Compare with similar models

Grouped by Chatbot Arena tier (DeepSeek R1 sits at 1361 ELO).

FAQ

How much does DeepSeek R1 cost?

Public per-token pricing for DeepSeek R1 is not yet published. Submit a source on this page to help us add it.

What is the context window of DeepSeek R1?

DeepSeek R1 supports a 32,768-token context window with up to 8,192 output tokens.

Does DeepSeek R1 support function calling?

DeepSeek R1 does not currently advertise function-calling support. For agentic workloads, prefer a tool-calling-capable model and route via Agent Command Center for fallback.

Is DeepSeek R1 good for production?

DeepSeek R1 is well-suited for multi-step reasoning and analysis tasks. Consider alternatives if you need a long context window: the 32,768-token window is in the bottom quartile and is not ideal for long documents or large RAG corpora.

How can I route to DeepSeek R1 with fallback?

Use Agent Command Center: a single OpenAI-compatible endpoint that supports cost-optimized routing, latency-aware retries, model fallback, and shadow traffic. Configure once, swap models without app changes.

Useful links for DeepSeek R1

Official sources, independent benchmarks, and pricing aggregators — no random search-engine guesses.