Grok 4.20 beta 0309 Reasoning

xAI chat

Grok 4.20 beta 0309 Reasoning is an xAI chat model. It supports a 2,000,000-token context window with up to 2,000,000 output tokens. Input is priced at $2.00/M tokens and output at $6.00/M tokens. Capabilities include function calling, vision, reasoning, and prompt caching. Route Grok 4.20 beta 0309 Reasoning via Future AGI's Agent Command Center for unified observability, caching, and 15 routing strategies including cost-optimized fallback.

Pricing source: litellm · Last verified: May 12, 2026 · View source ↗
Cost calculator

Estimate Grok 4.20 beta 0309 Reasoning spend

Pick a workload, fine-tune the sliders, and see the monthly bill.

Example workload: ~3,000 input tokens / ~400 output tokens per request · 5,000 requests/day · cached input billed at $0.20/M.

  • Per request: $0.008400 (input $0.006000 · output $0.002400)
  • Per day: $42.00 (5,000 requests)
  • Per month: $1,278 (152,188 requests)

Estimate uses $2.00/M input · $6.00/M output. Provider pricing changes. Production costs vary with retries, streaming overhead, and tool-call rounds.
Want this for free? Cache + route via Agent Command Center — first 100K requests and 100K cache hits free every month.
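
The estimate is plain per-token arithmetic, so it is easy to reproduce or adapt to your own workload. A minimal sketch using the published rates; the 30.44-day average month is an assumption about how the monthly figure is derived:

# Reproduce the cost estimate above from the published per-token rates.
# The average-month length (365.25 / 12 ≈ 30.44 days) is an assumption
# about how the monthly figure on this page is computed.
INPUT_RATE = 2.00 / 1_000_000    # USD per input token
OUTPUT_RATE = 6.00 / 1_000_000   # USD per output token
DAYS_PER_MONTH = 365.25 / 12     # ~30.44

input_tokens, output_tokens = 3_000, 400
requests_per_day = 5_000

per_request = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
per_day = per_request * requests_per_day
per_month = per_day * DAYS_PER_MONTH

print(f"per request: ${per_request:.6f}")   # $0.008400
print(f"per day:     ${per_day:.2f}")       # $42.00
print(f"per month:   ${per_month:,.0f}")    # ~$1,278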

Cheaper at a similar Arena tier

Grok 4.3 (xAI) sits in the same Chatbot Arena tier (±80 ELO of Grok 4.20 beta 0309 Reasoning) and runs ~43% cheaper for a typical RAG workload — $723/mo vs $1,278/mo at 3K in / 400 out · 5K reqs/day.

The quality match is based on published benchmark scores; no alternative is suggested when no comparable peer exists.

Compare side-by-side →

Pricing

Per-token rates, expressed in USD per 1M tokens. Verified May 12, 2026.

Input $2.00/M
Output $6.00/M
Cached input $0.200/M

Limits

Context window
2,000,000 tokens
Max input
2,000,000 tokens
Max output
2,000,000 tokens
Modalities
vision, text

Capabilities

  • Function calling ✓ supported
  • Parallel tool calls — not advertised
  • Vision input ✓ supported
  • Audio input — not advertised
  • Audio output — not advertised
  • PDF input — not advertised
  • Streaming ✓ supported
  • Structured output — not advertised
  • Prompt caching ✓ supported
  • Reasoning ✓ supported

Where it's strong

  • +long-context tasks — context window in the top 1% of peers
  • +long-form generation — 2,000,000-token max output, the largest among tracked peers
  • +prompt caching — only 23% of chat models on Future AGI advertise this (see the savings sketch after this list)
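
The cached-input rate is 10× cheaper than the standard input rate, which matters when a long system prompt repeats across requests. A minimal savings sketch using the published rates; the token counts are illustrative, not measured:

# Illustrative prompt-caching savings, using the rates published on this page.
# Assumes a 2,000-token system prompt is reused across requests and served
# from cache; token counts are hypothetical.
INPUT_RATE = 2.00 / 1_000_000    # USD per input token
CACHED_RATE = 0.20 / 1_000_000   # USD per cached input token

system_tokens = 2_000   # repeated prefix (cacheable)
user_tokens = 1_000     # fresh tokens per request

uncached = (system_tokens + user_tokens) * INPUT_RATE
cached = system_tokens * CACHED_RATE + user_tokens * INPUT_RATE

print(f"uncached input cost/request: ${uncached:.6f}")  # $0.006000
print(f"cached input cost/request:   ${cached:.6f}")    # $0.002400
print(f"savings on input:            {1 - cached / uncached:.0%}")  # 60%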

Watch out for

  • !strict structured output — no JSON-schema enforcement advertised; validate responses client-side and expect retry loops (see the sketch after this list)
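
Since strict structured output is not advertised, the usual mitigation is to validate the model's JSON client-side and retry on failure. A minimal sketch using the Agent Command Center SDK shown further down this page; the required keys and retry budget are illustrative:

# Client-side JSON validation with retries. The gateway endpoint and model id
# follow the SDK example below; the schema check itself is illustrative.
import json
import os
from agentcc import AgentCC

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],
    base_url="https://gateway.futureagi.com/v1",
)

REQUIRED_KEYS = {"title", "summary"}  # illustrative schema

def structured_call(prompt: str, max_retries: int = 3) -> dict:
    for attempt in range(max_retries):
        resp = client.chat.completions.create(
            model="xai/grok-4-20-beta-0309-reasoning",
            messages=[
                {"role": "system", "content": "Reply with a JSON object containing 'title' and 'summary'."},
                {"role": "user", "content": prompt},
            ],
        )
        text = resp.choices[0].message.content
        try:
            data = json.loads(text)
            if isinstance(data, dict) and REQUIRED_KEYS <= data.keys():
                return data
        except json.JSONDecodeError:
            pass  # malformed JSON: fall through and retry
    raise ValueError(f"no valid JSON after {max_retries} attempts")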

Benchmark scores

Reported public benchmark numbers. Each row links to the source.

Chatbot Arena ELO (general · overall): 1477
Captured May 12, 2026
Try it

Call Grok 4.20 beta 0309 Reasoning via Agent Command Center

One OpenAI-compatible endpoint. Routing, fallback, semantic caching, guardrails, and cost tracking come along for the ride. First 100K requests + 100K cache hits free every month.

SDK
Native Future AGI client (agentcc / @agentcc/client). Per-call metadata — provider, cost, latency, cache hit, request id — is returned on x-agentcc-* response headers, so any HTTP client can read it.
# Grok 4.20 beta 0309 Reasoning via the Agent Command Center Python SDK
# pip install agentcc
import os
from agentcc import AgentCC

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],   # from app.futureagi.com → Settings → API Keys
    base_url="https://gateway.futureagi.com/v1",
)

resp = client.chat.completions.create(
    model="xai/grok-4-20-beta-0309-reasoning",
    messages=[{"role": "user", "content": "Hello, Grok 4.20 beta 0309 Reasoning!"}],
)

print(resp.choices[0].message.content)
print(f"Tokens: {resp.usage.total_tokens}")

# Per-call gateway metadata is returned on x-agentcc-* response headers.
# When you need it programmatically, use .with_raw_response to get them:
raw = client.chat.completions.with_raw_response.create(
    model="xai/grok-4-20-beta-0309-reasoning",
    messages=[{"role": "user", "content": "Same call, but I want the headers."}],
)
print("Provider:", raw.headers.get("x-agentcc-provider"))
print("Latency:", raw.headers.get("x-agentcc-latency-ms"), "ms")
print("Cost:   ", raw.headers.get("x-agentcc-cost"), "USD")
print("Cache:  ", raw.headers.get("x-agentcc-cache"))
Set AGENTCC_API_KEY with a key from app.futureagi.com. Gateway docs ↗
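
Because the gateway exposes an OpenAI-compatible endpoint, the stock OpenAI Python client can also be pointed at it. A minimal sketch, assuming the endpoint accepts the standard chat-completions request shape; the base URL and model id mirror the SDK example above:

# Hypothetical: stock OpenAI Python client against the OpenAI-compatible gateway.
# pip install openai
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["AGENTCC_API_KEY"],
    base_url="https://gateway.futureagi.com/v1",
)

resp = client.chat.completions.create(
    model="xai/grok-4-20-beta-0309-reasoning",
    messages=[{"role": "user", "content": "Hello from the OpenAI client!"}],
)
print(resp.choices[0].message.content)

The x-agentcc-* metadata headers are ordinary HTTP response headers, so they remain readable from whichever client you use.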
Advanced: fallback + cache config (YAML)
strategy: cost-optimized                        # routing strategy (one of the gateway's 15)
targets:
  - model: grok-4-20-beta-0309-reasoning        # primary model
    provider: xai
    weight: 80                                  # routing weight
fallbacks:                                      # tried in order when the primary is unavailable
  - model: grok-4-3
    provider: xai
  - model: claude-opus-4-6-20260205
    provider: anthropic
guardrails: [pii, prompt-injection, secrets]    # built-in request/response checks
cache: { exact: true, semantic: true }          # exact-match and semantic caching

Compare with similar models

Grouped by Chatbot Arena tier (Grok 4.20 beta 0309 Reasoning sits at 1477 ELO).

FAQ

How much does Grok 4.20 beta 0309 Reasoning cost?

Input is priced at $2.00 per 1M tokens and output at $6.00 per 1M tokens (xAI, last verified May 12, 2026).

What is the context window of Grok 4.20 beta 0309 Reasoning?

Grok 4.20 beta 0309 Reasoning supports a 2,000,000-token context window with up to 2,000,000 output tokens.

Does Grok 4.20 beta 0309 Reasoning support function calling?

Yes — Grok 4.20 beta 0309 Reasoning supports function (tool) calling.

Is Grok 4.20 beta 0309 Reasoning good for production?

Grok 4.20 beta 0309 Reasoning is well-suited for long-context tasks (its context window is in the top 1% of peers) and long-form generation (its 2,000,000-token max output is the largest among tracked peers). Consider alternatives if you need strict structured output: no JSON-schema enforcement is advertised, so plan for client-side validation and retry loops.

How can I route to Grok 4.20 beta 0309 Reasoning with fallback?

Use Agent Command Center: a single OpenAI-compatible endpoint that supports cost-optimized routing, latency-aware retries, model fallback, and shadow traffic. Configure once, swap models without app changes.

Useful links for Grok 4.20 beta 0309 Reasoning

Official sources, independent benchmarks, and pricing aggregators — no random search-engine guesses.