Gemini 2.0 Flash Thinking exp 01.21

Google Vertex AI · chat · Deprecated 162 days ago
Heads-up: Google Vertex AI deprecated Gemini 2.0 Flash Thinking exp 01.21 on Dec 2, 2025. Plan a migration. Use Agent Command Center's model fallback routing to swap models without code changes.

Gemini 2.0 Flash Thinking exp 01.21 is a Google Vertex AI chat model. It supports a 1,048,576-token context window with up to 65,536 output tokens. Capabilities include vision, reasoning, and prompt caching. Route Gemini 2.0 Flash Thinking exp 01.21 via Future AGI's Agent Command Center for unified observability, caching, and 15 routing strategies including cost-optimized fallback.

Pricing source: unknown · Last verified: May 7, 2026
Pricing not yet public

We don't have verified per-token pricing for Gemini 2.0 Flash Thinking exp 01.21 yet. If you have a source from Google Vertex AI's documentation, help us add it — your submission gets reviewed within 48 hours.

Pricing

Per-token rates, expressed in USD per 1M tokens. Verified May 7, 2026.

  • Input: — (not yet published)
  • Output: — (not yet published)

Limits

  • Context window: 1,048,576 tokens
  • Max input: 1,048,576 tokens
  • Max output: 65,536 tokens
  • Modalities: vision, text

Capabilities

  • Function calling — not advertised
  • Parallel tool calls ✓ supported
  • Vision input ✓ supported
  • Audio input — not advertised
  • Audio output — not advertised
  • PDF input — not advertised
  • Streaming ✓ supported
  • Structured output — not advertised
  • Prompt caching ✓ supported
  • Reasoning ✓ supported

Where it's strong

  • +long-context tasks — context window in the top 2% of peers (usage sketch after this list)
  • +parallel tool calls — only 21% of chat models on Future AGI advertise this
  • +prompt caching — only 23% of chat models on Future AGI advertise this
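A minimal long-context sketch, assuming the OpenAI-compatible usage fields shown in the SDK example further down this page; the input file, the system prompt, and the max_tokens parameter are illustrative assumptions, not part of the catalog data.

# Long-context sketch: send a large document and confirm the call stayed inside the limits above.
import os
from agentcc import AgentCC

CONTEXT_WINDOW = 1_048_576   # total tokens (input + output) for this model
MAX_OUTPUT = 65_536          # per-call output ceiling

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],
    base_url="https://gateway.futureagi.com/v1",
)

with open("contract_bundle.txt") as f:   # hypothetical large input file
    long_document = f.read()

resp = client.chat.completions.create(
    model="vertex-ai/gemini-2-0-flash-thinking-exp-01-21",
    messages=[
        {"role": "system", "content": "Summarise the key obligations in these documents."},
        {"role": "user", "content": long_document},
    ],
    max_tokens=min(4096, MAX_OUTPUT),  # assumed OpenAI-compatible parameter
)

print(resp.choices[0].message.content)
print(f"Tokens used: {resp.usage.total_tokens} of {CONTEXT_WINDOW}")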

Watch out for

  • !agentic workflows — no advertised function-calling; use a tool-capable model and route via Agent Command Center for fallback
  • !strict structured output — no JSON-schema enforcement; validate and retry client-side (sketched after this list)
  • !already deprecated — provider stopped accepting new traffic 162 days ago
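Since the model does not advertise JSON-schema enforcement, a client-side validate-and-retry loop is the usual workaround. A minimal sketch, assuming only the OpenAI-compatible chat call from the SDK example on this page; the prompt, keys, and retry count are illustrative.

# Client-side guard for structured output: ask for JSON, validate, retry on failure.
import json
import os
from agentcc import AgentCC

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],
    base_url="https://gateway.futureagi.com/v1",
)

PROMPT = 'Return ONLY a JSON object like {"sentiment": "positive|neutral|negative", "score": 0.0}'

def classify(text: str, max_retries: int = 3) -> dict:
    for attempt in range(max_retries):
        resp = client.chat.completions.create(
            model="vertex-ai/gemini-2-0-flash-thinking-exp-01-21",
            messages=[
                {"role": "system", "content": PROMPT},
                {"role": "user", "content": text},
            ],
        )
        raw = resp.choices[0].message.content.strip()
        try:
            data = json.loads(raw)
            if isinstance(data, dict) and {"sentiment", "score"} <= data.keys():
                return data
        except json.JSONDecodeError:
            pass  # malformed JSON: fall through and retry
    raise ValueError(f"No valid JSON after {max_retries} attempts")

print(classify("The onboarding flow was painless and fast."))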

Benchmarks pending

We haven't logged public benchmark scores for Gemini 2.0 Flash Thinking exp 01.21 yet. Have one to contribute? Submit a source — citations help us prioritise.

Try it

Call Gemini 2.0 Flash Thinking exp 01.21 via Agent Command Center

One OpenAI-compatible endpoint. Routing, fallback, semantic caching, guardrails, and cost tracking come along for the ride. First 100K requests + 100K cache hits free every month.

SDK
Native Future AGI client (agentcc / @agentcc/client). Per-call metadata — provider, cost, latency, cache hit, request id — is returned on x-agentcc-* response headers, so any HTTP client can read it.
# Gemini 2.0 Flash Thinking exp 01.21 via the Agent Command Center Python SDK
# pip install agentcc
import os
from agentcc import AgentCC

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],   # from app.futureagi.com → Settings → API Keys
    base_url="https://gateway.futureagi.com/v1",
)

resp = client.chat.completions.create(
    model="vertex-ai/gemini-2-0-flash-thinking-exp-01-21",
    messages=[{"role": "user", "content": "Hello, Gemini 2.0 Flash Thinking exp 01.21!"}],
)

print(resp.choices[0].message.content)
print(f"Tokens: {resp.usage.total_tokens}")

# Per-call gateway metadata is returned on x-agentcc-* response headers.
# When you need it programmatically, use .with_raw_response to get them:
raw = client.chat.completions.with_raw_response.create(
    model="vertex-ai/gemini-2-0-flash-thinking-exp-01-21",
    messages=[{"role": "user", "content": "Same call, but I want the headers."}],
)
print("Provider:", raw.headers.get("x-agentcc-provider"))
print("Latency:", raw.headers.get("x-agentcc-latency-ms"), "ms")
print("Cost:   ", raw.headers.get("x-agentcc-cost"), "USD")
print("Cache:  ", raw.headers.get("x-agentcc-cache"))
Set AGENTCC_API_KEY with a key from app.futureagi.com. Gateway docs ↗
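Streaming is listed as a supported capability above. A minimal sketch, assuming the gateway honours the OpenAI-compatible stream=True flag and yields delta chunks; the prompt is illustrative.

# Streaming sketch: print tokens as they arrive instead of waiting for the full reply.
import os
from agentcc import AgentCC

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],
    base_url="https://gateway.futureagi.com/v1",
)

stream = client.chat.completions.create(
    model="vertex-ai/gemini-2-0-flash-thinking-exp-01-21",
    messages=[{"role": "user", "content": "Walk me through your reasoning on 17 * 24."}],
    stream=True,  # assumed OpenAI-compatible streaming flag
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()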

Same model on other providers

gemini-2-0-flash-thinking-exp-01-21 is also available via 1 other route. Pricing, regions, and capabilities can differ — compare before routing production traffic.

Provider     Input / 1M    Output / 1M    Verified
Google AI    —             —              May 7, 2026

Compare with similar models

Gemini 2.0 Flash Thinking exp 01.21 doesn't have a public Arena ELO score yet, so we group by provider only — quality-tier comparisons need a benchmark.

FAQ

How much does Gemini 2.0 Flash Thinking exp 01.21 cost?

Public per-token pricing for Gemini 2.0 Flash Thinking exp 01.21 is not yet published. Submit a source on this page to help us add it.

What is the context window of Gemini 2.0 Flash Thinking exp 01.21?

Gemini 2.0 Flash Thinking exp 01.21 supports a 1,048,576-token context window with up to 65,536 output tokens.

Does Gemini 2.0 Flash Thinking exp 01.21 support function calling?

Gemini 2.0 Flash Thinking exp 01.21 does not currently advertise function-calling support. For agentic workloads, prefer a tool-calling-capable model and route via Agent Command Center for fallback.

Is Gemini 2.0 Flash Thinking exp 01.21 good for production?

Gemini 2.0 Flash Thinking exp 01.21 is well suited to long-context tasks (its context window is in the top 2% of peers) and to parallel tool calls, which only 21% of chat models on Future AGI advertise. Consider alternatives for agentic workflows: it does not advertise function calling, so prefer a tool-capable model and route via Agent Command Center for fallback.

How can I route to Gemini 2.0 Flash Thinking exp 01.21 with fallback?

Use Agent Command Center: a single OpenAI-compatible endpoint that supports cost-optimized routing, latency-aware retries, model fallback, and shadow traffic. Configure once, swap models without app changes.
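A minimal client-side sketch of the fallback idea, using only the OpenAI-compatible create call from the SDK example above. The gateway's own fallback strategies are configured server-side in Agent Command Center rather than in application code, and the candidate model identifiers below are hypothetical, included only to show the order of preference.

# Client-side fallback sketch: try the primary model, fall back to alternatives on error.
import os
from agentcc import AgentCC

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],
    base_url="https://gateway.futureagi.com/v1",
)

# Hypothetical candidate order; the gateway can apply the same idea server-side.
CANDIDATES = [
    "vertex-ai/gemini-2-0-flash-thinking-exp-01-21",
    "google-ai/gemini-2-0-flash-thinking-exp-01-21",
]

def chat_with_fallback(messages):
    last_error = None
    for model in CANDIDATES:
        try:
            return client.chat.completions.create(model=model, messages=messages)
        except Exception as err:  # deprecation, rate limits, transient 5xx, etc.
            last_error = err
    raise RuntimeError("All candidate models failed") from last_error

resp = chat_with_fallback([{"role": "user", "content": "Summarise this incident report."}])
print(resp.choices[0].message.content)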

Useful links for Gemini 2.0 Flash Thinking exp 01.21

Official sources, independent benchmarks, and pricing aggregators — no random search-engine guesses.