Gemini 2.0 Flash Live preview 04.09

Google Vertex AI chat

Gemini 2.0 Flash Live preview 04.09 is a Google Vertex AI chat model. It supports a 1,048,576-token context window with up to 65,535 output tokens. Input is priced at $0.500/M tokens and output at $2.00/M tokens. Capabilities include function calling, vision, reasoning, and prompt caching. Route Gemini 2.0 Flash Live preview 04.09 via Future AGI's Agent Command Center for unified observability, caching, and 15 routing strategies including cost-optimized fallback.

Pricing source: litellm · Last verified: May 7, 2026 · View source ↗
Cost calculator

Estimate Gemini 2.0 Flash Live preview 04.09 spend

Pick a workload and see the estimated bill. The default example assumes ~3,000 input tokens and ~400 output tokens per request at 5,000 requests per day; cached input bills at $0.0750/M.

Per request: $0.002300 (input $0.001500 · output $0.000800)
Per day: $11.50 (5,000 requests)
Per month: ~$350 (152,188 requests)

Estimate uses $0.500/M input · $2.00/M output. Provider pricing changes. Production costs vary with retries, streaming overhead, and tool-call rounds.
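
Want to reproduce the estimate? The arithmetic is just tokens times the per-million rate. A minimal sketch in Python, using the listed rates and the example workload above (the workload numbers are illustrative, not measured):
# Rough monthly cost estimate for Gemini 2.0 Flash Live preview 04.09
# using the listed rates: $0.50/M input, $2.00/M output.
INPUT_RATE = 0.50 / 1_000_000   # USD per input token
OUTPUT_RATE = 2.00 / 1_000_000  # USD per output token

def per_request_cost(input_tokens: int, output_tokens: int) -> float:
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

req = per_request_cost(3_000, 400)    # example workload from the calculator
per_day = req * 5_000                 # 5,000 requests/day
per_month = per_day * 30.4375         # average days per month

print(f"per request: ${req:.6f}")     # ~$0.002300
print(f"per day:     ${per_day:.2f}") # ~$11.50
print(f"per month:   ${per_month:.2f}")  # ~$350
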
Want this for free? Cache + route via Agent Command Center — first 100K requests and 100K cache hits free every month.

Pricing

Per-token rates, expressed in USD per 1M tokens. Verified May 7, 2026.

Input $0.500/M
Output $2.00/M
Cached input $0.0750/M (see the sketch below)
Per image $0.000003
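
Prompt caching changes the input side of that math: cached input bills at $0.0750/M instead of $0.500/M. A small sketch of the saving on a prompt with a reusable prefix; the 2,500-token cached prefix is an illustrative number, not a measurement, and how a cache hit is triggered depends on the gateway and provider:
# Input-cost comparison: uncached vs. prompt-cached prefix.
# Rates taken from the pricing table above (USD per token).
INPUT_RATE = 0.50 / 1_000_000
CACHED_RATE = 0.075 / 1_000_000

prompt_tokens = 3_000
cached_prefix = 2_500   # illustrative: a reusable system prompt / few-shot block

uncached = prompt_tokens * INPUT_RATE
cached = cached_prefix * CACHED_RATE + (prompt_tokens - cached_prefix) * INPUT_RATE

print(f"uncached input cost: ${uncached:.6f}")            # $0.001500
print(f"cached input cost:   ${cached:.6f}")              # ~$0.00044
print(f"saving on input:     {1 - cached / uncached:.0%}")  # ~71%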

Limits

Context window
1,048,576 tokens
Max input
1,048,576 tokens
Max output
65,535 tokens
Modalities
vision, audio_out, pdf, text

Capabilities

  • Function calling ✓ supported (see the sketch after this list)
  • Parallel tool calls — not advertised
  • Vision input ✓ supported
  • Audio input — not advertised
  • Audio output ✓ supported
  • PDF input ✓ supported
  • Streaming ✓ supported
  • Structured output ✓ supported
  • Prompt caching ✓ supported
  • Reasoning ✓ supported
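
Because the gateway is OpenAI-compatible, function calling should follow the familiar tools schema. A hedged sketch: the get_weather tool is made up for illustration, and it assumes OpenAI-style tool definitions are passed through to the model unchanged.
# Hedged sketch: function calling through the OpenAI-compatible gateway.
# "get_weather" is a made-up example tool; the tools/tool_calls shapes assume
# the standard OpenAI chat-completions schema is forwarded as-is.
import json, os
from agentcc import AgentCC

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],
    base_url="https://gateway.futureagi.com/v1",
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="vertex-ai/gemini-2-0-flash-live-preview-04-09",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

for call in resp.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))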

Where it's strong

  • +long-context tasks — context window in the top 2% of peers (see the sketch after this list)
  • +audio output — only 3% of chat models on Future AGI advertise this
  • +PDF input — only 16% of chat models on Future AGI advertise this
  • +prompt caching — only 23% of chat models on Future AGI advertise this
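
The 1,048,576-token window often lets a whole document ride along as a single message instead of a retrieval pipeline. A rough sketch, using the common ~4-characters-per-token heuristic as an estimate only (report.txt is a placeholder path):
# Hedged sketch: stuffing a long document into the 1M-token context window.
# Token count is estimated with the rough ~4 chars/token heuristic, not a real tokenizer.
import os
from agentcc import AgentCC

CONTEXT_WINDOW = 1_048_576
MAX_OUTPUT = 65_535

text = open("report.txt", encoding="utf-8").read()   # placeholder document
approx_tokens = len(text) // 4
if approx_tokens > CONTEXT_WINDOW - MAX_OUTPUT:
    raise ValueError(f"~{approx_tokens} tokens likely exceeds the usable window")

client = AgentCC(api_key=os.environ["AGENTCC_API_KEY"],
                 base_url="https://gateway.futureagi.com/v1")

resp = client.chat.completions.create(
    model="vertex-ai/gemini-2-0-flash-live-preview-04-09",
    messages=[
        {"role": "system", "content": "Answer only from the provided document."},
        {"role": "user", "content": f"{text}\n\nSummarise the key findings."},
    ],
)
print(resp.choices[0].message.content)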

Watch out for

  • No major caveats flagged from public spec.

Benchmarks pending

We haven't logged public benchmark scores for Gemini 2.0 Flash Live preview 04.09 yet. Have one to contribute? Submit a source — citations help us prioritise.

Try it

Call Gemini 2.0 Flash Live preview 04.09 via Agent Command Center

One OpenAI-compatible endpoint. Routing, fallback, semantic caching, guardrails, and cost tracking come along for the ride. First 100K requests + 100K cache hits free every month.

SDK
Native Future AGI client (agentcc / @agentcc/client). Per-call metadata — provider, cost, latency, cache hit, request id — is returned on x-agentcc-* response headers, so any HTTP client can read it.
# Gemini 2.0 Flash Live preview 04.09 via the Agent Command Center Python SDK
# pip install agentcc
import os
from agentcc import AgentCC

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],   # from app.futureagi.com → Settings → API Keys
    base_url="https://gateway.futureagi.com/v1",
)

resp = client.chat.completions.create(
    model="vertex-ai/gemini-2-0-flash-live-preview-04-09",
    messages=[{"role": "user", "content": "Hello, Gemini 2.0 Flash Live preview 04.09!"}],
)

print(resp.choices[0].message.content)
print(f"Tokens: {resp.usage.total_tokens}")

# Per-call gateway metadata is returned on x-agentcc-* response headers.
# When you need it programmatically, use .with_raw_response to get them:
raw = client.chat.completions.with_raw_response.create(
    model="vertex-ai/gemini-2-0-flash-live-preview-04-09",
    messages=[{"role": "user", "content": "Same call, but I want the headers."}],
)
print("Provider:", raw.headers.get("x-agentcc-provider"))
print("Latency:", raw.headers.get("x-agentcc-latency-ms"), "ms")
print("Cost:   ", raw.headers.get("x-agentcc-cost"), "USD")
print("Cache:  ", raw.headers.get("x-agentcc-cache"))
Set AGENTCC_API_KEY with a key from app.futureagi.com. Gateway docs ↗
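
Because the per-call metadata rides on response headers, any HTTP client can read it without the SDK. A sketch with requests, assuming the standard OpenAI-compatible /chat/completions path under the gateway base URL shown above:
# Hedged sketch: plain HTTP against the gateway, reading x-agentcc-* headers.
# Assumes the usual OpenAI-compatible /chat/completions path under the base URL.
import os
import requests

r = requests.post(
    "https://gateway.futureagi.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['AGENTCC_API_KEY']}"},
    json={
        "model": "vertex-ai/gemini-2-0-flash-live-preview-04-09",
        "messages": [{"role": "user", "content": "Hello from plain HTTP"}],
    },
    timeout=60,
)
r.raise_for_status()
print(r.json()["choices"][0]["message"]["content"])
print("Provider:", r.headers.get("x-agentcc-provider"))
print("Latency: ", r.headers.get("x-agentcc-latency-ms"), "ms")
print("Cost:    ", r.headers.get("x-agentcc-cost"), "USD")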

Compare with similar models

Gemini 2.0 Flash Live preview 04.09 doesn't have a public Arena ELO score yet, so we group by provider only — quality-tier comparisons need a benchmark.

FAQ

How much does Gemini 2.0 Flash Live preview 04.09 cost?

Input is priced at $0.500 per 1M tokens and output at $2.00 per 1M tokens (Google Vertex AI, last verified May 7, 2026).

What is the context window of Gemini 2.0 Flash Live preview 04.09?

Gemini 2.0 Flash Live preview 04.09 supports a 1,048,576-token context window with up to 65,535 output tokens.

Does Gemini 2.0 Flash Live preview 04.09 support function calling?

Yes — Gemini 2.0 Flash Live preview 04.09 supports function (tool) calling.

Is Gemini 2.0 Flash Live preview 04.09 good for production?

Gemini 2.0 Flash Live preview 04.09 is well suited to long-context work (its context window is in the top 2% of peers) and to audio output, which only 3% of chat models on Future AGI advertise.

How can I route to Gemini 2.0 Flash Live preview 04.09 with fallback?

Use Agent Command Center: a single OpenAI-compatible endpoint that supports cost-optimized routing, latency-aware retries, model fallback, and shadow traffic. Configure once, swap models without app changes.
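
Routing and fallback live in the gateway, so no client changes are required. Purely as an illustration of the idea, a client-side equivalent looks like this (the secondary model id is hypothetical):
# Hedged sketch: client-side fallback across two model ids on the same endpoint.
# Gateway-side routing strategies make this unnecessary; shown only to illustrate the idea.
import os
from agentcc import AgentCC

client = AgentCC(api_key=os.environ["AGENTCC_API_KEY"],
                 base_url="https://gateway.futureagi.com/v1")

MODELS = [
    "vertex-ai/gemini-2-0-flash-live-preview-04-09",
    "vertex-ai/gemini-2-0-flash",   # hypothetical fallback id
]

def ask(prompt: str):
    last_err = None
    for model in MODELS:
        try:
            return client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
            )
        except Exception as err:   # real code would catch specific error types
            last_err = err
    raise last_err

print(ask("Ping").choices[0].message.content)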

Useful links for Gemini 2.0 Flash Live preview 04.09

Official sources, independent benchmarks, and pricing aggregators — no random search-engine guesses.