Gemini 1.5 Pro 001

Google Vertex AI · chat · Deprecated 354 days ago
Heads-up: Google Vertex AI deprecated Gemini 1.5 Pro 001 on May 24, 2025. If you still depend on it, plan a migration. Use Agent Command Center's model fallback routing to swap models without code changes.

Gemini 1.5 Pro 001 is a Google Vertex AI chat model. It supports a 1,000,000-token context window with up to 8,192 output tokens. Input is priced at $1.25/M tokens and output at $5.00/M tokens. Capabilities include function calling and vision. Route Gemini 1.5 Pro 001 through Future AGI's Agent Command Center for unified observability, caching, and 15 routing strategies, including cost-optimized fallback.

Pricing source: litellm · Last verified: May 7, 2026 · View source ↗
Cost calculator

Estimate Gemini 1.5 Pro 001 spend

Pick a workload and see the estimated bill. The example below assumes ~3,000 input tokens and ~400 output tokens per request, at 5,000 requests/day.

Per request
$0.005750 · in $0.003750 · out $0.002000
Per day
$28.75 · 5,000 requests
Per month
$875 · 152,188 requests

Estimate uses $1.25/M input · $5.00/M output. Provider pricing changes. Production costs vary with retries, streaming overhead, and tool-call rounds.
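
The estimate is plain arithmetic over the per-token rates. A minimal Python sketch that reproduces the numbers above; the 30.4375 days-per-month average is an assumption chosen to match the 152,188-requests figure:

# Reproduce the cost-calculator estimate from the published per-token rates.
INPUT_USD_PER_M = 1.25    # $ per 1M input tokens
OUTPUT_USD_PER_M = 5.00   # $ per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of a single request in USD."""
    return (input_tokens * INPUT_USD_PER_M + output_tokens * OUTPUT_USD_PER_M) / 1_000_000

per_request = request_cost(3_000, 400)   # example workload from the calculator
per_day = per_request * 5_000            # 5,000 requests/day
per_month = per_day * 30.4375            # average days per month (assumption)

print(f"per request: ${per_request:.6f}")  # $0.005750
print(f"per day:     ${per_day:.2f}")      # $28.75
print(f"per month:   ${per_month:.0f}")    # $875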
Want this for free? Cache + route via Agent Command Center — first 100K requests and 100K cache hits free every month.

Pricing

Per-token rates, expressed in USD per 1M tokens. Verified May 7, 2026.

Input $1.25/M
Output $5.00/M
Per image $0.000329
Per minute (audio) $0.001875

Limits

Context window
1,000,000 tokens
Max input
1,000,000 tokens
Max output
8,192 tokens
Modalities
vision, text

Capabilities

  • Function calling ✓ supported (see the sketch after this list)
  • Parallel tool calls ✓ supported
  • Vision input ✓ supported
  • Audio input — not advertised
  • Audio output — not advertised
  • PDF input — not advertised
  • Streaming ✓ supported
  • Structured output ✓ supported
  • Prompt caching — not advertised
  • Reasoning — not advertised
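
Function calling and parallel tool calls use the standard OpenAI-compatible tools parameter when you call the model through the gateway. A minimal sketch, assuming the agentcc client mirrors the OpenAI chat surface shown in the "Try it" section below; the get_weather tool is illustrative, not part of this page:

# Sketch: function calling with Gemini 1.5 Pro 001 through the gateway.
# The get_weather tool is an illustrative example, not a real API on this page.
import os
from agentcc import AgentCC

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],
    base_url="https://gateway.futureagi.com/v1",
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="vertex-ai/gemini-1-5-pro-001",
    messages=[{"role": "user", "content": "What's the weather in Paris and Tokyo right now?"}],
    tools=tools,
)

# With parallel tool calls, one response can carry several tool_calls entries.
for call in resp.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)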

Where it's strong

  • +long-context tasks — context window in the top 8% of peers
  • +parallel tool calls — only 21% of chat models on Future AGI advertise this

Watch out for

  • !already deprecated — provider stopped accepting new traffic 354 days ago

Benchmark scores

Reported public benchmark numbers. Each row links to the source.

Captured May 12, 2026
Try it

Call Gemini 1.5 Pro 001 via Agent Command Center

One OpenAI-compatible endpoint. Routing, fallback, semantic caching, guardrails, and cost tracking come along for the ride. First 100K requests + 100K cache hits free every month.

SDK
Native Future AGI client (agentcc / @agentcc/client). Per-call metadata — provider, cost, latency, cache hit, request id — is returned on x-agentcc-* response headers, so any HTTP client can read it.
# Gemini 1.5 Pro 001 via the Agent Command Center Python SDK
# pip install agentcc
import os
from agentcc import AgentCC

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],   # from app.futureagi.com → Settings → API Keys
    base_url="https://gateway.futureagi.com/v1",
)

resp = client.chat.completions.create(
    model="vertex-ai/gemini-1-5-pro-001",
    messages=[{"role": "user", "content": "Hello, Gemini 1.5 Pro 001!"}],
)

print(resp.choices[0].message.content)
print(f"Tokens: {resp.usage.total_tokens}")

# Per-call gateway metadata is returned on x-agentcc-* response headers.
# When you need it programmatically, use .with_raw_response to get them:
raw = client.chat.completions.with_raw_response.create(
    model="vertex-ai/gemini-1-5-pro-001",
    messages=[{"role": "user", "content": "Same call, but I want the headers."}],
)
print("Provider:", raw.headers.get("x-agentcc-provider"))
print("Latency:", raw.headers.get("x-agentcc-latency-ms"), "ms")
print("Cost:   ", raw.headers.get("x-agentcc-cost"), "USD")
print("Cache:  ", raw.headers.get("x-agentcc-cache"))
Set AGENTCC_API_KEY with a key from app.futureagi.com. Gateway docs ↗
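
Because the endpoint is OpenAI-compatible, any HTTP client can make the same call and read the x-agentcc-* headers directly. A sketch using requests, assuming the standard /chat/completions path:

# Sketch: calling the gateway with plain HTTP instead of the SDK.
# Assumes the standard OpenAI-compatible /chat/completions path.
import os
import requests

resp = requests.post(
    "https://gateway.futureagi.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['AGENTCC_API_KEY']}"},
    json={
        "model": "vertex-ai/gemini-1-5-pro-001",
        "messages": [{"role": "user", "content": "Hello over plain HTTP!"}],
    },
    timeout=60,
)
resp.raise_for_status()

print(resp.json()["choices"][0]["message"]["content"])
print("Provider:", resp.headers.get("x-agentcc-provider"))
print("Cost:    ", resp.headers.get("x-agentcc-cost"), "USD")
print("Cache:   ", resp.headers.get("x-agentcc-cache"))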

Same model on other providers

gemini-1-5-pro-001 is also available via 1 other route. Pricing, regions, and capabilities can differ — compare before routing production traffic.

Provider     Input / 1M    Output / 1M    Verified
Google AI    $3.50/M       $10.50/M       May 7, 2026
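
For the example workload above (3,000 input / 400 output tokens per request), the per-request gap between the two routes is easy to quantify:

# Compare per-request cost of the two published routes for the same workload.
def request_cost(in_tok: int, out_tok: int, in_usd_per_m: float, out_usd_per_m: float) -> float:
    return (in_tok * in_usd_per_m + out_tok * out_usd_per_m) / 1_000_000

vertex_ai = request_cost(3_000, 400, 1.25, 5.00)    # Vertex AI rates on this page
google_ai = request_cost(3_000, 400, 3.50, 10.50)   # Google AI rates from the table above

print(f"Vertex AI: ${vertex_ai:.6f}/request")   # $0.005750
print(f"Google AI: ${google_ai:.6f}/request")   # $0.014700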

Compare with similar models

Gemini 1.5 Pro 001 doesn't have a public Arena ELO score yet, so we group by provider only — quality-tier comparisons need a benchmark.

FAQ

How much does Gemini 1.5 Pro 001 cost?

Input is priced at $1.25 per 1M tokens and output at $5.00 per 1M tokens (Google Vertex AI, last verified May 7, 2026).

What is the context window of Gemini 1.5 Pro 001?

Gemini 1.5 Pro 001 supports a 1,000,000-token context window with up to 8,192 output tokens.

Does Gemini 1.5 Pro 001 support function calling?

Yes — Gemini 1.5 Pro 001 supports function (tool) calling, including parallel tool calls.

Is Gemini 1.5 Pro 001 good for production?

Gemini 1.5 Pro 001 is well-suited to long-context tasks (its context window is in the top 8% of peers) and supports parallel tool calls, which only 21% of chat models on Future AGI advertise. That said, the model is already deprecated: the provider stopped accepting new traffic 354 days ago, so plan a migration rather than starting new production workloads on it.

How can I route to Gemini 1.5 Pro 001 with fallback?

Use Agent Command Center: a single OpenAI-compatible endpoint that supports cost-optimized routing, latency-aware retries, model fallback, and shadow traffic. Configure once, swap models without app changes.
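
The exact fallback configuration lives in the gateway docs. As an illustration only, here is a sketch that passes a hypothetical fallback_models list through the OpenAI-compatible extra_body; the real parameter name and shape may differ:

# Illustration only: a fallback chain for Gemini 1.5 Pro 001.
# "fallback_models" is a HYPOTHETICAL field used for this sketch; consult the
# gateway docs for the actual routing/fallback configuration.
import os
from agentcc import AgentCC

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],
    base_url="https://gateway.futureagi.com/v1",
)

resp = client.chat.completions.create(
    model="vertex-ai/gemini-1-5-pro-001",
    messages=[{"role": "user", "content": "Summarize this 400-page filing."}],
    extra_body={
        # Hypothetical: try the Vertex AI route first, then the Google AI route.
        "fallback_models": ["google-ai/gemini-1-5-pro-001"],
    },
)
print(resp.choices[0].message.content)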

Useful links for Gemini 1.5 Pro 001

Official sources, independent benchmarks, and pricing aggregators — no random search-engine guesses.