GPT-4o

OpenAI chat

GPT-4o is an OpenAI chat model. It supports a 128,000-token context window with up to 16,384 output tokens. Input is priced at $2.50/M tokens and output at $10.00/M tokens. Capabilities include function calling, vision, and prompt caching. Route GPT-4o via Future AGI's Agent Command Center for unified observability, caching, and 15 routing strategies, including cost-optimized fallback.

Pricing source: litellm · Last verified: May 12, 2026 · View source ↗
Cost calculator

Estimate GPT-4o spend

Example workload: ~3,000 input / ~400 output tokens per request · 5,000 requests/day · cached input @ $1.25/M.

Per request: $0.0115 (input $0.0075 · output $0.0040)
Per day: $57.50 (5,000 requests)
Per month: ~$1,750 (≈152,188 requests)

Estimate uses $2.50/M input · $10.00/M output. Provider pricing changes. Production costs vary with retries, streaming overhead, and tool-call rounds.
Want this for free? Cache + route via Agent Command Center — first 100K requests and 100K cache hits free every month.
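The estimate above is simple per-token arithmetic. A minimal sketch of the same math in Python, using the listed rates and a 30.44-day average month:

# Back-of-envelope GPT-4o spend for the example workload above.
INPUT_PER_M, OUTPUT_PER_M = 2.50, 10.00    # USD per 1M tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    return input_tokens / 1e6 * INPUT_PER_M + output_tokens / 1e6 * OUTPUT_PER_M

per_request = request_cost(3_000, 400)     # $0.0115
per_day = per_request * 5_000              # $57.50
per_month = per_day * 30.4375              # ≈ $1,750 over an average month
print(f"${per_request:.4f}/req  ${per_day:.2f}/day  ${per_month:,.0f}/month")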

Pricing

Per-token rates, expressed in USD per 1M tokens. Verified May 12, 2026.

Input $2.50/M
Output $10.00/M
Cached input $1.25/M
Batch input $1.25/M
Batch output $5.00/M
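Cached and batch rates halve the bill for the tokens they apply to. A quick comparison for a 3,000-token prompt where 2,000 tokens hit the prompt cache, using the rates above:

# Prompt-cache and batch discounts on GPT-4o input (rates from the table above).
full   = 3_000 / 1e6 * 2.50                        # $0.00750, no caching
cached = 1_000 / 1e6 * 2.50 + 2_000 / 1e6 * 1.25   # $0.00500, 2K tokens cached
batch  = 3_000 / 1e6 * 1.25                        # $0.00375, batch input rate
print(f"full ${full:.5f}  cached ${cached:.5f} ({1 - cached/full:.0%} off)  batch ${batch:.5f}")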

Limits

Context window
128,000 tokens
Max input
128,000 tokens
Max output
16,384 tokens
Modalities
vision, pdf, text

Capabilities

  • Function calling ✓ supported (see the sketch after this list)
  • Parallel tool calls ✓ supported
  • Vision input ✓ supported
  • Audio input — not advertised
  • Audio output — not advertised
  • PDF input ✓ supported
  • Streaming ✓ supported
  • Structured output ✓ supported
  • Prompt caching ✓ supported
  • Reasoning — not advertised
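
Function and tool calling go through the standard OpenAI tools parameter on the gateway's chat completions endpoint. A minimal sketch, assuming the same agentcc client as the SDK example further down; the get_weather tool is hypothetical, for illustration only:

# Tool calling with GPT-4o via the gateway (sketch; get_weather is hypothetical)
import os, json
from agentcc import AgentCC

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],
    base_url="https://gateway.futureagi.com/v1",
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # illustrative tool, not a real API
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# GPT-4o supports parallel tool calls, so there may be more than one.
for call in resp.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))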

Where it's strong

  • +pricing — cheaper than 81% of priced chat models on Future AGI
  • +PDF input — only 16% of chat models on Future AGI advertise this
  • +parallel tool calls — only 21% of chat models on Future AGI advertise this
  • +prompt caching — only 23% of chat models on Future AGI advertise this

Watch out for

  • No major caveats flagged from public spec.

Benchmark scores

Reported public benchmark numbers. Each row links to the source.

HumanEval · code · 0-shot · Captured May 12, 2026
MMLU · general · 5-shot · Captured May 12, 2026
IFEval · general · 0-shot · Captured May 12, 2026
MATH · math · 0-shot · Captured May 12, 2026
MMMU · multimodal · 0-shot · Captured May 12, 2026
Chatbot Arena ELO · general · overall · Captured May 12, 2026
GPQA · reasoning · 0-shot · Captured May 12, 2026
Try it

Call GPT-4o via Agent Command Center

One OpenAI-compatible endpoint. Routing, fallback, semantic caching, guardrails, and cost tracking come along for the ride. First 100K requests + 100K cache hits free every month.

SDK
Native Future AGI client (agentcc / @agentcc/client). Per-call metadata — provider, cost, latency, cache hit, request id — is returned on x-agentcc-* response headers, so any HTTP client can read it.
# GPT-4o via the Agent Command Center Python SDK
# pip install agentcc
import os
from agentcc import AgentCC

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],   # from app.futureagi.com → Settings → API Keys
    base_url="https://gateway.futureagi.com/v1",
)

resp = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Hello, GPT-4o!"}],
)

print(resp.choices[0].message.content)
print(f"Tokens: {resp.usage.total_tokens}")

# Per-call gateway metadata is returned on x-agentcc-* response headers.
# When you need it programmatically, use .with_raw_response to get them:
raw = client.chat.completions.with_raw_response.create(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Same call, but I want the headers."}],
)
print("Provider:", raw.headers.get("x-agentcc-provider"))
print("Latency:", raw.headers.get("x-agentcc-latency-ms"), "ms")
print("Cost:   ", raw.headers.get("x-agentcc-cost"), "USD")
print("Cache:  ", raw.headers.get("x-agentcc-cache"))
Set AGENTCC_API_KEY with a key from app.futureagi.com. Gateway docs ↗
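
Because the gateway speaks the OpenAI chat completions protocol and returns per-call metadata on plain response headers, no SDK is strictly required. A minimal sketch with requests, assuming the same /v1/chat/completions route as above:

# Same call over plain HTTP; any client can read the x-agentcc-* headers.
import os
import requests

resp = requests.post(
    "https://gateway.futureagi.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['AGENTCC_API_KEY']}"},
    json={
        "model": "openai/gpt-4o",
        "messages": [{"role": "user", "content": "Hello, GPT-4o!"}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
for header in ("x-agentcc-provider", "x-agentcc-cost", "x-agentcc-cache"):
    print(header, resp.headers.get(header))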
Advanced: fallback + cache config (YAML)
strategy: cost-optimized
targets:
  - model: gpt-4o
    provider: openai
    weight: 80
fallbacks:
  - model: gpt-5-nano
    provider: azure-openai
  - model: deepseek-v3
    provider: azure-ai-foundry
guardrails: [pii, prompt-injection, secrets]
cache: { exact: true, semantic: true }

Same model on other providers

gpt-4o is also available via 2 other routes. Pricing, regions, and capabilities can differ — compare before routing production traffic.

Provider · Input / 1M · Output / 1M · Verified
Azure OpenAI · $2.50/M · $10.00/M · May 12, 2026
GitHub Copilot · not listed · not listed · May 12, 2026

Compare with similar models

Grouped by Chatbot Arena tier (GPT-4o sits at 1265 ELO).

FAQ

How much does GPT-4o cost?

Input is priced at $2.50 per 1M tokens and output at $10.00 per 1M tokens (OpenAI, last verified May 12, 2026).

What is the context window of GPT-4o?

GPT-4o supports a 128,000-token context window with up to 16,384 output tokens.

Does GPT-4o support function calling?

Yes — GPT-4o supports function (tool) calling, including parallel tool calls.

Is GPT-4o good for production?

GPT-4o is strong on pricing (cheaper than 81% of priced chat models on Future AGI) and on PDF input (only 16% of chat models on Future AGI advertise it).

How can I route to GPT-4o with fallback?

Use Agent Command Center: a single OpenAI-compatible endpoint that supports cost-optimized routing, latency-aware retries, model fallback, and shadow traffic. Configure once, swap models without app changes.

Useful links for GPT-4o

Official sources, independent benchmarks, and pricing aggregators — no random search-engine guesses.