GPT 5.4 mini

OpenAI chat

GPT 5.4 mini is an OpenAI chat model. It supports a 272,000-token context window with up to 128,000 output tokens. Input is priced at $0.750/M tokens and output at $4.50/M tokens. Capabilities include function calling, vision, reasoning, and prompt caching. Route GPT 5.4 mini via Future AGI's Agent Command Center for unified observability, caching, and 15 routing strategies, including cost-optimized fallback.

Pricing source: litellm · Last verified: May 12, 2026 · View source ↗
Cost calculator

Estimate GPT 5.4 mini spend

Pick a workload and scale the numbers below to estimate the monthly bill.

Example workload: ~3K in / ~400 out · 5K req/day (3,000 input and 400 output tokens per request, 5,000 requests/day). Cached input is billed at $0.0750/M.

Per request
$0.004050 (in $0.002250 · out $0.001800)
Per day
$20.25 (5,000 requests)
Per month
$616 (152,188 requests)

Estimate uses $0.750/M input · $4.50/M output. Provider pricing changes. Production costs vary with retries, streaming overhead, and tool-call rounds.
Want this for free? Cache + route via Agent Command Center — first 100K requests and 100K cache hits free every month.
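The arithmetic behind the estimate is easy to reproduce. A minimal sketch using the published per-token rates; the 30.4375-day month is an assumption that matches the 152,188 requests/month shown above (5,000 × 30.4375):

# Back-of-envelope cost estimate for GPT 5.4 mini (rates verified May 12, 2026).
INPUT_PER_M = 0.750       # USD per 1M input tokens
CACHED_PER_M = 0.0750     # USD per 1M cached input tokens
OUTPUT_PER_M = 4.50       # USD per 1M output tokens
DAYS_PER_MONTH = 30.4375  # assumption: average month, matching 152,188 = 5,000 * 30.4375

def estimate(in_tokens: int, out_tokens: int, req_per_day: int, cached_tokens: int = 0) -> dict:
    """Per-request, per-day, and per-month USD cost for a steady workload."""
    per_req = (
        (in_tokens - cached_tokens) * INPUT_PER_M
        + cached_tokens * CACHED_PER_M
        + out_tokens * OUTPUT_PER_M
    ) / 1_000_000
    per_day = per_req * req_per_day
    return {
        "per_request": round(per_req, 6),
        "per_day": round(per_day, 2),
        "per_month": round(per_day * DAYS_PER_MONTH, 2),
    }

# The example workload above: ~3K in / ~400 out, 5K requests/day, no cache hits.
print(estimate(3_000, 400, 5_000))
# {'per_request': 0.00405, 'per_day': 20.25, 'per_month': 616.36}

With, say, 90% of input tokens served from cache at the $0.0750/M rate, the same workload drops to roughly $339/month.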

Pricing

Per-token rates, expressed in USD per 1M tokens. Verified May 12, 2026.

Input $0.750/M
Output $4.50/M
Cached input $0.0750/M
Batch input $0.375/M
Batch output $2.25/M

Limits

Context window
272,000 tokens
Max input
272,000 tokens
Max output
128,000 tokens
Modalities
vision, pdf, text

Capabilities

  • Function calling ✓ supported (see the sketch after this list)
  • Parallel tool calls ✓ supported
  • Vision input ✓ supported
  • Audio input — not advertised
  • Audio output — not advertised
  • PDF input ✓ supported
  • Streaming ✓ supported
  • Structured output ✓ supported
  • Prompt caching ✓ supported
  • Reasoning ✓ supported
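
Function calling uses the standard OpenAI-compatible request shape, so the official openai Python client works against the gateway. A minimal sketch; the get_weather tool, its schema, and the prompt are illustrative only, not part of the model spec:

# Function calling via the OpenAI-compatible gateway (tool schema is illustrative).
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["AGENTCC_API_KEY"],
    base_url="https://gateway.futureagi.com/v1",  # Agent Command Center endpoint
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for illustration only
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="openai/gpt-5-4-mini",
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools,
)

# With parallel tool calls supported, tool_calls may contain several entries.
for call in resp.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)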

Where it's strong

  • +long-context tasks — context window in the top 15% of peers
  • +PDF input — only 16% of chat models on Future AGI advertise this
  • +parallel tool calls — only 21% of chat models on Future AGI advertise this
  • +prompt caching — only 23% of chat models on Future AGI advertise this

Watch out for

  • No major caveats flagged from public spec.

Benchmark scores

Reported public benchmark results; each entry links to its source.

  • Chatbot Arena ELO · general · overall · 1456 ELO · captured May 12, 2026
  • GPQA Diamond · reasoning · xhigh · ↑33% vs peers · captured May 12, 2026
  • MMMU-Pro · multimodal · w/ Python, xhigh · captured May 12, 2026
  • SWE-bench Pro (Public) · agent · xhigh · captured May 12, 2026
  • Humanity's Last Exam · reasoning · w/ tool, xhigh · captured May 12, 2026

Try it

Call GPT 5.4 mini via Agent Command Center

One OpenAI-compatible endpoint. Routing, fallback, semantic caching, guardrails, and cost tracking come along for the ride. First 100K requests + 100K cache hits free every month.

SDK
Native Future AGI client (agentcc / @agentcc/client). Per-call metadata — provider, cost, latency, cache hit, request id — is returned on x-agentcc-* response headers, so any HTTP client can read it.
# GPT 5.4 mini via the Agent Command Center Python SDK
# pip install agentcc
import os
from agentcc import AgentCC

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],   # from app.futureagi.com → Settings → API Keys
    base_url="https://gateway.futureagi.com/v1",
)

resp = client.chat.completions.create(
    model="openai/gpt-5-4-mini",
    messages=[{"role": "user", "content": "Hello, GPT 5.4 mini!"}],
)

print(resp.choices[0].message.content)
print(f"Tokens: {resp.usage.total_tokens}")

# Per-call gateway metadata is returned on x-agentcc-* response headers.
# When you need it programmatically, use .with_raw_response to get them:
raw = client.chat.completions.with_raw_response.create(
    model="openai/gpt-5-4-mini",
    messages=[{"role": "user", "content": "Same call, but I want the headers."}],
)
print("Provider:", raw.headers.get("x-agentcc-provider"))
print("Latency:", raw.headers.get("x-agentcc-latency-ms"), "ms")
print("Cost:   ", raw.headers.get("x-agentcc-cost"), "USD")
print("Cache:  ", raw.headers.get("x-agentcc-cache"))
Set AGENTCC_API_KEY with a key from app.futureagi.com. Gateway docs ↗
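Streaming (advertised above) follows the same call shape. A minimal sketch reusing the client constructed earlier, assuming the agentcc SDK mirrors the OpenAI-style stream=True interface:

# Streaming with the client from the snippet above.
# Assumption: agentcc follows the OpenAI-compatible stream=True convention.
stream = client.chat.completions.create(
    model="openai/gpt-5-4-mini",
    messages=[{"role": "user", "content": "Write a haiku about context windows."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()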
Advanced: fallback + cache config (YAML)
strategy: cost-optimized
targets:
  - model: gpt-5-4-mini
    provider: openai
    weight: 80
fallbacks:
  - model: claude-opus-4-6
    provider: azure-ai-foundry
  - model: claude-opus-4-6-20260205
    provider: anthropic
guardrails: [pii, prompt-injection, secrets]
cache: { exact: true, semantic: true }
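Read under the obvious semantics, this config sends traffic to gpt-5-4-mini on OpenAI first, falls back to the two Claude routes in order on failure, runs the three guardrails on every request, and enables both exact and semantic caching. Field names follow the example above; check the gateway docs for the authoritative schema.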

Same model on other providers

gpt-5-4-mini is also available via one other route. Pricing, regions, and capabilities can differ — compare before routing production traffic.

Provider        Input / 1M    Output / 1M    Verified
Azure OpenAI    $0.750/M      $4.50/M        May 12, 2026

Compare with similar models

Grouped by Chatbot Arena tier (GPT 5.4 mini sits at 1456 ELO).

FAQ

How much does GPT 5.4 mini cost?

Input is priced at $0.750 per 1M tokens and output at $4.50 per 1M tokens (OpenAI, last verified May 12, 2026).

What is the context window of GPT 5.4 mini?

GPT 5.4 mini supports a 272,000-token context window with up to 128,000 output tokens.

Does GPT 5.4 mini support function calling?

Yes — GPT 5.4 mini supports function (tool) calling, including parallel tool calls.

Is GPT 5.4 mini good for production?

GPT 5.4 mini is well-suited to production use in long-context work (its context window is in the top 15% of peers) and in PDF-heavy pipelines (only 16% of chat models on Future AGI advertise PDF input). No major caveats are flagged in the public spec.

How can I route to GPT 5.4 mini with fallback?

Use Agent Command Center: a single OpenAI-compatible endpoint that supports cost-optimized routing, latency-aware retries, model fallback, and shadow traffic. Configure once, swap models without app changes.

Useful links for GPT 5.4 mini

Official sources, independent benchmarks, and pricing aggregators — no random search-engine guesses.