Glm 4.5 Flash
Glm 4.5 Flash is a Z.ai chat model. It supports a 128,000-token context window with up to 32,000 output tokens. Capabilities include function calling. Route Glm 4.5 Flash via Future AGI's Agent Command Center for unified observability, caching, and 15 routing strategies including cost-optimized fallback.
We don't have verified per-token pricing for Glm 4.5 Flash yet. If you have a source from Z.ai's documentation, help us add it — your submission gets reviewed within 48 hours.
Pricing
Per-token rates, expressed in USD per 1M tokens. Last checked May 12, 2026; no verified rates yet.
| Token type | Rate |
| --- | --- |
| Input | — |
| Output | — |
Limits
- Context window: 128,000 tokens
- Max input: 128,000 tokens
- Max output: 32,000 tokens (see the budgeting sketch after this list)
- Modalities: text
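To see how these limits combine in practice, here is a rough budgeting sketch. The four-characters-per-token ratio is a crude heuristic, not Z.ai's tokenizer, so treat the numbers as estimates only.

```python
# Rough token budgeting against Glm 4.5 Flash's advertised limits.
# The chars-per-token ratio is a crude estimate, NOT Z.ai's tokenizer.
CONTEXT_WINDOW = 128_000
MAX_OUTPUT = 32_000

def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Very rough token estimate from character count."""
    return int(len(text) / chars_per_token) + 1

def fits(prompt: str, desired_output_tokens: int) -> bool:
    """Check whether prompt + reply fit in the context window."""
    output = min(desired_output_tokens, MAX_OUTPUT)
    return estimate_tokens(prompt) + output <= CONTEXT_WINDOW

print(fits("Summarise this document ...", desired_output_tokens=4_000))  # True
```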
Capabilities
- Function calling ✓ supported
- Parallel tool calls — not advertised
- Vision input — not advertised
- Audio input — not advertised
- Audio output — not advertised
- PDF input — not advertised
- Streaming ✓ supported (example after this list)
- Structured output — not advertised
- Prompt caching — not advertised
- Reasoning — not advertised
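Streaming is advertised, and the gateway is OpenAI-compatible, so a `stream=True` loop in the standard chunk shape should work. A minimal sketch, assuming the agentcc client mirrors the OpenAI SDK's streaming interface:

```python
# Streaming Glm 4.5 Flash tokens as they arrive.
# Assumes the agentcc client follows the OpenAI-compatible stream=True
# interface; treat this as a sketch, not confirmed SDK behaviour.
import os

from agentcc import AgentCC

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],
    base_url="https://gateway.futureagi.com/v1",
)

stream = client.chat.completions.create(
    model="zai/glm-4-5-flash",
    messages=[{"role": "user", "content": "Stream me a haiku."}],
    stream=True,
)
for chunk in stream:
    if not chunk.choices:  # some chunks carry only metadata
        continue
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```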
Where it's strong
- Agentic workflows that depend on reliable tool calls (see the tool-calling sketch below)
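As a concrete starting point, here is a minimal tool-calling sketch. It assumes the gateway forwards the standard OpenAI-style tools array; the get_weather tool is a made-up example, not a gateway built-in.

```python
# Function calling with Glm 4.5 Flash via the OpenAI-compatible endpoint.
# The tools wire format is the standard OpenAI-style schema; get_weather
# is a hypothetical example tool, not a built-in.
import json
import os

from agentcc import AgentCC

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],
    base_url="https://gateway.futureagi.com/v1",
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="zai/glm-4-5-flash",
    messages=[{"role": "user", "content": "What's the weather in Oslo?"}],
    tools=tools,
)

msg = resp.choices[0].message
if msg.tool_calls:  # the model chose to call the tool
    call = msg.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
else:
    print(msg.content)
```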
Watch out for
- Strict structured output: no JSON-schema enforcement, so expect retry loops (mitigation sketch below)
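With no server-side schema enforcement advertised, the usual mitigation is a client-side validate-and-retry loop. A minimal sketch, using the same agentcc client shown further down this page:

```python
# Client-side retry loop for JSON output from Glm 4.5 Flash.
# Without server-side schema enforcement, validate locally and retry.
import json
import os

from agentcc import AgentCC

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],
    base_url="https://gateway.futureagi.com/v1",
)

SYSTEM = 'Reply with ONLY a JSON object like {"sentiment": "positive"}.'

def get_json(text: str, max_attempts: int = 3) -> dict:
    messages = [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": text},
    ]
    for _ in range(max_attempts):
        resp = client.chat.completions.create(
            model="zai/glm-4-5-flash",
            messages=messages,
        )
        raw = resp.choices[0].message.content
        try:
            return json.loads(raw)
        except json.JSONDecodeError:
            # Feed the bad reply back and ask for a correction.
            messages.append({"role": "assistant", "content": raw})
            messages.append({"role": "user",
                             "content": "That was not valid JSON. Reply with only the JSON object."})
    raise ValueError(f"No valid JSON after {max_attempts} attempts")

print(get_json("The checkout flow is fast and painless."))
```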
Benchmarks pending
We haven't logged public benchmark scores for Glm 4.5 Flash yet. Have one to contribute? Submit a source — citations help us prioritise.
Call Glm 4.5 Flash via Agent Command Center
One OpenAI-compatible endpoint. Routing, fallback, semantic caching, guardrails, and cost tracking come along for the ride. First 100K requests + 100K cache hits free every month.
Official SDKs are available (agentcc / @agentcc/client). Per-call metadata (provider, cost, latency, cache hit, request id) is returned on x-agentcc-* response headers, so any HTTP client can read it.

```python
# Glm 4.5 Flash via the Agent Command Center Python SDK
# pip install agentcc
import os

from agentcc import AgentCC

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],  # from app.futureagi.com → Settings → API Keys
    base_url="https://gateway.futureagi.com/v1",
)

resp = client.chat.completions.create(
    model="zai/glm-4-5-flash",
    messages=[{"role": "user", "content": "Hello, Glm 4.5 Flash!"}],
)
print(resp.choices[0].message.content)
print(f"Tokens: {resp.usage.total_tokens}")

# Per-call gateway metadata is returned on x-agentcc-* response headers.
# When you need it programmatically, use .with_raw_response to get them:
raw = client.chat.completions.with_raw_response.create(
    model="zai/glm-4-5-flash",
    messages=[{"role": "user", "content": "Same call, but I want the headers."}],
)
print("Provider:", raw.headers.get("x-agentcc-provider"))
print("Latency:", raw.headers.get("x-agentcc-latency-ms"), "ms")
print("Cost: ", raw.headers.get("x-agentcc-cost"), "USD")
print("Cache: ", raw.headers.get("x-agentcc-cache"))
```

Set AGENTCC_API_KEY with a key from app.futureagi.com. Gateway docs ↗
Compare with similar models
Glm 4.5 Flash doesn't have a public Arena ELO score yet, so we group by provider only — quality-tier comparisons need a benchmark.
FAQ
How much does Glm 4.5 Flash cost?
Public per-token pricing for Glm 4.5 Flash is not yet published. Submit a source on this page to help us add it.
What is the context window of Glm 4.5 Flash?
Glm 4.5 Flash supports a 128,000-token context window with up to 32,000 output tokens.
Does Glm 4.5 Flash support function calling?
Yes — Glm 4.5 Flash supports function (tool) calling.
Is Glm 4.5 Flash good for production?
Glm 4.5 Flash is well-suited for agentic workflows that depend on reliable tool calls. Consider alternatives if you need strict structured output: there is no JSON-schema enforcement, so expect retry loops.
How can I route to Glm 4.5 Flash with fallback?
Use Agent Command Center: a single OpenAI-compatible endpoint that supports cost-optimized routing, latency-aware retries, model fallback, and shadow traffic. Configure once, swap models without app changes.
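For illustration only, here is a hypothetical sketch of hinting a cost-optimized fallback chain per request. The x-agentcc-routing-strategy and x-agentcc-fallback-models request headers, and the fallback model IDs, are invented for this example; the real configuration mechanism lives in the gateway docs.

```python
# HYPOTHETICAL per-request fallback routing through the gateway.
# The x-agentcc-* request header names and fallback model IDs below are
# INVENTED for illustration; see the gateway docs for the real mechanism.
import os

from agentcc import AgentCC

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],
    base_url="https://gateway.futureagi.com/v1",
)

resp = client.chat.completions.create(
    model="zai/glm-4-5-flash",
    messages=[{"role": "user", "content": "Ping"}],
    extra_headers={  # hypothetical routing hints, for illustration only
        "x-agentcc-routing-strategy": "cost-optimized",
        "x-agentcc-fallback-models": "zai/glm-4-5,zai/glm-4-5-air",
    },
)
print(resp.choices[0].message.content)
```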
Useful links for Glm 4.5 Flash
Official sources, independent benchmarks, and pricing aggregators — no random search-engine guesses.
Third-party evals — verify the marketing.
Cross-check our number against the rest of the ecosystem.