Qwen Qwen3 VL 235B A22b Instruct Fp8

GMI Cloud chat

Qwen Qwen3 VL 235B A22b Instruct Fp8 is a GMI Cloud chat model. It supports a 262,144-token context window with up to 16,384 output tokens. Input is priced at $0.300/M tokens and output at $1.40/M tokens. Capabilities include vision. Route Qwen Qwen3 VL 235B A22b Instruct Fp8 via Future AGI's Agent Command Center for unified observability, caching, and 15 routing strategies including cost-optimized fallback.

Pricing source: litellm · Last verified: May 12, 2026 · View source ↗
Cost calculator

Estimate Qwen Qwen3 VL 235B A22b Instruct Fp8 spend

Pick a workload, tune the numbers, and see the estimated monthly bill.

Example workload: ~3K in / ~400 out · 5K req/day

  • Input tokens per request: 3,000
  • Output tokens per request: 400
  • Requests per day: 5,000

  • Per request: $0.001460 (in $0.000900 · out $0.000560)
  • Per day: $7.30 (5,000 requests)
  • Per month: $222 (152,188 requests)

Estimate uses $0.3000/M input · $1.40/M output. Provider pricing changes. Production costs vary with retries, streaming overhead, and tool-call rounds.
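The same arithmetic is easy to reproduce if you want to plug in your own workload. A minimal sketch: the rates mirror the verified pricing above, and an average month of 365.25/12 ≈ 30.44 days matches the 152,188 requests/month shown in the example.

# Back-of-the-envelope spend estimate for Qwen Qwen3 VL 235B A22b Instruct Fp8.
# Rates are USD per 1M tokens, as verified above; adjust if the provider changes them.
INPUT_RATE = 0.30 / 1_000_000    # $ per input token
OUTPUT_RATE = 1.40 / 1_000_000   # $ per output token
DAYS_PER_MONTH = 365.25 / 12     # ~30.44, matches 152,188 requests/month at 5,000/day

def estimate(in_tokens: int, out_tokens: int, requests_per_day: int) -> dict:
    per_request = in_tokens * INPUT_RATE + out_tokens * OUTPUT_RATE
    per_day = per_request * requests_per_day
    per_month = per_day * DAYS_PER_MONTH
    return {"per_request": per_request, "per_day": per_day, "per_month": per_month}

print(estimate(3_000, 400, 5_000))
# -> roughly {'per_request': 0.00146, 'per_day': 7.30, 'per_month': 222.2}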
Want this for free? Cache + route via Agent Command Center — first 100K requests and 100K cache hits free every month.

Pricing

Per-token rates, expressed in USD per 1M tokens. Verified May 12, 2026.

Input $0.300/M
Output $1.40/M

Limits

Context window
262,144 tokens
Max input
262,144 tokens
Max output
16,384 tokens
Modalities
vision, text
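If you are pushing long documents toward the 262,144-token window, a rough pre-flight check helps avoid overflow errors. This is only a sketch: it uses a crude ~4-characters-per-token heuristic rather than the model's real tokenizer, and the helper names are illustrative.

# Rough context-budget check before calling the model (heuristic, not exact).
CONTEXT_WINDOW = 262_144   # tokens, per the limits above
MAX_OUTPUT = 16_384        # tokens, per the limits above

def rough_token_count(text: str) -> int:
    # Crude assumption: ~4 characters per token on average. Use a real
    # tokenizer if you need accuracy; this only catches gross overflows.
    return max(1, len(text) // 4)

def fits(prompt: str, requested_output: int = MAX_OUTPUT) -> bool:
    budget = CONTEXT_WINDOW - min(requested_output, MAX_OUTPUT)
    return rough_token_count(prompt) <= budget

long_doc = "lorem ipsum " * 120_000   # ~1.44M chars, ~360K tokens by this heuristic
print(fits(long_doc))                 # False: won't fit alongside a 16,384-token reply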

Capabilities

  • Function calling — not advertised
  • Parallel tool calls — not advertised
  • Vision input ✓ supported
  • Audio input — not advertised
  • Audio output — not advertised
  • PDF input — not advertised
  • Streaming ✓ supported
  • Structured output — not advertised
  • Prompt caching — not advertised
  • Reasoning — not advertised

Where it's strong

  • Document, chart, and screenshot understanding (a minimal image-input sketch follows this list)
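Since vision input is the headline capability, here is a minimal sketch of sending an image for analysis. It assumes the gateway passes through the standard OpenAI-style image_url content parts (the usual convention for OpenAI-compatible chat endpoints); the image URL is a placeholder.

# Image-input sketch for Qwen Qwen3 VL 235B A22b Instruct Fp8 via the gateway.
# Assumes standard OpenAI-style image_url content parts are accepted end to end.
import os
from agentcc import AgentCC   # the Agent Command Center SDK shown in "Try it" below

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],
    base_url="https://gateway.futureagi.com/v1",
)

resp = client.chat.completions.create(
    model="gmi-cloud/qwen-qwen3-vl-235b-a22b-instruct-fp8",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Summarise the key figures in this chart."},
            # Placeholder URL; swap in your own hosted image or a data: URI.
            {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
        ],
    }],
)
print(resp.choices[0].message.content)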

Watch out for

  • Agentic workflows — no advertised function-calling; use a tool-capable model and route via Agent Command Center for fallback
  • Strict structured output — no JSON-schema enforcement; expect retry loops (a client-side validate-and-retry sketch follows this list)
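Because structured output isn't advertised, the usual workaround is to ask for JSON in the prompt and validate client-side, retrying on parse failures. A minimal sketch, assuming the agentcc client from the "Try it" section; the attempt count and prompt wording are illustrative.

# Client-side validate-and-retry loop for JSON output (no server-side schema enforcement).
import json
import os
from agentcc import AgentCC

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],
    base_url="https://gateway.futureagi.com/v1",
)

def ask_for_json(prompt: str, attempts: int = 3) -> dict:
    messages = [{"role": "user", "content": prompt + "\nReply with a single JSON object only."}]
    for _ in range(attempts):
        resp = client.chat.completions.create(
            model="gmi-cloud/qwen-qwen3-vl-235b-a22b-instruct-fp8",
            messages=messages,
        )
        text = resp.choices[0].message.content
        try:
            return json.loads(text)
        except json.JSONDecodeError:
            # Feed the bad reply back and ask for a correction on the next attempt.
            messages.append({"role": "assistant", "content": text})
            messages.append({"role": "user", "content": "That was not valid JSON. Return only a valid JSON object."})
    raise ValueError(f"No valid JSON after {attempts} attempts")

print(ask_for_json("Extract title and year from: 'Dune (1965) by Frank Herbert'"))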

Benchmarks pending

We haven't logged public benchmark scores for Qwen Qwen3 VL 235B A22b Instruct Fp8 yet. Have one to contribute? Submit a source — citations help us prioritise.

Try it

Call Qwen Qwen3 VL 235B A22b Instruct Fp8 via Agent Command Center

One OpenAI-compatible endpoint. Routing, fallback, semantic caching, guardrails, and cost tracking come along for the ride. First 100K requests + 100K cache hits free every month.

SDK
Native Future AGI client (agentcc / @agentcc/client). Per-call metadata — provider, cost, latency, cache hit, request id — is returned on x-agentcc-* response headers, so any HTTP client can read it.
# Qwen Qwen3 VL 235B A22b Instruct Fp8 via the Agent Command Center Python SDK
# pip install agentcc
import os
from agentcc import AgentCC

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],   # from app.futureagi.com → Settings → API Keys
    base_url="https://gateway.futureagi.com/v1",
)

resp = client.chat.completions.create(
    model="gmi-cloud/qwen-qwen3-vl-235b-a22b-instruct-fp8",
    messages=[{"role": "user", "content": "Hello, Qwen Qwen3 VL 235B A22b Instruct Fp8!"}],
)

print(resp.choices[0].message.content)
print(f"Tokens: {resp.usage.total_tokens}")

# Per-call gateway metadata is returned on x-agentcc-* response headers.
# When you need it programmatically, use .with_raw_response to get them:
raw = client.chat.completions.with_raw_response.create(
    model="gmi-cloud/qwen-qwen3-vl-235b-a22b-instruct-fp8",
    messages=[{"role": "user", "content": "Same call, but I want the headers."}],
)
print("Provider:", raw.headers.get("x-agentcc-provider"))
print("Latency:", raw.headers.get("x-agentcc-latency-ms"), "ms")
print("Cost:   ", raw.headers.get("x-agentcc-cost"), "USD")
print("Cache:  ", raw.headers.get("x-agentcc-cache"))
Set AGENTCC_API_KEY with a key from app.futureagi.com. Gateway docs ↗

FAQ

How much does Qwen Qwen3 VL 235B A22b Instruct Fp8 cost?

Input is priced at $0.300 per 1M tokens and output at $1.40 per 1M tokens (GMI Cloud, last verified May 12, 2026).

What is the context window of Qwen Qwen3 VL 235B A22b Instruct Fp8?

Qwen Qwen3 VL 235B A22b Instruct Fp8 supports a 262,144-token context window with up to 16,384 output tokens.

Does Qwen Qwen3 VL 235B A22b Instruct Fp8 support function calling?

Qwen Qwen3 VL 235B A22b Instruct Fp8 does not currently advertise function-calling support. For agentic workloads, prefer a tool-calling-capable model and route via Agent Command Center for fallback.

Is Qwen Qwen3 VL 235B A22b Instruct Fp8 good for production?

Qwen Qwen3 VL 235B A22b Instruct Fp8 is well-suited for document, chart, and screenshot understanding. Consider alternatives if you need agentic workflows — no advertised function-calling; use a tool-capable model and route via Agent Command Center for fallback.

How can I route to Qwen Qwen3 VL 235B A22b Instruct Fp8 with fallback?

Use Agent Command Center: a single OpenAI-compatible endpoint that supports cost-optimized routing, latency-aware retries, model fallback, and shadow traffic. Configure once, swap models without app changes.
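Because the gateway is OpenAI-compatible, you don't need the native SDK to reach it; any OpenAI-style client pointed at the gateway base URL should work, which is also what makes swapping models a one-line change. A minimal sketch using the standard openai Python package; routing, fallback, and shadow-traffic policies live in the gateway configuration and aren't shown here.

# Reaching the gateway with the standard OpenAI Python client (OpenAI-compatible endpoint).
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["AGENTCC_API_KEY"],
    base_url="https://gateway.futureagi.com/v1",
)

resp = client.chat.completions.create(
    model="gmi-cloud/qwen-qwen3-vl-235b-a22b-instruct-fp8",   # swap models by changing this string
    messages=[{"role": "user", "content": "Describe this model's strengths in one sentence."}],
)
print(resp.choices[0].message.content)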

Useful links for Qwen Qwen3 VL 235B A22b Instruct Fp8

Official sources, independent benchmarks, and pricing aggregators — no random search-engine guesses.