Llama 3.1 Sonar Small 128k Chat
Perplexity chat model · Deprecated 445 days ago · Use model fallback routing to swap models without code changes.
Llama 3.1 Sonar Small 128k Chat is a Perplexity chat model. It supports a 131,072-token context window with up to 131,072 output tokens. Input is priced at $0.200/M tokens and output at $0.200/M tokens. Route Llama 3.1 Sonar Small 128k Chat via Future AGI's Agent Command Center for unified observability, caching, and 15 routing strategies including cost-optimized fallback.
Estimate Llama 3.1 Sonar Small 128k Chat spend
Pick a workload, fine-tune the sliders, and see the monthly bill.
Estimate uses $0.2000/M input · $0.2000/M output. Provider pricing changes. Production costs vary with retries, streaming overhead, and tool-call rounds.
Want this for free? Cache + route via Agent Command Center — first 100K requests and 100K cache hits free every month.
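If you'd rather sanity-check the sliders by hand, the math is just tokens in each direction times the per-token rate. A minimal sketch in Python, using this model's published rates (the workload numbers are illustrative assumptions, not recommendations):

```python
# Back-of-the-envelope monthly cost at $0.20/M input and $0.20/M output.
INPUT_RATE = 0.20 / 1_000_000   # USD per input token
OUTPUT_RATE = 0.20 / 1_000_000  # USD per output token

# Hypothetical workload: 50k requests/month, ~1,200 input and ~400 output tokens each.
requests_per_month = 50_000
input_tokens = 1_200
output_tokens = 400

monthly = requests_per_month * (input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE)
print(f"~${monthly:,.2f}/month")  # ~$16.00/month, before retries and streaming overhead
```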
Pricing
Per-token rates, expressed in USD per 1M tokens. Verified May 7, 2026.
| Token type | Rate |
| --- | --- |
| Input | $0.200/M |
| Output | $0.200/M |
Limits
- Context window: 131,072 tokens
- Max input: 131,072 tokens
- Max output: 131,072 tokens
- Modalities: text
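Since max input, max output, and the context window are all 131,072 tokens, the practical constraint is that prompt plus completion must fit in one window. A quick sketch of that budget check (token counts are assumed to come from your own tokenizer):

```python
CONTEXT_WINDOW = 131_072

def max_completion_tokens(prompt_tokens: int, window: int = CONTEXT_WINDOW) -> int:
    """Largest max_tokens value that still fits alongside the prompt."""
    if prompt_tokens >= window:
        raise ValueError(f"Prompt ({prompt_tokens} tokens) exceeds the {window}-token window")
    return window - prompt_tokens

print(max_completion_tokens(100_000))  # 31072 tokens left for the completion
```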
Capabilities
- Function calling — not advertised
- Parallel tool calls — not advertised
- Vision input — not advertised
- Audio input — not advertised
- Audio output — not advertised
- PDF input — not advertised
- Streaming ✓ supported (see the sketch after this list)
- Structured output — not advertised
- Prompt caching — not advertised
- Reasoning — not advertised
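Streaming is the one capability the model advertises. Assuming the gateway follows standard OpenAI streaming semantics (the endpoint is OpenAI-compatible), a sketch looks like this; the `stream=True` behavior is an assumption, not documented above:

```python
import os
from agentcc import AgentCC  # SDK from the example later on this page

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],
    base_url="https://gateway.futureagi.com/v1",
)

# stream=True (assumed OpenAI-compatible) yields incremental chunks
# instead of one final message.
stream = client.chat.completions.create(
    model="perplexity/llama-3-1-sonar-small-128k-chat",
    messages=[{"role": "user", "content": "Stream me a haiku."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```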
Where it's strong
- Long-form generation: 131,072-token max output, top-10% of peers
Watch out for
- Agentic workflows: no advertised function-calling; use a tool-capable model and route via Agent Command Center for fallback
- Strict structured output: no JSON-schema enforcement, so expect retry loops (see the sketch after this list)
- Already deprecated: the provider stopped accepting new traffic 445 days ago
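Since the model doesn't advertise JSON-schema enforcement, the usual workaround is to validate the output yourself and retry on parse failure. A minimal sketch of that loop, reusing the `client` from the SDK example below (the retry budget is an arbitrary assumption):

```python
import json

def json_completion(client, prompt: str, retries: int = 3) -> dict:
    """Ask for JSON, parse it, and retry if the model returns malformed output."""
    for attempt in range(retries):
        resp = client.chat.completions.create(
            model="perplexity/llama-3-1-sonar-small-128k-chat",
            messages=[{"role": "user", "content": f"{prompt}\nRespond with valid JSON only."}],
        )
        try:
            return json.loads(resp.choices[0].message.content)
        except json.JSONDecodeError:
            continue  # malformed JSON: burn a retry and ask again
    raise RuntimeError(f"No valid JSON after {retries} attempts")
```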
Benchmarks pending
We haven't logged public benchmark scores for Llama 3.1 Sonar Small 128k Chat yet. Have one to contribute? Submit a source — citations help us prioritise.
Call Llama 3.1 Sonar Small 128k Chat via Agent Command Center
One OpenAI-compatible endpoint. Routing, fallback, semantic caching, guardrails, and cost tracking come along for the ride. First 100K requests + 100K cache hits free every month.
SDKs are available for Python and TypeScript (agentcc / @agentcc/client). Per-call metadata (provider, cost, latency, cache hit, request id) is returned on x-agentcc-* response headers, so any HTTP client can read it.
```python
# Llama 3.1 Sonar Small 128k Chat via the Agent Command Center Python SDK
# pip install agentcc
import os
from agentcc import AgentCC
client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],  # from app.futureagi.com → Settings → API Keys
    base_url="https://gateway.futureagi.com/v1",
)
resp = client.chat.completions.create(
    model="perplexity/llama-3-1-sonar-small-128k-chat",
    messages=[{"role": "user", "content": "Hello, Llama 3.1 Sonar Small 128k Chat!"}],
)
print(resp.choices[0].message.content)
print(f"Tokens: {resp.usage.total_tokens}")
# Per-call gateway metadata is returned on x-agentcc-* response headers.
# When you need it programmatically, use .with_raw_response to get them:
raw = client.chat.completions.with_raw_response.create(
    model="perplexity/llama-3-1-sonar-small-128k-chat",
    messages=[{"role": "user", "content": "Same call, but I want the headers."}],
)
print("Provider:", raw.headers.get("x-agentcc-provider"))
print("Latency:", raw.headers.get("x-agentcc-latency-ms"), "ms")
print("Cost: ", raw.headers.get("x-agentcc-cost"), "USD")
print("Cache: ", raw.headers.get("x-agentcc-cache"))AGENTCC_API_KEY with a key fromapp.futureagi.com.Gateway docs ↗Compare with similar models
Compare with similar models
Llama 3.1 Sonar Small 128k Chat doesn't have a public Arena ELO score yet, so we group by provider only — quality-tier comparisons need a benchmark.
FAQ
How much does Llama 3.1 Sonar Small 128k Chat cost?
Input is priced at $0.200 per 1M tokens and output at $0.200 per 1M tokens (Perplexity, last verified May 7, 2026).
What is the context window of Llama 3.1 Sonar Small 128k Chat?
Llama 3.1 Sonar Small 128k Chat supports a 131,072-token context window with up to 131,072 output tokens.
Does Llama 3.1 Sonar Small 128k Chat support function calling?
Llama 3.1 Sonar Small 128k Chat does not currently advertise function-calling support. For agentic workloads, prefer a tool-calling-capable model and route via Agent Command Center for fallback.
Is Llama 3.1 Sonar Small 128k Chat good for production?
Llama 3.1 Sonar Small 128k Chat is well-suited for long-form generation — 131,072-token max output, top-10% of peers. Consider alternatives if you need agentic workflows — no advertised function-calling; use a tool-capable model and route via Agent Command Center for fallback.
How can I route to Llama 3.1 Sonar Small 128k Chat with fallback?
Use Agent Command Center: a single OpenAI-compatible endpoint that supports cost-optimized routing, latency-aware retries, model fallback, and shadow traffic. Configure once, swap models without app changes.
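Routing strategies live in the Command Center itself, but the same idea can be sketched client-side using nothing beyond the SDK call shown above (the fallback model ID is a placeholder, not a recommendation):

```python
# Client-side fallback sketch: try the deprecated model, fall back on any error.
MODELS = [
    "perplexity/llama-3-1-sonar-small-128k-chat",  # primary (deprecated upstream)
    "your-org/fallback-model",                     # placeholder: pick a live model
]

def chat_with_fallback(client, messages):
    last_err = None
    for model in MODELS:
        try:
            return client.chat.completions.create(model=model, messages=messages)
        except Exception as err:  # e.g. the provider rejects deprecated-model traffic
            last_err = err
    raise last_err
```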
Useful links for Llama 3.1 Sonar Small 128k Chat
Official sources, independent benchmarks, and pricing aggregators — no random search-engine guesses.
Third-party evals — verify the marketing.
Cross-check our number against the rest of the ecosystem.