Llama 3.1 Sonar Large 128k Online

Perplexity · chat · Deprecated 445 days ago
Heads-up: Perplexity deprecated Llama 3.1 Sonar Large 128k Online on Feb 22, 2025; the model no longer accepts new traffic. Plan a migration. Agent Command Center's model fallback routing lets you swap models without code changes.

Llama 3.1 Sonar Large 128k Online is a Perplexity chat model. It supports a 127,072-token context window with up to 127,072 output tokens. Input is priced at $1.00/M tokens and output at $1.00/M tokens. Route Llama 3.1 Sonar Large 128k Online via Future AGI's Agent Command Center for unified observability, caching, and 15 routing strategies including cost-optimized fallback.

Pricing source: litellm · Last verified: May 7, 2026 · View source ↗
Cost calculator

Estimate Llama 3.1 Sonar Large 128k Online spend

Pick a workload, fine-tune the sliders, and see the monthly bill.

Default workload: ~3K in / ~400 out · 5K req/day

Input tokens per request: 3,000
Output tokens per request: 400
Requests per day: 5,000

Per request: $0.003400 (in $0.003000 · out $0.000400)
Per day: $17.00 (5,000 requests)
Per month: $517 (152,188 requests)

Estimate uses $1.00/M input · $1.00/M output. Provider pricing changes. Production costs vary with retries, streaming overhead, and tool-call rounds.
Want this for free? Cache + route via Agent Command Center — first 100K requests and 100K cache hits free every month.
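
Prefer to sanity-check the math in code? Here is a minimal sketch of the calculator's arithmetic in plain Python. The rates and workload numbers are copied from above; the 30.4375 average days per month is an assumption that reproduces the widget's 152,188 requests/month.

# Reproduce the cost calculator's arithmetic (rates from the Pricing section).
INPUT_RATE = 1.00 / 1_000_000   # USD per input token ($1.00/M)
OUTPUT_RATE = 1.00 / 1_000_000  # USD per output token ($1.00/M)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """USD cost of one request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

per_request = request_cost(3_000, 400)   # $0.003400
per_day = per_request * 5_000            # $17.00
per_month = per_day * 30.4375            # ~$517; 30.4375 = avg days/month (assumed)

print(f"Per request: ${per_request:.6f}")
print(f"Per day:     ${per_day:.2f}")
print(f"Per month:   ${per_month:.0f}")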

Pricing

Per-token rates, expressed in USD per 1M tokens. Verified May 7, 2026.

Input $1.00/M
Output $1.00/M

Limits

Context window: 127,072 tokens
Max input: 127,072 tokens
Max output: 127,072 tokens
Modalities: text

Capabilities

  • Function calling — not advertised
  • Parallel tool calls — not advertised
  • Vision input — not advertised
  • Audio input — not advertised
  • Audio output — not advertised
  • PDF input — not advertised
  • Streaming ✓ supported (see the sketch after this list)
  • Structured output — not advertised
  • Prompt caching — not advertised
  • Reasoning — not advertised
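
Streaming is the only capability advertised above. A minimal streaming sketch, assuming the agentcc client mirrors the OpenAI SDK's stream=True interface the same way its other calls do in the Try it section below:

# Stream Llama 3.1 Sonar Large 128k Online output token-by-token.
# Assumes agentcc mirrors the OpenAI SDK's streaming interface.
import os
from agentcc import AgentCC

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],
    base_url="https://gateway.futureagi.com/v1",
)

stream = client.chat.completions.create(
    model="perplexity/llama-3-1-sonar-large-128k-online",
    messages=[{"role": "user", "content": "Summarize today's top AI news."}],
    stream=True,
)

for chunk in stream:
    # Guard: some chunks carry no content delta (e.g. a final usage chunk).
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()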

Watch out for

  • Agentic workflows — no advertised function-calling; use a tool-capable model and route via Agent Command Center for fallback
  • Strict structured output — no JSON-schema enforcement; validate and retry client-side (see the sketch after this list)
  • Already deprecated — the provider stopped accepting new traffic 445 days ago
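
With no JSON-schema enforcement upstream, the standard workaround is to validate on the client and retry. A minimal sketch, assuming the agentcc client from the Try it section below; the system prompt and retry count are illustrative:

# Client-side validate-and-retry loop for JSON output.
import json
import os
from agentcc import AgentCC

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],
    base_url="https://gateway.futureagi.com/v1",
)

def json_completion(prompt: str, max_retries: int = 3) -> dict:
    """Ask for JSON and retry until the reply actually parses."""
    for _ in range(max_retries):
        resp = client.chat.completions.create(
            model="perplexity/llama-3-1-sonar-large-128k-online",
            messages=[
                {"role": "system", "content": "Reply with a single JSON object and nothing else."},
                {"role": "user", "content": prompt},
            ],
        )
        try:
            return json.loads(resp.choices[0].message.content)
        except json.JSONDecodeError:
            continue  # no schema enforcement upstream, so retry client-side
    raise ValueError(f"no valid JSON after {max_retries} attempts")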

Benchmarks pending

We haven't logged public benchmark scores for Llama 3.1 Sonar Large 128k Online yet. Have one to contribute? Submit a source — citations help us prioritize.

Try it

Call Llama 3.1 Sonar Large 128k Online via Agent Command Center

One OpenAI-compatible endpoint. Routing, fallback, semantic caching, guardrails, and cost tracking come along for the ride. First 100K requests + 100K cache hits free every month.

SDK
Native Future AGI client (agentcc / @agentcc/client). Per-call metadata — provider, cost, latency, cache hit, request id — is returned on x-agentcc-* response headers, so any HTTP client can read it.
# Llama 3.1 Sonar Large 128k Online via the Agent Command Center Python SDK
# pip install agentcc
import os
from agentcc import AgentCC

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],   # from app.futureagi.com → Settings → API Keys
    base_url="https://gateway.futureagi.com/v1",
)

resp = client.chat.completions.create(
    model="perplexity/llama-3-1-sonar-large-128k-online",
    messages=[{"role": "user", "content": "Hello, Llama 3.1 Sonar Large 128k Online!"}],
)

print(resp.choices[0].message.content)
print(f"Tokens: {resp.usage.total_tokens}")

# Per-call gateway metadata is returned on x-agentcc-* response headers.
# When you need it programmatically, use .with_raw_response to get them:
raw = client.chat.completions.with_raw_response.create(
    model="perplexity/llama-3-1-sonar-large-128k-online",
    messages=[{"role": "user", "content": "Same call, but I want the headers."}],
)
print("Provider:", raw.headers.get("x-agentcc-provider"))
print("Latency:", raw.headers.get("x-agentcc-latency-ms"), "ms")
print("Cost:   ", raw.headers.get("x-agentcc-cost"), "USD")
print("Cache:  ", raw.headers.get("x-agentcc-cache"))
Set AGENTCC_API_KEY with a key from app.futureagi.com. Gateway docs ↗

Compare with similar models

Llama 3.1 Sonar Large 128k Online doesn't have a public Arena Elo score yet, so we group by provider only — quality-tier comparisons need a benchmark.

FAQ

How much does Llama 3.1 Sonar Large 128k Online cost?

Input is priced at $1.00 per 1M tokens and output at $1.00 per 1M tokens (Perplexity, last verified May 7, 2026).

What is the context window of Llama 3.1 Sonar Large 128k Online?

Llama 3.1 Sonar Large 128k Online supports a 127,072-token context window with up to 127,072 output tokens.

Does Llama 3.1 Sonar Large 128k Online support function calling?

Llama 3.1 Sonar Large 128k Online does not currently advertise function-calling support. For agentic workloads, prefer a tool-calling-capable model and route via Agent Command Center for fallback.

Is Llama 3.1 Sonar Large 128k Online good for production?

Llama 3.1 Sonar Large 128k Online is best evaluated against your own production traces. Pipe traffic through Agent Command Center to compare it head-to-head against alternatives in shadow mode.

How can I route to Llama 3.1 Sonar Large 128k Online with fallback?

Use Agent Command Center: a single OpenAI-compatible endpoint that supports cost-optimized routing, latency-aware retries, model fallback, and shadow traffic. Configure once, swap models without app changes.
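
To see what fallback looks like from the application side, here is a minimal client-side sketch against the same OpenAI-compatible endpoint. The gateway's server-side routing does this for you without code changes, so treat the ordered model list (the second model name is hypothetical) and the loop as illustrative:

# Client-side model fallback over the OpenAI-compatible gateway endpoint.
import os
from agentcc import AgentCC

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],
    base_url="https://gateway.futureagi.com/v1",
)

# Ordered preference list; the second model name is hypothetical.
MODELS = [
    "perplexity/llama-3-1-sonar-large-128k-online",  # deprecated primary
    "perplexity/sonar-pro",                          # illustrative fallback
]

def complete_with_fallback(prompt: str):
    last_error = None
    for model in MODELS:
        try:
            return client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
            )
        except Exception as exc:  # e.g. the deprecated model rejecting traffic
            last_error = exc
    raise last_error

resp = complete_with_fallback("Hello!")
print(resp.choices[0].message.content)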

Useful links for Llama 3.1 Sonar Large 128k Online

Official sources, independent benchmarks, and pricing aggregators — no random search-engine guesses.