GPT 3.5 Turbo 0613

GitHub Copilot chat

GPT 3.5 Turbo 0613 is a GitHub Copilot chat model. It supports a 16,384-token context window with up to 4,096 output tokens. Capabilities include function calling. Route GPT 3.5 Turbo 0613 through Future AGI's Agent Command Center for unified observability, caching, and 15 routing strategies, including cost-optimized fallback.

Pricing source: unknown · Last verified: May 12, 2026
Pricing not yet public

We don't have verified per-token pricing for GPT 3.5 Turbo 0613 yet. If you have a source from GitHub Copilot's documentation, help us add it — your submission gets reviewed within 48 hours.

Pricing

Per-token rates, expressed in USD per 1M tokens. No verified rates are available yet; last checked May 12, 2026.

Input: not yet published
Output: not yet published

Limits

  • Context window: 16,384 tokens
  • Max input: 16,384 tokens
  • Max output: 4,096 tokens
  • Modalities: text

Capabilities

  • Function calling ✓ supported
  • Parallel tool calls — not advertised
  • Vision input — not advertised
  • Audio input — not advertised
  • Audio output — not advertised
  • PDF input — not advertised
  • Streaming ✓ supported (see the sketch after this list)
  • Structured output — not advertised
  • Prompt caching — not advertised
  • Reasoning — not advertised
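
Streaming is one of the two advertised capabilities, so responses can be rendered token by token. A minimal sketch, assuming the agentcc client exposes the OpenAI-style stream=True flag and incremental delta chunks; the client setup matches the SDK example further down the page.

# Streaming sketch: assumes agentcc mirrors the OpenAI chat.completions
# streaming interface (stream=True, incremental delta chunks).
import os
from agentcc import AgentCC

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],
    base_url="https://gateway.futureagi.com/v1",
)

stream = client.chat.completions.create(
    model="github-copilot/gpt-3-5-turbo-0613",
    messages=[{"role": "user", "content": "Explain token streaming in one sentence."}],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content or ""
    print(delta, end="", flush=True)   # render tokens as they arrive
print()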

Where it's strong

  • Agentic workflows that depend on reliable tool calls (see the function-calling sketch below)
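
Function calling is the capability that matters most for tool-driven agents. A minimal sketch, assuming the agentcc client accepts OpenAI-style tools definitions and returns tool_calls on the reply; the get_weather tool is a made-up illustration, not part of any real API.

# Function-calling sketch: the tool definition and arguments below are
# illustrative; the tools / tool_calls shape assumes OpenAI-style semantics.
import json
import os
from agentcc import AgentCC

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],
    base_url="https://gateway.futureagi.com/v1",
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",                      # hypothetical tool
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="github-copilot/gpt-3-5-turbo-0613",
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools,
)

call = resp.choices[0].message.tool_calls[0]
print(call.function.name, json.loads(call.function.arguments))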

Watch out for

  • Limited context: the 16,384-token window is in the bottom quartile, so it is not ideal for long documents or large RAG corpora
  • Strict structured output: no JSON-schema enforcement, so expect validate-and-retry loops (see the sketch below)
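
Since there is no server-side schema enforcement, JSON output has to be validated client-side. A minimal validate-and-retry sketch; the prompt, shape check, and retry budget are illustrative, and the client setup matches the SDK example below.

# Validate-and-retry sketch for JSON output: there is no schema-enforced
# output mode, so parse the reply and re-ask on failure.
import json
import os
from agentcc import AgentCC

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],
    base_url="https://gateway.futureagi.com/v1",
)

prompt = 'Return ONLY JSON like {"sentiment": "positive|neutral|negative"} for: "Great docs!"'

for attempt in range(3):                      # illustrative retry budget
    resp = client.chat.completions.create(
        model="github-copilot/gpt-3-5-turbo-0613",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    text = resp.choices[0].message.content
    try:
        data = json.loads(text)
        if "sentiment" in data:               # minimal shape check
            print(data)
            break
    except json.JSONDecodeError:
        pass                                  # malformed JSON: retry
else:
    raise RuntimeError("model never returned valid JSON")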

Benchmark scores

Reported public benchmark numbers. Each row links to the source. Faded bar shows 6-peer average for context.

MT-Bench · general · judge = gpt-4
Captured May 12, 2026
Try it

Call GPT 3.5 Turbo 0613 via Agent Command Center

One OpenAI-compatible endpoint. Routing, fallback, semantic caching, guardrails, and cost tracking come along for the ride. First 100K requests + 100K cache hits free every month.

SDK
Native Future AGI client (agentcc / @agentcc/client). Per-call metadata — provider, cost, latency, cache hit, request id — is returned on x-agentcc-* response headers, so any HTTP client can read it.
# GPT 3.5 Turbo 0613 via the Agent Command Center Python SDK
# pip install agentcc
import os
from agentcc import AgentCC

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],   # from app.futureagi.com → Settings → API Keys
    base_url="https://gateway.futureagi.com/v1",
)

resp = client.chat.completions.create(
    model="github-copilot/gpt-3-5-turbo-0613",
    messages=[{"role": "user", "content": "Hello, GPT 3.5 Turbo 0613!"}],
)

print(resp.choices[0].message.content)
print(f"Tokens: {resp.usage.total_tokens}")

# Per-call gateway metadata is returned on x-agentcc-* response headers.
# When you need it programmatically, use .with_raw_response to get them:
raw = client.chat.completions.with_raw_response.create(
    model="github-copilot/gpt-3-5-turbo-0613",
    messages=[{"role": "user", "content": "Same call, but I want the headers."}],
)
print("Provider:", raw.headers.get("x-agentcc-provider"))
print("Latency:", raw.headers.get("x-agentcc-latency-ms"), "ms")
print("Cost:   ", raw.headers.get("x-agentcc-cost"), "USD")
print("Cache:  ", raw.headers.get("x-agentcc-cache"))
Set AGENTCC_API_KEY with a key from app.futureagi.com. Gateway docs ↗
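
Because the gateway advertises an OpenAI-compatible endpoint, the stock openai Python client pointed at the gateway base URL should also work; this is a sketch inferred from that compatibility claim rather than a separately documented integration.

# Same call through the stock OpenAI Python client (pip install openai),
# relying only on the gateway's OpenAI-compatible /v1 surface.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["AGENTCC_API_KEY"],    # gateway key, not an OpenAI key
    base_url="https://gateway.futureagi.com/v1",
)

resp = client.chat.completions.create(
    model="github-copilot/gpt-3-5-turbo-0613",
    messages=[{"role": "user", "content": "Hello from the OpenAI client!"}],
)
print(resp.choices[0].message.content)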

Same model on other providers

gpt-3-5-turbo-0613 is also available via 1 other route. Pricing, regions, and capabilities can differ — compare before routing production traffic.

Provider | Input / 1M | Output / 1M | Verified
OpenAI | $1.50 | $2.00 | May 7, 2026
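
As a worked example of the per-1M rates in the table above (the OpenAI route, since the GitHub Copilot route has no published pricing), the cost of a single call is straightforward arithmetic; the token counts are illustrative.

# Cost arithmetic for the OpenAI route rates shown above
# ($1.50 / 1M input tokens, $2.00 / 1M output tokens).
input_tokens, output_tokens = 12_000, 800           # illustrative call size
cost = input_tokens / 1_000_000 * 1.50 + output_tokens / 1_000_000 * 2.00
print(f"${cost:.4f}")                               # $0.0196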

FAQ

How much does GPT 3.5 Turbo 0613 cost?

Public per-token pricing for GPT 3.5 Turbo 0613 is not yet published. Submit a source on this page to help us add it.

What is the context window of GPT 3.5 Turbo 0613?

GPT 3.5 Turbo 0613 supports a 16,384-token context window with up to 4,096 output tokens.

Does GPT 3.5 Turbo 0613 support function calling?

Yes — GPT 3.5 Turbo 0613 supports function (tool) calling.

Is GPT 3.5 Turbo 0613 good for production?

GPT 3.5 Turbo 0613 is well-suited for agentic workflows that depend on reliable tool calls. Consider alternatives if you need a long context: the 16,384-token window is in the bottom quartile and is not ideal for long documents or large RAG corpora.

How can I route to GPT 3.5 Turbo 0613 with fallback?

Use Agent Command Center: a single OpenAI-compatible endpoint that supports cost-optimized routing, latency-aware retries, model fallback, and shadow traffic. Configure once, swap models without app changes.
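
Routing and fallback are configured on the gateway side, so no client changes are needed. Purely as an application-level illustration of the fallback idea (not the gateway's built-in mechanism), a client-side version might look like the sketch below; the fallback model id and its route are illustrative.

# Client-side fallback sketch: try the primary model id, fall back to an
# alternative on error. The gateway normally handles this server-side.
import os
from agentcc import AgentCC

client = AgentCC(
    api_key=os.environ["AGENTCC_API_KEY"],
    base_url="https://gateway.futureagi.com/v1",
)

def chat_with_fallback(messages, models):
    last_err = None
    for model in models:
        try:
            return client.chat.completions.create(model=model, messages=messages)
        except Exception as err:              # e.g. rate limit or provider outage
            last_err = err
    raise last_err

resp = chat_with_fallback(
    [{"role": "user", "content": "Ping"}],
    ["github-copilot/gpt-3-5-turbo-0613", "openai/gpt-3-5-turbo-0613"],  # illustrative order
)
print(resp.choices[0].message.content)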

Useful links for GPT 3.5 Turbo 0613

Official sources, independent benchmarks, and pricing aggregators — no random search-engine guesses.