
How to Get an OpenAI API Key in 2026: A 5-Minute Setup Guide

Generate an OpenAI API key in 2026 with GPT-5 access. Step-by-step setup, secure storage, billing limits, curl + Python examples, and eval add-ons.


TL;DR

  1. Sign up: platform.openai.com, verify email + phone (1 min)
  2. Add billing: load a prepaid balance and set a hard usage limit (1 min)
  3. Create key: API keys page, Create new secret key, scope to a project (30 sec)
  4. Store securely: save as OPENAI_API_KEY env var, add .env to .gitignore (30 sec)
  5. First call: curl or pip install openai, then client.responses.create (2 min)
  Optional: evaluation. traceAI auto-instruments the OpenAI SDK and ships traces to Future AGI (+5 min)

Most new accounts will need to add billing or load a small prepaid balance before the first call returns 200. Check the Settings > Billing page for your account.

What an OpenAI API key actually is

An OpenAI API key is a long random string (commonly with an sk- style prefix) that authenticates server-side calls to the OpenAI REST API. Treat it as a bearer secret. Every call your application makes carries the key in an Authorization: Bearer $OPENAI_API_KEY header. OpenAI uses the key to identify the account and project to bill, the rate limits to apply, and the model access to grant. It is the equivalent of a password for your OpenAI compute budget, which is why it must never appear in client-side code, public repositories, or screenshots.

Keys are not encrypted by themselves. They are bearer tokens, which means anyone who holds the key can use it. Treat them with the same care you would a database admin password.
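As a concrete illustration, the bearer header that the SDKs build for you on every request looks like this (a minimal sketch; auth_headers is a hypothetical helper, not part of the OpenAI SDK):

```python
import os

def auth_headers(api_key: str) -> dict:
    # The OpenAI REST API authenticates every request with this bearer header.
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

# Example: build headers from the environment variable set in Step 4.
headers = auth_headers(os.environ.get("OPENAI_API_KEY", "sk-placeholder"))
```

Anyone who can read that header value can spend on your behalf, which is the whole argument for the storage practices in Step 4.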

Step 1: Create a Platform account

Open platform.openai.com and sign up. The Platform account is separate from the ChatGPT consumer subscription you may already have. You can sign up with email, Google, Microsoft, or Apple. After email verification, OpenAI may require a phone number for SMS verification depending on your account, region, or risk checks.

Once inside, open the project switcher in the top left and create a new project for the workload you are building. Projects are the unit of cost reporting, rate limiting, and key scoping, so name them clearly (for example, customer-support-bot or eval-harness-dev).

Step 2: Add billing and set a hard usage limit

Go to Settings > Billing and add a card. Load a prepaid balance; USD 5 is enough to start. Then open Usage limits and set:

  • A hard monthly limit that stops API calls once hit.
  • A soft alert that emails you at 50% and 80% of the hard limit.

This is the single most important step. A runaway prompt loop on gpt-5 can spend hundreds of dollars in minutes. The hard limit caps the blast radius.
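The same two limits can be mirrored client-side as a cheap extra safety net. A sketch (the thresholds match the alerts above, but usage_alerts and should_block are illustrative helpers, not an OpenAI API):

```python
def usage_alerts(spent_usd: float, hard_limit_usd: float) -> list:
    # Return the alert thresholds already crossed: 50%, 80%, and the hard limit itself.
    return [pct for pct in (0.5, 0.8, 1.0) if spent_usd >= pct * hard_limit_usd]

def should_block(spent_usd: float, hard_limit_usd: float) -> bool:
    # Stop issuing API calls once the hard limit is reached.
    return spent_usd >= hard_limit_usd
```

Checking should_block before each call means a runaway loop stops at your own threshold instead of waiting for the account-level limit to kick in.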

Step 3: Create a project API key

Open platform.openai.com/api-keys and click Create new secret key. Choose:

  • Owned by: Project (not User, unless you have a specific reason).
  • Project: the one you just created.
  • Name: descriptive, for example support-bot-prod.
  • Permissions: restrict to the endpoints the app actually uses.

Click Create. The dialog shows the full key exactly once. Paste it into your password manager or directly into your environment file before closing the dialog. If you lose the key value, you cannot recover it; you have to create a new one.

Step 4: Store the key as an environment variable

Never hardcode the key. The minimum acceptable practice on a developer laptop is a .env file plus .gitignore.

# .env
OPENAI_API_KEY=sk-...your-key-here...
# .gitignore
.env
.env.local
.env.*.local
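What load_dotenv does is conceptually simple: read KEY=VALUE pairs into the process environment. A stdlib-only sketch of the same idea (python-dotenv additionally handles quoting, export prefixes, and variable interpolation, so use it rather than this):

```python
import os

def load_env_file(path: str = ".env") -> None:
    # Read KEY=VALUE lines into os.environ, skipping blanks and comments.
    # Variables already present in the environment win over file values.
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```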

For production, use a managed secret store: AWS Secrets Manager, Google Secret Manager, Azure Key Vault, HashiCorp Vault, Doppler, or 1Password Secrets Automation. These rotate keys, audit access, and inject the secret at runtime so it never lives on disk.

If you accidentally push a key to a public repository, revoke it from the Platform dashboard immediately. GitHub secret scanning will also detect it and notify OpenAI, but assume the key is already compromised.

Step 5: Make your first request

curl

curl https://api.openai.com/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-5",
    "input": "Write a haiku about a developer who finally remembered to rotate their API key."
  }'

Python

Install the SDK:

pip install openai python-dotenv

Then:

import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

response = client.responses.create(
    model="gpt-5",
    input="Summarize the difference between GPT-5 and GPT-5 mini in two sentences.",
)
print(response.output_text)

The Responses API is the recommended path in current Python SDK versions. Legacy code that calls openai.ChatCompletion.create (from the pre-1.x SDK) no longer works and should be migrated to client.responses.create or client.chat.completions.create.
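A migration sketch, assuming the key is already in the environment (modern_chat_call is a hypothetical wrapper, not an SDK function):

```python
def modern_chat_call(prompt: str, model: str = "gpt-5") -> str:
    # Modern client-style replacement for legacy openai.ChatCompletion.create(...).
    from openai import OpenAI  # SDK 1.x+ client style

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

The key structural change is that the single prompt string becomes a messages list of role/content dicts, and the result moves from a module-level call to a method on the client object.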

Node.js

import "dotenv/config";
import OpenAI from "openai";

const client = new OpenAI();

const response = await client.responses.create({
  model: "gpt-5",
  input: "Give me three test ideas for an OpenAI integration.",
});

console.log(response.output_text);

Step 6: Add evaluation and observability

A key gets you tokens. It does not tell you whether the model is producing correct answers, hallucinating, drifting after a model update, or quietly burning budget on retries. In 2026 the gap between “the API call succeeded” and “the response was good” is where most production AI bugs hide.

Future AGI ships two open source packages that close this gap and pair directly with the openai SDK:

  • traceAI is an OpenTelemetry-based auto-instrumentation library. One import wraps the openai client and emits a structured trace per call, including prompt, response, model, tokens, latency, cost, and tool calls.
  • ai-evaluation runs evaluators against those traces, scoring faithfulness, context adherence, completeness, tone, toxicity, and PII leakage in production.

Both are Apache 2.0. Create FI_API_KEY and FI_SECRET_KEY in the Future AGI dashboard, and add them alongside OPENAI_API_KEY in your .env:

# .env
OPENAI_API_KEY=sk-...
FI_API_KEY=...
FI_SECRET_KEY=...

Then load them and register the tracer:

import os
from dotenv import load_dotenv

load_dotenv()  # loads OPENAI_API_KEY, FI_API_KEY, FI_SECRET_KEY

from fi_instrumentation import register
from fi_instrumentation.fi_types import ProjectType
from traceai_openai import OpenAIInstrumentor

trace_provider = register(
    project_type=ProjectType.OBSERVE,
    project_name="openai-getting-started",
)

OpenAIInstrumentor().instrument(tracer_provider=trace_provider)

# Use openai normally; every call is now traced
from openai import OpenAI
client = OpenAI()
response = client.responses.create(
    model="gpt-5",
    input="Explain prompt caching in one paragraph.",
)
print(response.output_text)

To score the output, add an evaluator:

from fi.evals import evaluate

result = evaluate(
    "context_adherence",
    output=response.output_text,
    context="GPT-5 supports prompt caching for repeated prefixes.",
)
print(result.score)
print(result.reason)

The dashboard route for production traffic is /platform/monitor/command-center. From there you set evaluators to run on every trace, alert on regressions, and ship a guardrail layer in front of the OpenAI call when content needs to be blocked or rewritten.

Rate limits, retries, and cost control

OpenAI returns HTTP 429 when you exceed requests per minute (RPM) or tokens per minute (TPM) for your tier. New accounts start on Tier 1 and graduate to higher tiers once cumulative spend crosses usage thresholds in good standing.

  • 401 Unauthorized: bad or revoked key. Generate a new key and update the env var.
  • 403 Forbidden: model not in your project’s allow list. Enable the model in project settings.
  • 429 Rate limit exceeded: too many RPM or TPM. Use exponential backoff, raise your tier, or batch calls.
  • 5xx Server error: transient OpenAI outage. Retry with backoff and check status.openai.com.
  • Empty output_text: refusal or stopped generation. Inspect response.output[0].content.

Use exponential backoff with jitter for 429 and 5xx, and read x-request-id from the response headers so OpenAI support can trace a failure if you escalate.
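A retry sketch for 429 and 5xx responses, using full-jitter backoff (the is_retryable predicate is yours to define, and nothing here is OpenAI-specific; the official SDKs already retry for you when you pass max_retries):

```python
import random
import time

def backoff_delay(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    # Full-jitter exponential backoff: uniform over [0, min(cap, base * 2**attempt)].
    return random.uniform(0.0, min(cap, base * 2 ** attempt))

def call_with_retries(make_request, is_retryable, max_attempts: int = 5):
    # make_request: zero-argument callable that issues the API call.
    # is_retryable: predicate over the raised exception (True for 429/5xx-style errors).
    for attempt in range(max_attempts):
        try:
            return make_request()
        except Exception as err:
            if attempt == max_attempts - 1 or not is_retryable(err):
                raise
            time.sleep(backoff_delay(attempt))
```

Full jitter (a random delay rather than a fixed doubling) spreads retries out so a fleet of clients does not hammer the API in lockstep after an outage.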

For cost, the most effective controls are:

  • Use the cheapest model that passes your eval. GPT-5 nano often handles classification and extraction at a fraction of GPT-5 cost.
  • Cache prompt prefixes. OpenAI prompt caching cuts input token cost on repeated system prompts.
  • Batch where latency allows. The batch endpoint discounts non-urgent jobs.
  • Trim history. Long conversation history is the single biggest hidden cost in chatbot apps.
  • Set per-key spending alerts. Catch a runaway loop within minutes, not days.
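The history-trimming point can be sketched as follows (character-based for brevity; production code should count tokens with a real tokenizer, and trim_history is an illustrative helper):

```python
def trim_history(messages: list, max_chars: int = 8000) -> list:
    # Keep the system prompt plus the most recent turns that fit the budget.
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    used = sum(len(m["content"]) for m in system)
    kept = []
    for m in reversed(rest):  # walk newest first
        if used + len(m["content"]) > max_chars:
            break
        kept.append(m)
        used += len(m["content"])
    return system + list(reversed(kept))
```

Dropping the oldest turns while pinning the system prompt keeps behavior stable while capping the per-request input cost.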

Security and compliance checklist

  • Use project keys, not user keys, for any application.
  • Rotate keys on a schedule (90 days is a common cadence) and immediately after any departure or suspicious activity.
  • Store keys in a managed secret store in production. Inject at runtime, never write to disk.
  • Restrict each key to the endpoints the app actually uses.
  • Log only the prompt, response, and metadata you have a legal basis to retain; never log the key itself.
  • Send data only over HTTPS, which the SDK does by default.
  • Match your data handling to OpenAI’s usage policies, especially around prohibited content and PII.
  • Run a guardrail layer (Future AGI Agent Command Center, NeMo Guardrails, or your own) when the model output goes to end users.
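The rotation cadence in the checklist is easy to enforce with a scheduled job. A sketch (key creation dates would come from your secret store's metadata; keys_due_for_rotation is a made-up helper name):

```python
from datetime import datetime, timedelta, timezone

def keys_due_for_rotation(created_at_by_key: dict, max_age_days: int = 90) -> list:
    # Return the names of keys older than the rotation cadence.
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    return [name for name, created in created_at_by_key.items() if created < cutoff]
```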

Common errors and fixes

“Invalid API key” right after creating one

You may have pasted a truncated copy. The full key is the entire string shown in the Create dialog. Recreate the key and copy the full value before closing the dialog.

“You exceeded your current quota”

The prepaid balance is empty or the hard limit is hit. Top up under Settings > Billing.

“The model gpt-5 does not exist or you do not have access”

The project does not have GPT-5 enabled. Open Settings > Limits for the project and enable the model. Some models require additional verification.

Leaked key in a public repo

Revoke immediately, generate a new one, and audit usage for unexpected spend. GitHub secret scanning will notify OpenAI but do not rely on that as the only line of defense.

Where to go next

Once the key is working and traced, the OpenAI dashboard tracks tokens and dollars, and the Future AGI dashboard tracks quality, drift, and failure modes. Together they cover the cost and correctness sides of a production OpenAI deployment.

Frequently asked questions

What is an OpenAI API key and why do I need one?
An OpenAI API key is a secret token starting with sk- that authenticates server-side calls to OpenAI's REST API. It is required for any program that talks to models like GPT-5, GPT-5 mini, GPT-5 nano, or the OpenAI image and audio endpoints. The key identifies which OpenAI account and project gets billed for the request, and it carries the rate limits and access scopes tied to that project. Without a valid key the API returns a 401 error and refuses to respond.
How do I get an OpenAI API key in 2026?
Sign up at platform.openai.com, verify your email and phone number, then add a payment method under Settings > Billing. Once billing is active, open the API keys page, click Create new secret key, scope it to a project, and copy the value immediately. The full key is shown exactly once, so paste it into a password manager or environment variable before closing the dialog. Free trial credits are no longer offered, so a small prepaid balance is required to make calls.
How much does it cost to use the OpenAI API in 2026?
OpenAI charges per million tokens, with separate input and output pricing. GPT-5 sits at the top of the standard tier, GPT-5 mini is the mid tier, and GPT-5 nano is the cheapest text model. Embeddings, image generation, and audio endpoints each have their own pricing. Check platform.openai.com/docs/pricing for current numbers because pricing changes through the year. Caching, batch mode, and prompt deduplication can each cut costs significantly on production workloads.
Where should I store my OpenAI API key?
Store the key as an environment variable rather than hardcoding it. On a laptop, place OPENAI_API_KEY in a .env file that is listed in .gitignore. In production, use a managed secret store such as AWS Secrets Manager, Google Secret Manager, HashiCorp Vault, Doppler, or 1Password Secrets Automation. Rotate the key on a schedule, and never paste it into a chat, screenshot, or public Git repository. GitHub secret scanning will revoke keys it detects, but assume any leaked key is already compromised.
What is the difference between a project key and a user key?
Project keys are scoped to a specific project inside your OpenAI organization and carry that project's rate limits, models, and billing. User keys are tied to an individual account. For production work, always use project keys because they can be rotated and revoked without disturbing other workloads, and they make it easier to track which application is responsible for spend. Service account keys are a third option for fully programmatic workloads where no human owner should be tied to the key.
How do I handle OpenAI API errors and rate limits?
OpenAI returns 401 for invalid keys, 403 for missing model access, 429 for rate or quota exhaustion, and 5xx for transient server issues. For 429 and 5xx errors, retry with exponential backoff and jitter. Use the x-ratelimit-remaining-requests and x-ratelimit-remaining-tokens headers to throttle proactively. The official Python and Node SDKs handle retries automatically when you pass max_retries. Log the request id from x-request-id so OpenAI support can trace failures.
Can I share one OpenAI API key across my whole team?
No. Shared keys make it impossible to attribute spend, revoke access for a single departing engineer, or set per-project rate limits. Best practice in 2026 is one project per workload, separate keys for local development and production, and service account keys for automated systems. The OpenAI dashboard tracks usage by key so you can spot a runaway integration. If a key leaks, you can revoke that one key without breaking other parts of the team's stack.
How do I evaluate the output quality of my OpenAI calls?
An API key gets you tokens, but it does not tell you whether the model is producing correct answers, hallucinating, or drifting over time. Pair every production OpenAI deployment with an evaluation layer that scores faithfulness, context adherence, completeness, and tone. Future AGI's traceAI package auto-instruments the openai SDK and ships every call with prompt, response, and latency to the Future AGI dashboard, where you run the same evaluators on production traffic that you run on test sets.