
AWS Bedrock in 2026: Models, Bedrock Agents, Knowledge Bases, Guardrails, and Evaluation

A practical guide to AWS Bedrock in 2026: the model catalog (Claude, Titan, Llama 4, Mistral, Cohere, AI21), Bedrock Agents, Knowledge Bases, and Guardrails, plus how to layer on evaluation and tracing.

AWS Bedrock in 2026: what it is and how to think about it

AWS Bedrock is the managed foundation model service inside AWS. In 2026 it is not just an API gateway. It bundles the model catalog (Claude, Llama 4, Mistral, Cohere, AI21, Titan, Nova, Stability), the higher-level building blocks (Bedrock Agents, Knowledge Bases, Guardrails, Prompt Management), and tight integration with the rest of AWS (IAM, KMS, VPC endpoints, CloudWatch, CloudTrail). If you build GenAI on AWS, Bedrock is usually the control plane and your application code lives on Lambda, ECS, or EKS around it.

This post covers the 2026 model lineup, the agent and RAG features, the Guardrails control surface, and how to pair Bedrock with Future AGI for evaluation and tracing.

TL;DR

| What you want | Bedrock primitive | Pairs well with |
| --- | --- | --- |
| Call a foundation model | InvokeModel and Converse APIs | CloudWatch, CloudTrail, traceAI spans |
| Build an agent | Bedrock Agents | Lambda action groups, Future AGI evals |
| Run RAG over your docs | Bedrock Knowledge Bases | S3, OpenSearch Serverless, Aurora pgvector |
| Add safety policy | Bedrock Guardrails | Provider-native filters and external evals |
| Compare models | Bedrock model evaluation | ai-evaluation library for custom metrics |

Foundation models on Bedrock in 2026

Bedrock exposes models from multiple providers behind a single API. Common 2026 picks:

  • Anthropic Claude family (Sonnet, Opus, Haiku generations). Strong on tool use, long context, and document reasoning. Often the default for agents.
  • Meta Llama 4 (Maverick, Scout). Open weight option for cost-sensitive workloads or when you want to compare against a fine-tunable base.
  • Mistral Large 2 and Mixtral. Strong European option, often used for multilingual workloads.
  • Cohere Command R+. Tuned for RAG and tool use.
  • AI21 Jamba. Hybrid Mamba-Transformer architecture with long context.
  • Amazon Titan and Nova families. Titan Text Embeddings v2 is the standard Bedrock embedding model. Nova Pro and Nova Lite cover general text generation; Nova Micro targets low latency.
  • Stability AI image models for image generation workloads.
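As one concrete call shape, Titan Text Embeddings v2 takes a JSON body through InvokeModel. A minimal sketch of the request builder; the body fields (inputText, dimensions, normalize) follow the published Titan v2 schema, but verify them against the current model reference before shipping:

```python
import json

def build_titan_embedding_request(text: str, dimensions: int = 1024) -> dict:
    """Kwargs for a bedrock-runtime InvokeModel call to Titan Text Embeddings v2."""
    return {
        "modelId": "amazon.titan-embed-text-v2:0",
        "contentType": "application/json",
        "accept": "application/json",
        "body": json.dumps({
            "inputText": text,
            "dimensions": dimensions,  # Titan v2 supports reduced dimensions
            "normalize": True,         # unit-length vectors for cosine similarity
        }),
    }

# Usage with boto3 (not executed here):
# client = boto3.client("bedrock-runtime")
# resp = client.invoke_model(**build_titan_embedding_request("refund policy"))
# vector = json.loads(resp["body"].read())["embedding"]
```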

Model availability differs by region, and snapshot versions update. Pin the exact model ID and inference profile in your code (for example, an inference profile ARN that points to a cross-region model) so deployments stay reproducible.
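To make pinning concrete, here is a minimal sketch using the Converse message shape. The model ID below is a hypothetical cross-region inference profile; substitute the exact snapshot or profile ARN from your own account:

```python
# Hypothetical pinned ID: a cross-region inference profile for a Claude snapshot.
PINNED_MODEL_ID = "us.anthropic.claude-sonnet-4-20250514-v1:0"

def build_converse_request(prompt: str, model_id: str = PINNED_MODEL_ID) -> dict:
    """Kwargs for the bedrock-runtime Converse API with a pinned model ID."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.2},
    }

# Usage with boto3 (not executed here):
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# resp = client.converse(**build_converse_request("Summarize our refund policy."))
# text = resp["output"]["message"]["content"][0]["text"]
```

Keeping the ID in one constant (or in config) makes it easy to audit exactly which snapshot each deployment calls.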

Bedrock Agents

Bedrock Agents wrap a model with planning, tool use, and optional memory. You build an agent by defining:

  • A base foundation model (commonly a Claude or Nova snapshot).
  • An instructions prompt that defines the agent’s role and constraints.
  • One or more action groups, each backed by a Lambda function with an OpenAPI schema, or by a returned-control flow your application implements.
  • Optional Knowledge Base attachments for grounded retrieval.
  • Optional session memory for multi-turn conversations.

At runtime, the agent receives a user prompt, plans steps, calls tools through your action groups, and composes a final response. Each tool call runs under an IAM role you control, so the agent never sees credentials or services it is not authorized for.
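A Lambda-backed action group receives a structured event describing which tool the agent chose. The sketch below assumes the OpenAPI-schema event format (actionGroup, apiPath, httpMethod, parameters); the `/inventory` path and its lookup are hypothetical, and you should confirm the exact contract against the current Bedrock Agents Lambda documentation:

```python
import json

def lambda_handler(event, context):
    """Minimal Bedrock Agents action-group handler (OpenAPI-schema style)."""
    # Flatten the agent-supplied parameters into a simple dict.
    params = {p["name"]: p["value"] for p in event.get("parameters", [])}

    if event.get("apiPath") == "/inventory":
        # Stand-in for a real inventory lookup against an internal API.
        body = {"sku": params.get("sku"), "in_stock": True}
    else:
        body = {"error": f"unhandled path {event.get('apiPath')}"}

    # Echo back the routing fields so the agent can match the response.
    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event.get("actionGroup"),
            "apiPath": event.get("apiPath"),
            "httpMethod": event.get("httpMethod"),
            "httpStatusCode": 200,
            "responseBody": {"application/json": {"body": json.dumps(body)}},
        },
    }
```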

For production agents, capture every step as an OpenTelemetry span so you can debug failures. Future AGI’s traceAI integrations include Bedrock instrumentation that emits agent, tool, and chain spans you can visualize side by side.

Bedrock Knowledge Bases

Bedrock Knowledge Bases is managed RAG. You point it at an S3 bucket of source documents, choose a parser and chunking strategy, pick an embedding model (Titan Text Embeddings v2, Cohere Embed, or an open weight model), and select a vector store:

  • Amazon OpenSearch Serverless: default option, fully managed.
  • Aurora pgvector: when you already run PostgreSQL.
  • Pinecone, Redis Enterprise Cloud, MongoDB Atlas: external managed vector stores.

You query the knowledge base directly through RetrieveAndGenerate or attach it to a Bedrock Agent. Citations come back attached to the response so downstream apps can render source links.
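A direct RetrieveAndGenerate call looks roughly like this. The knowledge base ID and model ARN are placeholders from your account, and the nested configuration keys follow the bedrock-agent-runtime API shape, which you should confirm in the current docs:

```python
def build_rag_request(kb_id: str, model_arn: str, question: str, top_k: int = 5) -> dict:
    """Kwargs for bedrock-agent-runtime RetrieveAndGenerate against one knowledge base."""
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,
                "modelArn": model_arn,
                "retrievalConfiguration": {
                    "vectorSearchConfiguration": {"numberOfResults": top_k},
                },
            },
        },
    }

# Usage with boto3 (not executed here):
# client = boto3.client("bedrock-agent-runtime")
# resp = client.retrieve_and_generate(**build_rag_request(kb_id, model_arn, "What is our refund window?"))
# resp["output"]["text"] holds the answer; resp["citations"] holds the sources.
```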

Quality usually depends on chunking strategy, embedding choice, and retrieval parameters more than the LLM. Treat RAG as an evaluation problem from day one: score faithfulness, retrieval precision, and answer relevance on a regression set every time you change the corpus or the embedding.

Bedrock Guardrails

Bedrock Guardrails is a policy layer that sits between your app and any model in the catalog. A single guardrail applies the same rules across Claude, Llama 4, Titan, and Mistral. Policy types include:

  • Content filters: harassment, hate, sexual content, violence, misconduct, prompt attacks.
  • Denied topics: free-form descriptions of topics to block.
  • Word filters: explicit blocklists.
  • Sensitive information filters: PII detection and masking (mask, block, or anonymize).
  • Contextual grounding checks: compare model output against retrieved context to catch hallucinations.

Guardrails return a structured response when triggered, so your application can surface a safe fallback message. Combine Guardrails with downstream evals like ai-evaluation’s hallucination and faithfulness metrics for layered defense.
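Wiring a guardrail into a Converse call and handling an intervention can be sketched as follows. The guardrail ID and version are placeholders from your account; the guardrailConfig keys and the "guardrail_intervened" stop reason follow the Converse API shape, which you should verify against the current reference:

```python
def build_guarded_request(model_id: str, guardrail_id: str,
                          guardrail_version: str, prompt: str) -> dict:
    """Kwargs for a Converse call with a guardrail attached."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "guardrailConfig": {
            "guardrailIdentifier": guardrail_id,
            "guardrailVersion": guardrail_version,
            "trace": "enabled",  # include assessment details for debugging
        },
    }

def render(resp: dict) -> str:
    """Return model text, or a safe fallback when the guardrail fired."""
    if resp.get("stopReason") == "guardrail_intervened":
        return "Sorry, I can't help with that request."
    return resp["output"]["message"]["content"][0]["text"]
```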

Evaluating Bedrock models and agents

Bedrock includes built-in model evaluation jobs for quick comparisons (accuracy, robustness, toxicity) against a labeled dataset you upload. This is fine for one-off snapshot comparisons, but production-grade evaluation usually wants a continuous, programmatic loop.

Pair Bedrock with Future AGI’s open source ai-evaluation library (Apache 2.0) for LLM-judge and metric-model scoring on live traffic and offline regression sets.

from fi.evals import evaluate

# Score a Bedrock response for faithfulness against retrieved context.
score = evaluate(
    "faithfulness",
    output="Refund policy is 30 days from purchase date.",
    context="Our refund window is thirty days after purchase, no exceptions.",
    model="turing_flash",
)

print(score)

turing_flash returns in roughly 1 to 2 seconds in the cloud, turing_small in 2 to 3 seconds, and turing_large in 3 to 5 seconds. Pick the cheapest judge that meets your accuracy bar.

For LLM-as-judge with your own rubric:

from fi.evals.metrics import CustomLLMJudge
from fi.evals.llm import LiteLLMProvider

judge = CustomLLMJudge(
    name="bedrock_response_quality",
    grading_criteria="Is the response grounded in the retrieved context and free of speculation?",
    provider=LiteLLMProvider(model="gpt-4o-mini"),
)

result = judge.evaluate(
    output="The annual fee is $99 and the trial period is 14 days.",
    context="Annual fee: $99. Trial period: 14 days, then automatic renewal.",
)

print(result)

Tracing Bedrock calls with traceAI

traceAI is Future AGI’s open source (Apache 2.0) OpenTelemetry instrumentation for LLMs and agents. Instrumenting Bedrock takes a few lines.

from fi_instrumentation import register, FITracer

register(project_name="bedrock-prod")
tracer = FITracer(__name__)

@tracer.chain
def answer_question(question: str) -> str:
    # call Bedrock InvokeModel / Converse here, return the response text
    ...

Every call lands in your Future AGI project with prompt, response, latency, cost, and any nested tool calls. Failures become replayable by example, which is essential when an agent goes off the rails three steps deep.

Industry use cases

Retail

Bedrock Agents power conversational shopping assistants that browse the catalog, compare items, and check inventory through action groups bound to internal APIs. Pair with Knowledge Bases for product spec retrieval and Guardrails to block pricing leaks or off-brand statements.

Healthcare

Healthcare deployments use Bedrock with strict IAM scoping, VPC endpoints, and KMS-managed encryption. Common patterns: clinical document summarization, draft note generation for clinician review, and patient education content with Guardrails enforcing scope. Any clinical decision support requires regulatory review and human-in-the-loop sign-off.

Finance

Banks and fintechs use Bedrock for fraud triage, document understanding (KYC, statements), and customer service agents. Guardrails enforce denied topics like investment advice, and CloudTrail plus traceAI spans give regulators a full audit trail of every model call.

Bedrock vs Azure OpenAI vs Vertex AI

| Capability | AWS Bedrock | Azure OpenAI | Google Vertex AI |
| --- | --- | --- | --- |
| Model variety | Multi-provider (Anthropic, Meta, Mistral, Cohere, AI21, Amazon) | OpenAI catalog plus a few partners | Gemini primary, plus partner models |
| Native vector | OpenSearch Serverless, Aurora pgvector | Azure AI Search | Vertex AI Vector Search |
| Agent framework | Bedrock Agents | Azure AI Agents | Vertex AI Agent Builder |
| Safety layer | Bedrock Guardrails | Azure AI Content Safety | Vertex AI Safety filters |
| Identity and policy | IAM, KMS, VPC endpoints | Microsoft Entra ID, Key Vault | IAM, KMS, VPC Service Controls |
| Best fit | AWS-native workloads | Microsoft 365 + Azure-native | Google Cloud + BigQuery shops |

Pick the platform that matches your existing identity, data, and security tooling. Switching is expensive once you wire IAM, encryption, and VPC paths into your AI stack.

Security and compliance

Bedrock inherits the broader AWS posture: ISO 27001, SOC 2, PCI DSS, and HIPAA-eligible coverage on supported configurations. Bedrock can support GDPR-aligned architectures when configured with appropriate data controls, and teams that need FedRAMP must verify current Bedrock authorization status, region, and service boundary in the AWS compliance documentation. AWS explicitly does not use customer prompts or responses to train the underlying foundation models. Customer data flowing through Bedrock can stay inside your VPC via VPC endpoints, and all traffic is encrypted in transit and at rest with KMS-managed keys.

For regulated workloads, combine:

  • Private VPC endpoints to keep traffic off the public internet.
  • KMS customer-managed keys for encryption.
  • IAM role-based access scoped per agent and per tool.
  • CloudTrail for an immutable audit trail of every API call.
  • External eval and observability (Future AGI) for continuous hallucination and policy-compliance checks.
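The per-agent IAM scoping above usually comes down to a narrow policy per role. A hedged sketch of a least-privilege policy that lets one role invoke exactly one pinned model (or inference profile) ARN; verify action names and ARN formats against the current IAM reference for Bedrock:

```python
import json

def invoke_only_policy(model_arn: str) -> str:
    """Render an IAM policy document scoping invocation to a single model ARN."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "InvokePinnedModelOnly",
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            "Resource": model_arn,
        }],
    }, indent=2)
```

Attach one such policy per agent or per tool role so CloudTrail entries map cleanly to the component that made each call.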

How Future AGI pairs with AWS Bedrock

Future AGI plugs into Bedrock at three layers:

  1. Evaluation. The open source ai-evaluation library scores Bedrock responses for faithfulness, hallucination, instruction-following, and custom LLM-judge rubrics on offline datasets and live traffic.
  2. Tracing. traceAI (Apache 2.0) emits OpenTelemetry spans for Bedrock model calls, Bedrock Agent steps, tool calls, and any RAG retrievals. Spans flow into your Future AGI project for replay, regression, and drift detection.
  3. Governance gateway. The Future AGI Agent Command Center at /platform/monitor/command-center provides a BYOK gateway for centralized prompt versioning, model fallback policies, rate limits, and per-tenant guardrails. Useful when you mix Bedrock with non-AWS providers in the same product.

Configuration uses standard environment variables FI_API_KEY and FI_SECRET_KEY so secrets stay in AWS Secrets Manager or Parameter Store.
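One way to keep those keys out of code is to resolve them at startup. A small sketch; the secret names are hypothetical, and in AWS the `get_secret` callable would wrap Secrets Manager's get_secret_value:

```python
import os

def load_fi_keys(get_secret) -> None:
    """Export Future AGI credentials as the env vars its SDKs read.

    `get_secret` is any callable mapping a secret name to its string value,
    so the same code works with Secrets Manager, Parameter Store, or a stub.
    """
    os.environ["FI_API_KEY"] = get_secret("futureagi/api-key")
    os.environ["FI_SECRET_KEY"] = get_secret("futureagi/secret-key")

# Usage with boto3 (not executed here):
# sm = boto3.client("secretsmanager")
# load_fi_keys(lambda name: sm.get_secret_value(SecretId=name)["SecretString"])
```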

When to choose Bedrock

Bedrock is the right default when:

  • Your workloads already live in AWS and you value IAM, KMS, and VPC-endpoint primitives.
  • You want one API for many providers without committing to a single model family.
  • You need managed agents, RAG, and guardrails without building each component from scratch.
  • You have compliance requirements (HIPAA, FedRAMP, SOC 2) that benefit from AWS’s baseline.

Pair Bedrock with continuous evaluation, OpenTelemetry tracing, and a governance gateway so you ship AI features your team and your auditors can trust.

Frequently asked questions

What is AWS Bedrock in 2026?
AWS Bedrock is a fully managed service that provides API access to foundation models from Anthropic, Meta, Mistral, Cohere, AI21, Stability AI, and Amazon, plus higher-level features like Bedrock Agents, Knowledge Bases, Guardrails, and managed model evaluation. In 2026 it sits as the central GenAI control plane for teams that already run on AWS, with deep ties into IAM, KMS, VPC endpoints, CloudWatch, and CloudTrail.
Which models are available on Bedrock in 2026?
The catalog includes the Claude family from Anthropic (latest generations like Sonnet and Opus), Llama 4 from Meta, Mistral Large 2 and Mixtral, Cohere Command R+, AI21 Jamba, Stability AI image models, and Amazon's own Titan and Nova families. Available models and snapshot versions change frequently, so check the AWS Bedrock console or the model catalog API for the live list in your region before you architect against a specific snapshot.
How do Bedrock Agents work?
Bedrock Agents wrap a chosen foundation model with a planner, an action group registry, and optional memory and knowledge base attachments. You define OpenAPI tool schemas or Lambda functions, the agent decides which to call to satisfy the user goal, and AWS handles the orchestration loop. Each step is scoped to an IAM role so the agent only sees what its policy allows.
What does Bedrock Knowledge Bases do?
Bedrock Knowledge Bases provides managed retrieval-augmented generation. You point it at an S3 bucket of documents, choose a vector store (OpenSearch Serverless, Aurora pgvector, Pinecone, Redis, MongoDB Atlas), and the service handles chunking, embedding, indexing, retrieval, and citation. Apps query the knowledge base directly or wire it into an agent for grounded answers without a custom RAG pipeline.
How do Bedrock Guardrails compare to provider-native safety?
Bedrock Guardrails layer on top of any model in the catalog, so the same policy applies whether you call Claude, Llama 4, or Titan. Policies cover content categories, denied topics, PII detection and masking, contextual grounding checks for hallucination, and prompt-attack heuristics. Provider-native safety still runs underneath, but Guardrails give you a portable, audited control plane.
How do I evaluate models on Bedrock?
Use Bedrock's built-in evaluation jobs for quick accuracy and toxicity comparisons across snapshots. For continuous production evaluation, pair Bedrock with an external observability and eval stack like Future AGI's ai-evaluation library and traceAI tracing. ai-evaluation supplies LLM-judge metrics (faithfulness, hallucination, instruction-following), and traceAI captures every Bedrock invocation as an OpenTelemetry span you can replay.
How does Bedrock compare to Azure OpenAI and Google Vertex AI?
Bedrock leans on multi-provider model choice and deep AWS integration (IAM, VPC endpoints, KMS, CloudWatch). Azure OpenAI is tighter with the OpenAI catalog and Microsoft 365 surfaces. Google Vertex AI leans on Gemini, native vector search, and BigQuery integration. The right choice usually follows the rest of the cloud footprint: pick the platform where your data, identity, and security tooling already live.
Where does Future AGI fit with AWS Bedrock?
Future AGI complements Bedrock as the evaluation and observability layer. Use the ai-evaluation library to score Bedrock responses for faithfulness, groundedness, and instruction-following. Instrument Bedrock and Bedrock Agents calls with traceAI, the open source OpenTelemetry instrumentation, then send spans to your Future AGI project for replay, regression testing, and drift detection. Optionally route Bedrock traffic through Future AGI's Agent Command Center for centralized prompt governance and BYOK control.