
What is LangGraph? Stateful Agent Graphs Explained in 2026

LangGraph is LangChain's graph-based orchestration library for stateful agents. Nodes, edges, state, checkpointers, and how it differs from CrewAI.

8 min read
langgraph langchain agent-framework stateful-agents agent-orchestration python open-source 2026
Cover image: bold "WHAT IS LANGGRAPH" headline beside a wireframe state-machine graph of five nodes forming a small cycle.

A team needs an agent that researches a topic, drafts a report, asks the user for approval before publishing, and routes to either a “publish” or “revise” branch based on the response. CrewAI handles this with a Flow class that mixes deterministic and agentic steps; the OpenAI Agents SDK and Claude Agent SDK have built-in human-in-the-loop primitives but the branching logic still has to live in your code. The LangGraph version is a small graph with an interrupt node and a conditional edge that reads the resume payload. The graph is the most natural representation of the workflow, and LangGraph’s primitives match it one to one.

This is the niche LangGraph fills in 2026. Where CrewAI is opinionated about role-based workflows and the OpenAI and Claude Agent SDKs are opinionated about single-agent loops, LangGraph is opinionated about state machines. You declare nodes and edges, share a typed state, and let the framework execute the graph until it terminates. This guide covers what LangGraph is, how its primitives work, how it compares to alternatives, and when to pick it.

TL;DR: What LangGraph is

LangGraph is an open-source MIT-licensed Python and TypeScript library from LangChain Inc. for building stateful, multi-actor applications as graphs. The Python repo at github.com/langchain-ai/langgraph has approximately 31,000 GitHub stars as of mid-2026. The primitives are nodes, edges, a typed state schema, checkpointers for persistence, and interrupts for human-in-the-loop. The library is independent of LangChain proper but interoperates with LangChain’s model wrappers and tool abstractions. The hosted product, LangSmith Deployment (formerly LangGraph Platform), layers managed deployment, persistent state storage, and admin features on top of the open-source library.

Why LangGraph matters in 2026

Three forces pushed graph-based orchestration into the mainstream.

First, agents stopped being linear. A 2024 agent was a ReAct loop over a tool list. A 2026 agent is a graph with branching, looping, retries, persistence, and human-in-the-loop checkpoints. Linear chain abstractions could not express these workflows; the orchestration layer needed graph semantics.

Second, persistence became a production requirement. A long-running agent workflow that fails halfway through and has to restart from scratch is operationally unacceptable. Checkpointing every state transition turns failure recovery from a debugging exercise into a resume-from-last-good-state operation. LangGraph’s checkpointer abstraction made this primitive cheap to use.

Third, human-in-the-loop became table stakes for enterprise deployments. An agent that takes irreversible actions (wires money, sends emails, modifies infrastructure) needs an approval pause. LangGraph’s interrupt primitive pauses the graph mid-execution, persists state, and waits for an external resume signal. Without graph-native pause semantics, every agent stack rolls its own approval queue.

The anatomy of a LangGraph application

LangGraph’s primitives map closely to graph theory.

StateGraph. The builder. You instantiate StateGraph with a state schema, add nodes, add edges, set an entry point, and compile.

Node. A function that receives the current state and returns a partial update to it. Nodes can be plain Python functions, LangChain Runnables, or compiled subgraphs. The node’s return value is merged into the state by reducers defined on the schema.

Edge. A transition. Edges can be unconditional (after node A, go to node B) or conditional (after node A, evaluate a routing function and pick the next node from a set). Conditional edges are how branching works.
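To make the routing idea concrete, here is a minimal pure-Python sketch of what a conditional edge's routing function looks like; the node names ("review", "publish", "revise") are hypothetical, and the commented wiring shows where it would plug into the builder:

```python
def route_after_review(state: dict) -> str:
    """Routing function for a conditional edge: inspect the state and
    return the name of the next node to run."""
    return "publish" if state.get("approved") else "revise"

# Wired into a graph roughly like this (sketch, not executed here):
# builder.add_conditional_edges("review", route_after_review, ["publish", "revise"])
```

The routing function is plain Python, which is why arbitrary branching logic fits naturally in LangGraph.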

State. A typed dictionary (TypedDict or Pydantic model) shared across all nodes. Each field can declare a reducer that controls how multiple updates are merged. The default reducer is “last write wins”; for accumulating fields like message lists, the reducer concatenates.
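The merge semantics can be illustrated without importing LangGraph at all. The sketch below declares a reducer via `typing.Annotated` (the same convention LangGraph uses) and shows a toy `apply_update` that merges a node's partial return value field by field; the merge function itself is illustrative, not LangGraph's real code:

```python
from operator import add
from typing import Annotated, TypedDict, get_type_hints

class State(TypedDict):
    topic: str                    # no reducer: last write wins
    notes: Annotated[list, add]   # reducer: each update is concatenated

def apply_update(state: dict, update: dict) -> dict:
    """Merge a node's partial update into the state, mimicking
    LangGraph's reducer semantics (illustrative sketch)."""
    hints = get_type_hints(State, include_extras=True)
    merged = dict(state)
    for key, value in update.items():
        reducer = next(iter(getattr(hints[key], "__metadata__", ())), None)
        merged[key] = reducer(state[key], value) if reducer and key in state else value
    return merged
```

Running `apply_update({"topic": "otel", "notes": ["a"]}, {"topic": "OpenTelemetry", "notes": ["b"]})` overwrites `topic` but accumulates `notes`, which is exactly the behavior you want for message lists.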

Checkpointer. A persistence layer attached to the compiled graph. After every node transition, the checkpointer writes the new state. The InMemorySaver ships in langgraph-checkpoint for testing; install langgraph-checkpoint-sqlite for single-process SQLite persistence and langgraph-checkpoint-postgres for production Postgres. LangSmith Deployment offers a managed Postgres backend.
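The contract is easy to picture with a toy in-memory version: one list of snapshots per thread, appended after every transition, with the latest snapshot as the resume point. This is a sketch of the semantics only; the real backends live in the `langgraph-checkpoint*` packages:

```python
class ToyCheckpointer:
    """Toy checkpointer: one list of state snapshots per thread_id."""

    def __init__(self):
        self.store: dict[str, list[dict]] = {}

    def put(self, thread_id: str, state: dict) -> None:
        # Called after every node transition with the new state.
        self.store.setdefault(thread_id, []).append(dict(state))

    def latest(self, thread_id: str) -> dict:
        # Resume-from-last-good-state reads the most recent snapshot.
        return self.store[thread_id][-1]
```

Keying by `thread_id` is what lets one deployment run many independent conversations, each resumable on its own timeline.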

Interrupt. A primitive that pauses graph execution mid-node. Code calls interrupt(value) and the graph blocks; an external caller resumes with Command(resume=...). The state is checkpointed on pause, so the resume can come hours or days later.

Send. A primitive for parallel fan-out. A node returns a list of Send(node_name, payload) calls and the graph dispatches each in parallel.
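The fan-out semantics can be sketched in pure Python: each `Send` names a target node and carries a private payload, and the dispatcher runs them concurrently. The real primitive is `langgraph.types.Send`; the class and dispatcher below are stand-ins:

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class Send:
    node: str      # target node name
    payload: dict  # private state for that branch

def fan_out(sends: list, nodes: dict) -> list:
    """Dispatch each Send to its target node in parallel (sketch of the
    semantics, not LangGraph's implementation)."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda s: nodes[s.node](s.payload), sends))
```

Because each branch gets its own payload, the pattern covers map-reduce shapes like "summarize every document, then merge".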

LangGraph in 30 lines

from typing import TypedDict
from langgraph.graph import StateGraph, START, END
from langchain_openai import ChatOpenAI

class State(TypedDict):
    question: str
    answer: str

def research(state: State) -> dict:
    # Return a partial state update; LangGraph merges it into State via reducers.
    return {"answer": f"researched: {state['question']}"}

def write(state: State) -> dict:
    model = ChatOpenAI(model="gpt-4o")
    response = model.invoke(f"Polish this draft: {state['answer']}")
    return {"answer": response.content}

builder = StateGraph(State)
builder.add_node("research", research)
builder.add_node("write", write)
builder.add_edge(START, "research")
builder.add_edge("research", "write")
builder.add_edge("write", END)

graph = builder.compile()
result = graph.invoke({"question": "What is OpenTelemetry?", "answer": ""})

The graph compiles into a runnable that accepts a partial state, executes the nodes by following the edges from START to END with the typed state flowing through, and returns the final state.

How LangGraph compares to alternatives

| Framework | Primitive | Best for | Maintainer |
| --- | --- | --- | --- |
| LangGraph | Stateful graph (nodes, edges, conditional routing) | Arbitrary state machines, persistence, human-in-the-loop | LangChain Inc. (MIT) |
| CrewAI | Role + task + crew | Role-decomposable workflows | CrewAI Inc. (MIT) |
| AutoGen (legacy) | Conversational agents in a GroupChat | Existing AutoGen stacks; new builds should start with the Microsoft Agent Framework | Microsoft (MIT code, in maintenance mode in 2026) |
| OpenAI Agents SDK | Agent loop with tools, handoffs, guardrails, and HITL | Single- or multi-agent workflows on OpenAI | OpenAI (MIT) |
| Claude Agent SDK | Single-agent loop with tool use and subagents | Anthropic-native single-agent workflows | Python SDK MIT; SDK use governed by Anthropic Commercial Terms |
LangGraph’s flexibility is its strength and its tax. The graph builder API is more verbose than CrewAI’s role-and-task declaration. In return, you can express workflows that CrewAI cannot, including arbitrary state transitions, parallel fan-out, conditional looping, and stateful pause-and-resume. The deciding question is whether your workflow needs that flexibility. If yes, LangGraph. If no, CrewAI is faster to write.

Production patterns with LangGraph

Three patterns recur.

Pattern 1: ReAct agent with create_agent. The lowest-friction starting point in 2026. Pass a model, a list of tools, and a system prompt to langchain.agents.create_agent (the modern replacement for the deprecated create_react_agent). The compiled graph loops between LLM and tool calls until the model emits a final answer. For most single-agent workflows this is the right shape.
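The loop that `create_agent` compiles has a simple shape, sketched below in pure Python. The `model` callable and the action-dict format are stand-ins for illustration, not the real LangChain API:

```python
def react_loop(model, tools: dict, max_steps: int = 10) -> str:
    """Toy version of the ReAct loop: call the model; if it requests a
    tool, run it and feed the result back; stop on a final answer."""
    history: list = []
    for _ in range(max_steps):
        action = model(history)
        if action["type"] == "final":
            return action["answer"]
        result = tools[action["tool"]](action["input"])
        history.append(("tool", result))
    raise RuntimeError("step limit reached without a final answer")
```

The step limit plays the same role as LangGraph's recursion limit: a buggy model that never emits a final answer terminates instead of looping forever.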

Pattern 2: Multi-agent supervisor. A supervisor node receives the user request and conditionally routes to one of several specialist subgraphs (research, code, summarize, etc.). Each specialist is a compiled subgraph with its own state. The supervisor’s routing function reads the request, picks a specialist, and the graph loops back to the supervisor after each specialist completes. This pattern replaces CrewAI’s hierarchical process when more control is needed.
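A pure-Python sketch of the control flow, with a hypothetical routing heuristic and specialist names, makes the loop-back structure explicit:

```python
def supervisor(state: dict) -> str:
    """Pick a specialist, or 'done' when the work is finished.
    The heuristic and node names here are hypothetical."""
    if state.get("draft"):
        return "done"
    return "coder" if "code" in state["request"] else "researcher"

def run_supervised(state: dict, specialists: dict) -> dict:
    # Loop back to the supervisor after each specialist completes.
    while (choice := supervisor(state)) != "done":
        state = {**state, **specialists[choice](state)}
    return state
```

In the real graph, the `while` loop is an edge from each specialist subgraph back to the supervisor node, and the routing heuristic is usually an LLM call rather than string matching.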

Pattern 3: Long-running workflow with human approval. A graph with a node that calls interrupt(payload) before an irreversible action. The checkpointer persists state to Postgres. An admin UI reads the pending interrupt and resumes the graph with an approval or rejection. The Postgres checkpointer makes this pattern production-grade.
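The pause-and-resume mechanics can be sketched with a plain dict standing in for the Postgres checkpointer; the function names are hypothetical and the real pause is `interrupt(payload)` plus `Command(resume=...)`:

```python
pending: dict[str, dict] = {}  # stands in for the Postgres checkpointer

def reach_approval_gate(thread_id: str, state: dict) -> dict:
    """The node hits the irreversible action: persist state and pause
    (what interrupt(payload) does under the hood, sketched)."""
    pending[thread_id] = dict(state)
    return {"status": "pending_approval"}

def resume_with_decision(thread_id: str, approved: bool) -> dict:
    # The admin UI's approve/reject call: restore state and continue.
    state = pending.pop(thread_id)
    return {**state, "published": approved}
```

Because state is persisted at the gate, the approval can arrive hours or days later and the workflow picks up exactly where it paused.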

Common mistakes when adopting LangGraph

  • Skipping the typed state schema. Untyped state is a debugging nightmare. Use TypedDict or Pydantic for the state schema and define reducers explicitly.
  • Using InMemorySaver in production. It is for tests. PostgresSaver is the production checkpointer; SqliteSaver works for low-volume single-process services.
  • Building everything as one giant graph. Subgraphs are first-class. Compile each specialist into its own subgraph and compose them.
  • Forgetting to set recursion limits. A buggy conditional edge can loop forever. Pass recursion_limit at run time via the config to invoke or stream (e.g. graph.invoke(inputs, {"recursion_limit": 100})); set it to a reasonable upper bound for your workflow.
  • Ignoring the streaming API. Graph execution can stream node-level updates as they happen. For real-time UIs, use the stream method instead of invoke.
  • Hand-rolling state persistence. The checkpointer abstraction is the production-grade way to persist state. Rolling your own database writes inside nodes leads to drift between graph state and your custom store.
  • Using LangGraph for trivially-linear workflows. A three-step sequential pipeline does not need a graph. Use a small Python function and skip the orchestration overhead.
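The streaming point is worth making concrete. Below is a pure-Python sketch of what `graph.stream(..., stream_mode="updates")` yields for a linear graph: one `(node_name, partial_update)` pair per completed node, which is what a real-time UI renders incrementally:

```python
from typing import Callable, Iterator

def stream_updates(nodes: list, state: dict) -> Iterator[tuple]:
    """Yield (node_name, partial_update) after each node completes,
    mimicking update-mode streaming for a linear graph (sketch)."""
    for name, fn in nodes:
        update = fn(state)
        state = {**state, **update}  # merge before the next node runs
        yield name, update
```

Consuming the iterator instead of waiting for a final `invoke` result is the difference between a progress-aware UI and a spinner.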

How to trace LangGraph with FutureAGI

LangGraph can be instrumented to emit OpenTelemetry-compatible spans through OpenInference, traceAI, and LangSmith. To ship traces to FutureAGI’s observability platform or any other OTel backend with traceAI, install the langgraph extra and register both instrumentors:

pip install "traceAI-langchain[langgraph]"
from fi_instrumentation import register
from fi_instrumentation.fi_types import ProjectType
from traceai_langchain import LangChainInstrumentor, LangGraphInstrumentor

trace_provider = register(
    project_type=ProjectType.OBSERVE,
    project_name="research-agent-graph",
)
LangChainInstrumentor().instrument(tracer_provider=trace_provider)
LangGraphInstrumentor().instrument(tracer_provider=trace_provider)

The resulting trace tree shows the graph invocation at the root, each node call as a child span with its input and output state, every LLM call and tool call as a deeper child, and conditional edge decisions captured as span events.

How FutureAGI implements LangGraph observability and evaluation

FutureAGI is the production-grade observability and evaluation platform for LangGraph built around the closed reliability loop that other LangGraph stacks stitch together by hand. The full stack runs on one Apache 2.0 self-hostable plane:

  • Graph tracing: traceAI (Apache 2.0) auto-wraps StateGraph nodes, conditional edges, interrupts, checkpointer reads and writes, and Send fan-out across Python, TypeScript, Java, and C#; LangChainInstrumentor and LangGraphInstrumentor combine to give state-transition spans plus LLM and tool call children in the same tree.
  • Node and edge evals: 50+ first-party metrics (Tool Correctness, Plan Adherence, Task Completion, Faithfulness, Hallucination, Argument Correctness, Step Efficiency) attach as span attributes on every node; BYOK lets any LLM serve as the judge at zero platform fee, and turing_flash runs the same rubrics at 50 to 70 ms p95.
  • Simulation: persona-driven text and voice scenarios exercise the graph against synthetic users in pre-prod with the same scorer contract that judges production traces; interrupts are simulated for human-in-the-loop testing.
  • Gateway and guardrails: the Agent Command Center fronts 100+ providers with BYOK routing for the model wrappers inside each node; 18+ runtime guardrails (PII, prompt injection, jailbreak, tool-call enforcement) enforce policy on the same plane.

Beyond the four axes, FutureAGI also ships six prompt-optimization algorithms that consume failing trajectories as training data so production regressions feed back into versioned node prompts. Pricing starts free with a 50 GB tracing tier; Boost is $250 per month, Scale is $750 per month with HIPAA, and Enterprise from $2,000 per month with SOC 2 Type II.

Most teams running LangGraph in production end up running three or four tools alongside it: one for traces, one for evals, one for the gateway, one for guardrails. FutureAGI is the recommended pick because tracing, evals, simulation, gateway, and guardrails all live on one self-hostable runtime; the loop closes without stitching. For a deeper walk-through of the tracing model, read What is LLM Tracing?.

Sources

Related: What is CrewAI?, What is AutoGen?, Best Multi-Agent Frameworks in 2026, What is LLM Tracing?

Frequently asked questions

What is LangGraph in plain terms?
LangGraph is an open-source Python and TypeScript library from LangChain Inc. for building stateful, multi-actor applications as graphs. You define nodes (units of work) and edges (transitions between nodes), share a typed state object across nodes, and run the graph. The graph can loop, branch conditionally, persist state to a checkpoint, pause for human approval, and resume. It is the lower-level orchestration primitive in the LangChain ecosystem and the most flexible choice when CrewAI's role-and-task framing is too rigid.
Is LangGraph the same thing as LangChain?
No. LangChain is the broader ecosystem of model integrations, document loaders, retrievers, and chain abstractions. LangGraph is a separate library focused on stateful graph execution for agent workflows. LangGraph depends on LangChain for some types but can be used standalone. The maintainers split them because LangChain's chain abstraction was too restrictive for the cyclical, stateful, multi-actor patterns that production agents needed.
Who maintains LangGraph and what license is it under?
LangGraph is maintained by LangChain Inc. The Python repo at langchain-ai/langgraph is MIT-licensed. The JavaScript port at langchain-ai/langgraphjs is also MIT-licensed. The Python repo has approximately 31,000 GitHub stars as of mid-2026. LangChain Inc. also runs a paid hosted product called LangSmith Deployment (formerly LangGraph Platform) with managed deployment, persistent state storage, and admin features on top of the open-source library.
How is LangGraph different from CrewAI?
LangGraph is graph-based and lower-level. You declare explicit nodes, edges, and a shared state schema; the framework runs the graph until a node returns END. CrewAI is role-based and opinionated. You declare agents with roles and tasks and let the framework run a sequential or hierarchical process. LangGraph wins when you need arbitrary topology, state persistence, human-in-the-loop, or fine-grained control over transitions. CrewAI wins when the workflow maps cleanly to a small team of role-defined agents and you want fewer lines of code.
What is a checkpointer in LangGraph?
A checkpointer is a persistence layer that saves the graph's state at every node transition. With a checkpointer, you can pause execution mid-graph (for human approval, for example), resume from where it stopped, time-travel back to a previous state, and replay execution with a different prompt or model. The InMemorySaver ships in `langgraph-checkpoint` for testing; the SQLite and Postgres backends are separate installable packages (`langgraph-checkpoint-sqlite`, `langgraph-checkpoint-postgres`). The checkpointer is the production primitive that makes long-running, interruptible agent workflows tractable.
What is the prebuilt create_react_agent function?
create_react_agent was LangGraph's original high-level helper for the ReAct (reasoning + acting) pattern. You passed a model, a list of tools, and an optional prompt, and it returned a compiled graph that looped between an LLM call and a tool call until the model emitted a final answer. As of 2026 it is deprecated in favor of `create_agent` in the `langchain` package, which covers the same use case and is the recommended starting point for new ReAct-style agents. For more complex topologies (multiple agents, conditional routing, human-in-the-loop), drop down to the StateGraph API and build the graph explicitly.
How do you trace a LangGraph execution?
LangGraph can be instrumented to emit OpenTelemetry-compatible spans through several paths. OpenInference ships an openinference-instrumentation-langchain package that auto-wraps LangGraph nodes, edges, and tool calls. traceAI ships traceAI-langchain with a langgraph extra (`pip install 'traceAI-langchain[langgraph]'`) that exposes a LangGraphInstrumentor for graph topology, node, and conditional-edge spans. LangSmith provides native instrumentation that auto-records every node transition. The trace tree shows the graph invoke, each node invocation with its input state and output state, every LLM call, and every conditional edge decision.
When should I not use LangGraph?
Skip LangGraph when your workflow is a clean sequence of role-based tasks; CrewAI is faster to write. Skip it when your workload is a single LLM call with a few tools and no looping; the OpenAI Agents SDK or Claude Agent SDK is simpler (both ship handoffs and human-in-the-loop primitives directly). Skip it for latency-critical inline paths where the per-node state-passing overhead measurably hurts p95. LangGraph earns its weight when explicit graph topology, persistent checkpoints, time-travel debugging, and graph-native human-in-the-loop are first-class requirements.