LangChain and LangGraph: The Orchestration Layer Your LLM Apps Have Been Missing

An AI app is not a prompt. It is a system. Orchestration is the difference between a demo and a product.


This is the first article in an eight-part series called AI to Web3. Each piece introduces one layer of a technology stack, with a real use case for DeFi and blockchain builders. By the final article, every piece will converge into a proposed open-source system called Hydra — a sovereign, self-hosted multi-agent DeFi intelligence mesh.

We are starting with the foundation: LangChain and LangGraph.


Why we are writing about this

At Hyperdrift we have been watching the LLM tooling space consolidate for two years. Most frameworks were experimental. The abstraction layers leaked. The APIs changed every week. Building production AI systems on top of them felt like building on sand.

That changed in October 2025, when LangChain released version 1.0. And it changed again in March 2026, when LangGraph hit 1.1 with type-safe streaming and full Model Context Protocol (MCP) support.

The orchestration layer has stabilized. That is the signal to build.


What LangChain actually is

LangChain is a framework for building applications powered by language models. At its core it gives you:

  • Chat model abstractions that work identically across OpenAI, Anthropic, Google, Groq, local Ollama instances, and hundreds of other providers
  • Tool calling — letting models decide when and how to invoke functions, APIs, or external services
  • Memory and state — maintaining context across multi-turn conversations or multi-step pipelines
  • LCEL (LangChain Expression Language) — a composable interface for chaining components: retriever | prompt | model | parser

The 1.0 release formalized these primitives:

Standardized content blocks. Every model response now surfaces the same structure regardless of provider — reasoning traces, citations, tool calls, and raw text are all typed consistently. You stop writing provider-specific parsing code.

Middleware system. Production-grade middleware for model retries with exponential backoff, content moderation, and context-aware summarization. Wire it once, apply it everywhere.

Model profiles. A .profile attribute on chat models enables dynamic feature detection — your code can ask a model what it supports before calling it, rather than encoding capabilities as hardcoded assumptions.


LangGraph: stateful agents as graphs

LangGraph is the part of the ecosystem that matters most for serious agent work. Version 1.1, released this month, is the reason we are writing this now.

LangGraph lets you define agents as directed graphs — nodes are functions that process state, edges are the transitions between them. The graph can be cyclic (enabling loops, retries, and self-correction) or acyclic (linear pipelines). The model is explicit: you see the full execution flow before anything runs.

What 1.1 adds that changes the production picture:

Durable execution. Agent state is checkpointed automatically. If a long-running agent fails mid-way — an API timeout, a rate limit, a crash — it can resume from the last checkpoint rather than starting over. For agents doing multi-hour research or multi-step financial analysis, this is not a nice-to-have.

Human-in-the-loop as a first-class primitive. Any node in the graph can pause execution and wait for human input before continuing. The agent proposes; the human approves or redirects; the agent resumes. This pattern is essential for any agent touching financial transactions.

Sub-graph composition. Complex multi-agent systems can be nested — a sub-graph representing a specialized agent team can be embedded inside a larger orchestration graph. This is how you build Hydra.

MCP (Model Context Protocol) support. Agents can now connect to any MCP server — a standardized protocol for giving agents access to tools, databases, and APIs. Anthropic's MCP has become the de facto standard for agent-to-tool communication.

Type-safe streaming with version="v2". Automatic Pydantic and dataclass coercion on streaming outputs. Your agents no longer return untyped blobs.


Deep Agents: async subagents and multimodal files

Alongside LangGraph 1.1, LangChain shipped Deep Agents v0.5.0 — an async subagent framework adding multimodal file support directly in the read_file tool. Agents can now natively process PDFs, audio, and video alongside text. For DeFi research agents parsing whitepapers, audit reports, and protocol documentation, this matters.


LangSmith: observability for agents in production

LangSmith is the observability layer. Every invocation of every chain and agent is logged as a structured trace — prompts, responses, token usage, latency, tool calls. In 2026, LangSmith added multimodal tracing (images, audio) and deep LangGraph integration that shows the full state machine execution alongside the LLM traces.
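Enabling LangSmith tracing requires no changes to the chains themselves; it is configured through environment variables. Variable names as documented by LangSmith; the key and project name below are placeholders.

```python
# LangSmith tracing is switched on via environment variables only.
import os

os.environ["LANGSMITH_TRACING"] = "true"       # turn on tracing
os.environ["LANGSMITH_API_KEY"] = "lsv2-..."   # placeholder: your LangSmith key
os.environ["LANGSMITH_PROJECT"] = "hydra-dev"  # group traces by project

# From here, every chain and agent invocation in the process is traced.
```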

89% of teams running LLM agents in production now use some form of observability tooling. LangSmith is the path-of-least-resistance entry point if you are already in the LangChain ecosystem. We will cover open-source alternatives (LangFuse, W&B Weave) in a dedicated article later in this series.


What teams are building with LangGraph in production

The LangGraph production showcase is worth reading in full. A few cases that illustrate what production actually looks like:

Klarna uses LangGraph for customer service automation — complex branching workflows that route queries, retrieve context from multiple systems, and escalate to humans when confidence is low. The durable execution model was critical: a dropped connection mid-flow no longer means the customer starts over.

LinkedIn uses LangGraph for recruiter tooling — multi-step research agents that gather context about candidates and roles before generating responses.

Uber uses LangGraph for internal tooling and data analysis pipelines.

The 2026 State of Agent Engineering report puts LangGraph as S-tier among multi-agent frameworks — 57% of surveyed teams now report having agents in production, up from 12% in 2024.


The Web3 angle: blockchain-aware agents

LangChain ships a BlockchainDocumentLoader in langchain_community — it loads NFT data from ERC721 and ERC1155 smart contracts on Ethereum and Polygon via the Alchemy API. A DeFi research agent can ingest on-chain data as documents and reason over it natively.

Beyond the built-in loader, three integrations stand out:

BNB Chain MCP. The BNB Chain AI Agent toolkit provides a Model Context Protocol server giving LangGraph agents direct access to BNB Chain and EVM network interactions — balance queries, transaction reads, contract calls.

AxonFi SDK. A LangChain-native integration for non-custodial treasury and payment operations. Agents can propose and execute on-chain transactions across Base and Arbitrum without exposing private keys to the application layer.

LangGraph + Hedera. A documented tutorial for building blockchain-aware AI agents on Hedera using the LangGraph state machine model.


Hydra — what we are building across this series

Starting here, each article will contribute one layer to an open-source system we are calling Hydra: a self-hosted, sovereign multi-agent mesh for DeFi portfolio intelligence and security.

The goal of Hydra is to demonstrate that autonomous AI agents can be applied to on-chain finance without custodial risk, vendor lock-in, or opaque black-box reasoning.

Article 1 contribution: the orchestration scaffold.

The core of Hydra is a LangGraph state machine. Here is the minimal scaffold — the shape of the system before the other layers are added:

# hydra/orchestrator.py
from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.postgres import PostgresSaver
from typing import TypedDict, Annotated
import operator

class HydraState(TypedDict):
    """Shared state flowing through the Hydra agent mesh."""
    portfolio: dict          # current portfolio positions
    signals: list[dict]      # market and on-chain signals
    risks: list[dict]        # identified risk flags
    decisions: list[dict]    # proposed actions
    human_approved: bool     # approval gate for execution
    messages: Annotated[list, operator.add]

def strategist_node(state: HydraState) -> dict:
    """
    Orchestrator: decomposes portfolio goals, synthesizes agent outputs,
    proposes decisions. Uses a frontier model — this is where reasoning
    quality matters most.
    """
    # Populated in Article 6 with full multi-agent coordination.
    # Return a partial update: returning the full state would re-append
    # `messages` through its operator.add reducer.
    return {}

def build_hydra_graph(checkpointer=None):
    graph = StateGraph(HydraState)
    graph.add_node("strategist", strategist_node)
    graph.add_edge(START, "strategist")
    graph.add_edge("strategist", END)
    return graph.compile(checkpointer=checkpointer)

# Entry point with durable execution via PostgreSQL checkpointing
if __name__ == "__main__":
    from psycopg.rows import dict_row
    from psycopg_pool import ConnectionPool

    # PostgresSaver expects autocommit connections that return dict rows
    pool = ConnectionPool(
        "postgresql://localhost/hydra",
        kwargs={"autocommit": True, "row_factory": dict_row},
    )
    checkpointer = PostgresSaver(pool)
    checkpointer.setup()  # create checkpoint tables on first run
    hydra = build_hydra_graph(checkpointer=checkpointer)
    print("Hydra scaffold initialized.")

This graph starts with a single node. Each subsequent article adds a new agent node — Sentinel (RAG), Analyst (fine-tuning), Executor (n8n), Observer (LangFuse), Oracle, Guardian (SOAR). By Article 7 the graph is fully populated.

# Project structure bootstrapped in this article
hydra/
├── orchestrator.py      # LangGraph state machine (this article)
├── requirements.txt
└── .env.example

# hydra/requirements.txt
langgraph>=1.1.0
langchain>=1.0.0
langchain-openrouter
psycopg[binary]
psycopg-pool
python-dotenv

What you could build with this today

The LangGraph scaffold above is not a toy. You can run it locally against any LangChain-compatible model, point it at your portfolio data, and have a working agent loop in under an hour. The checkpoint system means it survives restarts. The human-in-the-loop primitive means it never executes without your explicit approval.

A practical starting point: extend strategist_node to call the BlockchainDocumentLoader with your wallet address, load your positions as documents, and produce a structured summary using a structured output model. That is already more useful than most DeFi dashboards.
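The structured-summary half of that starting point can be sketched with with_structured_output, which binds a Pydantic schema so the model's reply parses straight into typed fields. The schema names and the model identifier here are illustrative assumptions, not Hydra internals.

```python
# Hedged sketch of a structured portfolio summary (schema is illustrative).
from pydantic import BaseModel, Field

class PositionSummary(BaseModel):
    asset: str = Field(description="Token or position identifier")
    value_usd: float = Field(description="Current USD value")
    risk_note: str = Field(description="One-line risk assessment")

class PortfolioSummary(BaseModel):
    positions: list[PositionSummary]
    overall_risk: str

# Wiring it to a model (requires an API key, so commented out here):
# from langchain.chat_models import init_chat_model
# model = init_chat_model("openai:gpt-4o")
# summarizer = model.with_structured_output(PortfolioSummary)
# summary = summarizer.invoke(f"Summarize these positions: {docs}")
```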

As the on-chain data layer stabilizes with more indexers and MCP servers, the gap between "agent that reads your portfolio" and "agent that manages it" is closing fast.


The stack so far

Layer          | Technology                                                   | Status
Orchestration  | LangGraph 1.1                                                | Done — this article
Automation     | n8n 2.0                                                      | Article 2
Knowledge      | RAG + GraphRAG                                               | Article 3
Observability  | LangFuse + W&B Weave                                         | Article 4
Specialization | Fine-tuned SLM                                               | Article 5
Coordination   | Multi-agent swarm + routing                                  | Article 6
Security       | SOAR + Guardian                                              | Article 7
Resilience     | Structured logging · Tenacity retries · LangFuse self-hosted | Article 8

Next in this series: n8n 2.0 — how to give your LangGraph agents the ability to trigger real-world workflows, monitor DeFi pools, and respond to on-chain events without writing a custom event system.


AI to Web3 series — building Hydra, a sovereign multi-agent DeFi intelligence mesh:

1 — LangChain orchestration · 2 — n8n execution · 3 — RAG at scale · 4 — LLM observability · 5 — Fine-tuning · 6 — Agent swarms · 7 — SOAR · 8 — Production resilience

Get weekly intel — courtesy of intel.hyperdrift.io