n8n 2.0: The Open-Source Automation Layer That Turns AI Agents Into Real Systems
An agent that can think but cannot act is a consultant. An agent that can act is a system.
This is part two of the AI to Web3 series. Last week we set up the LangGraph orchestration scaffold for Hydra, our sovereign multi-agent DeFi intelligence mesh. Today we wire in the execution layer: n8n.
Why we are writing about n8n
There is a gap between "my LLM agent made a decision" and "something happened in the world because of it." Closing that gap is harder than it looks. You need event triggers, API integrations, retry logic, webhook handling, error notifications, and human approval flows — before you have shipped a single feature.
Most teams build all of this from scratch. They should not have to.
n8n is an open-source workflow automation platform with 500+ integrations, a visual builder, and — as of version 2.0 — a native AI Agent node that is the cleanest bridge between LLM orchestration and real-world execution we have found.
The fact that it is fully self-hostable is not a minor detail. For any system touching private keys, wallet data, or on-chain transactions, keeping the execution layer inside your own infrastructure is not optional.
What changed in n8n 2.0
n8n has existed since 2019 as a developer-friendly alternative to Zapier. The 2.0 release, which shipped in late 2025, is a qualitative shift — not just a feature update.
Task runners. Code nodes now execute in isolated sandbox environments, not the main n8n process. This matters for security: a buggy or malicious custom function cannot take down the workflow engine.
SQLite pooling. Database performance improved by up to 10x for high-volume workflow execution. If you are polling DeFi pools on short intervals, this is the difference between reliable and flaky.
Visual workflow versioning and diff. You can now see exactly what changed between workflow versions — a critical feature when debugging a workflow that was working yesterday. Audit trails are built in.
Sub-workflow composition. Large workflows can be broken into reusable sub-workflows. You build a "monitor DeFi pool" sub-workflow once, then invoke it from a dozen different parent workflows.
Built-in data tables. State management without an external database for simple use cases — track whether an alert has been sent, log agent decisions, maintain a queue of pending actions.
AI workflow builder. Describe what you want in natural language, n8n scaffolds the workflow nodes. The generated output is always editable — it is a starting point, not a black box.
The AI Agent node
The most important addition for this series is the AI Agent node. This is not a wrapper around a single LLM call. It is a tool-calling agent loop that can:
- Call any of the 500+ n8n integration nodes as tools (Slack, Postgres, HTTP requests, Google Sheets, Discord, Telegram, and so on)
- Maintain conversational memory across executions via PostgreSQL-backed context
- Connect to LangChain-compatible vector stores (Pinecone, Qdrant, Supabase pgvector, Weaviate, Milvus) for RAG
- Use OpenAI, Claude, Gemini, Groq, Vertex AI, or Ollama as the underlying model
- Pause for human approval before executing sensitive actions
The LangChain integration is direct — n8n's AI Agent node uses LangChain under the hood for the agent loop. This means the LangGraph agents we built in Article 1 can be called as tools from within n8n workflows, and n8n workflows can be triggered by LangGraph nodes via HTTP. The two systems compose cleanly.
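Going the other way — exposing a LangGraph agent so an n8n AI Agent (or a plain HTTP Request node) can call it as a tool — takes little more than a JSON endpoint. A minimal stdlib sketch, where `run_agent` is a stand-in stub for the Article 1 agent and the port is arbitrary:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def run_agent(query: str) -> dict:
    """Stub for the Article 1 LangGraph agent; replace with graph.invoke()."""
    return {"answer": f"analysis for: {query}", "confidence": 0.9}


class AgentToolHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # n8n's HTTP Request tool sends JSON like {"query": "..."}
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        result = run_agent(payload.get("query", ""))
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


def serve(port: int = 8001) -> None:
    """Blocking; point n8n's HTTP Request tool at http://localhost:8001/."""
    HTTPServer(("127.0.0.1", port), AgentToolHandler).serve_forever()
```

In production you would put this behind FastAPI or similar, but the contract is the same: n8n POSTs a JSON query, the agent answers with JSON.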
How n8n compares to the alternatives
The workflow automation space has become crowded. Here is the honest picture:
| Platform | Best for | AI story | Pricing model | Self-host? |
|---|---|---|---|---|
| n8n | Developers, privacy-sensitive | Native AI Agent node, LangChain, RAG | Free (self-hosted) | Yes, fair-code (Sustainable Use License) |
| Zapier | Non-technical users | Zapier AI actions, limited | Per-task, expensive at scale | No |
| Make | Intermediate users | AI modules (OpenAI, etc.) | Per-operation | No |
| Flowise | LLM prototyping | Purpose-built for LangChain RAG | Free (self-hosted) | Yes |
| Gumloop | AI-native no-code | AI-first design, "Gummie" assistant | SaaS only | No |
| Dynamiq | Enterprise / regulated | Air-gapped deployment | Enterprise | Yes |
n8n's moat is the combination of open-source, self-hosting, and genuine AI-native orchestration. Flowise is excellent for pure RAG pipelines but weak on the automation side. Zapier and Make are expensive at scale and cannot be self-hosted.
For crypto workflows specifically: self-hosting is not a preference, it is a requirement. You do not route private key operations through third-party SaaS infrastructure.
DeFi automation with n8n
The n8n crypto integration page lists the built-in tooling. Beyond that, HTTP Request nodes give you direct access to any EVM RPC endpoint. The practical patterns we use:
Pool monitoring. Poll CoinGecko or a direct RPC node on a schedule (every 60 seconds), compare against thresholds, and trigger downstream agents or notifications when conditions are met.
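The compare-against-thresholds step is worth keeping as a pure function so the same logic runs identically in a standalone poller or inside an n8n Code node. A minimal sketch with illustrative bounds; the endpoint referenced in the comment is CoinGecko's public `/simple/price` API:

```python
from typing import Optional


def check_threshold(price: float, lower: float, upper: float) -> Optional[str]:
    """Return an alert message when price leaves the [lower, upper] band."""
    if price < lower:
        return f"price {price} below floor {lower}"
    if price > upper:
        return f"price {price} above ceiling {upper}"
    return None  # in band: no alert, no downstream trigger


# Polling side (driven by n8n's Schedule trigger every 60 s):
#   GET https://api.coingecko.com/api/v3/simple/price?ids=ethereum&vs_currencies=usd
#   then feed the returned USD price into check_threshold.
```

Returning `None` for the in-band case maps cleanly onto an n8n IF node: no alert, no downstream branch fires.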
Wallet event tracking. Subscribe to transaction events for a set of addresses via an Alchemy or Infura webhook. n8n receives the webhook, parses the event, and routes it to the appropriate agent.
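The parse-and-route step can be sketched as a pure function. The payload shape below (`event.activity[]` entries with `fromAddress`/`toAddress`/`value`/`asset`) is assumed from Alchemy's Address Activity webhooks — verify the exact field names against your provider before relying on it:

```python
def route_wallet_events(payload: dict, watched: set[str]) -> list[dict]:
    """Keep only transfers that touch a watched (lowercased) address."""
    events = []
    for tx in payload.get("event", {}).get("activity", []):
        frm = (tx.get("fromAddress") or "").lower()
        to = (tx.get("toAddress") or "").lower()
        if frm in watched or to in watched:
            events.append({
                "direction": "out" if frm in watched else "in",
                "counterparty": to if frm in watched else frm,
                "asset": tx.get("asset"),
                "value": tx.get("value"),
            })
    return events
```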
DeFi operation execution. A Function node containing ethers.js can construct and sign transactions. Pair this with the Human-in-the-Loop Chat node so no transaction executes without explicit approval.
DAO governance monitoring. Poll Snapshot via HTTP for new proposals, summarize them via an LLM node, and route summaries to a Discord or Telegram notification.
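A sketch of the polling side in Python. The GraphQL query targets Snapshot's public hub (`https://hub.snapshot.org/graphql`), though the exact field list and `where` filter should be checked against the Snapshot API docs; the dedup helper is plain Python and pairs naturally with an n8n data table holding the last-seen timestamp:

```python
# Illustrative query for Snapshot's public hub; space id and fields
# are assumptions -- confirm the schema in the Snapshot API docs.
PROPOSALS_QUERY = """
query ($space: String!) {
  proposals(first: 10, where: { space: $space },
            orderBy: "created", orderDirection: desc) {
    id title body created
  }
}
"""


def new_proposals(proposals: list[dict], last_seen_ts: int) -> list[dict]:
    """Keep only proposals created after the last timestamp we processed."""
    return [p for p in proposals if p.get("created", 0) > last_seen_ts]
```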
RAG pipelines. The 600+ community RAG workflow templates include ready-made patterns for document ingestion, embedding generation, and vector retrieval — you do not start from scratch.
A key community resource: OVHcloud's sovereign RAG architecture demonstrates a fully self-hosted pipeline using n8n orchestration, BGE-M3 embeddings, and PostgreSQL + pgvector. This is the template for Hydra's data ingestion layer.
Hydra — Article 2 contribution: the Executor agent
In last week's scaffold, the HydraState contained a list of decisions — proposed actions from the Strategist agent. This week we add the Executor: an n8n workflow that receives those decisions via webhook and carries them out.
The n8n side is a webhook-triggered workflow:
[Webhook] → [Validate decision] → [Human approval gate] → [Execute action] → [Log result]
The LangGraph side adds an Executor node that calls the n8n webhook:
```python
# hydra/executor.py
import httpx

from hydra.orchestrator import HydraState

EXECUTOR_WEBHOOK_URL = "http://localhost:5678/webhook/hydra-execute"


async def executor_node(state: HydraState) -> HydraState:
    """
    Sends approved decisions to the n8n Executor workflow.
    n8n handles validation, human-in-the-loop approval, and
    the actual on-chain or off-chain action.
    """
    if not state["decisions"]:
        return state

    async with httpx.AsyncClient() as client:
        response = await client.post(
            EXECUTOR_WEBHOOK_URL,
            json={
                "decisions": state["decisions"],
                "portfolio": state["portfolio"],
                "require_approval": True,  # always true until a human overrides
            },
            timeout=30.0,
        )
        response.raise_for_status()  # surface n8n-side failures loudly
        result = response.json()

    return {
        **state,
        "messages": state["messages"]
        + [
            {
                "role": "executor",
                "content": (
                    f"Submitted {len(state['decisions'])} decisions. "
                    f"Result: {result['status']}"
                ),
            }
        ],
    }
```
And the updated graph:
```python
# hydra/orchestrator.py (updated)
from langgraph.graph import StateGraph, START, END

from hydra.executor import executor_node


def build_hydra_graph(checkpointer=None):
    graph = StateGraph(HydraState)
    graph.add_node("strategist", strategist_node)
    graph.add_node("executor", executor_node)

    graph.add_edge(START, "strategist")
    graph.add_conditional_edges(
        "strategist",
        lambda s: "executor" if s["decisions"] else END,
        {"executor": "executor", END: END},
    )
    graph.add_edge("executor", END)

    return graph.compile(checkpointer=checkpointer)
```
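The routing lambda passed to `add_conditional_edges` is the piece most worth unit-testing in isolation. Extracted as a named function (a sketch — the real `END` sentinel comes from `langgraph.graph`, stubbed here as a string so the logic stands alone):

```python
# LangGraph's END is a sentinel imported from langgraph.graph;
# a plain string stands in for it here so the routing logic is testable alone.
END = "__end__"


def route_after_strategist(state: dict) -> str:
    """Route to the executor only when the strategist produced decisions."""
    return "executor" if state.get("decisions") else END
```

Using `.get("decisions")` also makes the route safe against a state that has not yet populated the key, which the inline lambda above is not.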
A skeleton of the Executor workflow's n8n JSON (node definitions only — wire up the node connections in the n8n editor after importing it):
```json
{
  "name": "Hydra Executor",
  "nodes": [
    {
      "name": "Webhook",
      "type": "n8n-nodes-base.webhook",
      "parameters": { "path": "hydra-execute", "httpMethod": "POST" }
    },
    {
      "name": "Validate",
      "type": "n8n-nodes-base.code",
      "parameters": {
        "jsCode": "const { decisions, require_approval } = $input.first().json;\nif (!decisions || decisions.length === 0) throw new Error('No decisions to execute');\nreturn decisions.map(d => ({ json: { ...d, require_approval } }));"
      }
    },
    {
      "name": "Human Approval",
      "type": "@n8n/n8n-nodes-langchain.chatTrigger",
      "parameters": { "options": { "requireApproval": true } }
    },
    {
      "name": "Log to DB",
      "type": "n8n-nodes-base.postgres",
      "parameters": { "operation": "insert", "table": "hydra_executions" }
    }
  ]
}
```
Updated project structure:
```
hydra/
├── orchestrator.py                  # LangGraph state machine
├── executor.py                      # Executor node → n8n webhook (this article)
├── n8n/
│   └── hydra-executor.workflow.json # importable n8n workflow
├── requirements.txt
└── .env.example
```
The stack so far
| Layer | Technology | Status |
|---|---|---|
| Orchestration | LangGraph 1.1 | Done — Article 1 |
| Automation | n8n 2.0 | Done — this article |
| Knowledge | RAG + GraphRAG | Article 3 |
| Observability | LangFuse + W&B Weave | Article 4 |
| Specialization | Fine-tuned SLM | Article 5 |
| Coordination | Multi-agent swarm + routing | Article 6 |
| Security | SOAR + Guardian | Article 7 |
| Resilience | Structured logging · Tenacity retries · LangFuse self-hosted | Article 8 |
Next in this series: RAG at scale — how to give Hydra's agents deep, real-time knowledge about on-chain state, protocol documentation, and market conditions — without hallucinating positions that do not exist.
AI to Web3 series — building Hydra, a sovereign multi-agent DeFi intelligence mesh:
1 — LangGraph orchestration · 2 — n8n execution · 3 — RAG at scale · 4 — LLM observability · 5 — Fine-tuning · 6 — Agent swarms · 7 — SOAR · 8 — Production resilience
Get weekly intel — courtesy of intel.hyperdrift.io