
Building AI Agents with n8n in 2026: Tools, RAG, and Deployment

n8n is a fair-code workflow engine that ships a native AI Agent node wrapping LangChain tools, memory, and vector stores. This tutorial covers agent design patterns, retrieval-augmented generation with Pinecone or pgvector, deployment options (Cloud vs self-hosted), and operational guardrails as of May 2026.

Why n8n for AI Agents

Founded in 2019 and headquartered in Berlin, n8n pairs a visual workflow editor with native code blocks under a fair-code license. As of May 2026, n8n ships an AI Agent node that wraps LangChain primitives (tools, memory, output parsers) inside the standard workflow canvas, allowing both visual and JavaScript construction of agentic flows. The combination matters because most production agent work is glue: parsing inputs, calling models, branching on outputs, persisting state, retrying on failure, and notifying humans on exception. n8n already provides those primitives.

The AI Agent Node

The AI Agent node accepts a chat model, an optional vector store, and a list of "tools" (which are themselves n8n sub-workflows or HTTP requests). Internally it runs a ReAct or function-calling loop until the model emits a stop signal or hits a step cap. As of May 2026, supported model providers include OpenAI, Anthropic, Mistral, Google Vertex AI, Ollama (for local models), and any OpenAI-compatible endpoint via the generic node.
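The loop described above can be sketched in plain JavaScript. `callModel`, the message shapes, and the tool registry below are hypothetical stand-ins rather than actual n8n or LangChain APIs; the sketch only illustrates the stop-signal and step-cap mechanics.

```javascript
// Minimal synchronous sketch of the loop the AI Agent node runs internally
// (the real node awaits each model and tool call). `callModel` and the tool
// registry are hypothetical stand-ins, not actual n8n APIs.
function runAgent(callModel, tools, input, maxIterations = 10) {
  const messages = [{ role: "user", content: input }];
  for (let step = 0; step < maxIterations; step++) {
    const reply = callModel(messages);          // model returns an answer or a tool call
    if (!reply.toolCall) return reply.content;  // stop signal: plain answer
    const tool = tools[reply.toolCall.name];
    const result = tool
      ? tool(reply.toolCall.args)
      : `Unknown tool: ${reply.toolCall.name}`; // unknown tools fed back as errors
    messages.push({ role: "assistant", toolCall: reply.toolCall });
    messages.push({ role: "tool", content: String(result) });
  }
  throw new Error(`Agent exceeded ${maxIterations} iterations`);
}
```

The `maxIterations` default of 10 mirrors the node's own step cap; hitting it raises an error rather than returning a half-finished answer.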

Practical agent patterns implemented inside this node include:

  • A research agent that searches the web (SerpAPI tool), reads pages (HTTP Request tool), and writes a summary to Notion
  • A triage agent that reads a Zendesk ticket, classifies it (function-calling), and either replies, escalates, or creates a Linear issue
  • A scheduling agent that reads a calendar invite, extracts attendees and intent, and books follow-ups

Retrieval-Augmented Generation (RAG)

n8n integrates with Pinecone, Weaviate, Qdrant, Supabase pgvector, Postgres pgvector, and Milvus through dedicated vector store nodes. A typical RAG pipeline looks like:

  1. Ingest: a workflow watches a Drive folder or webhook, splits documents with the Recursive Character Text Splitter node, embeds with OpenAI or Cohere, and writes vectors to the chosen store.
  2. Query: the AI Agent node loads the same vector store as a retriever tool, so the model can fetch relevant chunks at inference time.
  3. Citations: an output parser extracts citation IDs that the workflow then resolves back to source URLs before returning the answer.
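The citation step (step 3) can be sketched as a small parser. The `[[cite:...]]` marker format and the id-to-URL lookup table are illustrative assumptions, not an n8n convention; the point is only the extract-then-resolve shape.

```javascript
// Sketch of citation resolution: pull chunk IDs out of the model's answer
// and map them back to source URLs. The [[cite:...]] format is an assumption.
function resolveCitations(answer, sources) {
  const ids = [...answer.matchAll(/\[\[cite:([\w-]+)\]\]/g)].map(m => m[1]);
  const citations = [...new Set(ids)].map(id => ({
    id,
    url: sources[id] ?? null, // unknown IDs surface as null for human review
  }));
  // Strip the markers so the user-facing text reads cleanly
  const text = answer.replace(/\s*\[\[cite:[\w-]+\]\]/g, "");
  return { text, citations };
}
```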

In practice, chunk sizes of 512-1024 tokens with 64-128 tokens of overlap perform well for support and policy corpora. Larger chunks mean fewer retrieval calls but a higher context cost per call.
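The overlap math can be sketched at the character level (roughly 4 characters per token, so 2048/256 characters approximates the 512/64-token figures above). The n8n splitter node works on its own separator hierarchy; this only illustrates the sliding window.

```javascript
// Character-level sketch of chunking with overlap. Sizes are in characters
// here (~4 characters per token); the n8n Recursive Character Text Splitter
// additionally respects paragraph and sentence boundaries.
function chunkText(text, chunkSize = 2048, overlap = 256) {
  if (overlap >= chunkSize) throw new Error("overlap must be < chunkSize");
  const chunks = [];
  const stride = chunkSize - overlap; // each window starts `stride` chars later
  for (let start = 0; start < text.length; start += stride) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // final chunk reached
  }
  return chunks;
}
```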

Memory and State

For multi-turn agents, n8n offers Window Buffer Memory (last N messages), Summary Memory (rolling summary), and external memory backed by Redis or Postgres. Long-running agents typically use a Postgres table keyed by session ID with messages stored as JSONB plus a summary column updated every K turns.
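The two strategies can be combined in one sketch: a window buffer that keeps the last N messages plus a rolling summary refreshed every K turns. The `summarize` callback stands in for a cheap model call, and the class shape is an illustration, not n8n's memory API.

```javascript
// Sketch of window-buffer memory plus a rolling summary updated every K turns.
// In the Postgres variant, `messages` would be JSONB rows keyed by session ID
// and `summary` a column on the session row.
class SessionMemory {
  constructor({ windowSize = 10, summarizeEvery = 5, summarize }) {
    this.windowSize = windowSize;
    this.summarizeEvery = summarizeEvery;
    this.summarize = summarize; // stand-in for a cheap model call
    this.messages = [];
    this.summary = "";
    this.turns = 0;
  }
  add(role, content) {
    this.messages.push({ role, content });
    if (role === "user" && ++this.turns % this.summarizeEvery === 0) {
      this.summary = this.summarize(this.summary, this.messages);
    }
  }
  // What actually goes into the prompt: the summary plus the last N messages
  context() {
    return { summary: this.summary, recent: this.messages.slice(-this.windowSize) };
  }
}
```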

Deployment Options

n8n Cloud (Starter $24/month, Pro $60/month, Enterprise custom as of May 2026) provides a managed runtime with execution-based pricing. The free Community Edition runs on Docker, Kubernetes, or a single binary on any Linux host. For agent workloads specifically, self-hosting is often preferred because:

  • Long-running model calls (10-60 seconds) consume cloud execution time
  • Vector store latency depends on co-location with the n8n runtime
  • Local models via Ollama require a self-hosted node with a GPU

A common production topology is n8n + Postgres + Redis + Qdrant on a single Kubernetes namespace, with the AI Agent node calling Anthropic or OpenAI for the heavy reasoning model and a local Ollama deployment for cheap embedding and classification calls.
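The heavy/cheap split in that topology amounts to a routing table. The endpoints and task taxonomy below are illustrative assumptions, not n8n configuration; in practice the split is expressed by wiring different model nodes to different workflow branches.

```javascript
// Sketch of the routing split: hosted models for heavy reasoning, local
// Ollama for cheap embedding and classification. Endpoints are assumptions
// (the Ollama URL is a typical in-cluster Kubernetes service address).
const ROUTES = {
  reasoning:      { provider: "anthropic", base: "https://api.anthropic.com" },
  embedding:      { provider: "ollama",    base: "http://ollama.default.svc:11434" },
  classification: { provider: "ollama",    base: "http://ollama.default.svc:11434" },
};

function routeCall(task) {
  const route = ROUTES[task];
  if (!route) throw new Error(`Unknown task type: ${task}`);
  return route;
}
```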

Operational Considerations

Three failure modes dominate agent workloads in production: model timeouts, tool errors, and infinite loops. n8n addresses each with built-in mechanisms:

  • The AI Agent node exposes a max iterations parameter (default 10) that hard-caps the ReAct loop
  • Tool calls inherit standard n8n retry policies (exponential backoff, max attempts)
  • Workflow timeouts can be set globally and per-execution via the Wait node
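The retry mechanics above can be sketched in two small functions: one computes the exponential backoff schedule, one caps the attempts. The retry loop here runs synchronously without sleeping so the cap logic is easy to inspect; n8n applies the equivalent (with real delays) inside its executor.

```javascript
// Backoff schedule: delay before retry i is base * 2^(i-1), capped.
function backoffDelays(maxAttempts = 4, baseMs = 500, capMs = 30000) {
  return Array.from({ length: maxAttempts - 1 },
    (_, i) => Math.min(baseMs * 2 ** i, capMs));
}

// Retry wrapper: re-run `fn` until it succeeds or attempts run out,
// then re-throw the last error.
function withRetry(fn, maxAttempts = 4) {
  let lastError;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try { return { result: fn(attempt), attempts: attempt }; }
    catch (err) { lastError = err; }
  }
  throw lastError;
}
```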

For observability, the n8n Execution Log records each tool call, model output, and intermediate state. Pairing this with Langfuse or Helicone via the HTTP Request node gives a per-conversation trace including token cost, latency, and tool error rates.
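A per-step trace record of the kind you might POST from an HTTP Request node can be sketched as follows. The field names and per-token prices are illustrative assumptions; check the tracing provider's current schema and the model vendor's pricing before relying on the numbers.

```javascript
// Sketch of a per-step trace event for an external observability backend.
// Field names and PRICE_PER_1K rates are placeholder assumptions.
function buildTraceEvent({ sessionId, step, model, inputTokens, outputTokens,
                           latencyMs, toolErrors = 0 }) {
  const PRICE_PER_1K = { input: 0.003, output: 0.015 }; // placeholder rates, USD
  return {
    sessionId,
    step,
    model,
    latencyMs,
    toolErrors,
    costUsd: Number(((inputTokens * PRICE_PER_1K.input
                    + outputTokens * PRICE_PER_1K.output) / 1000).toFixed(6)),
  };
}
```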

When n8n Is and Is Not the Right Fit

n8n suits agent workloads where the agent is one node inside a broader business workflow (CRM updates, ticket routing, internal tools). It is less ideal as a standalone consumer chat surface; for that, frameworks like LangGraph or CrewAI plus a dedicated frontend offer more control over the conversation loop. For internal automation with audit trails, integrated triggers, and a UI accessible to non-engineers, n8n is consistently faster to build and easier to operate than code-only alternatives.

Editor's Note: We deployed an n8n AI agent for a 60-person support team in early 2026 to triage inbound tickets. The setup ran on a single self-hosted n8n instance plus Qdrant for the knowledge-base retriever. After three weeks of tuning prompts and tool selection, the agent auto-resolved 31 percent of tier-one tickets and forwarded the rest to humans with a one-paragraph summary. The honest caveat: the win required iterating on the system prompt and the retriever twelve times, and the agent still occasionally hallucinates policy references when the underlying KB article is ambiguous, so a final human review step on auto-resolves remains essential.

By Rafal Fila

Common Questions

What is pgvector in Supabase?

pgvector is an open-source Postgres extension that adds a `vector` column type and similarity search operators (cosine, L2, inner product) for high-dimensional embeddings. Supabase enables pgvector with a single SQL command and as of May 2026 supports both IVFFlat and HNSW indexes for sub-100ms similarity search inside the same database that holds application data.
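The quantity those operators rank by can be computed by hand. This sketch shows cosine similarity in plain JavaScript; note that pgvector's cosine operator returns cosine *distance* (1 minus similarity), so ORDER BY ascending yields the closest vectors first.

```javascript
// Cosine similarity between two embedding vectors, the quantity underlying
// pgvector's cosine operator (which stores and ranks by 1 - similarity).
function cosineSimilarity(a, b) {
  if (a.length !== b.length) throw new Error("dimension mismatch");
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];   // numerator: dot product
    na  += a[i] * a[i];   // squared norm of a
    nb  += b[i] * b[i];   // squared norm of b
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}
```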

Can you build AI agents in n8n?

Yes. As of May 2026, n8n ships an AI Agent node that wraps LangChain tools, memory, and vector stores, allowing visual or code-based construction of ReAct-style agents with branching, retries, and human-in-the-loop steps. The free Community Edition supports the AI Agent node with no usage cap when self-hosted.
