How do you set up AI agents in n8n?
Quick Answer: AI agents are built in n8n using the AI Agent node (based on LangChain), which combines an LLM, tools, and memory. Connect an LLM node (OpenAI, Anthropic, or Ollama), add tool nodes for capabilities (HTTP requests, database queries, calendar), attach memory (buffer or vector store), and trigger via chat, webhook, or schedule. The AI Agent reasons over user input and calls tools as needed.
Setting Up AI Agents in n8n
n8n added native LangChain integration in 2024, with the AI Agent node enabling function-calling agents that use tools to accomplish multi-step tasks. As of April 2026, n8n is one of the most widely used self-hosted AI agent platforms.
Core Nodes
- AI Agent: Orchestrator that plans and calls tools
- Chat Model (sub-node): OpenAI Chat Model, Anthropic, Ollama, Google Gemini
- Memory (sub-node): Window Buffer Memory, Vector Store Memory, Motorhead
- Tools (sub-nodes): HTTP Request, Calculator, Wikipedia, custom workflow tools
Step-by-Step Setup
1. Trigger
Choose a trigger for the agent:
- Chat Trigger: Built-in chat UI for testing
- Webhook: HTTP endpoint for integration
- Schedule Trigger: Run on interval
- Telegram/Slack Trigger: Accept messages from chat platforms
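With a Webhook trigger, external systems invoke the agent over HTTP. The sketch below shows what a caller might send; the endpoint path and field names (`sessionId`, `chatInput`) are assumptions for illustration, since the Webhook node accepts whatever JSON payload you define:

```javascript
// Build the JSON body an external caller might POST to a
// webhook-triggered agent workflow. Field names are assumptions.
function buildAgentRequest(sessionId, message) {
  return {
    sessionId,          // lets the agent's memory group turns per user
    chatInput: message, // text the AI Agent node will reason over
  };
}

// Example call against a self-hosted instance (URL is hypothetical):
async function askAgent(message) {
  const res = await fetch("http://localhost:5678/webhook/support-agent", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildAgentRequest("user-123", message)),
  });
  return res.json();
}
```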
2. Add AI Agent Node
The AI Agent node expects:
- Input: The user message or task
- System Message: Instructions defining the agent's role
- Chat Model: Connected LLM node
- Memory: Optional conversation memory
- Tools: Array of connected tool nodes
3. Connect a Chat Model
Connect one of the following chat models as a sub-node:
- OpenAI Chat Model (gpt-4o, gpt-4o-mini, o1)
- Anthropic Chat Model (claude-3-7-sonnet, claude-4-sonnet)
- Ollama Chat Model (for local models like Llama, Mistral)
- Google Vertex Chat Model
4. Add Tools
Tools extend the agent's capabilities:
- HTTP Request Tool: Call any API
- Workflow Tool: Execute another n8n workflow as a tool
- Calculator: Math operations
- Code Tool: Execute JavaScript
- Wikipedia/SerpAPI: Web search
- Custom: Build tools from any n8n node
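The Code Tool runs a JavaScript snippet whose return value is handed back to the model. A minimal sketch of an order-status lookup (the order table and ID format are made up for illustration; the logic is wrapped in a function so it can run standalone):

```javascript
// Hypothetical order-status lookup exposed to the agent as a Code Tool.
const ORDERS = { // stand-in for a real database or API call
  "A-1001": "shipped",
  "A-1002": "processing",
};

function lookupOrder(query) {
  const match = query.match(/A-\d{4}/); // extract an order ID like A-1001
  if (!match) return "No order ID found in the request.";
  const status = ORDERS[match[0]];
  return status
    ? `Order ${match[0]} is ${status}.`
    : `Order ${match[0]} was not found.`;
}

// Inside the Code Tool node, the snippet would end with something like:
// return lookupOrder(query);
```

Returning a clear sentence (rather than raw JSON) helps the model incorporate the result into its next reasoning step.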
5. Configure Memory
- Window Buffer Memory: Keep last N messages
- Vector Store Memory: Semantic memory using embeddings (e.g., Pinecone, Qdrant)
- Motorhead: Summarization-based memory
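Window Buffer Memory simply keeps the last N messages in the prompt context. Its behavior can be illustrated in a few lines of JavaScript (a sketch of the concept, not n8n's implementation):

```javascript
// Sketch of a sliding-window conversation memory.
// Once the window is full, the oldest turn falls off the front.
class WindowBufferMemory {
  constructor(k) {
    this.k = k;        // maximum number of messages retained
    this.messages = [];
  }
  add(role, content) {
    this.messages.push({ role, content });
    if (this.messages.length > this.k) this.messages.shift();
  }
  load() {
    return this.messages; // what gets prepended to the next prompt
  }
}
```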
Example: Customer Support Agent
Chat Trigger
  → AI Agent
      System: "You are a customer support agent..."
      Chat Model: OpenAI gpt-4o-mini
      Memory: Window Buffer (k=10)
      Tools:
        - HTTP Request: Lookup order by ID
        - Workflow Tool: Create support ticket
        - SerpAPI: Search docs
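In a setup like this, the tool descriptions matter as much as the system prompt, because the model chooses tools based on them. A hedged sketch of how the pieces might be described (all names and URLs are hypothetical):

```javascript
// Hypothetical configuration values for the support-agent example.
// The agent selects tools using these natural-language descriptions.
const agentConfig = {
  systemMessage:
    "You are a customer support agent. Use the order lookup tool " +
    "when the user mentions an order ID. Escalate unresolved issues " +
    "by creating a support ticket.",
  tools: [
    {
      name: "lookup_order",
      description: "Look up an order's status by its ID (format A-1234).",
      url: "https://api.example.com/orders/{orderId}", // HTTP Request Tool
    },
    {
      name: "create_ticket",
      description: "Create a support ticket when the issue is unresolved.",
    },
  ],
};
```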
Testing
Use the Chat Trigger for interactive testing:
- Open the workflow in the editor
- Click the Chat Trigger node
- Open the chat window
- Send test messages and observe agent reasoning
Self-Hosted Considerations
- Local LLMs: Use Ollama for privacy-sensitive deployments
- Vector stores: Self-host Qdrant or use managed Pinecone
- Cost control: Log token usage to Airtable for monitoring
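For cost control, a Code node after the agent can turn the token counts into an approximate dollar figure before logging them. A sketch (the rates below are illustrative placeholders, not current provider pricing):

```javascript
// Rough cost estimate from token usage. Rates are placeholders;
// check your provider's current pricing before relying on this.
const RATES = {
  "gpt-4o-mini": { inputPerM: 0.15, outputPerM: 0.6 }, // USD per 1M tokens
};

function estimateCostUSD(model, inputTokens, outputTokens) {
  const r = RATES[model];
  if (!r) throw new Error(`Unknown model: ${model}`);
  return (inputTokens * r.inputPerM + outputTokens * r.outputPerM) / 1e6;
}
```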
Production Tips
- Set max iterations (usually 5-10) to prevent infinite loops
- Use structured output for deterministic downstream workflows
- Monitor tool call failures and implement retries
- Version prompts in a dedicated node with environment variables
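Tool-call retries from the list above can live in a sub-workflow or a Code node; the underlying pattern is a bounded retry. A generic sketch (the attempt count is an illustrative default):

```javascript
// Generic bounded-retry helper for flaky tool calls.
// Retries up to maxAttempts times, then rethrows the last error.
function withRetries(fn, maxAttempts = 3) {
  let lastError;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return fn();
    } catch (err) {
      lastError = err; // in production, sleep with exponential backoff here
    }
  }
  throw lastError;
}
```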