# How to Self-Host n8n with PostgreSQL in 2026
A step-by-step tutorial for self-hosting n8n with PostgreSQL on a single Linux server using Docker Compose. Covers .env configuration, encryption keys, TLS via Caddy, persistence and backup strategy, queue mode for higher throughput, and the most common operational errors encountered during deployment.
## Overview
n8n is an open-source workflow automation tool that competes with Zapier and Make. Unlike most competitors, n8n can be self-hosted on any Docker-capable server. By default n8n uses an embedded SQLite database; for any deployment beyond a single-user lab, swapping SQLite for PostgreSQL is the recommended next step. PostgreSQL handles concurrent writes, supports point-in-time recovery, and is required for the multi-process queue mode used in higher-throughput deployments.
This tutorial covers a production-ready single-server n8n + PostgreSQL stack using Docker Compose, including persistence, environment configuration, queue mode, and operational notes.
## Prerequisites
- A Linux server with at least 2 GB RAM and 2 vCPU (4 GB recommended for queue mode)
- Docker Engine 24.0+ and Docker Compose v2
- A registered domain with DNS pointing at the server (n8n requires HTTPS for many OAuth integrations)
- Inbound firewall rules permitting ports 80 and 443
## Architecture
A production n8n deployment with PostgreSQL has these components:
- `n8n`: the main n8n process (UI + API, accepts webhook traffic)
- `postgres`: workflow data, credentials, execution history
- `redis`: queue backend (only required for queue mode)
- `n8n-worker`: one or more worker containers that execute workflows (queue mode only)
- `caddy` (or another reverse proxy): TLS termination
## Step 1: Create the Project Directory

```shell
mkdir -p /opt/n8n/{data,postgres-data,redis-data}
cd /opt/n8n
```
## Step 2: Create the `.env` File
Generate strong secrets and a 32-character encryption key. Credentials in n8n are encrypted with this key; losing it makes all stored credentials unrecoverable.
```shell
cat > .env <<EOF
N8N_HOST=n8n.example.com
N8N_PROTOCOL=https
N8N_PORT=5678
WEBHOOK_URL=https://n8n.example.com/
GENERIC_TIMEZONE=Europe/London
DB_TYPE=postgresdb
DB_POSTGRESDB_HOST=postgres
DB_POSTGRESDB_PORT=5432
DB_POSTGRESDB_DATABASE=n8n
DB_POSTGRESDB_USER=n8n
DB_POSTGRESDB_PASSWORD=$(openssl rand -hex 24)
N8N_ENCRYPTION_KEY=$(openssl rand -hex 16)
N8N_USER_MANAGEMENT_JWT_SECRET=$(openssl rand -hex 32)
EOF
chmod 600 .env
```
Back up the `.env` file off-server. Treat the encryption key like a master credential.
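Before first boot, it is worth sanity-checking the key generator, since a typo here is only discovered much later when credentials fail to decrypt. A minimal check, assuming `openssl` is on the PATH:

```shell
# N8N_ENCRYPTION_KEY must be exactly 32 characters; `openssl rand -hex 16`
# emits 16 random bytes encoded as 32 hex characters, so the length works out.
key=$(openssl rand -hex 16)
echo "${#key}"   # prints 32
```

Also confirm that `.env` contains expanded hex values rather than the literal `$(openssl ...)` text; the latter happens if the heredoc delimiter is quoted (`<<'EOF'`).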
## Step 3: Create the `docker-compose.yml`
```yaml
services:
  postgres:
    image: postgres:15
    restart: unless-stopped
    environment:
      POSTGRES_DB: ${DB_POSTGRESDB_DATABASE}
      POSTGRES_USER: ${DB_POSTGRESDB_USER}
      POSTGRES_PASSWORD: ${DB_POSTGRESDB_PASSWORD}
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_POSTGRESDB_USER}"]
      interval: 5s
      timeout: 5s
      retries: 10

  n8n:
    image: n8nio/n8n:1.x   # placeholder; pin a real minor version (see note below)
    restart: unless-stopped
    ports:
      - "5678:5678"
    environment:
      - N8N_HOST
      - N8N_PROTOCOL
      - N8N_PORT
      - WEBHOOK_URL
      - GENERIC_TIMEZONE
      - DB_TYPE
      - DB_POSTGRESDB_HOST
      - DB_POSTGRESDB_PORT
      - DB_POSTGRESDB_DATABASE
      - DB_POSTGRESDB_USER
      - DB_POSTGRESDB_PASSWORD
      - N8N_ENCRYPTION_KEY
      - N8N_USER_MANAGEMENT_JWT_SECRET
    volumes:
      - ./data:/home/node/.n8n
    depends_on:
      postgres:
        condition: service_healthy
```
Pin the `n8nio/n8n` tag to a specific minor version (for example 1.85) rather than `latest`. n8n minor releases occasionally include database migrations that require downtime to apply.
## Step 4: First Boot

```shell
docker compose up -d
docker compose logs -f n8n
```
On first boot n8n runs database migrations. Wait until the log shows `Editor is now accessible via:` before connecting. Open `http://<server-ip>:5678`, complete the owner setup form, and create the first user.
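To confirm n8n is actually writing to Postgres rather than a leftover SQLite file, list the tables the migrations created. `workflow_entity` and `execution_entity` are table names in current 1.x releases, though the exact set varies by version:

```shell
# List n8n's tables inside the postgres container; a populated schema
# confirms the DB_* variables were picked up.
docker compose exec -T postgres psql -U n8n -d n8n -c '\dt' \
  | grep -E 'workflow_entity|execution_entity'
```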
## Step 5: Add TLS via Caddy
Use a Caddy reverse proxy for automatic Let's Encrypt certificates. Add this service to the compose file:
```yaml
  caddy:
    image: caddy:2
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy-data:/data
    depends_on:
      - n8n

volumes:
  caddy-data:
```
And a `Caddyfile`:

```
n8n.example.com {
    reverse_proxy n8n:5678
    encode gzip
}
```
Restart the stack with `docker compose up -d`. Caddy issues a certificate within 30-60 seconds. Once HTTPS is live, remove the `5678:5678` port mapping from the n8n service so the only entry point is Caddy.
## Step 6: Persistence and Backups

- PostgreSQL data lives in `./postgres-data`. Use `pg_dump` daily via cron:

  ```
  0 3 * * * docker compose exec -T postgres pg_dump -U n8n n8n | gzip > /backups/n8n-$(date +\%F).sql.gz
  ```

- The n8n data directory at `./data` contains binary data and installed node modules; back it up alongside the database.
- The encryption key must be backed up separately; without it, restored credentials cannot be decrypted.
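A backup is only proven by a restore. A restore sketch, assuming a dump produced by the cron job above (the filename is a placeholder); stop n8n first so nothing writes mid-restore, and recreate the database so the plain-format dump does not collide with existing objects:

```shell
docker compose stop n8n

# Recreate the database; the POSTGRES_USER created by the official image
# is a superuser, so it may drop and create databases.
docker compose exec -T postgres dropdb -U n8n n8n
docker compose exec -T postgres createdb -U n8n n8n

# /backups/n8n-2026-01-15.sql.gz is a placeholder; substitute a real dump.
gunzip -c /backups/n8n-2026-01-15.sql.gz \
  | docker compose exec -T postgres psql -U n8n -d n8n

docker compose start n8n
```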
## Step 7: Queue Mode (for Higher Throughput)
By default n8n runs all workflow executions in the same process as the UI/API. For workloads above approximately 50 concurrent executions or for long-running workflows that should not block webhook responses, switch to queue mode.
Add Redis and a worker service to the compose file:
```yaml
  redis:
    image: redis:7-alpine
    restart: unless-stopped
    volumes:
      - ./redis-data:/data

  n8n-worker:
    image: n8nio/n8n:1.x   # keep in lockstep with the main n8n image
    restart: unless-stopped
    command: worker
    environment:
      - DB_TYPE
      - DB_POSTGRESDB_HOST
      - DB_POSTGRESDB_PORT
      - DB_POSTGRESDB_DATABASE
      - DB_POSTGRESDB_USER
      - DB_POSTGRESDB_PASSWORD
      - N8N_ENCRYPTION_KEY
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
    depends_on:
      - redis
      - postgres
```
Add `EXECUTIONS_MODE=queue` and `QUEUE_BULL_REDIS_HOST=redis` to the main n8n service environment as well. Scale workers with `docker compose up -d --scale n8n-worker=3`.
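As a fragment, the main service gains the same two variables the worker already sets (this merges into the existing environment list rather than replacing it):

```yaml
  n8n:
    environment:
      # ...existing variables unchanged, plus:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
```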
## Common Errors

- `ECONNREFUSED postgres:5432`: Postgres is still starting; the `depends_on` healthcheck handles this on first boot.
- `Could not decrypt credentials`: the encryption key changed since the credentials were saved. Restore the original key or re-enter the credentials.
- OAuth callback errors: `WEBHOOK_URL` must match the public HTTPS URL exactly; the trailing slash matters.
- `out of memory` during workflow execution: default node memory is 256 MB. Increase it via `NODE_OPTIONS=--max-old-space-size=2048` in the n8n service environment.
- Timezone-shifted scheduled triggers: set `GENERIC_TIMEZONE` and `TZ` in the environment; both Postgres and n8n need consistent timezones.
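For the out-of-memory case, the variable slots into the n8n service's environment list; 2048 MB is the example value from the bullet above, so size it to the host's RAM:

```yaml
  n8n:
    environment:
      # ...existing variables unchanged, plus:
      - NODE_OPTIONS=--max-old-space-size=2048
```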
## Operating Cost
A self-hosted n8n stack on a single Hetzner CCX13 (4 vCPU, 16 GB RAM) handles approximately 10,000 workflow executions per day comfortably. At approximately €15/month this compares favourably to the n8n Cloud Starter plan at $24/month for 5,000 executions, although the comparison ignores operational overhead.
Editor's Note: ShadowGen runs this exact stack for an internal automation orchestrator handling roughly 12,000 executions per day across 70 active workflows. Hardware: a single Hetzner CCX23 (8 vCPU, 32 GB RAM, approximately €30/month). We moved to queue mode at roughly 4,000 executions/day, after webhooks started timing out behind long-running workflows; the change took roughly 90 minutes including Redis bring-up and worker scaling. The single biggest mistake we made was running six weeks without a `pg_dump` cron: a botched n8n upgrade corrupted the schema, and the only saving grace was that Hetzner snapshots existed at the VM level. Lesson: schedule the `pg_dump` cron on day one, not day forty.