
How to Self-Host n8n with PostgreSQL in 2026

A step-by-step tutorial for self-hosting n8n with PostgreSQL on a single Linux server using Docker Compose. Covers .env configuration, encryption keys, TLS via Caddy, persistence and backup strategy, queue mode for higher throughput, and the most common operational errors encountered during deployment.

Overview

n8n is an open-source workflow automation tool that competes with Zapier and Make. Unlike most competitors, n8n can be self-hosted on any Docker-capable server. By default n8n uses an embedded SQLite database; for any deployment beyond a single-user lab, swapping SQLite for PostgreSQL is the recommended next step. PostgreSQL handles concurrent writes, supports point-in-time recovery, and is required for the multi-process queue mode used in higher-throughput deployments.

This tutorial covers a production-ready single-server n8n + PostgreSQL stack using Docker Compose, including persistence, environment configuration, queue mode, and operational notes.

Prerequisites

  • A Linux server with at least 2 GB RAM and 2 vCPU (4 GB recommended for queue mode)
  • Docker Engine 24.0+ and Docker Compose v2
  • A registered domain with DNS pointing at the server (n8n requires HTTPS for many OAuth integrations)
  • Inbound firewall rules permitting ports 80 and 443

Architecture

A production n8n deployment with PostgreSQL has these components:

  1. n8n — Main n8n process (UI + API, accepts webhook traffic)
  2. postgres — Workflow data, credentials, execution history
  3. redis — Queue backend (only required for queue mode)
  4. n8n-worker — One or more worker containers that execute workflows (queue mode only)
  5. caddy or another reverse proxy for TLS termination

Step 1: Create the Project Directory

mkdir -p /opt/n8n/{data,postgres-data,redis-data}
cd /opt/n8n
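The official n8n image runs as UID 1000 (the node user), and a common first-boot failure is an EACCES permissions error on /home/node/.n8n when the bind-mounted data directory is owned by root. A small precaution, assuming the /opt/n8n layout from the command above:

```shell
# The n8n container runs as UID 1000 ("node"); make the bind-mounted
# data directory writable by that user before first boot.
mkdir -p /opt/n8n/data          # no-op if already created above
chown -R 1000:1000 /opt/n8n/data
```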

Step 2: Create the .env File

Generate strong secrets and a 32-character encryption key. Credentials in n8n are encrypted with this key; losing it makes all stored credentials unrecoverable.

cat > .env <<EOF
N8N_HOST=n8n.example.com
N8N_PROTOCOL=https
N8N_PORT=5678
WEBHOOK_URL=https://n8n.example.com/
GENERIC_TIMEZONE=Europe/London

DB_TYPE=postgresdb
DB_POSTGRESDB_HOST=postgres
DB_POSTGRESDB_PORT=5432
DB_POSTGRESDB_DATABASE=n8n
DB_POSTGRESDB_USER=n8n
DB_POSTGRESDB_PASSWORD=$(openssl rand -hex 24)

N8N_ENCRYPTION_KEY=$(openssl rand -hex 16)
N8N_USER_MANAGEMENT_JWT_SECRET=$(openssl rand -hex 32)
EOF

chmod 600 .env

Back up the .env file off-server. Treat the encryption key like a master credential.
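As a quick sanity check on the key length: openssl rand -hex 16 emits 16 random bytes encoded as 32 hexadecimal characters, which matches the 32-character requirement above.

```shell
# 16 random bytes -> 32 hex characters, the key length n8n expects.
KEY=$(openssl rand -hex 16)
echo "${#KEY}"   # prints 32
```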

Step 3: Create the docker-compose.yml

services:
  postgres:
    image: postgres:15
    restart: unless-stopped
    environment:
      POSTGRES_DB: ${DB_POSTGRESDB_DATABASE}
      POSTGRES_USER: ${DB_POSTGRESDB_USER}
      POSTGRES_PASSWORD: ${DB_POSTGRESDB_PASSWORD}
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_POSTGRESDB_USER}"]
      interval: 5s
      timeout: 5s
      retries: 10

  n8n:
    image: n8nio/n8n:1.x
    restart: unless-stopped
    ports:
      - "5678:5678"
    environment:
      - N8N_HOST
      - N8N_PROTOCOL
      - N8N_PORT
      - WEBHOOK_URL
      - GENERIC_TIMEZONE
      - DB_TYPE
      - DB_POSTGRESDB_HOST
      - DB_POSTGRESDB_PORT
      - DB_POSTGRESDB_DATABASE
      - DB_POSTGRESDB_USER
      - DB_POSTGRESDB_PASSWORD
      - N8N_ENCRYPTION_KEY
      - N8N_USER_MANAGEMENT_JWT_SECRET
    volumes:
      - ./data:/home/node/.n8n
    depends_on:
      postgres:
        condition: service_healthy


Pin the n8nio/n8n tag to a specific minor version (for example 1.85) rather than latest. n8n minor releases occasionally include database migrations that require downtime to apply.

Step 4: First Boot

docker compose up -d
docker compose logs -f n8n

On first boot n8n runs its database migrations. Wait until the log shows "Editor is now accessible via:" before connecting. Open http://<server-ip>:5678 and complete the owner setup form to create the first (owner) account.

Step 5: Add TLS via Caddy

Add a Caddy reverse proxy for automatic Let's Encrypt certificates. Add this service to the compose file:

  caddy:
    image: caddy:2
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy-data:/data
    depends_on:
      - n8n

volumes:
  caddy-data:

And a Caddyfile:

n8n.example.com {
    reverse_proxy n8n:5678
    encode gzip
}

Restart the stack: docker compose up -d. Caddy issues a certificate within 30-60 seconds. Once HTTPS is live, remove the 5678:5678 port mapping from the n8n service so the only entry point is Caddy.

Step 6: Persistence and Backups

  • PostgreSQL data lives in ./postgres-data. Use pg_dump daily via cron:
0 3 * * * cd /opt/n8n && docker compose exec -T postgres pg_dump -U n8n n8n | gzip > /backups/n8n-$(date +\%F).sql.gz
  • The n8n data directory at ./data holds the instance configuration, binary execution data, and any installed community nodes; back it up alongside the database
  • Encryption key must be backed up separately; without it, restored credentials cannot be decrypted
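One subtlety in the cron line above: cron treats a bare % as a newline, so the date format is written \%F in the crontab. In an interactive shell no escaping is needed, and the stamp expands like this:

```shell
# In a shell (not crontab) the date format needs no escaping;
# %F is the ISO date, e.g. 2026-05-01.
STAMP=$(date +%F)
echo "/backups/n8n-$STAMP.sql.gz"
```

To restore, reverse the pipe into an empty or freshly created database: gunzip -c /backups/n8n-<date>.sql.gz | docker compose exec -T postgres psql -U n8n n8n.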

Step 7: Queue Mode (for Higher Throughput)

By default n8n runs all workflow executions in the same process as the UI/API. For workloads above approximately 50 concurrent executions or for long-running workflows that should not block webhook responses, switch to queue mode.

Add Redis and a worker service to the compose file:

  redis:
    image: redis:7-alpine
    restart: unless-stopped
    volumes:
      - ./redis-data:/data

  n8n-worker:
    image: n8nio/n8n:1.x
    restart: unless-stopped
    command: worker
    environment:
      - DB_TYPE
      - DB_POSTGRESDB_HOST
      - DB_POSTGRESDB_PORT
      - DB_POSTGRESDB_DATABASE
      - DB_POSTGRESDB_USER
      - DB_POSTGRESDB_PASSWORD
      - N8N_ENCRYPTION_KEY
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
    depends_on:
      - redis
      - postgres

Add EXECUTIONS_MODE=queue and QUEUE_BULL_REDIS_HOST=redis to the main n8n service environment as well. Scale workers with docker compose up -d --scale n8n-worker=3.
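For reference, a sketch of what the main n8n service's environment should end up containing; merge these two lines into the existing environment list rather than declaring a second n8n service:

```yaml
  n8n:
    environment:
      # ...existing variables from Step 3, plus:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
```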

Common Errors

  • ECONNREFUSED postgres:5432 — Postgres still starting; the depends_on healthcheck handles this on first boot
  • Could not decrypt credentials — The encryption key changed since credentials were saved. Restore the original key or re-enter credentials
  • OAuth callback errors — WEBHOOK_URL must match the public HTTPS URL exactly; trailing slash matters
  • out of memory during workflow execution — The Node.js heap limit can be too low for workflows that process large payloads. Raise it with NODE_OPTIONS=--max-old-space-size=2048 in the n8n service environment
  • Timezone-shifted scheduled triggers — Set GENERIC_TIMEZONE and TZ in the environment; both Postgres and n8n need consistent timezones
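The timezone point is easy to verify locally: the same UTC instant renders an hour apart under two TZ values (GNU date assumed), which is exactly the shift that moves scheduled triggers.

```shell
# Same instant, two timezones: London is on BST (UTC+1) on this date.
TZ=UTC date -d '2026-06-01 12:00 UTC' '+%H:%M'            # prints 12:00
TZ=Europe/London date -d '2026-06-01 12:00 UTC' '+%H:%M'  # prints 13:00
```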

Operating Cost

A self-hosted n8n stack on a single Hetzner CCX13 (4 vCPU, 16 GB RAM) handles approximately 10,000 workflow executions per day comfortably. At approximately €15/month this compares favourably to the n8n Cloud Starter plan at $24/month for 5,000 executions, although the comparison ignores operational overhead.

Editor's Note: ShadowGen runs this exact stack for an internal automation orchestrator handling roughly 12,000 executions per day across 70 active workflows. Hardware: a single Hetzner CCX23 (8 vCPU, 32 GB RAM, approximately €30/month). We moved to queue mode at around 4,000 executions per day, after webhooks started timing out behind long-running workflows; the change took roughly 90 minutes including Redis bring-up and worker scaling. The single biggest mistake we made was running for six weeks without the pg_dump cron — a botched n8n upgrade corrupted the schema, and the only saving grace was that Hetzner snapshots existed at the VM level. Lesson: schedule the pg_dump cron on day one, not day forty.

By Rafal Fila
