Tutorials · Apr 17, 2026 · 12 min read

Self-Host n8n on a VPS with Docker and PostgreSQL

n8n is an open-source workflow automation tool. Think Zapier or Make, but you own the data and can hook it directly into local LLMs, private APIs, and internal services. With the AI-agent boom, self-hosting n8n has gone from hobbyist curiosity to a common production setup.

This tutorial covers a production-ready install on a VPS: Docker Compose, PostgreSQL for the execution store, Caddy for automatic HTTPS, and a simple backup strategy.

The SQLite default that ships with n8n is fine for kicking the tires, but it falls over on busy workflows. If you're building anything with webhooks or scheduled runs, start on PostgreSQL from day one.

TL;DR

  • Install Docker and Docker Compose on a fresh VPS
  • Point a subdomain like n8n.example.com at your server
  • Run n8n + PostgreSQL + Caddy with one docker-compose.yml
  • Set a strong encryption key and basic auth before first login
  • Back up the PostgreSQL database daily

Total time: around 15 minutes.

What You Need

  • A VPS with at least 2 GB RAM (4 GB recommended for heavier flows) running Ubuntu 22.04 or 24.04
  • A domain you can add DNS records to
  • Ports 80 and 443 open to the internet
  • Root or sudo access

n8n itself is lightweight, but AI workflows that call external APIs benefit from a bit of breathing room.

Step 1: Point a Subdomain at Your VPS

In your DNS provider, add an A record:

n8n.example.com → YOUR_VPS_IPV4

Add an AAAA record for IPv6 if you use it. Verify propagation:

dig +short n8n.example.com

DNS needs to resolve before Caddy can fetch a Let's Encrypt certificate.

Step 2: Install Docker and Docker Compose

On a fresh Ubuntu server:

sudo apt update
sudo apt install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
  -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
  https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Confirm it's working:

docker --version
docker compose version

Step 3: Open the Firewall

sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable

Caddy needs port 80 for the ACME HTTP challenge and 443 for HTTPS.

Step 4: Create the Project Directory

sudo mkdir -p /opt/n8n
cd /opt/n8n
sudo mkdir -p n8n-data postgres-data caddy-data caddy-config

Everything persistent lives under /opt/n8n. Back up this directory and you can rebuild the stack anywhere.

Step 5: Generate Secrets

You need three secrets before you write the compose file:

# PostgreSQL password
openssl rand -base64 32

# n8n encryption key (used to encrypt stored credentials)
openssl rand -hex 32

# Basic-auth password for first login
openssl rand -base64 24

Save these somewhere safe. The encryption key is the one you really cannot lose - without it, every credential in your vault is unrecoverable.
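If you prefer one paste-able step, the three commands above can be captured in variables and printed in the `KEY=value` form the next step expects. This is an assumed convenience sketch, not an n8n tool:

```shell
# Generate all three secrets in one go (assumed helper, not part of n8n)
PG_PASS="$(openssl rand -base64 32)"      # PostgreSQL password
ENC_KEY="$(openssl rand -hex 32)"         # n8n credential encryption key
BASIC_PASS="$(openssl rand -base64 24)"   # basic-auth password for first login

# Print them once in .env form so you can copy them into Step 6
printf 'POSTGRES_PASSWORD=%s\n' "$PG_PASS"
printf 'N8N_ENCRYPTION_KEY=%s\n' "$ENC_KEY"
printf 'N8N_BASIC_AUTH_PASSWORD=%s\n' "$BASIC_PASS"
```

Run it in the same terminal session you'll use to edit the env file, and clear your scrollback afterwards.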

Step 6: Write the Environment File

Create /opt/n8n/.env:

# Domain
N8N_HOST=n8n.example.com
WEBHOOK_URL=https://n8n.example.com/

# Database
POSTGRES_USER=n8n
POSTGRES_PASSWORD=REPLACE_WITH_POSTGRES_PASSWORD
POSTGRES_DB=n8n

# n8n security
N8N_ENCRYPTION_KEY=REPLACE_WITH_ENCRYPTION_KEY
N8N_BASIC_AUTH_ACTIVE=true
N8N_BASIC_AUTH_USER=admin
N8N_BASIC_AUTH_PASSWORD=REPLACE_WITH_BASIC_AUTH_PASSWORD

# Timezone - workflow schedules run in this zone
GENERIC_TIMEZONE=Europe/Berlin

Lock down the file so only root can read it:

sudo chmod 600 /opt/n8n/.env
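A surprising number of broken first boots come down to a `REPLACE_WITH_` placeholder that survived the copy-paste. A tiny assumed helper (not part of n8n or Docker) to catch that before you start the stack:

```shell
# Fails (non-zero exit) if any REPLACE_WITH_ placeholder is still in the env file.
# Assumed helper - run it against /opt/n8n/.env before `docker compose up`.
check_env_placeholders() {
  ! grep -q 'REPLACE_WITH_' "$1"
}

# Example: check_env_placeholders /opt/n8n/.env && echo "secrets look filled in"
```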

Step 7: Write the Compose File

Create /opt/n8n/docker-compose.yml:

services:
  postgres:
    image: postgres:16
    container_name: n8n-postgres
    restart: unless-stopped
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - n8n-net

  n8n:
    image: n8nio/n8n:latest
    container_name: n8n
    restart: unless-stopped
    depends_on:
      postgres:
        condition: service_healthy
    environment:
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres
      DB_POSTGRESDB_PORT: "5432"
      DB_POSTGRESDB_DATABASE: ${POSTGRES_DB}
      DB_POSTGRESDB_USER: ${POSTGRES_USER}
      DB_POSTGRESDB_PASSWORD: ${POSTGRES_PASSWORD}
      N8N_HOST: ${N8N_HOST}
      N8N_PORT: "5678"
      N8N_PROTOCOL: https
      WEBHOOK_URL: ${WEBHOOK_URL}
      N8N_ENCRYPTION_KEY: ${N8N_ENCRYPTION_KEY}
      N8N_BASIC_AUTH_ACTIVE: ${N8N_BASIC_AUTH_ACTIVE}
      N8N_BASIC_AUTH_USER: ${N8N_BASIC_AUTH_USER}
      N8N_BASIC_AUTH_PASSWORD: ${N8N_BASIC_AUTH_PASSWORD}
      GENERIC_TIMEZONE: ${GENERIC_TIMEZONE}
      N8N_RUNNERS_ENABLED: "true"
    volumes:
      - ./n8n-data:/home/node/.n8n
    networks:
      - n8n-net

  caddy:
    image: caddy:2
    container_name: n8n-caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - ./caddy-data:/data
      - ./caddy-config:/config
    networks:
      - n8n-net

networks:
  n8n-net:

A few notes:

  • n8n's container port 5678 stays internal. Caddy reaches it over the Docker network.
  • Never remove the N8N_ENCRYPTION_KEY. Changing it invalidates every stored credential.
  • N8N_RUNNERS_ENABLED activates the newer task-runner execution model, which is the recommended default going forward.

Step 8: Write the Caddyfile

Create /opt/n8n/Caddyfile:

n8n.example.com {
    encode zstd gzip
    reverse_proxy n8n:5678
}

Caddy will issue and renew a Let's Encrypt certificate automatically.

Step 9: Start the Stack

cd /opt/n8n
sudo docker compose up -d
sudo docker compose logs -f

Wait until the Caddy logs show the certificate was issued, then open https://n8n.example.com. Your browser will prompt for the basic-auth user (admin) and the password you generated. After that, n8n asks you to create an owner account.

Step 10: Disable Public Signups

n8n's owner account is created on first visit, so there's no signup form to worry about. Still, keep basic auth enabled in front of the app. It blocks bots, even though n8n itself also enforces login.

If you need to invite collaborators, n8n's user-management UI handles that through the dashboard.

Step 11: Back Up PostgreSQL

Credentials, workflows, execution history, and binary data all live in PostgreSQL. Back it up daily.

Create /usr/local/bin/n8n-backup.sh:

#!/usr/bin/env bash
set -euo pipefail

BACKUP_DIR="/var/backups/n8n"
DATE="$(date +%F)"

mkdir -p "$BACKUP_DIR"
docker exec n8n-postgres \
  pg_dump -U n8n -d n8n \
  | gzip > "$BACKUP_DIR/n8n-$DATE.sql.gz"

find "$BACKUP_DIR" -name "n8n-*.sql.gz" -mtime +14 -delete

Make it executable and schedule it:

sudo chmod +x /usr/local/bin/n8n-backup.sh
echo "15 3 * * * root /usr/local/bin/n8n-backup.sh" | \
  sudo tee /etc/cron.d/n8n-backup

For off-site safety, rsync or rclone the backup directory to object storage on the same schedule.
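A cron job that silently produces corrupt dumps is worse than no cron job at all. This assumed helper (not part of the backup script above) checks that the newest dump in a directory is at least a valid gzip stream; wire its failure path into whatever alerting you use:

```shell
# Verify the newest n8n dump in a directory is a readable gzip file.
# Assumed helper - returns non-zero if no dump exists or the newest one is corrupt.
verify_latest_backup() {
  dir="$1"
  latest="$(ls -t "$dir"/n8n-*.sql.gz 2>/dev/null | head -n 1)"
  [ -n "$latest" ] && gzip -t "$latest"
}

# Example: verify_latest_backup /var/backups/n8n || echo "n8n backup broken!" >&2
```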

Step 12: Upgrade n8n Safely

n8n ships frequent updates. The safe upgrade path:

cd /opt/n8n
sudo docker compose pull
sudo docker compose up -d

Always take a fresh backup before upgrading, especially across major versions. If something breaks, restore the PostgreSQL dump and roll back with:

sudo docker compose down
# edit the n8n image tag in docker-compose.yml to the previous version
sudo docker compose up -d
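Rollbacks are only deterministic if you know which version you were on. Consider pinning an explicit n8n release in docker-compose.yml instead of tracking latest; the tag below is purely illustrative - use whichever version you last verified:

```
services:
  n8n:
    # Pin a known-good release (the tag shown is an example, not a recommendation)
    image: n8nio/n8n:1.77.0
```

With a pinned tag, upgrading becomes an explicit edit plus `docker compose up -d`, and rolling back is just restoring the previous tag.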

Optional: Keep n8n Behind a VPN

Workflow automation panels are popular targets for credential stuffing. If you don't need webhooks reachable from the public internet, restrict access with a VPN. Pair this with our Tailscale guide so only your tailnet devices can reach n8n.example.com.

If you do need webhooks public, leave Caddy open on 443 but scope the /webhook paths carefully in your workflows.
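One way to scope that at the proxy layer is to extend the Caddyfile from Step 8 so only the webhook paths are public while the editor UI and API sit behind an IP allowlist. A sketch - the CIDR range is a placeholder for your own trusted network, and the /webhook paths assume n8n's default webhook URL layout:

```
n8n.example.com {
    encode zstd gzip

    # Webhook endpoints stay reachable from anywhere
    @webhooks path /webhook/* /webhook-test/*
    handle @webhooks {
        reverse_proxy n8n:5678
    }

    # Everything else (editor UI, REST API) only from a trusted range
    handle {
        @denied not remote_ip 203.0.113.0/24
        respond @denied "Forbidden" 403
        reverse_proxy n8n:5678
    }
}
```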

Troubleshooting

Caddy fails to obtain a certificate. DNS hasn't propagated, or 80/443 are blocked upstream.

Webhooks return 404 externally but work internally. WEBHOOK_URL doesn't match the public URL. It must include https:// and the trailing slash.

Credentials are missing after a restore. The N8N_ENCRYPTION_KEY doesn't match the one used when the credentials were stored.

Long-running workflows get killed. Raise EXECUTIONS_TIMEOUT (and EXECUTIONS_TIMEOUT_MAX) or move heavy jobs onto a queue-mode worker with Redis.
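Execution timeouts are configured through environment variables, so they slot into the .env file from Step 6. A sketch - the values here are examples, not recommendations:

```
# Cancel executions after 10 minutes (value in seconds; -1 disables the cap)
EXECUTIONS_TIMEOUT=600
# Upper bound that per-workflow timeout overrides may not exceed
EXECUTIONS_TIMEOUT_MAX=3600
```

After editing .env, restart the stack with `docker compose up -d` for the change to take effect.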

Going Further

  • Enable queue mode with Redis once you outgrow a single worker.
  • Hook n8n into a local Ollama instance for private LLM calls inside your workflows.
  • Add Uptime Kuma on the same VPS to monitor your webhook endpoints.

Self-hosted n8n gives you Zapier-grade automation without handing your API keys and customer data to another SaaS.


Running Docker workloads like this is what our Linux VPS plans are built for. NVMe storage, IPv6, and snapshots come standard. See the options.