n8n is an open-source workflow automation tool. Think Zapier or Make, but you own the data and can hook it directly into local LLMs, private APIs, and internal services. With the AI-agent boom, self-hosting n8n has gone from hobbyist curiosity to a common production setup.
This tutorial covers a production-ready install on a VPS: Docker Compose, PostgreSQL for the execution store, Caddy for automatic HTTPS, and a simple backup strategy.
We'll point n8n.example.com at your server and drive the whole stack from a single docker-compose.yml. Total time: around 15 minutes.
You'll need a VPS with ports 80 and 443 open to the internet. n8n itself is lightweight, but AI workflows that call external APIs benefit from a bit of breathing room.
In your DNS provider, add an A record:
n8n.example.com → YOUR_VPS_IPV4
Add an AAAA record for IPv6 if you use it. Verify propagation:
dig +short n8n.example.com
DNS needs to resolve before Caddy can fetch a Let's Encrypt certificate.
On a fresh Ubuntu server:
sudo apt update
sudo apt install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
-o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Confirm it's working:
docker --version
docker compose version
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
Caddy needs port 80 for the ACME HTTP challenge and 443 for HTTPS.
sudo mkdir -p /opt/n8n
cd /opt/n8n
sudo mkdir -p n8n-data postgres-data caddy-data caddy-config
Everything persistent lives under /opt/n8n. Back up this directory and you can rebuild the stack anywhere.
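As a concrete illustration of that claim, a cold backup of the stack is just a tarball of /opt/n8n. This is a sketch; stopping the containers first keeps the Postgres data directory consistent while it is copied:

```shell
# Stop the stack so postgres-data isn't written to mid-copy,
# archive /opt/n8n, then bring everything back up.
cd /opt/n8n
sudo docker compose stop
sudo tar -czf "/root/n8n-stack-$(date +%F).tar.gz" -C /opt n8n
sudo docker compose start
```

Restoring on a new server is the reverse: extract the tarball to /opt and run `docker compose up -d`.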
You need three secrets before you write the compose file:
# PostgreSQL password
openssl rand -base64 32
# n8n encryption key (used to encrypt stored credentials)
openssl rand -hex 32
# Basic-auth password for first login
openssl rand -base64 24
Save these somewhere safe. The encryption key is the one you really cannot lose - without it, every credential in your vault is unrecoverable.
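If you prefer generating all three in one go, a small script (openssl ships with Ubuntu) prints them with labels so you can paste them straight into the .env file:

```shell
#!/usr/bin/env bash
# Print all three secrets with labels; copy them into .env by hand.
set -euo pipefail
echo "POSTGRES_PASSWORD=$(openssl rand -base64 32)"
echo "N8N_ENCRYPTION_KEY=$(openssl rand -hex 32)"
echo "N8N_BASIC_AUTH_PASSWORD=$(openssl rand -base64 24)"
```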
Create /opt/n8n/.env:
# Domain
N8N_HOST=n8n.example.com
WEBHOOK_URL=https://n8n.example.com/
# Database
POSTGRES_USER=n8n
POSTGRES_PASSWORD=REPLACE_WITH_POSTGRES_PASSWORD
POSTGRES_DB=n8n
# n8n security
N8N_ENCRYPTION_KEY=REPLACE_WITH_ENCRYPTION_KEY
N8N_BASIC_AUTH_ACTIVE=true
N8N_BASIC_AUTH_USER=admin
N8N_BASIC_AUTH_PASSWORD=REPLACE_WITH_BASIC_AUTH_PASSWORD
# Timezone - workflow schedules run in this zone
GENERIC_TIMEZONE=Europe/Berlin
Lock down the file so only root can read it:
sudo chmod 600 /opt/n8n/.env
Create /opt/n8n/docker-compose.yml:
services:
postgres:
image: postgres:16
container_name: n8n-postgres
restart: unless-stopped
environment:
POSTGRES_USER: ${POSTGRES_USER}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
POSTGRES_DB: ${POSTGRES_DB}
volumes:
- ./postgres-data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
interval: 10s
timeout: 5s
retries: 5
networks:
- n8n-net
n8n:
image: n8nio/n8n:latest
container_name: n8n
restart: unless-stopped
depends_on:
postgres:
condition: service_healthy
environment:
DB_TYPE: postgresdb
DB_POSTGRESDB_HOST: postgres
DB_POSTGRESDB_PORT: "5432"
DB_POSTGRESDB_DATABASE: ${POSTGRES_DB}
DB_POSTGRESDB_USER: ${POSTGRES_USER}
DB_POSTGRESDB_PASSWORD: ${POSTGRES_PASSWORD}
N8N_HOST: ${N8N_HOST}
N8N_PORT: "5678"
N8N_PROTOCOL: https
WEBHOOK_URL: ${WEBHOOK_URL}
N8N_ENCRYPTION_KEY: ${N8N_ENCRYPTION_KEY}
N8N_BASIC_AUTH_ACTIVE: ${N8N_BASIC_AUTH_ACTIVE}
N8N_BASIC_AUTH_USER: ${N8N_BASIC_AUTH_USER}
N8N_BASIC_AUTH_PASSWORD: ${N8N_BASIC_AUTH_PASSWORD}
GENERIC_TIMEZONE: ${GENERIC_TIMEZONE}
N8N_RUNNERS_ENABLED: "true"
volumes:
- ./n8n-data:/home/node/.n8n
networks:
- n8n-net
caddy:
image: caddy:2
container_name: n8n-caddy
restart: unless-stopped
ports:
- "80:80"
- "443:443"
volumes:
- ./Caddyfile:/etc/caddy/Caddyfile:ro
- ./caddy-data:/data
- ./caddy-config:/config
networks:
- n8n-net
networks:
n8n-net:
A few notes:
Port 5678 stays internal; Caddy reaches n8n over the Docker network. Treat N8N_ENCRYPTION_KEY as immutable: changing it invalidates every stored credential. N8N_RUNNERS_ENABLED activates the newer task-runner execution model, which is the recommended default going forward.
Create /opt/n8n/Caddyfile:
n8n.example.com {
encode zstd gzip
reverse_proxy n8n:5678
}
Caddy will issue and renew a Let's Encrypt certificate automatically.
cd /opt/n8n
sudo docker compose up -d
sudo docker compose logs -f
Wait until the Caddy logs show the certificate was issued, then open https://n8n.example.com. Your browser will prompt for the basic-auth user (admin) and the password you generated. After that, n8n asks you to create an owner account.
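You can also verify the certificate from the server itself. A quick sketch (assumes DNS already resolves; substitute your own domain):

```shell
# Expect "401" (basic auth) or "200" over a valid TLS connection.
curl -sS -o /dev/null -w '%{http_code}\n' https://n8n.example.com
# Print the issuer and validity window of the live certificate.
echo | openssl s_client -connect n8n.example.com:443 -servername n8n.example.com 2>/dev/null \
  | openssl x509 -noout -issuer -dates
```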
n8n creates its owner account on your first visit, so there's no public signup form to worry about. Still, keep a basic-auth layer in front of the app: it stops bots and scanners before they ever reach n8n's own login. (Note that recent n8n releases deprecated the built-in N8N_BASIC_AUTH_* variables in favor of user management; if they have no effect on your version, Caddy's basicauth directive can provide the same protection.)
If you need to invite collaborators, n8n's user-management UI handles that through the dashboard.
Credentials, workflows, execution history, and (in the default storage mode) binary data all live in PostgreSQL; the encryption key lives in ./n8n-data, which is covered by backing up /opt/n8n. Back up the database daily.
Create /usr/local/bin/n8n-backup.sh:
#!/usr/bin/env bash
set -euo pipefail
BACKUP_DIR="/var/backups/n8n"
DATE="$(date +%F)"
mkdir -p "$BACKUP_DIR"
docker exec n8n-postgres \
pg_dump -U n8n -d n8n \
| gzip > "$BACKUP_DIR/n8n-$DATE.sql.gz"
find "$BACKUP_DIR" -name "n8n-*.sql.gz" -mtime +14 -delete
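A backup is only as good as your restore path, so test it. A matching restore sketch (hypothetical script; pass the dump file as an argument) recreates the database and reloads the dump:

```shell
#!/usr/bin/env bash
# Hypothetical restore script: recreate the database, then load the dump.
# Usage: n8n-restore.sh /var/backups/n8n/n8n-YYYY-MM-DD.sql.gz
set -euo pipefail
BACKUP_FILE="$1"
cd /opt/n8n
sudo docker compose stop n8n                          # stop the only writer
sudo docker exec n8n-postgres dropdb   -U n8n --if-exists n8n
sudo docker exec n8n-postgres createdb -U n8n n8n
gunzip -c "$BACKUP_FILE" | sudo docker exec -i n8n-postgres psql -U n8n -d n8n
sudo docker compose start n8n
```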
Make it executable and schedule it:
sudo chmod +x /usr/local/bin/n8n-backup.sh
echo "15 3 * * * root /usr/local/bin/n8n-backup.sh" | \
sudo tee /etc/cron.d/n8n-backup
For off-site safety, rsync or rclone the backup directory to object storage on the same schedule.
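As a sketch, assuming an rclone remote you've already configured (the remote name "offsite" and the bucket path are placeholders), a second cron entry can mirror the directory shortly after the dump runs:

```shell
# Mirror local backups off-site 30 minutes after pg_dump.
# "offsite:my-bucket/n8n-backups" is a placeholder; set it up with `rclone config`.
echo "45 3 * * * root rclone sync /var/backups/n8n offsite:my-bucket/n8n-backups" | \
  sudo tee /etc/cron.d/n8n-backup-offsite
```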
n8n ships frequent updates. The safe upgrade path:
cd /opt/n8n
sudo docker compose pull
sudo docker compose up -d
Always take a fresh backup before upgrading, especially across major versions. If something breaks, restore the PostgreSQL dump and roll back with:
sudo docker compose down
# edit the n8n image tag in docker-compose.yml to the previous version
sudo docker compose up -d
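Rollbacks are simpler if you pin the n8n image to an explicit tag instead of latest and bump it deliberately. A sketch of the compose change (the tag shown is a placeholder; `sudo docker exec n8n n8n --version` prints the version you're currently running):

```yaml
  n8n:
    # Placeholder tag: replace with your last known-good version.
    image: n8nio/n8n:1.64.0
```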
Workflow automation panels are popular targets for credential stuffing. If you don't need webhooks reachable from the public internet, restrict access with a VPN. Pair this with our Tailscale guide so only your tailnet devices can reach n8n.example.com.
If you do need webhooks public, leave Caddy open on 443 but scope the /webhook paths carefully in your workflows.
Caddy fails to obtain a certificate. DNS hasn't propagated, or 80/443 are blocked upstream.
Webhooks return 404 externally but work internally. WEBHOOK_URL doesn't match the public URL. It must include https:// and the trailing slash.
Credentials are missing after a restore. The N8N_ENCRYPTION_KEY doesn't match the one used when the credentials were stored.
Long-running workflows get killed. Raise EXECUTIONS_TIMEOUT (and EXECUTIONS_TIMEOUT_MAX), or move heavy jobs onto queue-mode workers backed by Redis.
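Queue mode splits n8n into a main instance plus workers that pull executions from Redis. A minimal sketch of the extra compose pieces, under the assumption that the main n8n service also gets EXECUTIONS_MODE=queue and QUEUE_BULL_REDIS_HOST=redis (check the n8n docs before adopting this in production):

```yaml
  redis:
    image: redis:7
    restart: unless-stopped
    networks: [n8n-net]

  n8n-worker:
    image: n8nio/n8n:latest
    restart: unless-stopped
    command: worker              # runs `n8n worker` via the image entrypoint
    environment:
      EXECUTIONS_MODE: queue
      QUEUE_BULL_REDIS_HOST: redis
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres
      DB_POSTGRESDB_DATABASE: ${POSTGRES_DB}
      DB_POSTGRESDB_USER: ${POSTGRES_USER}
      DB_POSTGRESDB_PASSWORD: ${POSTGRES_PASSWORD}
      N8N_ENCRYPTION_KEY: ${N8N_ENCRYPTION_KEY}   # must match the main instance
    networks: [n8n-net]
```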
Self-hosted n8n gives you Zapier-grade automation without handing your API keys and customer data to another SaaS.
Running Docker workloads like this is what our Linux VPS plans are built for. NVMe storage, IPv6, and snapshots come standard. See the options.