Streaming services keep changing. Shows leave the catalog the day you finally sit down to watch them, prices creep up every six months, and the same movie is split across three different platforms. If you already have a media collection, Jellyfin lets you stream it the way Netflix streams theirs: a clean web UI, native apps for every device, user profiles, watch progress, and subtitle support, all running on a server you own.
This guide walks through a production-ready Jellyfin install on a VPS using Docker, with Caddy in front for automatic HTTPS and proper WebSocket forwarding. It covers the gotchas that bite first-time self-hosters: transcoding on cheap hardware, locking down public signups, and getting Chromecast and DLNA to behave behind a reverse proxy.
By the end you will have:

- A domain like media.example.com pointing at your server
- A single docker-compose.yml defining the whole stack
- Persistent data under /config, /cache, and /media

Total time: about 20 minutes.
You will need ports 80 and 443 open to the internet (required by Let's Encrypt).

The library size is what makes Jellyfin different from most self-hosted apps. Docker, Caddy, and Jellyfin itself together use under a gigabyte. Your media is what fills the disk.
In your DNS provider, create an A record:
```
media.example.com → YOUR_VPS_IPV4
```
Add an AAAA record for IPv6 if you use it. Check propagation:
```shell
dig +short media.example.com
```
The output should match your VPS IP. Caddy needs DNS to resolve before it can issue a Let's Encrypt certificate.
On a fresh Ubuntu 22.04 or 24.04 server:
```shell
sudo apt update
sudo apt install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
  -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
  https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
```
Confirm it works:
```shell
docker --version
docker compose version
```
```shell
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
```
Caddy needs port 80 for the ACME HTTP challenge and 443 for HTTPS. Do not expose port 8096 directly. Caddy is the only thing that should answer on the public internet.
```shell
sudo mkdir -p /opt/jellyfin
cd /opt/jellyfin
sudo mkdir -p config cache media caddy-data caddy-config
```
The layout matters because Jellyfin keeps state in three different places:
- config/ holds the database, settings, user accounts, and metadata
- cache/ is scratch space for transcoded segments and image thumbnails
- media/ is where your library lives

If your VPS has a separate large disk mounted somewhere like /mnt/storage, point media/ at it instead:
```shell
sudo mkdir -p /mnt/storage/jellyfin-media
sudo ln -s /mnt/storage/jellyfin-media /opt/jellyfin/media
```
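If you prefer to avoid the symlink, you can achieve the same thing by binding the storage path into the container directly. A sketch of the relevant compose change (the /mnt/storage path is this guide's example mount point; substitute your own):

```yaml
# Alternative to the symlink: replace the ./media entry in the
# jellyfin service's volumes list with a direct bind mount.
services:
  jellyfin:
    volumes:
      - /mnt/storage/jellyfin-media:/media:ro
```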
Create /opt/jellyfin/docker-compose.yml:
```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    container_name: jellyfin
    restart: unless-stopped
    user: "1000:1000"
    environment:
      JELLYFIN_PublishedServerUrl: "https://media.example.com"
      TZ: "Europe/Berlin"
    volumes:
      - ./config:/config
      - ./cache:/cache
      - ./media:/media:ro
    networks:
      - jellynet

  caddy:
    image: caddy:2
    container_name: jellyfin-caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - ./caddy-data:/data
      - ./caddy-config:/config
    networks:
      - jellynet

networks:
  jellynet:
```
A few notes:
- The user: line runs Jellyfin as UID 1000, which is usually the first non-root user on Ubuntu. Match this to whoever owns your media files. If `id -u` returns something different, change it.
- The :ro (read-only) flag on the media mount is a small but real safety win. Jellyfin only needs to read your files, not delete them.
- Port 8096 stays inside the Docker network. Caddy reaches it over jellynet.
- JELLYFIN_PublishedServerUrl lets Jellyfin advertise its public URL correctly to clients during discovery.

A word on network_mode: host. Some Jellyfin guides use network_mode: host so the container can broadcast on the local network for DLNA discovery. That makes sense on a home server sitting next to your TV, but on a VPS in a datacenter there is no LAN to broadcast on, and host mode disables Docker's network isolation. Leave host networking off and use the published-URL setup above. Clients connect over HTTPS instead of DLNA, which works from any network anyway.
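If you are unsure which IDs belong in the user: line, you can print a ready-to-paste value from the account that owns your media. A quick sanity check, not part of the compose file:

```shell
# Prints a line like: user: "1000:1000"
# Run this as the user who owns the media files, then copy it into
# docker-compose.yml.
echo "user: \"$(id -u):$(id -g)\""
```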
Create /opt/jellyfin/Caddyfile:
```caddyfile
media.example.com {
    encode zstd gzip

    # Big upload limits for posters, subtitles, and metadata edits
    request_body {
        max_size 100MB
    }

    reverse_proxy jellyfin:8096 {
        # Pass real client info to Jellyfin
        header_up X-Real-IP {remote_host}
        header_up X-Forwarded-For {remote_host}
        header_up X-Forwarded-Proto {scheme}
        header_up X-Forwarded-Host {host}

        # WebSocket support for live updates and remote control
        header_up Upgrade {http.request.header.upgrade}
        header_up Connection "upgrade"
    }
}
```
Caddy v2 actually upgrades WebSocket connections out of the box for reverse_proxy, but the explicit Upgrade and Connection headers make the intent obvious and survive copy-pasting into other proxies. The X-Forwarded-Proto header is the one that matters most: without it, Jellyfin generates broken http:// URLs in API responses and clients refuse to connect over mixed-content rules.
```shell
cd /opt/jellyfin
sudo docker compose up -d
sudo docker compose logs -f
```
Wait until the Caddy logs show the Let's Encrypt certificate was issued, then open https://media.example.com in a browser. You should see the Jellyfin first-run wizard.
The wizard walks through:
- Adding a media library pointing at /media inside the container. Filenames matter: Jellyfin and the underlying TheTVDB and TMDb scrapers expect Show Name (Year)/Season 01/Show Name - S01E01.mkv and Movie Name (Year)/Movie Name (Year).mkv style. Read the naming guide before moving files.

The wizard will trigger an initial library scan. For a multi-terabyte library this can take an hour or more. Let it finish before tinkering with settings.
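As a quick pre-flight before moving a large library, a hypothetical helper like this can flag episode files that will confuse the scrapers. The regex is a rough simplification of the naming guide, not Jellyfin's actual matcher:

```shell
# Flags files that don't follow the "Show Name - S01E01.ext" pattern.
# Rough approximation only; the real naming rules accept more variants.
check_episode_name() {
  if printf '%s\n' "$1" | grep -Eq ' - S[0-9]{2}E[0-9]{2}\.(mkv|mp4|m4v|avi)$'; then
    echo "ok: $1"
  else
    echo "rename: $1"
  fi
}

check_episode_name "The Expanse - S01E01.mkv"   # ok
check_episode_name "the.expanse.101.mkv"        # rename
```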
Jellyfin does not enable public signups by default, which is good. But it does have a "Quick Connect" option and forgot-password reset behavior worth tightening.
In the dashboard, go to Administration -> Dashboard -> General and disable Quick Connect unless you actually use it. Then go to Users -> New User to create a separate account for each person who needs access, rather than sharing the admin login.
There is no public signup form to disable - new users have to be created by an admin from the dashboard. That is by design and you should keep it that way.
This is the part that surprises people. Jellyfin can transcode video on the fly to match a client's bitrate or codec, but software transcoding is brutally CPU-heavy. A 1080p H.264 transcode pegs four CPU cores. A 4K HEVC transcode pegs eight. Cheap VPS plans do not have a GPU, and their CPUs are shared, which means a transcode will both tank quality and annoy your hosting provider.
The good fix is to avoid transcoding entirely.
For most home libraries with H.264 and HEVC content, native apps direct-play everything. The only reason you would need server-side transcoding is web browsers playing exotic codecs, and even there it is rare.
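To check whether a file is in a direct-play-friendly codec before it ever reaches the server, ffprobe (bundled with ffmpeg) can report the video codec. A sketch, assuming ffmpeg is installed on the machine holding the files:

```shell
# Prints the video codec of a media file, e.g. "h264" or "hevc".
# Anything more exotic is a transcode candidate for some clients.
probe_codec() {
  ffprobe -v error -select_streams v:0 \
    -show_entries stream=codec_name -of csv=p=0 "$1"
}

# Usage: probe_codec "/media/Movies/Heat (1995)/Heat (1995).mkv"
```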
If you do need transcoding, do it on a dedicated server with a real GPU (Intel iGPU with QuickSync is the cheapest path) or an Nvidia card with NVENC. A VPS is the wrong shape for that workload.
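If you do end up on hardware with an Intel iGPU, the usual pattern is to pass the render node into the container and then enable VA-API/QuickSync under Dashboard -> Playback. A sketch of the compose additions (the device path and group name may differ on your machine):

```yaml
# Sketch: expose the iGPU render node to Jellyfin for hardware
# transcoding. Verify the device path with `ls /dev/dri` first.
services:
  jellyfin:
    devices:
      - /dev/dri/renderD128:/dev/dri/renderD128
    group_add:
      - "render"   # or the numeric GID of the render group
```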
Jellyfin has official apps for every platform that matters: Android, iOS, Android TV, Fire TV, Roku, Kodi (via the Jellyfin for Kodi add-on), desktop via Jellyfin Media Player, and the web client everywhere else.
In each app, set the server URL to https://media.example.com and log in with your Jellyfin account. The connection sticks.
There is a strong argument for not exposing your media library to the open internet at all. Bots scan for Jellyfin instances, and open servers have been abused as unwitting piracy indexes in the past. If you only stream to your own devices, putting Jellyfin behind Tailscale removes the public attack surface entirely and still works from anywhere.
The setup is the same as the public version. Just skip the firewall openings for 80 and 443, and reach media.example.com over your tailnet instead. WireGuard works equally well if you prefer.
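One wrinkle with the VPN-only setup: with ports 80 and 443 closed, Caddy cannot complete the HTTP ACME challenge, so Let's Encrypt issuance fails. The two common workarounds are a DNS-01 challenge (via a Caddy DNS provider plugin) or Caddy's built-in internal CA, sketched here. With the internal CA your devices must trust Caddy's root certificate, or you accept the browser warning:

```caddyfile
media.example.com {
    tls internal
    reverse_proxy jellyfin:8096
}
```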
A VPN-side library has another nice property: full bandwidth. Public traffic on a VPS is often metered or shaped during peak hours. A WireGuard tunnel between your devices and your server keeps the throughput high and the latency low.
**Caddy returns a 502 on first boot.** Jellyfin takes 20 to 60 seconds to initialize the database on the first run, and Caddy hits it before it's ready. Wait a minute and reload. If it persists, check `docker compose logs jellyfin` - you are usually waiting on a slow library scan.

**The library shows zero items after a scan.** Almost always a path or naming problem. Confirm the folder you pointed Jellyfin at is actually inside the /media mount, and that filenames follow the Jellyfin naming guide. SSH in and run `docker exec -it jellyfin ls /media/Movies` to see what the container actually sees.

**Audio gets out of sync during playback.** That is a transcoding artifact. Check the active device - if it says "Transcoding (Audio)" you can usually fix it by switching to a native client that supports the source audio codec directly. If the source itself has drift, re-mux it with `ffmpeg -i input.mkv -c copy output.mkv`.

**Cannot cast to Chromecast over the reverse proxy.** Chromecast needs a fully valid HTTPS certificate (Let's Encrypt is fine), X-Forwarded-Proto set to https, and proper WebSocket upgrade headers. The Caddyfile above has all three. Cast also fails if the device is on a different network than the controlling phone, since it pulls the stream itself - both need to reach media.example.com.

**WebSocket connection keeps dropping.** Make sure no upstream proxy (Cloudflare, another reverse proxy, a load balancer) is buffering or terminating the connection. If you front Caddy with Cloudflare, enable WebSockets in the Cloudflare dashboard and consider setting that hostname to "DNS only" rather than proxied.
Back up the config/ folder daily. Losing your library scan results is annoying but recoverable; losing user accounts, watch progress, and playlists is worse. The whole config/ directory is a few hundred megabytes at most - sync it to object storage with restic or rclone.

That's it. Self-hosted Jellyfin gives you the convenience of a streaming service without the catalog roulette and without your viewing habits feeding someone else's recommendation engine.
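A minimal nightly backup can be a single cron entry. This sketch assumes an rclone remote named backup-remote has already been set up with `rclone config`:

```shell
# /etc/cron.d/jellyfin-backup - nightly sync of Jellyfin state to
# object storage. "backup-remote" is an example remote name.
0 3 * * * root rclone sync /opt/jellyfin/config backup-remote:jellyfin-config
```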
A media server lives or dies by its disk. Our storage VPS plans ship with terabytes of HDD space and the same NVMe boot disk you get on the regular Linux plans. See the options.