Tutorials · Apr 22, 2026 · 21 min read

PostgreSQL Tuning for a Small VPS - A Practical Walkthrough

The PostgreSQL out-of-the-box configuration is famously conservative. It assumes a tiny machine that might be sharing the kernel page cache with a dozen other services, so it leaves almost all of the available memory on the table. On a small VPS with 1 to 4 GB of RAM, that default behavior leaves you with a database that feels sluggish even on a workload that should fit comfortably in cache.

The good news is that you do not need a DBA to fix this. A handful of well-chosen settings will get you most of the way there, and the rest is observability. This walkthrough covers what to change, why, how to apply it safely, and how to find the slow queries that no amount of tuning will save you from.

TL;DR

  • Set shared_buffers to about 25% of RAM and effective_cache_size to 50-75%.
  • Keep work_mem small (4-16 MB on a 1-4 GB box) because it multiplies per sort node.
  • Lower max_connections to 20-50 and let a pooler handle bursts.
  • Drop random_page_cost to 1.1 on SSD or NVMe.
  • Enable pg_stat_statements and use EXPLAIN (ANALYZE, BUFFERS) to find the hot spots.

Total time: around 30 minutes, plus one restart.

What You Need

  • A VPS running PostgreSQL 14, 15, or 16 (anything from 13 onwards is fine; the values below still apply).
  • Root or sudo access on the host.
  • An idea of how much RAM you have. free -h will tell you.
  • Optional but highly recommended: a recent backup or a snapshot before you change anything.

We will assume Ubuntu 22.04 or 24.04 and a Postgres installed from the apt repository, but only the file paths are distro-specific.

The single most common way to break a small Postgres box is to set `work_mem` too high. It is a per-operation setting, not per-connection, so a single query with three sort or hash nodes uses three times that value. Multiply by your concurrency and you can OOM the kernel in seconds.
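
To see the worst case your current settings allow, multiply the knobs together. A rough back-of-the-envelope query, assuming every connection runs a query with three sort or hash nodes at once (pessimistic, but exactly the scenario that invites the OOM killer):

SELECT pg_size_pretty(
         current_setting('max_connections')::bigint
         * 3
         * pg_size_bytes(current_setting('work_mem'))
       ) AS worst_case_sort_memory;

With the default 100 connections and 4 MB work_mem that is about 1.2 GB; at 64 MB it is close to 19 GB.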

Step 1: Locate postgresql.conf

The config file lives in different places depending on how Postgres was installed.

On Debian and Ubuntu packages:

ls /etc/postgresql/*/main/postgresql.conf

On RHEL, Rocky, and Alma:

ls /var/lib/pgsql/*/data/postgresql.conf

Inside the official Docker image the file is at /var/lib/postgresql/data/postgresql.conf, mounted from your volume.

If you are not sure, ask Postgres directly:

sudo -u postgres psql -c "SHOW config_file;"

That returns the absolute path of the file currently in use. Make a backup before you touch it:

sudo cp /etc/postgresql/16/main/postgresql.conf \
        /etc/postgresql/16/main/postgresql.conf.bak

Step 2: Pick Starting Values With pgtune

Hand-rolling all of this from scratch is fine, but the PGTune calculator is a faster way to get a sane baseline. Plug in:

  • DB version
  • OS type (Linux)
  • DB type (Web application or Mixed for most apps)
  • Total RAM
  • CPU count
  • Storage (SSD)

Copy the output and treat it as a starting point, not gospel. The values below explain what each one is doing so you can adjust.

Step 3: Memory Settings

For a 2 GB VPS with a single CPU running a typical web app workload, edit postgresql.conf and set:

# Memory
shared_buffers = 512MB            # ~25% of RAM
effective_cache_size = 1500MB     # ~75% of RAM
work_mem = 8MB                    # per sort/hash node, careful
maintenance_work_mem = 128MB      # used by VACUUM, CREATE INDEX

What each one actually does:

  • shared_buffers is Postgres's own page cache. Bigger is better up to about 25% of RAM. Beyond that you get diminishing returns because the kernel's own cache also caches the same blocks.
  • effective_cache_size is not an allocation. It is a hint to the planner about how much data is likely to be hot in any cache (Postgres + kernel). Higher values make the planner more eager to use index scans.
  • work_mem is the per-operation budget for sorts, hashes, and merges. A query with two sort nodes and a hash join allocates this three times. With 50 connections each running such a query, you have used 50 * 3 * work_mem of RAM. On a small box, keep this conservative; a per-transaction workaround for occasional heavy queries is shown below.
  • maintenance_work_mem is what VACUUM, CREATE INDEX, and ALTER TABLE get to use. Higher is better here, you just do not want it competing with normal queries.

For a 1 GB box, halve those numbers. For a 4 GB box, double them.
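
If one known-heavy query legitimately needs more sort memory, raise work_mem for just that transaction rather than globally. A minimal sketch; the reporting query here is a hypothetical stand-in:

BEGIN;
SET LOCAL work_mem = '64MB';   -- applies only inside this transaction
SELECT user_id, count(*)       -- the heavy reporting query goes here
FROM orders
GROUP BY user_id
ORDER BY count(*) DESC;
COMMIT;

SET LOCAL reverts automatically at COMMIT or ROLLBACK, so the bigger value cannot leak into the rest of the workload.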

Step 4: Connection Limits

The default max_connections = 100 is wildly too high for a small VPS. Each connection costs roughly 5-10 MB of RAM by itself, before any query is run.

# Connections
max_connections = 30

Twenty to fifty is plenty for a small app. If you genuinely need more concurrency, put a connection pooler (PgBouncer) in front of the database and keep the backend count low. Postgres is happiest with a small pool of long-lived connections, not hundreds of short-lived ones.
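
If you go the pooler route, a minimal pgbouncer.ini sketch looks like this; the database name, paths, and pool sizes are placeholders to adapt:

[databases]
app = host=127.0.0.1 port=5432 dbname=app

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction   ; one backend per transaction, not per client
max_client_conn = 200     ; what the app is allowed to open
default_pool_size = 20    ; actual Postgres backends

Point the app at port 6432 instead of 5432 and it sees a pool of 200 clients while Postgres only ever sees 20 backends.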

Step 5: WAL and Checkpoints

The Write-Ahead Log governs much of your write performance. The defaults bias toward small WAL sizes and frequent checkpoints, which on a busy server cause I/O spikes. Loosen them:

# Write-Ahead Log
wal_buffers = 16MB
checkpoint_timeout = 15min
max_wal_size = 2GB
min_wal_size = 80MB
checkpoint_completion_target = 0.9

  • wal_buffers is rarely useful above 16 MB. The default -1 already auto-tunes to about 1/32 of shared_buffers, but pinning it avoids surprises.
  • checkpoint_timeout of 15 minutes lets dirty pages accumulate in cache before being flushed, which means more efficient writes.
  • max_wal_size = 2GB lets WAL grow before forcing a checkpoint. It is a soft limit, so on a small VPS watch your free disk space; WAL can reach that value during heavy writes.
  • checkpoint_completion_target = 0.9 spreads the checkpoint I/O across most of the interval instead of bursting it.
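
To keep an eye on how much disk the WAL currently occupies (the function requires superuser or the pg_monitor role):

SELECT pg_size_pretty(sum(size)) AS wal_on_disk FROM pg_ls_waldir();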

Step 6: I/O and the Planner

If you are on SSD or NVMe (which any modern VPS will be), tell the planner about it:

# Planner
random_page_cost = 1.1
effective_io_concurrency = 200

random_page_cost defaults to 4.0, a value chosen for spinning rust where seeking is expensive. On flash, random and sequential reads are essentially the same speed. Setting it to 1.1 stops the planner from shunning index scans as if every random read were expensive.

effective_io_concurrency tells Postgres how many concurrent I/O requests the storage can handle. NVMe drives swallow hundreds. A value around 200 is safe.
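
Both settings can be tried per session before you commit them to the config file. A quick experiment, using a stand-in query against the orders table from the examples below:

SET random_page_cost = 1.1;
EXPLAIN SELECT * FROM orders WHERE user_id = 42;  -- does the plan switch to an index scan?
RESET random_page_cost;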

Step 7: Autovacuum

Autovacuum exists to keep dead row bloat under control and refresh planner statistics. The defaults are too lazy on write-heavy workloads. Tune it to be more eager:

# Autovacuum
autovacuum = on
autovacuum_max_workers = 3
autovacuum_naptime = 30s
autovacuum_vacuum_scale_factor = 0.05
autovacuum_analyze_scale_factor = 0.025
autovacuum_vacuum_cost_limit = 1000

The two scale factors say "vacuum a table once 5% of its rows have changed" and "analyze it once 2.5% have changed". The defaults are 20% and 10%, which on a 100M-row table means autovacuum waits for 20M dead rows before doing anything. By then your indexes are already bloated.

autovacuum_vacuum_cost_limit = 1000 (default 200) lets autovacuum work harder before pausing. On modern storage you can absorb the I/O.
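
The global scale factors can still be too lazy for one very large table. Storage parameters let you override them per table; a sketch, assuming a hypothetical high-churn events table:

ALTER TABLE events SET (
  autovacuum_vacuum_scale_factor = 0.01,    -- vacuum after 1% of rows change
  autovacuum_analyze_scale_factor = 0.005
);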

Step 8: Apply the Changes

Some settings reload, others require a restart. The split matters because you do not want unscheduled downtime in the middle of a tuning session.

For settings that only need a reload (most planner and autovacuum settings):

SELECT pg_reload_conf();

Or from the shell:

sudo systemctl reload postgresql

For settings that need a restart (shared_buffers, max_connections, wal_buffers):

sudo systemctl restart postgresql

You can ask Postgres which is which:

SELECT name, setting, context
FROM pg_settings
WHERE name IN (
  'shared_buffers', 'work_mem', 'effective_cache_size',
  'max_connections', 'wal_buffers', 'random_page_cost'
);

Anything with context = postmaster requires a restart. sighup reloads. user and superuser can be set per-session.

You can also set values without editing the file at all, using ALTER SYSTEM:

ALTER SYSTEM SET work_mem = '12MB';
ALTER SYSTEM SET random_page_cost = 1.1;
SELECT pg_reload_conf();

`ALTER SYSTEM` writes to a file called `postgresql.auto.conf` in the data directory. Settings there override `postgresql.conf`. If a value seems to ignore your edits to `postgresql.conf`, check `postgresql.auto.conf`; it is almost always the culprit. Reset with `ALTER SYSTEM RESET work_mem;`.

Step 9: Enable pg_stat_statements

Tuning settings is only half the job. The other half is finding the queries that are slow. pg_stat_statements is built into Postgres but not loaded by default.

Edit postgresql.conf:

shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.track = all
pg_stat_statements.max = 10000

shared_preload_libraries requires a restart:

sudo systemctl restart postgresql

Then create the extension in each database you want to track:

CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

Now you can find the slowest queries by total time:

SELECT substr(query, 1, 80) AS short_query,
       calls,
       round(total_exec_time::numeric, 1) AS total_ms,
       round(mean_exec_time::numeric, 2) AS mean_ms,
       rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 20;

This is the report that actually moves the needle. The query at the top is the one to fix first, even if it is not the slowest individually, because it is where the database spends the most total time across all calls.
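
After you fix a query, reset the counters so the next reading reflects the new behavior instead of accumulated history:

SELECT pg_stat_statements_reset();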

Step 10: Read a Plan With EXPLAIN ANALYZE

Once you have a suspect query, run it through EXPLAIN ANALYZE to see the actual execution. Add BUFFERS to see cache hits versus disk reads:

EXPLAIN (ANALYZE, BUFFERS, VERBOSE)
SELECT u.id, u.email, count(o.id) AS order_count
FROM users u
LEFT JOIN orders o ON o.user_id = u.id
WHERE u.created_at > now() - interval '30 days'
GROUP BY u.id, u.email
ORDER BY order_count DESC
LIMIT 50;

Sample output (trimmed):

Limit  (cost=12450.31..12450.44 rows=50 width=44) (actual time=180.4..180.5 rows=50 loops=1)
  Buffers: shared hit=412 read=1893
  ->  Sort
        Sort Key: (count(o.id)) DESC
        Sort Method: top-N heapsort  Memory: 31kB
        ->  HashAggregate
              ->  Hash Right Join
                    Hash Cond: (o.user_id = u.id)
                    ->  Seq Scan on orders o
                    ->  Hash
                          ->  Index Scan using users_created_at_idx on users u
Planning Time: 0.3 ms
Execution Time: 181.2 ms

Things to look for:

  • A Seq Scan on a large table that should be using an index. Add the missing index.
  • Buffers: read=... numbers in the thousands. That is data being pulled from disk on every run, not cache.
  • Sort Method: external merge Disk: ...kB. The sort spilled to disk because work_mem was too small for that one query. You can raise it for the session: SET work_mem = '64MB'; and rerun.
  • A rows estimate that is wildly off the actual count. Run ANALYZE on the table, or raise the statistics target for the offending column with ALTER TABLE ... ALTER COLUMN ... SET STATISTICS and analyze again.
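
In the sample plan above, the Seq Scan on orders is the obvious suspect: the join needs orders.user_id, and an index there gives the planner an alternative to a full scan. Build it with CONCURRENTLY so writes are not blocked while it runs (it cannot run inside a transaction block); the index name is illustrative:

CREATE INDEX CONCURRENTLY orders_user_id_idx ON orders (user_id);

Whether the planner actually switches depends on how much of orders the query touches; rerun the EXPLAIN afterwards to confirm.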

Step 11: Verify the Tuning

After applying changes, take a baseline. Run a representative query before and after, and watch:

SELECT pg_size_pretty(pg_database_size(current_database())) AS db_size;
SELECT count(*) FROM pg_stat_activity;
SELECT * FROM pg_stat_bgwriter;

pg_stat_bgwriter shows you whether checkpoints are happening on time (checkpoints_timed) versus being forced by hitting max_wal_size (checkpoints_req). If checkpoints_req is climbing, raise max_wal_size.
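
A quick ratio query (these columns live in pg_stat_bgwriter through Postgres 16; version 17 moved them to pg_stat_checkpointer):

SELECT checkpoints_timed,
       checkpoints_req,
       round(100.0 * checkpoints_req
             / nullif(checkpoints_timed + checkpoints_req, 0), 1) AS pct_forced
FROM pg_stat_bgwriter;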

Troubleshooting

OOM kills under load. The kernel killed Postgres because total RAM use exceeded what the box has. Almost always work_mem set too high, or too many connections. Drop work_mem to 4 MB, set max_connections to 20, and add PgBouncer.

Autovacuum cannot keep up. Tables stay bloated and queries slow down over time. Lower autovacuum_vacuum_scale_factor further (try 0.02), raise autovacuum_max_workers to 4, and raise autovacuum_vacuum_cost_limit to 2000. Check pg_stat_user_tables.last_autovacuum to see whether it is actually running.
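
To see which tables it is falling behind on, sort by dead tuples:

SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;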

The first query after a restart is slow, then fast. Cold cache. The kernel and shared_buffers are empty after a restart. Either accept it, or use the pg_prewarm extension to warm hot tables on startup.
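
pg_prewarm ships with the standard contrib packages (postgresql-contrib on Debian/Ubuntu). Warming one hot table back into shared_buffers is a single call; the table name is a placeholder:

CREATE EXTENSION IF NOT EXISTS pg_prewarm;
SELECT pg_prewarm('users');  -- reads the whole table into shared_buffers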

Settings revert after pg_ctl reload or systemctl reload. You edited the wrong file, or postgresql.auto.conf (created by ALTER SYSTEM) is shadowing your value. Run SELECT name, source, sourcefile FROM pg_settings WHERE name = 'work_mem'; to see exactly which file is winning.

shared_buffers change ignored. It needs a full restart, not a reload. sudo systemctl restart postgresql.

Going Further

  • Put PgBouncer between your app and Postgres so the backend connection count stays small while the app sees a big pool. This single change often produces a bigger speedup than any individual postgresql.conf tweak.
  • Partition large tables (anything with hundreds of millions of rows) by date or tenant. Postgres native partitioning is solid from version 13 onwards and keeps autovacuum runtimes bounded.
  • Set up pg_stat_statements dashboards in Grafana so you can spot regressions instead of waiting for a user to complain.
  • Once you outgrow a single small VPS, move to a managed Postgres or a beefier dedicated box. There is a real ceiling to what tuning achieves on 1 GB of RAM, and at some point throwing hardware at the problem is the right call.

A well-tuned Postgres on a $5 VPS will outrun a default-config Postgres on a $50 one. The settings above will get you 80% of the way there in an afternoon.


All of our Linux VPS plans ship with NVMe storage, the right kernel knobs already in place, and snapshots so you can experiment with postgresql.conf without fear.