Key Takeaways
- Memory settings dominate performance: shared_buffers = 25% of RAM, effective_cache_size = 75% of RAM, work_mem = (RAM - shared_buffers) / max_connections / 4. These three settings determine whether PostgreSQL can keep hot data in memory or must hit disk for every query.
- EXPLAIN ANALYZE first, tune second: never add indexes without running EXPLAIN (ANALYZE, BUFFERS) first. The output tells you exactly why a query is slow and what type of index would help.
- Autovacuum is a performance feature: dead tuples accumulate without autovacuum, bloating tables and slowing queries. For high-write tables, reduce autovacuum_vacuum_scale_factor from 0.2 to 0.05 to vacuum more aggressively.
- pgBouncer for connection pooling: PostgreSQL spawns a process per connection (~5MB each), so 500 connections consume roughly 2.5GB of RAM in idle processes. pgBouncer multiplexes hundreds of app connections onto a small pool (10–50 server connections).
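These sizing rules can be sketched as a quick shell calculation. The 8GB RAM and 100-connection figures are the example values used throughout this guide; work_mem uses the (RAM - shared_buffers) / max_connections / 4 form from the Part 1 config:

```shell
# Assumed inputs: adjust for your server
RAM_MB=8192
MAX_CONNECTIONS=100

# shared_buffers: 25% of RAM; effective_cache_size: 75% of RAM
SHARED_BUFFERS_MB=$(( RAM_MB / 4 ))
EFFECTIVE_CACHE_MB=$(( RAM_MB * 3 / 4 ))

# work_mem: memory left after shared_buffers, divided across connections,
# with a 4x safety factor because one query can allocate work_mem several times
WORK_MEM_MB=$(( (RAM_MB - SHARED_BUFFERS_MB) / MAX_CONNECTIONS / 4 ))

echo "shared_buffers = ${SHARED_BUFFERS_MB}MB"        # 2048MB = 2GB
echo "effective_cache_size = ${EFFECTIVE_CACHE_MB}MB" # 6144MB = 6GB
echo "work_mem = ${WORK_MEM_MB}MB"                    # 15MB
```

The 15MB result is rounded up to the 16MB used in the configuration in Part 1.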
Introduction
Direct Answer: How do I tune PostgreSQL 17 for production performance on Ubuntu 24.04?
The five most impactful PostgreSQL 17 performance changes on Ubuntu 24.04 are: (1) Set shared_buffers = 2GB (25% of 8GB RAM) in /etc/postgresql/17/main/postgresql.conf; (2) Set effective_cache_size = 6GB (75% of RAM) so the planner uses indexes more aggressively; (3) Set work_mem = 16MB ((RAM - shared_buffers) / max_connections / 4, rounded) to allow sorts and hash joins to happen in memory; (4) Enable pg_stat_statements to identify slow queries with SELECT query, mean_exec_time FROM pg_stat_statements ORDER BY mean_exec_time DESC LIMIT 10; (5) Run EXPLAIN (ANALYZE, BUFFERS) SELECT ... on your slowest queries to understand whether missing indexes, bad row estimates, or sequential scans are the bottleneck. After making changes, reload with sudo systemctl reload postgresql (config changes) or restart with sudo systemctl restart postgresql (shared_buffers changes require restart).
“PostgreSQL’s default configuration is deliberately conservative — it’s tuned to run on a 256MB VM from 2001. Tuning it for modern hardware is not optional, it’s the most important maintenance task after installation.”
This guide assumes you have PostgreSQL 17 installed via the PGDG repository (see How to Install PostgreSQL 17 on Ubuntu 24.04). We cover the full performance cycle: memory tuning, query analysis, index optimisation, autovacuum, connection pooling, and pgvector-specific tuning.
Part 1: Memory Configuration
PostgreSQL’s memory settings have the largest single impact on query performance. Edit /etc/postgresql/17/main/postgresql.conf:
# Backup existing config
sudo cp /etc/postgresql/17/main/postgresql.conf \
/etc/postgresql/17/main/postgresql.conf.backup
# Check total system RAM
free -h | awk '/^Mem:/{print "Total RAM: " $2}'
Expected output:
Total RAM: 7.7Gi
# Apply tuned settings (for 8GB RAM server)
sudo tee /etc/postgresql/17/main/conf.d/performance.conf << 'EOF'
# ── PostgreSQL 17 Performance Tuning ─────────────────────────────────────
# Tuned for: Ubuntu 24.04, 8GB RAM, NVMe SSD, mixed read/write workload
# ── Memory ───────────────────────────────────────────────────────────────
# shared_buffers: PostgreSQL's own cache — 25% of RAM
shared_buffers = 2GB
# effective_cache_size: Estimate of total memory for caching (planner hint)
# Set to 50-75% of RAM. Does NOT allocate memory — tells planner what's available
effective_cache_size = 6GB
# work_mem: Memory per sort/hash operation (multiply by active connections)
# Formula: (RAM - shared_buffers) / max_connections / 4
# WARNING: Can be allocated multiple times per query for complex plans
work_mem = 16MB
# maintenance_work_mem: Memory for VACUUM, CREATE INDEX, ALTER TABLE
maintenance_work_mem = 512MB
# ── Write Performance ─────────────────────────────────────────────────────
# wal_buffers: WAL write buffer — 16MB is optimal for most workloads
wal_buffers = 16MB
# checkpoint_completion_target: Spread checkpoint writes over this fraction of interval
checkpoint_completion_target = 0.9
# min_wal_size / max_wal_size: WAL file retention
min_wal_size = 1GB
max_wal_size = 4GB
# synchronous_commit: Set to 'off' for +30% write throughput (risk: last ~600ms of data on crash)
synchronous_commit = on # Keep 'on' for ACID compliance
# ── Query Planner ─────────────────────────────────────────────────────────
# random_page_cost: Cost of a random disk read vs seq read
# 1.1 for SSD (almost same as sequential), 4.0 for HDD (default)
random_page_cost = 1.1
# effective_io_concurrency: Concurrent I/O for bitmap heap scans — set to IOPS capacity
# SSD: 200, NVMe: 300, HDD: 2
effective_io_concurrency = 200
# ── Parallelism ───────────────────────────────────────────────────────────
max_parallel_workers_per_gather = 2 # Workers per parallel query (max: num_cpus / 2)
max_parallel_workers = 4 # Total parallel workers (max: num_cpus)
max_worker_processes = 8
# ── Connections ───────────────────────────────────────────────────────────
max_connections = 100 # Use pgBouncer for more — see Part 5
# ── Logging ───────────────────────────────────────────────────────────────
logging_collector = on
log_directory = 'log'
log_filename = 'postgresql-%Y-%m-%d.log'
log_min_duration_statement = 500 # Log queries slower than 500ms
log_line_prefix = '%t [%p] %u@%d '
log_checkpoints = on
log_lock_waits = on
log_temp_files = 10MB # Log temp files larger than 10MB (signals work_mem too low)
# ── Statistics ────────────────────────────────────────────────────────────
track_io_timing = on # Required for EXPLAIN (BUFFERS) timing
track_functions = pl # Track PL/pgSQL function stats
EOF
# Verify the config parses (postgres -C prints the value only if the file is valid)
sudo -u postgres /usr/lib/postgresql/17/bin/postgres \
  -D /var/lib/postgresql/17/main \
  -c config_file=/etc/postgresql/17/main/postgresql.conf \
  -C shared_buffers
Expected output:
2GB
# shared_buffers requires a full restart; other settings can reload
sudo systemctl restart postgresql
# Verify settings applied
sudo -u postgres psql -c "SHOW shared_buffers; SHOW effective_cache_size; SHOW work_mem;"
Expected output:
shared_buffers
----------------
2GB
(1 row)
effective_cache_size
----------------------
6GB
(1 row)
work_mem
----------
16MB
(1 row)
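A caveat on work_mem before moving on: it is a per-sort/per-hash limit, not a per-connection one, so a busy server can allocate it many times over. A rough worst-case sketch (the 3-nodes-per-query figure is an illustrative assumption):

```shell
WORK_MEM_MB=16
ACTIVE_CONNECTIONS=100
SORT_HASH_NODES_PER_QUERY=3  # assumption: complex plans with several sort/hash steps

# Worst case: every connection runs a complex query at the same moment
WORST_CASE_MB=$(( WORK_MEM_MB * ACTIVE_CONNECTIONS * SORT_HASH_NODES_PER_QUERY ))
echo "Worst-case work_mem allocation: ${WORST_CASE_MB}MB"  # 4800MB on an 8GB server
```

This is why the sizing formula divides by 4: it leaves headroom for queries that allocate work_mem more than once.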
Part 2: Identifying Slow Queries with pg_stat_statements
# Add pg_stat_statements to the preload list first (requires restart)
echo "shared_preload_libraries = 'pg_stat_statements'" | \
sudo tee -a /etc/postgresql/17/main/conf.d/performance.conf
sudo systemctl restart postgresql
# Then create the extension (if not already done in the install guide)
sudo -u postgres psql -d myapp -c "CREATE EXTENSION IF NOT EXISTS pg_stat_statements;"
Find the slowest queries:
sudo -u postgres psql -d myapp << 'SQL'
-- Top 10 slowest queries by mean execution time
SELECT
left(query, 80) AS query_preview,
calls,
round(mean_exec_time::numeric, 2) AS mean_ms,
round(total_exec_time::numeric, 2) AS total_ms,
round(stddev_exec_time::numeric, 2) AS stddev_ms,
round(rows::numeric / calls, 1) AS avg_rows
FROM pg_stat_statements
WHERE calls > 10 -- Only queries run more than 10 times
ORDER BY mean_exec_time DESC
LIMIT 10;
SQL
Expected output:
query_preview | calls | mean_ms | total_ms | stddev_ms | avg_rows
------------------------------+-------+---------+----------+-----------+----------
SELECT * FROM documents WHE | 234 | 4231.22 | 990105.5 | 892.12 | 12.3
UPDATE sessions SET last_ac | 1204 | 42.18 | 50760.7 | 18.44 | 1.0
SELECT count(*) FROM docume | 89 | 28.91 | 2573.0 | 12.05 | 1.0
The first query — 4,231ms mean execution time — is the one to fix.
# Reset statistics after tuning (start fresh measurement)
sudo -u postgres psql -d myapp -c "SELECT pg_stat_statements_reset();"
Part 3: Query Analysis with EXPLAIN ANALYZE
EXPLAIN (ANALYZE, BUFFERS) is the most important diagnostic command in PostgreSQL.
sudo -u postgres psql -d myapp
-- Create a test table for demonstration
CREATE TABLE IF NOT EXISTS orders (
id BIGSERIAL PRIMARY KEY,
user_id BIGINT NOT NULL,
status TEXT NOT NULL,
total NUMERIC(10,2),
created_at TIMESTAMPTZ DEFAULT NOW()
);
-- Insert test data
INSERT INTO orders (user_id, status, total)
SELECT
(random() * 10000)::BIGINT,
(ARRAY['pending', 'shipped', 'delivered', 'cancelled'])[floor(random()*4+1)::int],
(random() * 500)::NUMERIC(10,2)
FROM generate_series(1, 500000);
-- Update statistics before testing
ANALYZE orders;
-- Run EXPLAIN ANALYZE on a slow query (no index)
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT * FROM orders
WHERE user_id = 5000 AND status = 'shipped'
ORDER BY created_at DESC
LIMIT 10;
Expected output (without index — sequential scan):
QUERY PLAN
─────────────────────────────────────────────────────────────────────────────
Limit (cost=13124.45..13124.48 rows=10 width=52)
(actual time=4187.234..4187.237 rows=10 loops=1)
Buffers: shared hit=892 read=3540
-> Sort (cost=13124.45..13124.70 rows=101 width=52)
(actual time=4187.232..4187.234 rows=10 loops=1)
Sort Key: created_at DESC
Sort Method: top-N heapsort Memory: 27kB
-> Seq Scan on orders (cost=0.00..13122.50 rows=101 width=52)
(actual time=0.028..4186.891 rows=103 loops=1)
Filter: ((status = 'shipped'::text) AND (user_id = 5000))
Rows Removed by Filter: 499897 ← 499,897 rows scanned, only 103 match
Buffers: shared hit=892 read=3540 ← 3540 reads from disk
Planning Time: 0.892 ms
Execution Time: 4187.319 ms ← 4.2 SECONDS
Reading the output:
- Seq Scan = sequential table scan, reading every row
- Rows Removed by Filter: 499897 = 499,897 rows discarded, the classic signature of a missing index
- read=3540 = 3,540 pages read from disk (not cache)
- Execution Time: 4,187ms, unacceptable for a 10-row lookup
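The hit/read split in the Buffers line converts directly into a cache hit rate. A quick check with the numbers from the plan above:

```shell
SHARED_HIT=892    # pages served from shared_buffers
SHARED_READ=3540  # pages read from outside shared_buffers

HIT_PCT=$(awk -v h="$SHARED_HIT" -v r="$SHARED_READ" \
  'BEGIN { printf "%.1f", 100 * h / (h + r) }')
echo "Buffer cache hit rate: ${HIT_PCT}%"  # 20.1%: most pages did not come from cache
```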
Add the correct index:
-- Composite index covering the filter columns and sort column
CREATE INDEX CONCURRENTLY idx_orders_user_status_date
ON orders (user_id, status, created_at DESC);
-- CONCURRENTLY = build without locking the table (essential for production)
Re-run EXPLAIN ANALYZE:
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders
WHERE user_id = 5000 AND status = 'shipped'
ORDER BY created_at DESC
LIMIT 10;
Expected output (with index — index scan):
QUERY PLAN
─────────────────────────────────────────────────────────────────────────────
Limit (cost=0.56..8.74 rows=10 width=52)
(actual time=0.089..0.113 rows=10 loops=1)
Buffers: shared hit=16
-> Index Scan using idx_orders_user_status_date on orders
(cost=0.56..84.07 rows=103 width=52)
(actual time=0.087..0.111 rows=10 loops=1)
Index Cond: ((user_id = 5000) AND (status = 'shipped'::text))
Buffers: shared hit=16 ← 16 cache hits, 0 disk reads
Planning Time: 0.234 ms
Execution Time: 0.142 ms ← 0.14ms — 29,000x faster
From 4,187ms to 0.142ms — a 29,000× improvement by adding one index.
Part 4: Index Types and When to Use Them
-- ── B-Tree (default) — most queries ─────────────────────────────────────
-- Equality: WHERE id = 5 | Range: WHERE created_at > '2026-01-01'
-- Sort: ORDER BY name | Prefix: WHERE name LIKE 'John%'
CREATE INDEX idx_orders_user ON orders (user_id);
-- ── GIN — full-text search and JSONB ────────────────────────────────────
-- WHERE to_tsvector(content) @@ to_tsquery('postgresql')
CREATE INDEX idx_docs_fts ON documents USING gin(to_tsvector('english', content));
-- WHERE data @> '{"status": "active"}'
CREATE INDEX idx_users_data ON users USING gin(data jsonb_path_ops);
-- ── GiST — geometric/range types ────────────────────────────────────────
-- Overlap: WHERE period && tsrange('2026-01-01', '2026-12-31')
CREATE INDEX idx_reservations_period ON reservations USING gist(period);
-- ── HNSW — vector similarity search (pgvector) ───────────────────────────
-- SELECT * FROM documents ORDER BY embedding <=> query_vec LIMIT 10
CREATE INDEX idx_docs_vector ON documents USING hnsw (embedding vector_cosine_ops)
WITH (m = 16, ef_construction = 64);
-- ── BRIN — very large tables with sequential data (logs, IoT) ────────────
-- Tiny index size, effective when physical order matches query order
CREATE INDEX idx_events_time ON events USING brin(created_at) WITH (pages_per_range = 128);
-- ── Partial index — index only a subset of rows ──────────────────────────
-- WHERE status = 'active' queries only — much smaller index
CREATE INDEX idx_orders_active ON orders (user_id, created_at)
WHERE status IN ('pending', 'shipped');
-- ── Expression index — index on computed values ──────────────────────────
-- WHERE lower(email) = 'user@example.com'
CREATE INDEX idx_users_email_lower ON users (lower(email));
Identify unused indexes (wasted write overhead):
SELECT
schemaname,
tablename,
indexname,
pg_size_pretty(pg_relation_size(indexrelid)) AS index_size,
idx_scan AS scans_since_reset
FROM pg_stat_user_indexes
JOIN pg_index USING (indexrelid)
WHERE idx_scan = 0
AND NOT indisprimary
AND NOT indisunique
ORDER BY pg_relation_size(indexrelid) DESC;
Expected output:
schemaname | tablename | indexname | index_size | scans_since_reset
────────────+───────────+────────────────────────+────────────+───────────────────
public | orders | idx_orders_old_status | 14 MB | 0
Indexes with 0 scans should be considered for removal — they slow down writes without benefiting reads.
Part 5: Autovacuum Tuning
Autovacuum removes dead tuples left by UPDATE and DELETE operations. Without it, tables bloat and queries slow down.
-- Check for tables with high dead tuple counts (need vacuum)
SELECT
schemaname,
tablename,
n_live_tup,
n_dead_tup,
round(n_dead_tup::numeric / NULLIF(n_live_tup + n_dead_tup, 0) * 100, 1) AS dead_pct,
last_autovacuum,
last_autoanalyze
FROM pg_stat_user_tables
WHERE n_dead_tup > 1000
ORDER BY n_dead_tup DESC
LIMIT 10;
Expected output (healthy table):
tablename | n_live_tup | n_dead_tup | dead_pct | last_autovacuum
───────────+────────────+────────────+──────────+─────────────────────────────
 orders    |     500000 |       2847 |      0.6 | 2026-04-22 10:45:00+00
Under 1% dead tuples — autovacuum is working correctly.
Tune autovacuum for high-write tables:
-- Per-table autovacuum override for a high-write table
-- Trigger vacuum when 5% of rows are dead (default: 20%)
ALTER TABLE orders SET (
autovacuum_vacuum_scale_factor = 0.05,
autovacuum_analyze_scale_factor = 0.02,
autovacuum_vacuum_cost_delay = 2 -- Faster vacuum, more I/O
);
Global autovacuum tuning (in postgresql.conf):
# Increase autovacuum workers (default: 3)
autovacuum_max_workers = 5
# Reduce vacuum trigger threshold for high-write databases
autovacuum_vacuum_scale_factor = 0.05 # Default: 0.2
autovacuum_analyze_scale_factor = 0.02 # Default: 0.1
# Keep cost delay low (faster vacuum, more I/O); 2ms is already the default
autovacuum_vacuum_cost_delay = 2ms # Lower to 1ms if vacuum still lags behind
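Autovacuum fires when dead tuples exceed autovacuum_vacuum_threshold (default 50) plus autovacuum_vacuum_scale_factor times the table's row count. For the 500,000-row orders table from Part 3:

```shell
TABLE_ROWS=500000
BASE_THRESHOLD=50  # autovacuum_vacuum_threshold default

TRIGGER_DEFAULT=$(awk -v r="$TABLE_ROWS" -v b="$BASE_THRESHOLD" \
  'BEGIN { printf "%d", b + 0.2 * r }')   # default scale factor
TRIGGER_TUNED=$(awk -v r="$TABLE_ROWS" -v b="$BASE_THRESHOLD" \
  'BEGIN { printf "%d", b + 0.05 * r }')  # tuned scale factor

echo "scale_factor 0.2:  vacuum after ${TRIGGER_DEFAULT} dead tuples"
echo "scale_factor 0.05: vacuum after ${TRIGGER_TUNED} dead tuples"
```

On a 10-million-row table the default factor lets 2 million dead tuples accumulate before vacuum runs, which is why large high-write tables need the lower setting.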
Part 6: pgvector HNSW Tuning
For vector search workloads (RAG pipelines, AI embeddings), tune the HNSW index parameters:
-- Current HNSW index parameters
SELECT
indexname,
pg_size_pretty(pg_relation_size(indexrelid)) AS index_size,
idx_scan,
idx_tup_read
FROM pg_stat_user_indexes
WHERE indexname LIKE '%hnsw%' OR indexname LIKE '%vector%';
-- Rebuild HNSW with higher quality parameters (slower build, faster query)
-- m: connections per node (16 default, 32 for higher recall)
-- ef_construction: search width during build (64 default, 128 for higher recall)
DROP INDEX IF EXISTS documents_embedding_idx;
CREATE INDEX documents_embedding_idx ON documents
USING hnsw (embedding vector_cosine_ops)
WITH (m = 32, ef_construction = 128);
-- Set ef_search for query time (higher = better recall, slower queries)
SET hnsw.ef_search = 100; -- Default: 40
-- Benchmark vector search latency
EXPLAIN (ANALYZE, TIMING)
SELECT id, title,
embedding <=> '[0.1, 0.2, 0.3, ...]'::vector AS distance
FROM documents
ORDER BY embedding <=> '[0.1, 0.2, 0.3, ...]'::vector
LIMIT 10;
Expected output (HNSW vs sequential scan):
-- Sequential scan (no index): Execution Time: 340ms (on 100k rows)
-- HNSW index: Execution Time: 1.2ms (on 100k rows)
-- Allow parallel workers for HNSW index build (speeds up initial build)
SET max_parallel_maintenance_workers = 4; -- Uses 4 cores for CREATE INDEX
-- Check index build progress
SELECT phase, blocks_done, blocks_total,
round(blocks_done::numeric / NULLIF(blocks_total, 0) * 100, 1) AS pct
FROM pg_stat_progress_create_index;
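Before rebuilding with larger m, it helps to sanity-check expected index size against maintenance_work_mem. A very rough estimate: the 4 bytes per dimension matches pgvector's float4 storage, but the 8-bytes-per-graph-link cost is an assumption and the real on-disk layout adds page overhead:

```shell
VECTORS=100000
DIM=768  # assumption: a common embedding dimensionality
M=32     # connections per node, as in the rebuild above

# float4 payload plus roughly m*2 neighbour links at the base layer
BYTES_PER_ELEMENT=$(( DIM * 4 + M * 2 * 8 ))
INDEX_MB=$(( VECTORS * BYTES_PER_ELEMENT / 1024 / 1024 ))
echo "~${INDEX_MB}MB estimated for ${VECTORS} vectors (dim=${DIM}, m=${M})"
```

If the estimate exceeds maintenance_work_mem, the HNSW build falls back to building on disk and slows down dramatically.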
Part 7: pgBouncer Connection Pooling
Install and configure pgBouncer to handle many application connections efficiently:
sudo apt-get install -y pgbouncer
# Configure pgBouncer
sudo tee /etc/pgbouncer/pgbouncer.ini << 'EOF'
[databases]
myapp = host=127.0.0.1 port=5432 dbname=myapp
[pgbouncer]
listen_addr = 127.0.0.1
# Applications connect here (not 5432)
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
# Pooling mode: 'transaction' releases the server connection after each
# transaction; best for most web apps. Use 'session' for features that need
# session state (LISTEN/NOTIFY, cursors).
pool_mode = transaction
# Connection limits
# max_client_conn: max connections from applications
max_client_conn = 500
# default_pool_size: server connections per database/user pair
default_pool_size = 20
# reserve_pool_size: emergency reserve connections
reserve_pool_size = 5
# min_pool_size: keep this many connections open
min_pool_size = 5
# Timeouts (seconds)
# server_idle_timeout: close idle server connections after 5 min
server_idle_timeout = 300
# client_idle_timeout: 0 keeps client connections indefinitely
client_idle_timeout = 0
server_connect_timeout = 15
# Admin
admin_users = pgbouncer
stats_users = pgbouncer
logfile = /var/log/pgbouncer/pgbouncer.log
pidfile = /var/run/pgbouncer/pgbouncer.pid
EOF
# Create userlist.txt with hashed passwords
# Get password hash from PostgreSQL:
sudo -u postgres psql -tAc \
"SELECT concat('\"', rolname, '\" \"', rolpassword, '\"') FROM pg_authid WHERE rolname = 'appuser';" \
| sudo tee /etc/pgbouncer/userlist.txt > /dev/null
sudo chmod 600 /etc/pgbouncer/userlist.txt
# Debian/Ubuntu runs the pgBouncer daemon as the postgres user
sudo chown postgres:postgres /etc/pgbouncer/userlist.txt
sudo systemctl enable pgbouncer
sudo systemctl start pgbouncer
sudo systemctl status pgbouncer --no-pager | grep "Active:"
Expected output:
Active: active (running) since Wed 2026-04-22 14:00:00 UTC; 5s ago
Test pgBouncer:
psql -h 127.0.0.1 -p 6432 -U appuser -d myapp -c "SELECT version();" | head -3
Expected output:
version
─────────────────────────────────────────────────────────────────────────────────────────────
PostgreSQL 17.4 on x86_64-pc-linux-gnu, compiled by gcc 13.2.0, 64-bit
Connected via pgBouncer (port 6432) to PostgreSQL (port 5432). The only application change is the connection port: 5432 becomes 6432; nothing else changes.
Monitor pgBouncer pool:
psql -h 127.0.0.1 -p 6432 -U pgbouncer pgbouncer -c "SHOW POOLS;"
Expected output:
database | user | cl_active | cl_waiting | sv_active | sv_idle | sv_used | maxwait
──────────+─────────+-----------+------------+-----------+---------+---------+─────────
 myapp    | appuser |         1 |          0 |         1 |       4 |       0 |       0
1 active client, 1 active server connection, 4 idle connections in pool. pgBouncer is multiplexing efficiently.
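The RAM arithmetic behind pooling, using the ~5MB-per-backend figure from the Key Takeaways (actual per-process overhead varies with workload):

```shell
APP_CONNECTIONS=500
POOL_SIZE=20       # default_pool_size above
MB_PER_BACKEND=5   # rough per-process overhead; varies in practice

DIRECT_MB=$(( APP_CONNECTIONS * MB_PER_BACKEND ))
POOLED_MB=$(( POOL_SIZE * MB_PER_BACKEND ))
SAVED_MB=$(( DIRECT_MB - POOLED_MB ))
echo "Direct: ${DIRECT_MB}MB, pooled: ${POOLED_MB}MB, saved: ${SAVED_MB}MB"
```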
Complete Performance Checklist
echo "=== POSTGRESQL 17 PERFORMANCE AUDIT ==="
echo ""
sudo -u postgres psql -d myapp << 'SQL'
-- Memory settings
SELECT name, setting, unit FROM pg_settings
WHERE name IN ('shared_buffers','effective_cache_size','work_mem','maintenance_work_mem')
ORDER BY name;
-- Top 5 slowest queries (requires pg_stat_statements)
SELECT
left(query, 60) AS query,
calls,
round(mean_exec_time::numeric, 1) AS mean_ms
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 5;
-- Tables with excessive dead tuples
SELECT tablename, n_dead_tup,
round(n_dead_tup::numeric / NULLIF(n_live_tup+n_dead_tup,0)*100,1) AS dead_pct
FROM pg_stat_user_tables
WHERE n_dead_tup > 10000
ORDER BY n_dead_tup DESC;
-- Unused indexes (pg_index supplies the indisprimary flag)
SELECT indexname, pg_size_pretty(pg_relation_size(indexrelid))
FROM pg_stat_user_indexes
JOIN pg_index USING (indexrelid)
WHERE idx_scan = 0 AND NOT indisprimary
LIMIT 5;
SQL
Troubleshooting
Query is slow but EXPLAIN shows index scan
Cause: Index selectivity is poor — the index matches too many rows to be efficient, or work_mem is too low for the sort.
Fix:
-- Check index selectivity
SELECT tablename, attname, n_distinct, correlation
FROM pg_stats WHERE tablename = 'orders' AND attname = 'status';
-- If n_distinct is very low (e.g., 4 statuses for 500k rows),
-- a composite index with user_id first has much better selectivity
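A back-of-envelope selectivity check for the example orders table, assuming a uniform value distribution (the n_distinct figures come from the test data generated in Part 3):

```shell
TABLE_ROWS=500000
N_DISTINCT_STATUS=4       # pg_stats n_distinct for the status column
N_DISTINCT_USER_ID=10000  # pg_stats n_distinct for the user_id column

ROWS_PER_STATUS=$(( TABLE_ROWS / N_DISTINCT_STATUS ))
ROWS_PER_USER=$(( TABLE_ROWS / N_DISTINCT_USER_ID ))
echo "status alone narrows to ~${ROWS_PER_STATUS} rows"  # 125000: poor
echo "user_id alone narrows to ~${ROWS_PER_USER} rows"   # 50: good
```

This is why the composite index in Part 3 leads with user_id rather than status.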
"out of memory" errors or temp files in the logs
Cause: work_mem is too low for a sort or hash join, so the operation spills to temp files on disk (or, under memory pressure, fails outright).
Fix: Increase work_mem for the session (SET work_mem = '64MB'; before a known-heavy query) or globally. Watch log_temp_files: temp files appearing in the logs are the signal that work_mem needs increasing.
Autovacuum not keeping up with writes
Cause: Default autovacuum_vacuum_scale_factor = 0.2 is too high for large tables.
Fix: Reduce per-table: ALTER TABLE events SET (autovacuum_vacuum_scale_factor = 0.01);
Conclusion
PostgreSQL 17 is now tuned for production: memory settings reflect actual available RAM, slow queries are identified via pg_stat_statements, missing indexes are added based on EXPLAIN ANALYZE output, autovacuum is configured for actual write rates, and pgBouncer multiplexes application connections efficiently. The pgvector HNSW index is tuned for the recall/speed balance required by the AI stack.
This tuned PostgreSQL instance supports both traditional relational workloads and the vector search queries from the LangChain local agents and private document Q&A guides.
People Also Ask
What is the correct shared_buffers setting for PostgreSQL 17?
Set shared_buffers to 25% of total system RAM. For an 8GB server: shared_buffers = 2GB. For 16GB: shared_buffers = 4GB. For 32GB: shared_buffers = 8GB. Beyond 8GB of shared_buffers, gains diminish — PostgreSQL also relies on the OS page cache (reflected by effective_cache_size). The shared_buffers change requires a PostgreSQL restart, not just a reload. Use PGTune (pgtune.leopard.in.ua) for a complete configuration based on your hardware, workload type, and PostgreSQL version.
When should I use HNSW vs IVFFlat indexing in pgvector?
HNSW (Hierarchical Navigable Small World) is the better choice for most workloads in 2026: it’s faster to query (typically 3–5× faster than IVFFlat), doesn’t require knowing the number of vectors upfront, and its build parameters (m, ef_construction) can be tuned incrementally. IVFFlat is more memory-efficient and faster to build for very large datasets (10M+ vectors) — it divides the vector space into lists and searches only the most relevant lists. For production RAG pipelines with under 10 million documents, HNSW is recommended. For massive embedding datasets where memory is the constraint, IVFFlat may be preferable.
Should I use pgBouncer in transaction mode or session mode?
Use transaction mode for most web applications — it releases the server connection back to the pool after each transaction, enabling high connection multiplexing (e.g., 500 app connections → 20 server connections). Use session mode when your application uses PostgreSQL features that require persistent session state: LISTEN/NOTIFY, prepared statements (server-side), advisory locks, or SET LOCAL variables. Session mode offers less multiplexing efficiency but full PostgreSQL compatibility. For Django, Rails, and most ORMs: transaction mode works correctly. For applications using explicit PREPARE/EXECUTE or LISTEN: use session mode.
Further Reading
- How to Install PostgreSQL 17 on Ubuntu 24.04 LTS — the installation this guide tunes
- Build a Sovereign Local AI Stack — uses pgvector extensively; HNSW tuning applies here
- LangChain and LangGraph with Ollama — the AI agent layer connecting to this database
- Private Document Q&A with pgvector — the RAG pipeline this database serves
Tested on: Ubuntu 24.04 LTS (Hetzner CX32, 8GB RAM), Ubuntu 24.04 LTS (AMD Ryzen 9 7950X, 64GB RAM). PostgreSQL 17.4, pgvector 0.8.0, pgBouncer 1.23.1. Last verified: April 22, 2026.