Postgres connection tips

Config, indexing, and connection pooling notes

September 5, 2025 · 3 min read

Postgres is reliable, powerful, and generally pleasant to work with. But connection management can be a source of friction. Here are some small adjustments that make Postgres easier to live with.

Use connection pooling

Opening a new database connection for every request is expensive. Each connection is a separate server process that uses memory and file descriptors. Under any real concurrency, a connection per request will exhaust your connection limit quickly.

Use a connection pool. PgBouncer is the standard choice. It sits between your application and Postgres, managing a pool of connections and reusing them efficiently.

For most applications, a pool size of 20-50 connections is sufficient. Don’t set it too high: Postgres can handle hundreds of connections, but each one consumes resources. It’s better to have a smaller pool with proper queuing than a large one that starves the server.
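
For reference, a minimal pgbouncer.ini sketch might look like this; the database name, host, and file paths are placeholders for illustration, not values you must use:

[databases]
app = host=127.0.0.1 port=5432 dbname=app

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
default_pool_size = 25
max_client_conn = 500

Transaction pooling gives the best connection reuse, but it doesn’t preserve session state between transactions. If you rely on named prepared statements, you’ll need a recent PgBouncer (1.21 added support via max_prepared_statements) or session pooling.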

Set reasonable timeouts

Postgres’s defaults are permissive here: statement_timeout defaults to 0, so a query can run indefinitely. For a web application, if a query is going to hang, you want to know within seconds, not whenever someone notices.

Set connect_timeout in your connection string (it’s a client-side parameter); the other two are server settings you can apply per session, per role, or in postgresql.conf:

  • connect_timeout: 5 seconds (how long to wait when connecting)
  • statement_timeout: 10 seconds (how long a query can run)
  • idle_in_transaction_session_timeout: 5 minutes (kill idle transactions)

These timeouts prevent runaway queries from consuming resources and help you catch performance issues early.
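
For example, here is one way to apply the two server-side timeouts from the list above, either per session or cluster-wide:

-- Per session:
SET statement_timeout = '10s';
SET idle_in_transaction_session_timeout = '5min';

-- Or persist them cluster-wide (requires superuser; picked up on reload):
ALTER SYSTEM SET statement_timeout = '10s';
ALTER SYSTEM SET idle_in_transaction_session_timeout = '5min';
SELECT pg_reload_conf();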

Index foreign keys

Postgres automatically indexes primary keys and unique constraints, but not foreign key columns. If you’re joining on a foreign key or filtering by it, you need to create the index yourself.

This is easy to miss because queries might work fine with small datasets. But as your data grows, unindexed foreign keys become bottlenecks.

Create indexes on foreign key columns:

CREATE INDEX idx_payments_user_id ON payments(user_id);
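
To verify the index is actually being used, check the query plan. On a small table the planner may still prefer a sequential scan, so don’t worry if the index only shows up once the data grows:

EXPLAIN SELECT * FROM payments WHERE user_id = 42;
-- Healthy output on a large table mentions "Index Scan using idx_payments_user_id";
-- a "Seq Scan on payments" there means the index isn't helping.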

Monitor connection usage

Know how many connections you’re using. This query counts every current backend, idle ones included:

SELECT count(*) FROM pg_stat_activity;

This shows connections by state:

SELECT state, count(*) 
FROM pg_stat_activity 
GROUP BY state;

If you see a lot of “idle in transaction” connections, you have a problem. These connections hold locks and consume resources. Fix the application code that’s leaving transactions open.
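
To track down the offenders, list the longest-open transactions. Terminating a backend is the emergency lever; fixing the code that leaks transactions is the real cure:

SELECT pid, usename, now() - xact_start AS transaction_age, query
FROM pg_stat_activity
WHERE state = 'idle in transaction'
ORDER BY transaction_age DESC;

-- Emergency only: SELECT pg_terminate_backend(pid) for a connection holding locks.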

Use prepared statements

Prepared statements are faster and safer for queries you run repeatedly. They’re parsed once and reused, which cuts per-query overhead. And because parameter values are sent separately from the SQL text, they also protect against SQL injection.

Most Postgres drivers support prepared statements. Use them for queries you run frequently.
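
At the SQL level, the mechanism looks like this; in practice your driver usually does the equivalent behind the scenes, and the users table here is just an illustration:

PREPARE get_user (int) AS
  SELECT id, email FROM users WHERE id = $1;

EXECUTE get_user(42);  -- reuses the already-parsed statement
DEALLOCATE get_user;   -- otherwise it lives until the session ends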

Configure max_connections carefully

The default max_connections is 100, which is usually fine. But if you’re using connection pooling (which you should be), you can set it lower.

Each connection is a separate backend process that consumes memory: a few megabytes at rest, and considerably more once work_mem comes into play. If you set max_connections too high, you’ll run out of memory before you run out of connection slots.

With PgBouncer in front, you can set max_connections to 20-30 and let the pool handle the rest. Leave a few slots of headroom for superuser and maintenance sessions.
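
To inspect and change the setting; note that, unlike the timeouts earlier, max_connections only takes effect after a full server restart:

SHOW max_connections;

ALTER SYSTEM SET max_connections = 30;
-- Restart the server afterwards; pg_reload_conf() is not enough for this one.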

Keep connections alive

If your application has periods of low activity, idle connections can be silently dropped by firewalls or NAT devices along the way. Use TCP keepalives to detect and prevent this.

In your connection string:

?keepalives=1&keepalives_idle=600&keepalives_interval=30&keepalives_count=3

After 10 minutes of inactivity, this sends a keepalive probe every 30 seconds and drops the connection if 3 probes in a row go unanswered. (In libpq connection strings the parameters have no tcp_ prefix; the tcp_-prefixed names belong to the server configuration.)
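
If you’d rather configure keepalives on the server side, the equivalent postgresql.conf settings carry the tcp_ prefix:

tcp_keepalives_idle = 600
tcp_keepalives_interval = 30
tcp_keepalives_count = 3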

Final thoughts

These are small changes, but they add up. Connection pooling prevents resource exhaustion. Timeouts catch problems early. Indexing keeps queries fast. Monitoring helps you understand what’s happening.

Postgres is reliable, but it needs proper configuration. Spend a few hours getting these settings right, and you’ll save days of debugging later.