
Self-Hosting Multiple Database Instances on a Single Server

The promise of self-hosting databases

Cloud-managed databases are convenient — until the invoice arrives. For teams running multiple projects, each requiring its own Postgres instance, the cost arithmetic shifts dramatically once you pass three or four databases.

But in reality, cramming multiple database instances onto a single machine is not simply a matter of running more containers. Memory contention, disk I/O conflicts, and backup orchestration all become salient concerns that managed services quietly handle for you.

Our setup: multiple Supabase stacks on one server

We run several full Supabase stacks — each comprising Postgres, GoTrue, PostgREST, Realtime, and Storage — on a single dedicated server. The server is a 16 vCPU, 32 GB machine in a European data center, orchestrated entirely through Docker Compose.

Each stack gets its own compose file, its own network namespace, and its own data volume. The key architectural decision: no shared Postgres cluster. Each project gets a fully isolated database instance. The overhead of running multiple Postgres processes is real — roughly 200-400 MB per idle instance — but the operational simplicity of complete isolation outweighs the resource cost.
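A minimal sketch of what one such isolated stack might look like (the project name, network, and volume names here are hypothetical placeholders; a real Supabase stack includes the other services as well):

```yaml
# /srv/projecta/docker-compose.yml — one compose project per stack
name: projecta
services:
  db:
    image: supabase/postgres:15.6
    networks: [projecta_net]
    volumes:
      # Dedicated named volume: no data path shared with other stacks
      - projecta_data:/var/lib/postgresql/data
networks:
  projecta_net: {}    # private network; no cross-stack traffic
volumes:
  projecta_data: {}
```

Because each stack is its own compose project, `docker compose down` or an upgrade on one project cannot touch another's containers, network, or data.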

Resource limits that actually matter

services:
  db:
    image: supabase/postgres:15.6
    deploy:
      resources:
        limits:          # hard ceilings enforced via cgroups
          memory: 2G
          cpus: '2.0'
        reservations:    # guaranteed minimum under contention
          memory: 512M
          cpus: '0.5'

Docker resource limits are the first line of defense. Without them, a single runaway query can starve every other database on the machine. The reservation ensures each instance always has a minimum allocation, even under contention.

Backup orchestration across instances

The problem with multiple databases is not backing them up — pg_dump handles that reliably. The problem is orchestrating backups so they do not all run simultaneously and saturate disk I/O.

We stagger backups using a simple cron schedule with offsets. Instance A backs up at 02:00, Instance B at 02:15, Instance C at 02:30. Each backup pipes through gzip and uploads to object storage. Total backup window for all instances: under 45 minutes.
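The staggered schedule above can be sketched as a small per-instance script driven by offset crontab entries. Instance names, the backups directory, and the object-storage bucket below are hypothetical placeholders:

```shell
#!/bin/sh
# Staggered crontab entries (15-minute offsets serialize disk I/O):
#   0  2 * * * /usr/local/bin/backup.sh projecta 5432
#   15 2 * * * /usr/local/bin/backup.sh projectb 5433
#   30 2 * * * /usr/local/bin/backup.sh projectc 5434

# backup_name INSTANCE DATE -> deterministic archive name
backup_name() {
  echo "${1}_${2}.sql.gz"
}

# run_backup INSTANCE PORT -> dump, compress, upload
run_backup() {
  out=$(backup_name "$1" "$(date +%F)")
  # pg_dump streams through gzip, so no uncompressed copy hits disk.
  pg_dump -h 127.0.0.1 -p "$2" -U postgres postgres \
    | gzip > "/backups/$out"
  # Placeholder upload step; substitute your object-storage CLI.
  aws s3 cp "/backups/$out" "s3://example-backups/$1/"
}

# Run only when invoked with arguments (e.g. from cron).
if [ "$#" -eq 2 ]; then
  run_backup "$1" "$2"
fi
```

Because each cron entry targets a single instance, adding a fourth database is just one more line with the next 15-minute offset.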

Monitoring without a monitoring stack

A common trap: deploying Prometheus, Grafana, and AlertManager to monitor a server — thereby consuming the resources you are trying to protect. For our scale, a lightweight approach works better.

A shell script runs every five minutes via cron, checks each Postgres instance with pg_isready, measures disk usage per volume, and sends a summary to a notification endpoint if any threshold is breached. Total resource cost: negligible.
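A sketch of that health check, assuming GNU coreutils for `df`; the ports, volume path, and notification endpoint are hypothetical placeholders:

```shell
#!/bin/sh
# Cron entry: */5 * * * * /usr/local/bin/healthcheck.sh 5432 5433 5434

DISK_LIMIT=85   # percent used on the data volume that triggers an alert

# over_limit USED LIMIT -> succeeds when USED exceeds LIMIT
over_limit() {
  [ "$1" -gt "$2" ]
}

# check_instances PORT... -> prints one line per unreachable instance
check_instances() {
  for port in "$@"; do
    pg_isready -h 127.0.0.1 -p "$port" -q || echo "db:$port unreachable"
  done
}

main() {
  alerts=$(check_instances "$@")
  used=$(df --output=pcent /var/lib/docker | tail -n 1 | tr -dc '0-9')
  if over_limit "$used" "$DISK_LIMIT"; then
    alerts="$alerts
disk at ${used}% used"
  fi
  # One summary notification, sent only when a threshold is breached.
  if [ -n "$alerts" ]; then
    curl -fsS -X POST --data "$alerts" https://notify.example.invalid/hook
  fi
}

# Run only when invoked with ports (e.g. from cron), not when sourced.
if [ "$#" -gt 0 ]; then
  main "$@"
fi
```

The script holds no state and sends nothing when everything is healthy, so a quiet inbox means a healthy server.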

When to stop self-hosting

Self-hosting multiple databases makes sense when the team understands Postgres operations, the workloads are predictable, and the cost savings justify the operational overhead. The moment any of those conditions changes — rapid scaling requirements, compliance mandates for managed services, or team turnover — the calculus shifts back toward managed offerings.

The best infrastructure decisions are reversible. Docker volumes can be exported, pg_dump files can be restored anywhere, and the application layer should not care where its database lives. That portability is the real value of the containerized approach — not the cost savings.
