Infrastructure & Technology

Supabase Self-Hosting Runbook: Secure Architecture

Supabase self-hosting runbook: Hetzner setup, Docker Compose, service architecture, secrets, RLS and backups.

Mansoor Ahmed
Head of Engineering · 30 min read

Self-hosting Supabase is technically straightforward. Operating Supabase securely and reliably is considerably harder.

The reason: Supabase is not a single database. It is a complete backend platform. A self-hosted Supabase stack typically consists of several components: PostgreSQL, PostgREST API, GoTrue Auth, Realtime Server, Storage, Kong API Gateway, Supabase Studio, and optionally Edge Functions.

That means you are effectively running a backend platform, not just a database. Similar considerations apply when self-hosting language models. Once the platform runs, the Next.js app layer requires its own security configuration on top of it.

This runbook describes a minimal production setup that can be operated securely while remaining fully automatable. Every step contains a concrete implementation, a verifiable condition, and a failure scenario.

At a Glance - Part 1 of 6 in the DevOps Runbook Series

  • Two-server architecture physically separates production and audit systems
  • Docker Compose with version-pinned images (no :latest)
  • Seven services: PostgreSQL, PostgREST, GoTrue, Kong, Realtime, Storage, Studio
  • Row Level Security required on all public tables
  • Daily encrypted backups with GPG, stored externally on the audit server

Note on hosting providers: This runbook uses Hetzner Cloud as the infrastructure example because Hetzner is widely adopted in Europe, offers German data centers, and provides strong value for money. However, the architecture principles are provider-agnostic. The Hetzner-specific parts (vSwitch, Cloud Firewall API, Robot Panel) translate directly to other EU providers: OVH vRack (EU/UK), IONOS Cloud Network (EU/UK), Scaleway Private Network (EU). Where a step is Hetzner-specific, we point it out.

Series Table of Contents

This guide is part of our DevOps runbook series for modern self-hosted app stacks.

  1. Supabase Self-Hosting Runbook - this article
  2. Running Next.js on Supabase Securely
  3. Deploying Supabase Edge Functions Securely
  4. Running Trigger.dev Background Jobs Securely
  5. Claude Code as Security Control in DevOps Workflows
  6. Security Baseline for the Entire Stack

Article 1 covers infrastructure, services, and stack configuration. The following articles build on top of it.

Target Architecture

A stable setup separates at least two areas of responsibility.

Internet
   |
   |  HTTPS (443)
   |
Reverse Proxy (Caddy / Nginx / Traefik)
   |
   |  TLS terminated
   |
Kong API Gateway
   |
   +-- GoTrue          (Auth)
   +-- PostgREST       (API)
   +-- Realtime        (WebSocket)
   +-- Storage         (S3-compatible Object Store)
   |
PostgreSQL
   |
   +-- RLS Policies

Running in parallel is a second, separate system:

Audit Server
   |
   +-- Security Checks     (Lynis, Trivy, Port Scans)
   +-- Drift Detection     (Config Diffs against Baseline)
   +-- Claude Code Review  (Contextual Analysis)
   +-- Monitoring          (Metrics, Alerts)
   +-- Backup Verification (Restore Tests)

Why this separation matters: if the production system and the audit system are identical, a compromised server can simultaneously manipulate its own security checks. The audit server also hosts Claude Code for contextual security analysis, which reviews the entire stack weekly.
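On the audit-runner, these checks typically run on a schedule. A sketch of the crontab (the script paths under /opt/audit are assumptions; the scripts themselves are covered in later articles of this series):

# /etc/cron.d/audit-runner (sketch)
0 3 * * *   root  /opt/audit/security-checks.sh     # Lynis, Trivy, port scans
0 4 * * *   root  /opt/audit/drift-detection.sh     # config diffs against baseline
0 5 * * 0   root  /opt/audit/claude-code-review.sh  # weekly contextual analysis
0 6 * * *   root  /opt/audit/verify-backups.sh      # restore test of latest backup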

Supabase Service Overview

Service      Port   Function         Security-Critical Configuration
PostgreSQL   5432   Database         Internal interface only (10.0.1.10), RLS on all public tables
PostgREST    3000   REST API         Accessible only via Kong, JWT validation
GoTrue       9999   Authentication   JWT expiry max 3600s, refresh token rotation
Kong         8000   API Gateway      Single entry point, rate limiting
Realtime     4000   WebSocket        RLS-based authorization
Storage      5000   File Storage     Bucket policies, no public buckets
Studio       3000   Admin UI         Not externally accessible, SSH tunnel only

According to the Verizon Data Breach Investigations Report 2024, more than 60% of all database-related security incidents stem from misconfigurations and missing access controls.

Part A - Infrastructure Decisions

These decisions are made once and form the foundation for everything that follows.

A1 - Separate Infrastructure: Two Servers

Implementation

Run at least two servers, either physical machines or separate VMs:

supabase-prod     (Hetzner Cloud CX32 or higher)
audit-runner      (Hetzner Cloud CX22 is sufficient)

Hetzner-specific: In the Hetzner Cloud Console under “Servers”, create two separate instances in the same project and the same location (e.g. fsn1). For other providers: two VMs in the same region/zone.

supabase-prod runs the entire Supabase stack and PostgreSQL. audit-runner runs security checks, monitoring, drift detection, and Claude Code analysis.

Verifiable Condition

# Both servers must be separate hosts
ssh supabase-prod hostname
ssh audit-runner hostname

# Expected: different hostnames and IPs

Failure Scenario

If audit and production run on the same server and an attacker gains root access, they can delete logs, manipulate security check results, and suppress alerts. The compromise goes undetected.

A2 - Set Up a Private Network

Implementation

Both servers communicate internally over a private network. Supabase services are only reachable through these internal IPs.

Hetzner-specific: Under “Networks”, create a vSwitch or Cloud Network with subnet 10.0.1.0/24. Assign both servers to the network. Hetzner automatically creates an interface (typically ens10). At OVH: vRack. At IONOS: Cloud Network. At Scaleway: Private Networks.

# Configure the private interface on both servers
# /etc/network/interfaces.d/60-private.cfg (Hetzner-specific)

auto ens10
iface ens10 inet static
  address 10.0.1.10/24    # supabase-prod
  # address 10.0.1.11/24  # audit-runner
# Activate the network
systemctl restart networking

# Verify
ip addr show ens10
ping 10.0.1.11   # from the prod server

Verifiable Condition

# From audit-runner: is the internal interface active?
ssh audit-runner "ip addr show ens10 | grep 10.0.1.11"

# PostgreSQL must only listen on the internal interface
ssh supabase-prod "ss -tlnp | grep 5432"

# Expected: 10.0.1.10:5432, NOT 0.0.0.0:5432

Failure Scenario

If PostgreSQL listens on 0.0.0.0:5432 and the firewall has a misconfiguration, the database is directly reachable from the internet. With the service_role key or a weak Postgres password, the entire database is compromised.

A3 - Reverse Proxy with TLS

Implementation

A reverse proxy sits in front of Kong. It terminates TLS and is the only service reachable from the outside. Caddy is used here as an example because it includes automatic Let’s Encrypt certificate management.

# Caddyfile

app.example.com {
    reverse_proxy localhost:8000 {
        header_up X-Forwarded-Proto {scheme}
        header_up X-Real-IP {remote_host}
    }

    header {
        X-Frame-Options "DENY"
        X-Content-Type-Options "nosniff"
        Referrer-Policy "strict-origin-when-cross-origin"
        Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"
        -Server
    }
}

Alternatively with Nginx:

server {
    listen 443 ssl http2;
    server_name app.example.com;

    ssl_certificate /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket support for Realtime
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

Verifiable Condition

# TLS active and correctly configured?
curl -I https://app.example.com

# Expected: HTTP/2 200, Strict-Transport-Security header present

# Check TLS version
openssl s_client -connect app.example.com:443 -tls1_2 </dev/null 2>&1 | grep "Protocol"

# Certificate expiry date
echo | openssl s_client -connect app.example.com:443 2>/dev/null | \
  openssl x509 -noout -enddate

# Expected: notAfter at least 14 days in the future

# Security headers present?
curl -sI https://app.example.com | grep -E \
  "X-Frame-Options|Strict-Transport-Security|X-Content-Type-Options"

Failure Scenario

Without TLS, auth tokens travel in plain text across the network. Anyone in the same network segment can intercept them (man-in-the-middle). Without automatic certificate renewal, the certificate expires after 90 days and the app becomes unreachable.
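The expiry check above can be automated on the audit-runner so renewal failures surface before the certificate lapses. A minimal sketch (the days_left helper is hypothetical; GNU date -d is assumed):

```shell
# days_left: days from now until a certificate's notAfter date.
# Input is the "notAfter=<date>" line printed by `openssl x509 -noout -enddate`.
days_left() {
  local end now
  end=$(date -d "${1#notAfter=}" +%s)   # GNU date parses the openssl date format
  now=$(date +%s)
  echo $(( (end - now) / 86400 ))
}

# Cron usage on the audit-runner (sketch):
# enddate=$(echo | openssl s_client -connect app.example.com:443 2>/dev/null \
#   | openssl x509 -noout -enddate)
# [ "$(days_left "$enddate")" -lt 14 ] && echo "ALERT: certificate renewal overdue"
```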

A4 - Set Up Dual-Layer Firewalls

Implementation

Two layers that back each other up:

Layer 1: Cloud Firewall (in front of the server)

Hetzner-specific: Under “Firewalls”, create a new firewall and assign it to both servers. At OVH: Security Groups. At IONOS: Cloud Firewalls. At Scaleway: Security Groups.

# Hetzner Cloud Firewall rules for supabase-prod

Inbound:
  TCP 443    from 0.0.0.0/0        (HTTPS)
  TCP 22     from ADMIN_IP/32       (SSH from admin only)
  TCP ALL    from 10.0.1.0/24       (internal network)

Outbound:
  ALL        to 0.0.0.0/0        (Updates, DNS, Let's Encrypt)

Layer 2: Host Firewall (on the server)

# /etc/iptables/rules.v4

*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]

# Loopback
-A INPUT -i lo -j ACCEPT

# Established Connections
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# SSH from admin IP only
-A INPUT -p tcp --dport 22 -s ADMIN_IP -j ACCEPT

# HTTPS
-A INPUT -p tcp --dport 443 -j ACCEPT

# Internal network (all ports)
-A INPUT -s 10.0.1.0/24 -j ACCEPT

# Drop everything else (default policy)
COMMIT
# Activate the firewall
apt install iptables-persistent
iptables-restore < /etc/iptables/rules.v4

# Save baseline for drift detection
iptables-save > /root/firewall-baseline.txt

Verifiable Condition

# From outside: only port 443 open
nmap -p 22,80,443,5432,8000,9000 app.example.com

# Expected: only 443 open (22 only from admin IP)

# On the server: rules active?
iptables -L -n | grep -c "DROP"

# Expected: at least 1 (default DROP policy)

# Drift detection: has the firewall changed?
iptables-save | diff /root/firewall-baseline.txt -

# Expected: no differences

Failure Scenario

A single firewall layer can be misconfigured. With only a host firewall, one iptables -F (flush) removes every rule; depending on the default policy, that either exposes all ports to the internet or locks out all traffic. Conversely, a cloud firewall does not protect against processes on the server itself opening new ports if no host firewall is active.
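The baseline diff shown above can run from cron on both servers. A sketch of a reusable check (check_drift is a hypothetical helper; wire its alert line into whatever monitoring the audit-runner uses):

```shell
# check_drift: compare a saved baseline against a current dump.
# Prints the diff plus an alert line and returns non-zero on any difference.
check_drift() {  # $1 = baseline file, $2 = current dump
  if ! diff -u "$1" "$2"; then
    echo "ALERT: firewall rules drifted from baseline on $(hostname)"
    return 1
  fi
}

# Cron usage (sketch):
# iptables-save > /tmp/fw.now && check_drift /root/firewall-baseline.txt /tmp/fw.now
```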

A5 - Harden SSH Access

Implementation

# /etc/ssh/sshd_config

PasswordAuthentication no
PermitRootLogin no
PubkeyAuthentication yes
MaxAuthTries 3
LoginGraceTime 30
AllowUsers deploy
# Deploy the SSH key on the server
mkdir -p /home/deploy/.ssh
echo "ssh-ed25519 AAAA... admin@workstation" > /home/deploy/.ssh/authorized_keys
chmod 700 /home/deploy/.ssh
chmod 600 /home/deploy/.ssh/authorized_keys
chown -R deploy:deploy /home/deploy/.ssh

# Reload SSHD
systemctl reload sshd

Verifiable Condition

# Password login must fail
ssh -o PasswordAuthentication=yes -o PubkeyAuthentication=no deploy@app.example.com

# Expected: Permission denied

# Root login must fail
ssh root@app.example.com

# Expected: Permission denied

# Verify configuration
sshd -T | grep -E "passwordauthentication|permitrootlogin"

# Expected: passwordauthentication no, permitrootlogin no

Failure Scenario

Password-based SSH access is under constant attack from the internet (brute force). A weak password is typically cracked within hours. Root login means an attacker immediately gains full control over the server.

Part B - Setting Up and Securing Supabase Services

A self-hosted Supabase stack consists of more than ten services. Each has its own environment variables, its own database roles, and its own security requirements. Supabase's official docker-compose.yml runs to over 400 lines and contains many settings whose security relevance is not obvious.

This section explains each service, its role, its security-relevant configuration, and the typical mistakes made during setup.

Service Overview

Internet
   |
   |  HTTPS (443)
   |
Reverse Proxy (Caddy/Nginx)     <- Part A3
   |
Kong (API Gateway)               <- routes to all services
   |
   +-- GoTrue (Auth)              <- /auth/v1/*
   +-- PostgREST (REST API)       <- /rest/v1/*
   +-- Realtime                   <- /realtime/v1/*
   +-- Storage API                <- /storage/v1/*
   +-- Edge Functions             <- /functions/v1/*
   +-- Studio (Dashboard)         <- / (protected with Basic Auth)
   |
   +-- Meta (Postgres-Meta)       <- internal, for Studio
   +-- ImgProxy                   <- internal, for Storage
   +-- Analytics (Logflare)       <- internal, for Logs
   +-- Vector                     <- internal, Log Pipeline
   +-- Supavisor (Pooler)         <- Connection Pooling
   |
PostgreSQL                        <- Database with Init Scripts
   |
   +-- Roles: anon, authenticated, service_role,
       authenticator, supabase_admin, supabase_auth_admin,
       supabase_storage_admin

B0 - Generate Secrets (Before First Start)

This must happen BEFORE starting any services. The official .env.example contains placeholders. These must be replaced with generated values.

# JWT Secret (shared by GoTrue, PostgREST, Realtime, Kong)
JWT_SECRET=$(openssl rand -base64 32)

# Postgres Password
POSTGRES_PASSWORD=$(openssl rand -base64 24)

# Dashboard Password (Basic Auth via Kong)
DASHBOARD_PASSWORD=$(openssl rand -base64 16)

# Logflare Tokens
LOGFLARE_PUBLIC_ACCESS_TOKEN=$(openssl rand -hex 16)
LOGFLARE_PRIVATE_ACCESS_TOKEN=$(openssl rand -hex 16)

# MinIO Root Password (if using S3 Storage)
MINIO_ROOT_PASSWORD=$(openssl rand -hex 16)

# Realtime Secret Key Base (min. 64 characters)
SECRET_KEY_BASE=$(openssl rand -base64 48)

# Supavisor Vault Encryption Key
VAULT_ENC_KEY=$(openssl rand -base64 24)

# PG Meta Crypto Key
PG_META_CRYPTO_KEY=$(openssl rand -base64 24)

# Anon Key and Service Role Key (JWTs, generated with the JWT_SECRET)
# Use https://supabase.com/docs/guides/self-hosting/docker#generate-api-keys
# or generate them manually with jwt.io using the JWT_SECRET

Security rule: All secrets go into the .env file on the server (permissions 600, not in Git). Supabase uses a single JWT_SECRET for all services. If this secret is compromised, GoTrue, PostgREST, and Realtime are all affected simultaneously.
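The anon and service_role keys can also be generated on the server itself instead of via jwt.io. A minimal sketch using only openssl (the claim set mirrors what the official generator produces; verify the result against the generator linked above before use):

```shell
# b64url: base64url-encode stdin without padding (JWT segment encoding)
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }

# make_key: build an HS256 JWT for the given role, signed with JWT_SECRET
make_key() {  # $1 = role (anon | service_role)
  local iat exp header payload signature
  iat=$(date +%s)
  exp=$((iat + 5 * 365 * 24 * 3600))   # ~5 years, like the official generator
  header=$(printf '{"alg":"HS256","typ":"JWT"}' | b64url)
  payload=$(printf '{"role":"%s","iss":"supabase","iat":%s,"exp":%s}' "$1" "$iat" "$exp" | b64url)
  signature=$(printf '%s.%s' "$header" "$payload" \
    | openssl dgst -binary -sha256 -hmac "${JWT_SECRET:-change-me}" | b64url)
  printf '%s.%s.%s\n' "$header" "$payload" "$signature"
}

# ANON_KEY=$(make_key anon)
# SERVICE_ROLE_KEY=$(make_key service_role)
```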

Verifiable Condition

# All required secrets set?
for var in JWT_SECRET POSTGRES_PASSWORD ANON_KEY SERVICE_ROLE_KEY \
  DASHBOARD_PASSWORD LOGFLARE_PUBLIC_ACCESS_TOKEN SECRET_KEY_BASE \
  VAULT_ENC_KEY PG_META_CRYPTO_KEY; do
  if grep -q "^${var}=$\|^${var}=your-\|^${var}=change" .env 2>/dev/null; then
    echo "CRITICAL: $var is not set or has default value"
  fi
done

# No identical passwords?
PASSWORDS=$(grep -E "PASSWORD|SECRET|KEY" .env | cut -d= -f2 | sort)
UNIQUE=$(echo "$PASSWORDS" | sort -u)
if [ "$(echo "$PASSWORDS" | wc -l)" != "$(echo "$UNIQUE" | wc -l)" ]; then
  echo "WARNING: Some secrets have identical values"
fi

Failure Scenario

Default secrets from .env.example are publicly known. An attacker can generate valid tokens using the default JWT_SECRET and gains full API access. With the default SERVICE_ROLE_KEY, they additionally bypass all RLS policies.

B1 - PostgreSQL: Database and Roles

What This Service Does

PostgreSQL is the central database. In the Supabase context it has a special role: it stores not only application data but also auth data (GoTrue), Realtime configuration, Storage metadata, and Analytics logs. The init scripts create specialized schemas and roles.

Security-Relevant Configuration

db:
  image: supabase/postgres:15.8.1.085    # Supabase's own image, pin the version
  restart: unless-stopped
  ports:
    - "10.0.1.10:5432:5432"              # internal interface ONLY
  volumes:
    # Init scripts (create schemas, roles, extensions)
    - ./volumes/db/realtime.sql:/docker-entrypoint-initdb.d/migrations/99-realtime.sql:Z
    - ./volumes/db/webhooks.sql:/docker-entrypoint-initdb.d/init-scripts/98-webhooks.sql:Z
    - ./volumes/db/roles.sql:/docker-entrypoint-initdb.d/init-scripts/99-roles.sql:Z
    - ./volumes/db/jwt.sql:/docker-entrypoint-initdb.d/init-scripts/99-jwt.sql:Z
    - ./volumes/db/_supabase.sql:/docker-entrypoint-initdb.d/migrations/97-_supabase.sql:Z
    - ./volumes/db/logs.sql:/docker-entrypoint-initdb.d/migrations/99-logs.sql:Z
    - ./volumes/db/pooler.sql:/docker-entrypoint-initdb.d/migrations/99-pooler.sql:Z
    # Persistent data
    - ./volumes/db/data:/var/lib/postgresql/data:Z
    - db-config:/etc/postgresql-custom
  healthcheck:
    test: ["CMD", "pg_isready", "-U", "postgres", "-h", "localhost"]
    interval: 5s
    timeout: 5s
    retries: 10
  environment:
    POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    POSTGRES_DB: ${POSTGRES_DB}
    JWT_SECRET: ${JWT_SECRET}
    JWT_EXP: ${JWT_EXPIRY}

The init scripts create the following roles:

postgres              -> Superuser (admin only, never for services)
anon                  -> Unauthenticated requests (via PostgREST)
authenticated         -> Authenticated requests (via PostgREST)
service_role          -> Bypasses RLS (for admin operations)
authenticator         -> PostgREST uses this role to connect
supabase_admin        -> Internal admin role (Realtime, Analytics)
supabase_auth_admin   -> GoTrue-specific (auth schema)
supabase_storage_admin -> Storage-specific (storage schema)

What Can Go Wrong

The roles.sql defines the grants for each role. If this file is modified (e.g. to quickly fix an issue), roles can end up with excessive permissions. The anon role must only have the permissions that RLS policies explicitly allow. All these role and RLS rules are consolidated into the machine-readable security baseline described in Article 6.

Verifiable Condition

# Check roles and their permissions
docker compose exec -T db psql -U postgres -c \
  "SELECT rolname, rolsuper, rolcreaterole, rolcreatedb, rolcanlogin
   FROM pg_roles
   WHERE rolname IN ('anon','authenticated','service_role','authenticator',
     'supabase_admin','supabase_auth_admin','supabase_storage_admin')
   ORDER BY rolname;"

# anon must NOT be superuser
docker compose exec -T db psql -U postgres -c \
  "SELECT rolname, rolsuper FROM pg_roles WHERE rolname = 'anon' AND rolsuper = true;"
# Expected: no rows

# Postgres listens ONLY on internal interface
ss -tlnp | grep 5432
# Expected: 10.0.1.10:5432, NOT 0.0.0.0:5432
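"RLS on all public tables" means every table in the public schema has row level security enabled plus explicit policies. A minimal sketch for a hypothetical profiles table (table and column names are placeholder assumptions; auth.uid() is Supabase's helper for the current user's ID):

-- Enable RLS; with no policies defined, this denies all access (except service_role)
ALTER TABLE public.profiles ENABLE ROW LEVEL SECURITY;

-- Authenticated users may read and update only their own row
CREATE POLICY "profiles_select_own" ON public.profiles
  FOR SELECT TO authenticated
  USING (id = auth.uid());

CREATE POLICY "profiles_update_own" ON public.profiles
  FOR UPDATE TO authenticated
  USING (id = auth.uid())
  WITH CHECK (id = auth.uid());

-- anon gets no policy at all -> no access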

B2 - Kong: API Gateway and Routing

What This Service Does

Kong is the central entry point for all API requests. It routes based on the URL path to the correct services, handles JWT validation, and protects the Studio dashboard with Basic Auth.

Security-Relevant Configuration

kong:
  image: kong:2.8.1                       # pin the version
  restart: unless-stopped
  ports:
    - "127.0.0.1:8000:8000"              # localhost ONLY (reverse proxy in front)
    - "127.0.0.1:8443:8443"              # HTTPS internal
  volumes:
    - ./volumes/api/kong.yml:/home/kong/temp.yml:ro,z
  environment:
    KONG_DATABASE: "off"                   # declarative config, no DB
    KONG_DECLARATIVE_CONFIG: /home/kong/kong.yml
    KONG_DNS_ORDER: LAST,A,CNAME
    KONG_PLUGINS: request-transformer,cors,key-auth,acl,basic-auth,request-termination,ip-restriction
    SUPABASE_ANON_KEY: ${ANON_KEY}
    SUPABASE_SERVICE_KEY: ${SERVICE_ROLE_KEY}
    DASHBOARD_USERNAME: ${DASHBOARD_USERNAME}
    DASHBOARD_PASSWORD: ${DASHBOARD_PASSWORD}
  entrypoint: bash -c 'eval "echo \"$$(cat ~/temp.yml)\"" > ~/kong.yml && /docker-entrypoint.sh kong docker-start'

The kong.yml defines the routing:

/rest/v1/*        -> PostgREST (port 3000)         + JWT Validation
/auth/v1/*        -> GoTrue (port 9999)            + JWT Validation
/realtime/v1/*    -> Realtime (port 4000)          + JWT Validation
/storage/v1/*     -> Storage (port 5000)           + JWT Validation
/functions/v1/*   -> Edge Functions (port 54321)   + JWT Validation (optional)
/                 -> Studio (port 3000)            + Basic Auth
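The shape of one such route in kong.yml looks roughly like this (a sketch simplified from the official template; the consumer/ACL setup maps the anon and service_role API keys to groups):

# volumes/api/kong.yml (excerpt, sketch)
services:
  - name: rest-v1
    url: http://rest:3000/
    routes:
      - name: rest-v1-all
        strip_path: true
        paths:
          - /rest/v1/
    plugins:
      - name: cors
      - name: key-auth          # request must carry a valid apikey
        config:
          hide_credentials: true
      - name: acl
        config:
          hide_groups_header: true
          allow:
            - anon              # anon consumer group allowed; RLS does the rest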

What Can Go Wrong

Setting Kong to 0.0.0.0:8000 instead of 127.0.0.1:8000 means the API is directly reachable without the reverse proxy (without TLS). A weak dashboard password grants access to Studio via Basic Auth and thereby to the entire database. The kong.yml can be configured to disable JWT validation for certain routes.

Verifiable Condition

# Kong on localhost only?
ss -tlnp | grep 8000
# Expected: 127.0.0.1:8000

# Dashboard password strong enough?
DASH_PW_LEN=$(grep "DASHBOARD_PASSWORD" .env | cut -d= -f2 | tr -d '\n' | wc -c)
[ "$DASH_PW_LEN" -lt 16 ] && echo "WARNING: Dashboard password too short"

# kong.yml: JWT Validation active on all API routes?
grep -A5 "key-auth" volumes/api/kong.yml | head -20

B3 - GoTrue: Authentication

What This Service Does

GoTrue handles user registration, login, magic links, OAuth, MFA, and token issuance. It is the only service that issues JWTs and has its own DB user (supabase_auth_admin) with access to the auth schema.

Security-Relevant Configuration

auth:
  image: supabase/gotrue:v2.184.0       # pin the version
  restart: unless-stopped
  environment:
    GOTRUE_API_HOST: 0.0.0.0
    GOTRUE_API_PORT: 9999
    API_EXTERNAL_URL: ${API_EXTERNAL_URL}    # https://app.example.com

    # Database (dedicated admin user)
    GOTRUE_DB_DATABASE_URL: postgres://supabase_auth_admin:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}

    # JWT
    GOTRUE_JWT_SECRET: ${JWT_SECRET}
    GOTRUE_JWT_EXP: ${JWT_EXPIRY}            # MAXIMUM 3600 (1 hour)
    GOTRUE_JWT_AUD: authenticated
    GOTRUE_JWT_ADMIN_ROLES: service_role
    GOTRUE_JWT_DEFAULT_GROUP_NAME: authenticated

    # Registration
    GOTRUE_DISABLE_SIGNUP: ${DISABLE_SIGNUP}  # true for closed apps
    GOTRUE_EXTERNAL_EMAIL_ENABLED: ${ENABLE_EMAIL_SIGNUP}
    GOTRUE_MAILER_AUTOCONFIRM: false          # ALWAYS false in production

    # SMTP (for magic links, email confirmation)
    GOTRUE_SMTP_HOST: ${SMTP_HOST}
    GOTRUE_SMTP_PORT: ${SMTP_PORT}
    GOTRUE_SMTP_USER: ${SMTP_USER}
    GOTRUE_SMTP_PASS: ${SMTP_PASS}
    GOTRUE_SMTP_ADMIN_EMAIL: ${SMTP_ADMIN_EMAIL}
    GOTRUE_SMTP_SENDER_NAME: ${SMTP_SENDER_NAME}

    # Mailer URL paths (must match Kong routing)
    GOTRUE_MAILER_URLPATHS_CONFIRMATION: "/auth/v1/verify"
    GOTRUE_MAILER_URLPATHS_INVITE: "/auth/v1/verify"
    GOTRUE_MAILER_URLPATHS_RECOVERY: "/auth/v1/verify"
    GOTRUE_MAILER_URLPATHS_EMAIL_CHANGE: "/auth/v1/verify"

    # Site URL (redirect after auth)
    GOTRUE_SITE_URL: ${SITE_URL}              # URL of your Next.js app
    GOTRUE_URI_ALLOW_LIST: ${ADDITIONAL_REDIRECT_URLS}

    # Refresh Token Rotation (prevents token reuse)
    GOTRUE_SECURITY_REFRESH_TOKEN_ROTATION_ENABLED: true
    GOTRUE_SECURITY_REFRESH_TOKEN_REUSE_INTERVAL: 10

SMTP Configuration: Why It Is Security-Relevant

Without a working SMTP server, no confirmation emails can be sent. If GOTRUE_MAILER_AUTOCONFIRM: true is set to work around this, anyone can register with any email address. This means no identity verification whatsoever.

Verifiable Condition

# GoTrue Health Check
docker compose exec -T auth wget --no-verbose --tries=1 --spider http://localhost:9999/health

# JWT Expiry not above 3600?
grep "JWT_EXPIRY\|JWT_EXP" .env | head -1
# Expected: 3600 or less

# Autoconfirm disabled?
grep "AUTOCONFIRM" .env
# Expected: false

# SMTP configured (not empty)?
for var in SMTP_HOST SMTP_PORT SMTP_USER SMTP_PASS; do
  VAL=$(grep "^${var}=" .env | cut -d= -f2)
  [ -z "$VAL" ] && echo "WARNING: $var is empty"
done

# Refresh Token Rotation active?
grep "REFRESH_TOKEN_ROTATION" .env docker-compose.yml 2>/dev/null
# Expected: true

Failure Scenario

With AUTOCONFIRM=true and DISABLE_SIGNUP=false, anyone can create an account and use it immediately without confirming their email address. An attacker can register with arbitrary addresses and instantly gains the authenticated role in the database. Without Refresh Token Rotation, a stolen refresh token can be used indefinitely.

B4 - PostgREST: REST API

What This Service Does

PostgREST automatically generates a REST API from the PostgreSQL schema. It is the primary data access point. Security lies primarily in the PostgreSQL roles and RLS policies, not in PostgREST itself.

Security-Relevant Configuration

rest:
  image: postgrest/postgrest:v14.1       # pin the version
  restart: unless-stopped
  environment:
    PGRST_DB_URI: postgres://authenticator:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
    PGRST_DB_SCHEMAS: ${PGRST_DB_SCHEMAS}   # public,storage,graphql_public
    PGRST_DB_ANON_ROLE: anon
    PGRST_JWT_SECRET: ${JWT_SECRET}
    PGRST_DB_USE_LEGACY_GUCS: "false"
    PGRST_APP_SETTINGS_JWT_SECRET: ${JWT_SECRET}
    PGRST_APP_SETTINGS_JWT_EXP: ${JWT_EXPIRY}

How PostgREST works with roles:

Request without JWT            -> PostgREST uses role "anon"
Request with JWT               -> PostgREST switches to the role from the JWT (e.g. "authenticated")
Request with service_role JWT  -> PostgREST uses "service_role" (bypasses RLS)

The authenticator user connects to the database and switches via SET ROLE to the respective role. This means the grants on the anon and authenticated roles are the actual security layer.
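This role switch can be reproduced manually in psql, which is also a convenient way to test policies (profiles is a placeholder table name):

-- What PostgREST effectively does per request, connected as authenticator:
BEGIN;
SET LOCAL ROLE anon;              -- or the role claimed in the JWT
SELECT current_user;              -- returns: anon
SELECT * FROM public.profiles;    -- filtered by anon's RLS policies
ROLLBACK;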

Verifiable Condition

# PostgREST uses authenticator role (not postgres)?
grep "PGRST_DB_URI" docker-compose.yml | grep "authenticator"
# Expected: Yes

# Schemas explicitly defined (not all)?
grep "PGRST_DB_SCHEMAS" .env
# Expected: public,storage,graphql_public (not empty = all schemas)

Failure Scenario

If PGRST_DB_URI uses the postgres superuser instead of authenticator, every request has superuser privileges and RLS is ineffective. If PGRST_DB_SCHEMAS is empty, PostgREST exposes all schemas including internal Supabase schemas (auth, _realtime, _analytics).

B5 - Realtime: WebSocket Subscriptions

What This Service Does

Realtime enables WebSocket-based subscriptions to database changes. It uses the supabase_admin user and the _realtime schema.

Security-Relevant Configuration

realtime:
  container_name: realtime-dev.supabase-realtime   # name is relevant for tenant ID
  image: supabase/realtime:v2.68.0
  restart: unless-stopped
  environment:
    PORT: 4000
    DB_HOST: ${POSTGRES_HOST}
    DB_PORT: ${POSTGRES_PORT}
    DB_USER: supabase_admin
    DB_PASSWORD: ${POSTGRES_PASSWORD}
    DB_NAME: ${POSTGRES_DB}
    DB_AFTER_CONNECT_QUERY: 'SET search_path TO _realtime'
    DB_ENC_KEY: supabaserealtime          # CHANGE in production
    API_JWT_SECRET: ${JWT_SECRET}
    SECRET_KEY_BASE: ${SECRET_KEY_BASE}   # min. 64 characters
    SEED_SELF_HOST: "true"
    RUN_JANITOR: "true"

What Can Go Wrong

DB_ENC_KEY: supabaserealtime is a default value. It must be changed in production. SECRET_KEY_BASE must be at least 64 characters long, otherwise the service will not start or is insecure. Realtime uses supabase_admin, meaning it has full DB access internally. Security comes from JWT validation: only authenticated users can open subscriptions, and RLS determines which rows they can see.

Verifiable Condition

# DB_ENC_KEY not set to default?
grep "DB_ENC_KEY" docker-compose.yml
# NOT "supabaserealtime"

# SECRET_KEY_BASE long enough?
SKB_LEN=$(grep "SECRET_KEY_BASE" .env | cut -d= -f2 | tr -d '\n' | wc -c)
[ "$SKB_LEN" -lt 64 ] && echo "CRITICAL: SECRET_KEY_BASE too short ($SKB_LEN characters)"

# Realtime Health Check
docker compose exec -T realtime-dev.supabase-realtime \
  curl -sSf -H "Authorization: Bearer ${ANON_KEY}" \
  http://localhost:4000/api/tenants/realtime-dev/health

B6 - Storage API and MinIO (S3)

What This Service Does

Storage manages file uploads and downloads. By default it stores files locally in a volume. For production, an S3-compatible backend is recommended (self-hosted MinIO or an S3 service outside the reach of the US CLOUD Act).

Local Storage (Default)

storage:
  image: supabase/storage-api:v1.33.0
  restart: unless-stopped
  volumes:
    - ./volumes/storage:/var/lib/storage:z
  environment:
    ANON_KEY: ${ANON_KEY}
    SERVICE_KEY: ${SERVICE_ROLE_KEY}
    POSTGREST_URL: http://rest:3000
    PGRST_JWT_SECRET: ${JWT_SECRET}
    DATABASE_URL: postgres://supabase_storage_admin:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
    STORAGE_BACKEND: file
    FILE_STORAGE_BACKEND_PATH: /var/lib/storage
    FILE_SIZE_LIMIT: 52428800              # 50MB, adjust as needed
    ENABLE_IMAGE_TRANSFORMATION: "true"
    IMGPROXY_URL: http://imgproxy:5001

MinIO for S3 Backend (Production)

If you use MinIO, an additional service is required:

# docker-compose.s3.yml (in addition to the main compose file)
minio:
  image: minio/minio:latest              # pin to a specific RELEASE tag in production, no :latest
  restart: unless-stopped
  ports:
    - "127.0.0.1:9000:9000"              # API, localhost ONLY
    - "127.0.0.1:9001:9001"              # Console, localhost ONLY
  volumes:
    - minio-data:/data
  environment:
    MINIO_ROOT_USER: ${MINIO_ROOT_USER}
    MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}   # min. 8 characters
  command: server /data --console-address ":9001"

Then in the Storage configuration:

storage:
  environment:
    STORAGE_BACKEND: s3
    GLOBAL_S3_BUCKET: supabase-storage
    GLOBAL_S3_ENDPOINT: http://minio:9000
    GLOBAL_S3_FORCE_PATH_STYLE: "true"
    AWS_ACCESS_KEY_ID: ${MINIO_ACCESS_KEY}
    AWS_SECRET_ACCESS_KEY: ${MINIO_SECRET_KEY}
    REGION: eu-central-1

Verifiable Condition

# Storage Health Check
docker compose exec -T storage wget --no-verbose --tries=1 --spider http://localhost:5000/status

# MinIO not reachable from outside (if in use)?
ss -tlnp | grep 9000
# Expected: 127.0.0.1:9000 (not 0.0.0.0)

# MinIO default credentials?
grep "MINIO_ROOT" .env | grep -iE "minioadmin|minio123|admin"
# Expected: no matches

# Storage volume exists and has data?
ls -la volumes/storage/ 2>/dev/null | head -5

# Storage Bucket Policies (via MinIO Client)
# mc alias set local http://localhost:9000 $MINIO_ROOT_USER $MINIO_ROOT_PASSWORD
# mc admin policy ls local

Failure Scenario

MinIO with default credentials (minioadmin:minioadmin) on 0.0.0.0:9000 means anyone on the internet can read and write all files. The MinIO Console on port 9001 additionally provides a web UI with full administrative access. Storage bucket policies can be set to “public”, meaning files are accessible without authentication.

B7 - Analytics, Vector and Supavisor (Internal Services)

What These Services Do

These services are not directly reachable from outside, but they are security-relevant because they have privileged database access.

Analytics (Logflare): Collects and stores logs from all services. Uses supabase_admin and the _analytics schema.

Vector: Log pipeline that forwards Docker logs to Logflare. Has access to the Docker socket.

Supavisor: Connection pooler for PostgreSQL. Manages the connection pool and has supabase_admin access.

Security-Relevant Configuration

analytics:
  image: supabase/logflare:1.27.0
  ports:
    - "127.0.0.1:4000:4000"             # localhost ONLY
  environment:
    DB_USERNAME: supabase_admin
    DB_PASSWORD: ${POSTGRES_PASSWORD}
    LOGFLARE_PUBLIC_ACCESS_TOKEN: ${LOGFLARE_PUBLIC_ACCESS_TOKEN}
    LOGFLARE_PRIVATE_ACCESS_TOKEN: ${LOGFLARE_PRIVATE_ACCESS_TOKEN}

vector:
  image: timberio/vector:0.28.1-alpine
  volumes:
    - ${DOCKER_SOCKET_LOCATION}:/var/run/docker.sock:ro,z   # Read-Only!
  security_opt:
    - "label=disable"

supavisor:
  image: supabase/supavisor:2.7.4
  ports:
    - "10.0.1.10:5432:5432"             # pooler publishes the Postgres port; remove the 5432 mapping from the db service (B1) to avoid a bind conflict
    - "127.0.0.1:6543:6543"             # Transaction Mode
  environment:
    DATABASE_URL: ecto://supabase_admin:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/_supabase
    SECRET_KEY_BASE: ${SECRET_KEY_BASE}
    VAULT_ENC_KEY: ${VAULT_ENC_KEY}
    POOLER_DEFAULT_POOL_SIZE: ${POOLER_DEFAULT_POOL_SIZE}
    POOLER_MAX_CLIENT_CONN: ${POOLER_MAX_CLIENT_CONN}

What Can Go Wrong

Vector with Docker socket access (/var/run/docker.sock) can read container logs. The socket should be mounted read-only (:ro). Analytics on port 4000 with default tokens exposes logs from all services. Supavisor on 0.0.0.0:5432 instead of the internal interface makes the connection pooler (and thereby PostgreSQL) reachable from outside.

Verifiable Condition

# Analytics on internal only?
ss -tlnp | grep 4000
# Expected: 127.0.0.1:4000

# Vector Docker Socket Read-Only?
grep "docker.sock" docker-compose.yml | grep ":ro"
# Expected: :ro present

# Supavisor port on internal only?
ss -tlnp | grep 5432
# Expected: 10.0.1.10:5432 (internal interface)

# Logflare Tokens not on default?
grep "LOGFLARE.*TOKEN" .env | grep -iE "your-|change|example"
# Expected: no matches

B8 - Studio: Securing the Dashboard

What This Service Does

Studio is the web UI for database management, user administration, and SQL queries. Through the SERVICE_ROLE_KEY and direct Postgres access, it has full access to everything.

Security-Relevant Configuration

Studio is protected by Kong via Basic Auth (DASHBOARD_USERNAME/DASHBOARD_PASSWORD). Additionally, Studio should either not run in production at all or only be reachable via SSH tunnel.

Option 1: Do not run Studio in production (recommended)

# In docker-compose.override.yml or remove Studio from the compose file
# Use Studio only locally with supabase start for development

Option 2: Studio only via SSH tunnel

# From your workstation:
ssh -L 3000:localhost:3000 deploy@supabase-prod

# Then in the browser: http://localhost:3000
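For recurring access, the tunnel can live in your SSH client config instead of being retyped. A sketch (the host alias and IP are placeholders):

```
# ~/.ssh/config on your workstation
Host supabase-studio
    HostName 203.0.113.10            # prod public IP (placeholder)
    User deploy
    LocalForward 3000 localhost:3000
```

Then ssh -N supabase-studio opens the tunnel without a remote shell, and Studio is reachable at http://localhost:3000.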

Verifiable Condition

# Studio container running? (Should not be running in production)
docker compose ps | grep studio
# Recommendation: not running in production

# If Studio is running: not reachable from outside?
curl -s -o /dev/null -w "%{http_code}" --connect-timeout 3 \
  "https://app.example.com:3000" 2>/dev/null
# Expected: Timeout or Connection Refused

# Dashboard password strong?
DASH_LEN=$(grep "DASHBOARD_PASSWORD" .env | cut -d= -f2 | tr -d '\n' | wc -c)
[ "$DASH_LEN" -lt 16 ] && echo "WARNING: Dashboard password too short"
# (tr -d '\n' so the trailing newline does not inflate the count by one)

Failure Scenario

Studio with a weak dashboard password and public access gives an attacker a full database admin UI. They can execute SQL queries, delete users, disable RLS, and export data. This is the worst case for a self-hosted stack.

Service Checklist

After setting up all services, verify these items before the first production deployment:

Secrets
  [ ] All secrets generated (no default values)
  [ ] JWT_SECRET at least 32 characters
  [ ] POSTGRES_PASSWORD at least 24 characters
  [ ] DASHBOARD_PASSWORD at least 16 characters
  [ ] SECRET_KEY_BASE at least 64 characters
  [ ] MINIO_ROOT_PASSWORD at least 8 characters (if using MinIO)
  [ ] DB_ENC_KEY changed (not "supabaserealtime")

PostgreSQL
  [ ] Port on internal interface only (10.0.1.10:5432)
  [ ] Init scripts unmodified (roles.sql, jwt.sql etc.)
  [ ] Roles correctly created (anon not superuser)

Kong
  [ ] Port on localhost only (127.0.0.1:8000)
  [ ] JWT validation active on all API routes
  [ ] Dashboard password strong

GoTrue (Auth)
  [ ] JWT_EXP maximum 3600
  [ ] AUTOCONFIRM false
  [ ] SMTP configured and tested
  [ ] SITE_URL and API_EXTERNAL_URL correct
  [ ] Refresh Token Rotation active
  [ ] DISABLE_SIGNUP set for closed apps

PostgREST
  [ ] Uses authenticator role (not postgres)
  [ ] DB_SCHEMAS explicitly defined

Realtime
  [ ] DB_ENC_KEY changed
  [ ] SECRET_KEY_BASE at least 64 characters

Storage / MinIO
  [ ] MinIO not reachable from outside
  [ ] MinIO default credentials changed
  [ ] Storage volume permissions correct

Internal Services
  [ ] Analytics port on localhost only
  [ ] Vector Docker Socket read-only
  [ ] Supavisor port on internal only
  [ ] Logflare tokens not on default

Studio
  [ ] Disabled in production OR only via SSH tunnel
  [ ] Dashboard password strong

Part C - Supabase Stack Configuration

These steps cover the actual Supabase installation and its security-relevant settings.

C1 - Version Your Supabase Deployment

Implementation

All infrastructure files belong in a Git repository. Deployments happen only through this repository, never through manual changes on the server.

infra/
  docker-compose.yml
  .env.example            (template, no real secrets)
  caddy/
    Caddyfile
  postgres/
    migrations/
  scripts/
    backup.sh
    restore.sh
    health-check.sh
    security-check.sh
  runbooks/
    supabase-production.md
    security-baseline.md

Deployment Workflow

# On the server
cd /opt/supabase
git pull origin main

# Load env variables (file exists only on the server)
source .env

# Start/update the stack
docker compose up -d

# Health check
./scripts/health-check.sh

Verifiable Condition

# Any uncommitted changes on the server?
cd /opt/supabase && git status --porcelain

# Expected: empty (no local modifications)

# Is the server on the current commit?
git log --oneline -1
# Compare with remote
git fetch origin && git diff HEAD origin/main --stat

# Expected: no difference

Failure Scenario

Manual changes to docker-compose.yml on the server get overwritten on the next git pull or create merge conflicts. Worse: nobody knows which change was made when and by whom. After a server loss, the configuration is not reproducible.
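A deploy script can enforce this rule instead of relying on discipline. A minimal sketch of such a guard, demoed in a throwaway repository (the `deploy_guard` function is hypothetical, not part of any Supabase tooling):

```shell
# Guard: refuse to deploy when the server checkout has uncommitted changes.
deploy_guard() {
  if [ -n "$(git status --porcelain)" ]; then
    echo "ABORT: uncommitted changes on the server" >&2
    return 1
  fi
  echo "working tree clean"
}

# Demo in a throwaway repo:
REPO=$(mktemp -d) && cd "$REPO" && git init -q
git config user.email demo@example.com && git config user.name demo
touch docker-compose.yml && git add . && git commit -qm init
deploy_guard                               # clean -> proceed
echo "manual hotfix" >> docker-compose.yml
deploy_guard || echo "deploy blocked"      # dirty -> abort
```

In the real workflow the guard would run immediately before the git pull step in the deployment script.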

C2 - docker-compose.yml: Minimal Production Stack

Implementation

Supabase ships a reference compose file with over 400 lines and roughly 15 services. Not all of them are needed for production. Here are the security-relevant decisions:

Minimal stack (these services are required):

postgres          Database
kong              API Gateway
gotrue            Auth
postgrest         REST API
realtime          WebSocket (if needed)
storage           File Storage (if needed)
meta              Metadata for PostgREST

Not for production (omit or keep internal only):

studio            Admin UI, access only via SSH tunnel or VPN
imgproxy          only if image transformations are needed
inbucket          only for local email testing

Excerpt of the security-relevant configuration:

# docker-compose.yml (excerpt, security-relevant parts)

services:
  postgres:
    image: supabase/postgres:15.6.1.143    # pin the version
    restart: unless-stopped
    ports:
      - "10.0.1.10:5432:5432"              # internal interface ONLY
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres-data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 30s
      timeout: 10s
      retries: 3

  kong:
    image: kong:2.8.1                       # pin the version
    restart: unless-stopped
    ports:
      - "127.0.0.1:8000:8000"              # localhost ONLY (reverse proxy in front)
    environment:
      KONG_DNS_ORDER: LAST,A,CNAME
      KONG_PLUGINS: request-transformer,cors,key-auth,acl
      KONG_NGINX_PROXY_PROXY_BUFFER_SIZE: 160k
      KONG_NGINX_PROXY_PROXY_BUFFERS: 64 160k

  gotrue:
    image: supabase/gotrue:v2.164.0         # pin the version
    restart: unless-stopped
    environment:
      GOTRUE_JWT_SECRET: ${JWT_SECRET}
      GOTRUE_JWT_EXP: 3600                  # 1 hour, no more
      GOTRUE_EXTERNAL_EMAIL_ENABLED: true
      GOTRUE_MAILER_AUTOCONFIRM: false      # enforce email confirmation
      GOTRUE_DISABLE_SIGNUP: false           # set to true if registration is closed
      GOTRUE_RATE_LIMIT_HEADER: X-Real-IP
      GOTRUE_SECURITY_REFRESH_TOKEN_ROTATION_ENABLED: true
      GOTRUE_SECURITY_REFRESH_TOKEN_REUSE_INTERVAL: 10
      API_EXTERNAL_URL: https://app.example.com

  postgrest:
    image: postgrest/postgrest:v12.2.3      # pin the version
    restart: unless-stopped
    environment:
      PGRST_DB_URI: postgres://authenticator:${POSTGRES_PASSWORD}@postgres:5432/postgres
      PGRST_DB_ANON_ROLE: anon
      PGRST_JWT_SECRET: ${JWT_SECRET}

volumes:
  postgres-data:

networks:
  default:
    driver: bridge

Critical configuration points:

GOTRUE_JWT_EXP: 3600         no higher than 3600 (1h)
GOTRUE_MAILER_AUTOCONFIRM    false in production
GOTRUE_DISABLE_SIGNUP         true if no open registration
REFRESH_TOKEN_ROTATION        true (prevents token reuse)
Image versions               always pin, never use :latest
Postgres port                bind to internal interface only
Kong port                    bind to localhost only (reverse proxy in front)

Verifiable Condition

# Images pinned (no :latest)?
grep "image:" docker-compose.yml | grep -c "latest"
# Expected: 0

# Postgres only reachable internally?
ss -tlnp | grep 5432     # run on the HOST, not inside the container
# Expected: only 10.0.1.10:5432 or 0.0.0.0:5432 (then verify the firewall)
# (Inside the container Postgres always binds 0.0.0.0, so an in-container
# check says nothing about the host binding.)

# Postgres not reachable from outside?
nmap -p 5432 app.example.com
# Expected: filtered or closed

# JWT expiry correct?
grep "GOTRUE_JWT_EXP" .env
# Expected: 3600 or less

# Supabase Studio not reachable from outside?
curl -s -o /dev/null -w "%{http_code}" https://app.example.com:3000
# Expected: timeout or connection refused

Failure Scenario

Unpinned images (image: supabase/gotrue:latest) can silently introduce a new version during a docker compose pull that contains breaking changes or a known vulnerability. If Postgres listens on 0.0.0.0:5432 and the firewall temporarily fails, the entire database is exposed to the internet. If GOTRUE_JWT_EXP is set to 86400 (24h), a stolen token remains valid for an entire day.
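The `:latest` grep in the verification above misses a related mistake: an image with no tag at all, which Docker also resolves to `latest`. A stricter lint sketch, demoed against an inline fragment:

```shell
# Flag compose images that are unpinned (":latest" or no tag at all).
cat > /tmp/demo-compose.yml << 'EOF'
services:
  gotrue:
    image: supabase/gotrue:v2.164.0
  kong:
    image: kong:latest
EOF

UNPINNED=$(grep -E 'image:' /tmp/demo-compose.yml \
  | grep -E ':latest *$|image: *[^: ]+ *$' || true)
[ -n "$UNPINNED" ] && echo "UNPINNED:$UNPINNED"
```

Run against the real docker-compose.yml, a non-empty result should fail the pre-deploy check.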

C3 - Manage Secrets Securely

Implementation

A Supabase stack has at least these secrets:

# .env (on the server only, never in Git)

# Core secrets
JWT_SECRET=                    # min. 32 characters, generated with openssl rand -base64 32
ANON_KEY=                      # JWT token with anon role
SERVICE_ROLE_KEY=              # JWT token with service_role, bypasses RLS
POSTGRES_PASSWORD=             # min. 24 characters, generated

# Dashboard
DASHBOARD_USERNAME=            # Supabase Studio login
DASHBOARD_PASSWORD=            # min. 16 characters

# Email (GoTrue)
SMTP_HOST=
SMTP_PORT=
SMTP_USER=
SMTP_PASS=
SMTP_SENDER_NAME=

# Storage (if using S3 backend)
S3_ACCESS_KEY=
S3_SECRET_KEY=

Generating secrets:

# JWT secret
openssl rand -base64 32

# Postgres password
openssl rand -base64 24

# Anon and service role keys (Supabase CLI)
# Or manually create JWTs using the JWT_SECRET
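Minting the keys manually is only a few lines of shell. A sketch using openssl (the payload fields follow the usual Supabase convention of `role`, `iss`, `iat`, `exp`; verify against your GoTrue version before relying on it):

```shell
#!/bin/bash
# Mint an anon-role JWT signed with JWT_SECRET (HS256).
set -euo pipefail

JWT_SECRET="${JWT_SECRET:-replace-with-your-32-plus-char-secret}"

# base64url: standard base64, then swap +/ for -_ and strip padding
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }

HEADER=$(printf '{"alg":"HS256","typ":"JWT"}' | b64url)
NOW=$(date +%s)
EXP=$(( NOW + 315360000 ))   # long-lived, as is common for these keys
PAYLOAD=$(printf '{"role":"anon","iss":"supabase","iat":%d,"exp":%d}' \
  "$NOW" "$EXP" | b64url)
SIGNATURE=$(printf '%s.%s' "$HEADER" "$PAYLOAD" \
  | openssl dgst -sha256 -hmac "$JWT_SECRET" -binary | b64url)

ANON_KEY="${HEADER}.${PAYLOAD}.${SIGNATURE}"
echo "$ANON_KEY"
```

The SERVICE_ROLE_KEY is produced the same way with "role":"service_role" in the payload.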

Secret management on the server:

# .env file with restrictive permissions
chmod 600 /opt/supabase/.env
chown deploy:deploy /opt/supabase/.env

# Verify that .env is not tracked in Git
cat /opt/supabase/.gitignore | grep ".env"

Only the template belongs in the repository:

# .env.example (in Git)
JWT_SECRET=generate-with-openssl-rand-base64-32
ANON_KEY=generate-jwt-with-anon-role
SERVICE_ROLE_KEY=generate-jwt-with-service-role
POSTGRES_PASSWORD=generate-with-openssl-rand-base64-24
DASHBOARD_USERNAME=admin
DASHBOARD_PASSWORD=generate-min-16-chars
SMTP_HOST=
SMTP_PORT=587
SMTP_USER=
SMTP_PASS=
SMTP_SENDER_NAME=

The same principle applies to any data security in enterprise AI infrastructure.

Verifiable Condition

# .env not in Git?
cd /opt/supabase && git ls-files .env
# Expected: empty

# .env listed in .gitignore?
grep "^\.env$" .gitignore
# Expected: .env

# File permissions correct?
stat -c "%a %U" .env
# Expected: 600 deploy

# Secrets long enough?
awk -F= '{if (length($2) < 16 && $2 != "" && $1 !~ /PORT|HOST|NAME/) print "TOO SHORT: "$1}' .env
# Expected: no output

# No default passwords?
grep -iE "password|secret" .env | grep -iE "change.me|default|example|your.*here"
# Expected: no matches

Failure Scenario

The most common security issue with Supabase self-hosting is not a server exploit but a leaked secret. If .env is committed to Git and the repository is public (or becomes public), all secrets are exposed. With the SERVICE_ROLE_KEY, the entire database can be read and written without RLS.
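Note that deleting a committed .env does not undo the leak: the secret remains in Git history. A quick way to check, demoed in a throwaway repo (in production, run the final git log inside /opt/supabase):

```shell
# Demo: commit a secret, "remove" it, and show that history still has it.
REPO=$(mktemp -d) && cd "$REPO" && git init -q
git config user.email demo@example.com && git config user.name demo

echo "JWT_SECRET=oops" > .env
git add .env && git commit -qm "accidental commit"

git rm -q --cached .env && echo ".env" > .gitignore
git add .gitignore && git commit -qm "remove .env from tracking"

# The file is gone from HEAD, but every secret it held is still reachable:
git log --all --oneline -- .env
# If this prints anything for a real repo, rotate ALL affected secrets.
```

Rewriting history (git filter-repo or similar) removes the blobs, but rotation is still mandatory because clones and forks may already hold the old history.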

C4 - Audit Database Policies

Implementation

With Supabase, a large portion of security lives in PostgreSQL Row Level Security (RLS), not in the application server. Every table in the public schema must have RLS enabled.

Find tables without RLS:

-- All public tables without RLS
SELECT schemaname, tablename, rowsecurity
FROM pg_tables
WHERE schemaname = 'public'
  AND rowsecurity = false;

Find tables with RLS but no policies:

-- RLS enabled but no policy defined = no access possible
-- (may be intentional, but verify)
SELECT t.tablename
FROM pg_tables t
LEFT JOIN pg_policies p
  ON t.schemaname = p.schemaname AND t.tablename = p.tablename
WHERE t.schemaname = 'public'
  AND t.rowsecurity = true
  AND p.policyname IS NULL;

Find overly permissive policies:

-- Policies that grant full access to all roles
SELECT tablename, policyname, permissive, roles, cmd, qual
FROM pg_policies
WHERE schemaname = 'public'
  AND (roles = '{public}' OR qual = 'true');

Audit service role usage:

-- Which roles exist and what permissions do they have?
SELECT rolname, rolsuper, rolcreaterole, rolcreatedb
FROM pg_roles
WHERE rolname IN ('anon', 'authenticated', 'service_role', 'authenticator');
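When the audit surfaces an unprotected table, the fix follows the same pattern every time. A minimal sketch for a hypothetical profiles table with a user_id column (table and column names are placeholders; auth.uid() is Supabase's helper for the current user's ID):

```sql
-- Enable RLS and add owner-only policies (hypothetical table/columns)
ALTER TABLE public.profiles ENABLE ROW LEVEL SECURITY;

CREATE POLICY "profiles_owner_select" ON public.profiles
  FOR SELECT TO authenticated
  USING (auth.uid() = user_id);

CREATE POLICY "profiles_owner_update" ON public.profiles
  FOR UPDATE TO authenticated
  USING (auth.uid() = user_id)
  WITH CHECK (auth.uid() = user_id);

-- anon gets no access because no policy names it;
-- the service_role key bypasses RLS entirely.
```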

Verifiable Condition

# As an automated script from the audit-runner
ssh supabase-prod "docker compose exec -T postgres psql -U postgres -c \"
  SELECT tablename FROM pg_tables
  WHERE schemaname = 'public' AND rowsecurity = false;
\""

# Expected: no tables (or only intentionally excluded ones)

Failure Scenario

A users table with rowsecurity = false is fully readable through the PostgREST API by anyone with the anon key. This exposes all columns, including email addresses, phone numbers, and other personal data. A simple curl command with the public anon key is all it takes.

Part D - Operations and Monitoring

These steps run regularly and are automated.

D1 - Automate Backups

Implementation

Daily PostgreSQL dumps, encrypted and stored externally.

#!/bin/bash
# scripts/backup.sh

set -euo pipefail

BACKUP_DIR="/opt/backups"
DATE=$(date +%Y-%m-%d_%H%M)
RETENTION_DAYS=30
GPG_RECIPIENT="backup@example.com"   # GPG key ID

# PostgreSQL dump
docker compose exec -T postgres pg_dump \
  -U postgres \
  --format=custom \
  --compress=9 \
  postgres > "${BACKUP_DIR}/db_${DATE}.dump"

# Back up storage buckets (if Supabase Storage is in use)
docker compose exec -T storage tar -czf - /var/lib/storage \
  > "${BACKUP_DIR}/storage_${DATE}.tar.gz"

# Encrypt
for file in "${BACKUP_DIR}"/*_${DATE}.*; do
  gpg --encrypt --recipient "${GPG_RECIPIENT}" "$file"
  rm "$file"   # Delete unencrypted version
done

# Copy to external server (audit-runner or S3)
rsync -az "${BACKUP_DIR}/"*_${DATE}*.gpg \
  deploy@10.0.1.11:/opt/backup-archive/

# Remove old backups (local)
find "${BACKUP_DIR}" -name "*.gpg" -mtime +${RETENTION_DAYS} -delete

# Clean up on the backup server as well
ssh deploy@10.0.1.11 \
  "find /opt/backup-archive -name '*.gpg' -mtime +${RETENTION_DAYS} -delete"

echo "Backup ${DATE} completed"
# Set up the cron job
# crontab -e
0 3 * * * /opt/supabase/scripts/backup.sh >> /var/log/backup.log 2>&1
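The script assumes the public key for GPG_RECIPIENT is already imported on the production server. A one-time setup sketch (identity and paths are placeholders; generate and keep the private key on a secure machine, never on prod):

```shell
# One-time: generate the backup keypair in a scratch keyring.
export GNUPGHOME=$(mktemp -d) && chmod 700 "$GNUPGHOME"
gpg --batch --passphrase '' --pinentry-mode loopback \
  --quick-generate-key "backup@example.com" default default never

# Export ONLY the public key; import this file on the prod server.
gpg --export --armor backup@example.com > /tmp/backup-pub.asc

# Round-trip check: encrypt and decrypt a test string.
echo "roundtrip-ok" \
  | gpg --batch --trust-model always -e -r backup@example.com \
  | gpg --batch -d 2>/dev/null
```

On the prod server only encryption happens, so only the public key is needed there; decryption (and thus the private key) stays on the restore machine.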

Verifiable Condition

# Today's backup exists?
ls -la /opt/backups/*_$(date +%Y-%m-%d)*.gpg

# Backup arrived on the external server?
ssh deploy@10.0.1.11 "ls -la /opt/backup-archive/*_$(date +%Y-%m-%d)*.gpg"

# Backup size is plausible (not 0 bytes)?
find /opt/backups -name "*.gpg" -size 0 -print
# Expected: no matches

Failure Scenario

Storing backups only on the same server means that if the server fails or gets encrypted (ransomware), the backups are gone too. Unencrypted backups on an external server are a data leak because the dump contains all table data in plain text.

D2 - Test Restores Regularly

Implementation

Once a month, restore a backup on a test system.

#!/bin/bash
# scripts/restore-test.sh

set -euo pipefail

BACKUP_FILE=$1   # e.g. /opt/backup-archive/db_2026-03-01_0300.dump.gpg

# Decrypt
gpg --decrypt "$BACKUP_FILE" > /tmp/restore-test.dump

# Start a test container
docker run -d --name restore-test \
  -e POSTGRES_PASSWORD=testpassword \
  supabase/postgres:15.6.1.143

sleep 10

# Restore
docker exec -i restore-test pg_restore \
  -U postgres \
  --dbname=postgres \
  --clean \
  --if-exists \
  < /tmp/restore-test.dump

# Verify tables
docker exec restore-test psql -U postgres -c \
  "SELECT schemaname, tablename FROM pg_tables WHERE schemaname = 'public';"

# Verify row counts
docker exec restore-test psql -U postgres -c \
  "SELECT relname, n_live_tup FROM pg_stat_user_tables ORDER BY n_live_tup DESC LIMIT 10;"

# Clean up
docker rm -f restore-test
rm /tmp/restore-test.dump

echo "Restore test completed"

Verifiable Condition

# Run the restore test script and check the exit code
./scripts/restore-test.sh /opt/backup-archive/db_latest.dump.gpg
echo $?
# Expected: 0

# Check the log of the last restore test
cat /var/log/restore-test.log | tail -20

Failure Scenario

Many teams have backups running for months without ever testing a restore. Typical problems: wrong pg_dump format (plain text instead of custom), missing permissions during restore, incompatible PostgreSQL versions between backup and restore. All of this only surfaces when you actually need the restore.
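The format problem in particular is cheap to catch early: pg_dump custom-format archives begin with the magic bytes PGDMP, so a dump can be sanity-checked right after decryption. A sketch (the demo files are hypothetical):

```shell
# Returns success only if the file starts with pg_dump's custom-format magic.
is_custom_dump() {
  [ "$(head -c 5 "$1")" = "PGDMP" ]
}

# Demo with two fake files; point a real check at the decrypted dump instead.
printf 'PGDMPrest-of-archive' > /tmp/good.dump
printf -- '-- PostgreSQL database dump\n' > /tmp/plain.sql

is_custom_dump /tmp/good.dump && echo "good.dump: custom format"
is_custom_dump /tmp/plain.sql || echo "plain.sql: NOT custom format"
```

Adding this check to backup.sh right after the dump step turns a restore-day surprise into an immediate alert.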

D3 - Daily Infrastructure Checks

Implementation

A script on the audit-runner checks the state of the production system daily.

#!/bin/bash
# scripts/security-check.sh (runs on audit-runner)

set -euo pipefail

PROD_HOST="10.0.1.10"
REPORT=""
CRITICAL=0

# 1. Container status
STOPPED=$(ssh deploy@${PROD_HOST} "docker compose ps --format json" | \
  jq -r 'select(.State != "running") | .Name')
if [ -n "$STOPPED" ]; then
  REPORT+="CRITICAL: Container not running: ${STOPPED}\n"
  CRITICAL=1
fi

# 2. Open ports from outside
OPEN_PORTS=$(nmap -p 22,80,443,3000,5432,8000,9000 app.example.com \
  --open -oG - | grep "/open/" | grep -v "443/open")
if [ -n "$OPEN_PORTS" ]; then
  REPORT+="CRITICAL: Unexpected open ports: ${OPEN_PORTS}\n"
  CRITICAL=1
fi

# 3. Certificate expiry date
CERT_EXPIRY=$(echo | openssl s_client -connect app.example.com:443 2>/dev/null | \
  openssl x509 -noout -enddate | cut -d= -f2)
CERT_EPOCH=$(date -d "$CERT_EXPIRY" +%s)
NOW_EPOCH=$(date +%s)
DAYS_LEFT=$(( (CERT_EPOCH - NOW_EPOCH) / 86400 ))
if [ "$DAYS_LEFT" -lt 14 ]; then
  REPORT+="WARNING: TLS certificate expires in ${DAYS_LEFT} days\n"
fi

# 4. Firewall drift
FIREWALL_DIFF=$(ssh deploy@${PROD_HOST} "iptables-save" | \
  diff /opt/baselines/firewall-baseline.txt - || true)
if [ -n "$FIREWALL_DIFF" ]; then
  REPORT+="WARNING: Firewall has changed:\n${FIREWALL_DIFF}\n"
fi

# 5. Docker image versions (drift against baseline)
IMAGE_DIFF=$(ssh deploy@${PROD_HOST} \
  "docker compose images --format '{{.Repository}}:{{.Tag}}'" | \
  diff /opt/baselines/image-versions.txt - || true)
if [ -n "$IMAGE_DIFF" ]; then
  REPORT+="WARNING: Container versions have changed:\n${IMAGE_DIFF}\n"
fi

# 6. Disk space
DISK_USAGE=$(ssh deploy@${PROD_HOST} "df -h / | tail -1 | awk '{print \$5}' | tr -d '%'")
if [ "$DISK_USAGE" -gt 85 ]; then
  REPORT+="WARNING: Disk usage at ${DISK_USAGE}%\n"
fi

# 7. Backup status
LAST_BACKUP=$(ssh deploy@${PROD_HOST} "ls -t /opt/backups/*.gpg 2>/dev/null | head -1")
if [ -z "$LAST_BACKUP" ]; then
  REPORT+="CRITICAL: No backup found\n"
  CRITICAL=1
else
  BACKUP_AGE=$(ssh deploy@${PROD_HOST} \
    "echo \$(( (\$(date +%s) - \$(stat -c %Y ${LAST_BACKUP})) / 3600 ))")
  if [ "$BACKUP_AGE" -gt 26 ]; then
    REPORT+="WARNING: Last backup is ${BACKUP_AGE} hours old\n"
  fi
fi

# 8. RLS check
UNPROTECTED=$(ssh deploy@${PROD_HOST} "docker compose exec -T postgres psql -U postgres -t -c \
  \"SELECT count(*) FROM pg_tables WHERE schemaname = 'public' AND rowsecurity = false;\"" | tr -d ' ')
if [ "$UNPROTECTED" -gt 0 ]; then
  REPORT+="WARNING: ${UNPROTECTED} tables without RLS\n"
fi

# Result
echo "=== Security Check $(date) ==="
if [ -n "$REPORT" ]; then
  echo -e "$REPORT"
else
  echo "All checks passed"
fi

# On critical findings: send alert
if [ "$CRITICAL" -eq 1 ]; then
  echo -e "$REPORT" | mail -s "CRITICAL: Security Check $(date)" ops@example.com
fi
# Cron on the audit-runner
0 7 * * * /opt/audit/scripts/security-check.sh >> /var/log/security-check.log 2>&1

Verifiable Condition

# Check script ran today?
grep "$(date +%Y-%m-%d)" /var/log/security-check.log | tail -1
# Expected: entry from today

# Result?
grep "All checks passed\|CRITICAL\|WARNING" /var/log/security-check.log | tail -5

D4 - Claude Code as a Contextual Analysis Layer

Claude Code analyzes the results of deterministic checks and identifies patterns that scripts cannot see.

Architecture

Daily Checks (D3)
     |
     +-- Deterministic Findings
     |   (open ports, firewall drift, missing backups)
     |
     +---> Weekly Claude Code Review
          |
          +-- Config files (docker-compose.yml, Caddyfile, .env.example)
          +-- Security check logs from the last 7 days
          +-- Git diff of infrastructure changes
          +-- RLS policy export
          |
          +---> Prioritized Report
               |
               +---> DevOps Decision (Human)

Script

#!/bin/bash
# scripts/claude-review-prep.sh (runs on audit-runner)

REPORT_DIR="/opt/audit/weekly-reports"
DATE=$(date +%Y-%m-%d)
OUTPUT="${REPORT_DIR}/review-input-${DATE}.md"

mkdir -p "$REPORT_DIR"

cat > "$OUTPUT" << 'HEADER'
# Weekly Security Review Input

## Context
Self-hosted Supabase stack on Hetzner Cloud.
Architecture: Reverse Proxy -> Kong -> Supabase Services -> PostgreSQL
Audit system on a separate server.
HEADER

# Security check logs from the last 7 days
echo -e "\n## Security Check Results (last 7 days)\n" >> "$OUTPUT"
echo '```' >> "$OUTPUT"
grep -A 20 "Security Check" /var/log/security-check.log | \
  tail -100 >> "$OUTPUT"
echo '```' >> "$OUTPUT"

# Current Docker Compose config (without secrets)
echo -e "\n## Current docker-compose.yml\n" >> "$OUTPUT"
echo '```yaml' >> "$OUTPUT"
ssh deploy@10.0.1.10 "cat /opt/supabase/docker-compose.yml" >> "$OUTPUT"
echo '```' >> "$OUTPUT"

# Git diff from the last week
echo -e "\n## Infrastructure Changes (last 7 days)\n" >> "$OUTPUT"
echo '```' >> "$OUTPUT"
ssh deploy@10.0.1.10 "cd /opt/supabase && git log --oneline --since='7 days ago'" >> "$OUTPUT"
ssh deploy@10.0.1.10 "cd /opt/supabase && git diff HEAD~5 --stat" >> "$OUTPUT" 2>/dev/null
echo '```' >> "$OUTPUT"

# RLS status
echo -e "\n## RLS Status\n" >> "$OUTPUT"
echo '```' >> "$OUTPUT"
ssh deploy@10.0.1.10 "docker compose exec -T postgres psql -U postgres -c \"
  SELECT tablename, rowsecurity FROM pg_tables WHERE schemaname = 'public' ORDER BY tablename;
\"" >> "$OUTPUT"
echo '```' >> "$OUTPUT"

# Container versions
echo -e "\n## Container Versions\n" >> "$OUTPUT"
echo '```' >> "$OUTPUT"
ssh deploy@10.0.1.10 "docker compose images --format '{{.Repository}}:{{.Tag}}'" >> "$OUTPUT"
echo '```' >> "$OUTPUT"

echo "Review input created: $OUTPUT"

What Claude Code Does Not Do

Claude does NOT make automatic changes on production.
Claude does NOT deploy.
Claude does NOT rotate secrets.
Claude does NOT have direct access to the production server.

Claude analyzes data that is provided to it
and produces reports for human decision-making.

Deployment Checklist

Before the first go-live and after major changes:

Part A - Infrastructure
  [ ] Two separate servers (prod + audit)
  [ ] Private network configured and tested
  [ ] Cloud firewall active (only 443, SSH from admin IP)
  [ ] Host firewall active (iptables)
  [ ] Firewall baseline saved
  [ ] SSH: keys only, no root, no passwords
  [ ] Reverse proxy configured (Caddy/Nginx)
  [ ] TLS active with automatic renewal
  [ ] Security headers set (HSTS, CSP, X-Frame-Options)
  [ ] WebSocket proxy configured for Realtime

Part B - Supabase Services
  Secrets (B0)
    [ ] All secrets generated (no default values)
    [ ] JWT_SECRET at least 32 characters
    [ ] POSTGRES_PASSWORD at least 24 characters
    [ ] DASHBOARD_PASSWORD at least 16 characters
    [ ] SECRET_KEY_BASE at least 64 characters
    [ ] MINIO_ROOT_PASSWORD at least 8 characters (if using MinIO)
    [ ] DB_ENC_KEY changed (not "supabaserealtime")
  PostgreSQL (B1)
    [ ] Port on internal interface only (10.0.1.10:5432)
    [ ] Init scripts unmodified (roles.sql, jwt.sql etc.)
    [ ] Roles correctly created (anon not superuser)
  Kong (B2)
    [ ] Port on localhost only (127.0.0.1:8000)
    [ ] JWT validation active on all API routes
    [ ] Dashboard password strong
  GoTrue (B3)
    [ ] JWT_EXP maximum 3600
    [ ] AUTOCONFIRM false
    [ ] SMTP configured and tested
    [ ] SITE_URL and API_EXTERNAL_URL correct
    [ ] Refresh Token Rotation active
    [ ] DISABLE_SIGNUP set for closed apps
  PostgREST (B4)
    [ ] Uses authenticator role (not postgres)
    [ ] DB_SCHEMAS explicitly defined
  Realtime (B5)
    [ ] DB_ENC_KEY changed
    [ ] SECRET_KEY_BASE at least 64 characters
  Storage / MinIO (B6)
    [ ] MinIO not reachable from outside
    [ ] MinIO default credentials changed
    [ ] Storage volume permissions correct
  Internal Services (B7)
    [ ] Analytics port on localhost only
    [ ] Vector Docker Socket read-only
    [ ] Supavisor port on internal only
    [ ] Logflare tokens not on default
  Studio (B8)
    [ ] Disabled in production OR only via SSH tunnel
    [ ] Dashboard password strong

Part C - Stack Configuration
  [ ] docker-compose.yml versioned in Git
  [ ] All image versions pinned (no :latest)
  [ ] .env not in Git
  [ ] .env file permissions 600
  [ ] .env.example in Git as template
  [ ] No default passwords
  [ ] RLS enabled on all public tables
  [ ] No tables without policies (unless intentional)
  [ ] No overly permissive policies (qual = 'true')

Part D - Operations and Monitoring
  [ ] Daily backup job active
  [ ] Backups encrypted
  [ ] Backups stored externally (audit-runner or S3)
  [ ] Restore test completed at least once
  [ ] Retention strategy configured
  [ ] Daily security check job on audit-runner
  [ ] Alerting on critical findings
  [ ] Weekly Claude Code review set up

Part E - Updates and Maintenance
  [ ] Unattended Upgrades installed and active
  [ ] Auto-update interval configured to daily
  [ ] Supabase images pinned (no :latest)
  [ ] No Supabase image older than 90 days
  [ ] Update commit within the last 45 days
  [ ] Auto-patch script for PostgreSQL minor patches active
  [ ] Auto-patch cron AFTER backup cron (03:00 after 02:00)
  [ ] Security release monitor on audit-runner active (daily)
  [ ] Trivy installed on audit-runner
  [ ] Maintenance check cron on audit-runner (weekly Monday)
  [ ] TLS certificate valid for at least 14 days
  [ ] Disk usage below 85%

Part E - Updates and Maintenance

A self-hosted stack that is not updated regularly accumulates security vulnerabilities. Unpatched CVEs in PostgreSQL, Kong, or GoTrue are real attack vectors. At the same time, updates can introduce breaking changes that take down the stack.

That is why the update process needs clear rules: what gets updated when, how it is tested, and how to ensure nothing is missed.

E1 - Three Update Layers

The stack has three independent update layers with different rhythms and risk levels.

Layer 1: OS-Level (Ubuntu)
   |  What: Kernel, System Packages, OpenSSL, Docker Engine
   |  Rhythm: Weekly security patches, monthly full update
   |  Risk: Low (apt upgrade is stable)
   |  Method: apt update && apt upgrade
   |
Layer 2: Supabase Services (Docker Images)
   |  What: PostgreSQL, Kong, GoTrue, PostgREST, Realtime, Storage, etc.
   |  Rhythm: Monthly (Supabase release cycle)
   |  Risk: Medium to high (breaking changes between versions)
   |  Method: Change image tags in docker-compose.yml, pull, restart
   |
Layer 3: Reverse Proxy and Tools
   |  What: Caddy, iptables, GPG, nmap, jq
   |  Rhythm: On security advisories or quarterly
   |  Risk: Low
   |  Method: apt upgrade (Caddy via its own repo)

Verifiable Condition:

# OS: When were the package lists last refreshed?
stat -c %y /var/cache/apt/pkgcache.bin
# Expected: less than 7 days old
# (full upgrade history: /var/log/apt/history.log)

# Supabase: Which image versions are running?
cd /opt/supabase && docker compose images --format '{{.Repository}}:{{.Tag}}'

# Caddy Version
caddy version

Failure Scenario: An unpatched PostgreSQL with a known Remote Code Execution vulnerability (such as CVE-2023-5869) can be exploited by an attacker, even when RLS is correctly configured. An outdated Kong with a known auth bypass vulnerability can circumvent JWT validation.

E2 - OS-Level Updates

Weekly: Security Patches (automatic)

# Install and configure unattended upgrades
sudo apt install -y unattended-upgrades

# Configuration: only security updates automatically
sudo tee /etc/apt/apt.conf.d/50unattended-upgrades > /dev/null << 'EOF'
Unattended-Upgrade::Allowed-Origins {
    "${distro_id}:${distro_codename}-security";
};

// Automatic reboot if needed (e.g. kernel update)
// Only enable if you can accept the server briefly
// restarting at 4:00 AM
Unattended-Upgrade::Automatic-Reboot "false";

// Email notification on updates
Unattended-Upgrade::Mail "ops@example.com";
Unattended-Upgrade::MailReport "on-change";
EOF

# Enable automatic updates
sudo tee /etc/apt/apt.conf.d/20auto-upgrades > /dev/null << 'EOF'
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
APT::Periodic::Download-Upgradeable-Packages "1";
APT::Periodic::AutocleanInterval "7";
EOF

sudo systemctl enable unattended-upgrades
sudo systemctl start unattended-upgrades

Monthly: Full System Update (manual, with review)

# First check what will be updated
apt list --upgradable

# Then update
sudo apt update && sudo apt upgrade -y

# Docker Engine update (separate, via Docker repo)
sudo apt install --only-upgrade docker-ce docker-ce-cli containerd.io

# After kernel updates: reboot needed?
if [ -f /var/run/reboot-required ]; then
  echo "REBOOT REQUIRED"
fi

Verifiable Condition:

# Unattended Upgrades active?
systemctl is-active unattended-upgrades

# Recent automatic updates
cat /var/log/unattended-upgrades/unattended-upgrades.log | tail -20

# Pending security updates?
apt list --upgradable 2>/dev/null | grep -i security | wc -l
# Expected: 0

E3 - Supabase Service Updates

Supabase releases new Docker images approximately once a month. The update process must be controlled because breaking changes between versions are possible.

Workflow:

1. Read release notes (github.com/supabase/supabase/releases)
2. Enter new image tags in docker-compose.yml
3. Test on staging (or: backup + rollback plan)
4. Create backup
5. docker compose pull
6. docker compose down && docker compose up -d
7. Check health checks
8. Update baselines

Check current versions vs. available:

cd /opt/supabase

echo "=== Current Versions ==="
docker compose images --format '{{.Repository}}:{{.Tag}}'

echo ""
echo "=== Available Updates ==="
# Check official releases
echo "Supabase Releases: https://github.com/supabase/supabase/releases"
echo "GoTrue: https://github.com/supabase/gotrue/releases"
echo "PostgREST: https://github.com/PostgREST/postgrest/releases"
echo "Realtime: https://github.com/supabase/realtime/releases"
echo ""
echo "Or compare with the official docker-compose.yml:"
echo "https://raw.githubusercontent.com/supabase/supabase/master/docker/docker-compose.yml"

Safe update procedure:

cd /opt/supabase

# 1. Backup BEFORE the update
./scripts/backup.sh

# 2. Document current versions
docker compose images --format '{{.Repository}}:{{.Tag}}' > /opt/baselines/pre-update-versions.txt

# 3. Enter new versions in docker-compose.yml
# MANUAL: Change image tags to the new version
# e.g. supabase/gotrue:v2.164.0 -> supabase/gotrue:v2.184.0

# 4. Pull new images
docker compose pull

# 5. Restart stack
docker compose down
docker compose up -d

# 6. Wait and run health checks
sleep 30
docker compose ps --format "table {{.Name}}\t{{.Status}}"

# 7. Check all services
docker compose exec -T gotrue wget --no-verbose --tries=1 --spider http://localhost:9999/health
docker compose exec -T postgres pg_isready -U postgres -h localhost
curl -s -o /dev/null -w "Kong: %{http_code}\n" http://localhost:8000
curl -s -o /dev/null -w "Storage: %{http_code}\n" http://localhost:8000/storage/v1/

# 8. Update baselines
docker compose config --images > /opt/baselines/image-versions.txt

# 9. Git commit
git add docker-compose.yml
git commit -m "Update Supabase images to [VERSION]"

Rollback if something goes wrong:

cd /opt/supabase

# Restore old versions
git checkout HEAD~1 -- docker-compose.yml
docker compose pull
docker compose down
docker compose up -d

# If a database migration is the problem:
./scripts/restore.sh /opt/backups/[LATEST_BACKUP].gpg

Verifiable Condition:

# Image versions older than 3 months?
cd /opt/supabase
docker compose config --images | while read -r image; do
  CREATED=$(docker inspect --format='{{.Created}}' "$image" 2>/dev/null | cut -dT -f1)
  if [ -n "$CREATED" ]; then
    AGE_DAYS=$(( ($(date +%s) - $(date -d "$CREATED" +%s)) / 86400 ))
    [ "$AGE_DAYS" -gt 90 ] && echo "WARNING: $image is $AGE_DAYS days old"
  fi
done

Failure Scenario: Supabase GoTrue v2.170.0 introduced a change in refresh token handling that broke older clients. Without reading the release notes first and without a backup, this would have caused an outage. PostgreSQL major version updates (e.g. 15 -> 16) require a pg_dump/pg_restore cycle - a simple image tag change is not enough.
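For the major-version case, the dump/restore cycle can be sketched as follows. The paths and volumes layout (/opt/supabase, ./volumes/db/data) follow the official docker-compose.yml but are assumptions here - verify them against your stack and rehearse the procedure on staging first.

```shell
# Sketch: PostgreSQL major upgrade (e.g. 15 -> 16) via dump and restore.
# Paths and the volumes layout are assumptions - verify against your compose file.
pg_major_upgrade() {
  set -e
  cd /opt/supabase

  # 1. Full logical dump while the OLD major version is still running
  docker compose exec -T db pg_dumpall -U postgres > /opt/backups/pre-major-upgrade.sql

  # 2. Stop the stack and move the old data directory aside (keep it for rollback)
  docker compose down
  mv ./volumes/db/data ./volumes/db/data.pg15

  # 3. MANUAL step first: switch the db image tag in docker-compose.yml to the
  #    new major version, then start only the database for a fresh initdb
  docker compose up -d db
  sleep 30
  docker compose exec -T db pg_isready -U postgres -h localhost

  # 4. Restore the dump into the NEW major version
  docker compose exec -T db psql -U postgres < /opt/backups/pre-major-upgrade.sql

  # 5. Bring up the remaining services and run the usual health checks
  docker compose up -d
}
```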

E3b - Automatic Patching (Layer 1)

Not all updates require a human. PostgreSQL minor patches and Caddy updates are low-risk and can be applied automatically at night.

Two-tier model:

TIER 1 - AUTOMATIC (auto-patch.sh, daily 03:00):
  OS Security Patches (unattended-upgrades)
  PostgreSQL Minor Patches (15.8.1.x -> 15.8.1.y)
  Caddy Updates
  -> Info email after successful patch
  -> Alert email on health check failure
  -> Automatic rollback on failure

TIER 2 - MANUAL (/supabase-update, within 24h of alert):
  GoTrue/Auth (breaking changes possible)
  PostgREST (query behavior may change)
  Kong (routing may change)
  Realtime, Storage, Supavisor
  PostgreSQL MAJOR (15 -> 16, requires pg_dump/pg_restore)

The script auto-patch.sh runs daily at 03:00 (after the backup at 02:00) and:

  1. Checks whether a current backup exists (aborts if not)
  2. Saves the current state (docker-compose.yml, image versions)
  3. Checks whether a new PostgreSQL minor image is available
  4. Applies it and runs a health check
  5. On failure: automatic rollback to the previous version
  6. Sends an info email (success) or alert email (failure)
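The six steps above can be sketched as one function. The backup location, mail address and health check are assumptions carried over from earlier sections; with pinned tags, the pull in step 3/4 picks up rebuilt patch images published under the same tag.

```shell
# Sketch of auto-patch.sh (Tier 1 only). Backup location, mail address and
# health checks are assumptions - align them with your own scripts.
auto_patch() {
  cd /opt/supabase || return 1

  # 1. Require a backup from the last 24 hours, abort otherwise
  if ! find /opt/backups -name '*.gpg' -mmin -1440 | grep -q .; then
    echo "No recent backup - aborting" | mail -s "ALERT: auto-patch aborted" ops@example.com
    return 1
  fi

  # 2. Save the current state for rollback
  cp docker-compose.yml /tmp/docker-compose.yml.pre-patch
  docker compose config --images > /tmp/pre-patch-images.txt

  # 3./4. Re-pull the pinned db image and restart only the database
  docker compose pull db
  docker compose up -d db
  sleep 30

  # 5. Health check - roll back and alert on failure
  if ! docker compose exec -T db pg_isready -U postgres -h localhost; then
    cp /tmp/docker-compose.yml.pre-patch docker-compose.yml
    docker compose up -d db
    echo "Rolled back to pre-patch state" | mail -s "ALERT: auto-patch rollback" ops@example.com
    return 1
  fi

  # 6. Info mail on success
  echo "Auto-patch OK: $(date)" | mail -s "INFO: auto-patch applied" ops@example.com
}
```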

Verifiable Condition:

# Auto-patch cron active?
crontab -l | grep auto-patch
# Expected: 0 3 * * * /opt/supabase/scripts/auto-patch.sh

# Last auto-patch run?
tail -20 /var/log/auto-patch.log

# Has auto-patch ever patched?
grep "Patches applied\|No patches" /var/log/auto-patch.log | tail -5

Cron schedule (prod server):

02:00 daily    -> Backup (DB + Storage + external)
03:00 daily    -> Auto-Patch (PostgreSQL Minor + Caddy)
04:00 monthly  -> Restore test
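On the prod server, this schedule corresponds to crontab entries like the following (script paths are assumptions carried over from earlier sections; the restore test fires on the 1st of each month):

```
# crontab -e (deploy user on the prod server)
0 2 * * * /opt/supabase/scripts/backup.sh >> /var/log/backup.log 2>&1
0 3 * * * /opt/supabase/scripts/auto-patch.sh >> /var/log/auto-patch.log 2>&1
0 4 1 * * /opt/supabase/scripts/restore-test.sh >> /var/log/restore-test.log 2>&1
```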

E4 - Update Schedule and Responsibilities

Weekly (automatic):
  [ ] OS security patches (unattended-upgrades)
  [ ] Audit-runner verifies patches were applied

Monthly (manual, scheduled):
  [ ] Check Supabase release notes
  [ ] Evaluate new image tags
  [ ] Create backup
  [ ] Perform update
  [ ] Health checks
  [ ] Update baselines

Quarterly (review):
  [ ] Check Caddy version
  [ ] Check Docker Engine version
  [ ] Check Node.js version (for Claude Code on audit-runner)
  [ ] Evaluate PostgreSQL major version
  [ ] Entire toolchain up to date?

On security advisories (immediate):
  [ ] CVE affects our stack?
  [ ] Identify affected image/package
  [ ] Patch available?
  [ ] Perform emergency update

Verifiable Condition:

# When was the last Supabase update?
cd /opt/supabase
git log --oneline -i --grep="update\|upgrade" | head -5

# How old is the last update commit?
LAST_UPDATE=$(git log --format="%ai" -i --grep="update\|upgrade" -1 2>/dev/null | cut -d' ' -f1)
if [ -n "$LAST_UPDATE" ]; then
  AGE=$(( ($(date +%s) - $(date -d "$LAST_UPDATE" +%s)) / 86400 ))
  echo "Last update: $LAST_UPDATE ($AGE days ago)"
  [ "$AGE" -gt 45 ] && echo "WARNING: Last update over 45 days ago"
fi

E5 - Security Release Monitor (daily)

The biggest blind spot in self-hosting is not the initial configuration but missing security patches. When Supabase GoTrue publishes an auth bypass fix, the team must act within 24 hours, not after a week.

On the audit-runner, a script runs daily that checks the GitHub releases of all Supabase components and immediately alerts via email on security releases.

Monitored components:

supabase/auth (GoTrue)          -> frequent security patches
PostgREST/postgrest             -> API layer
supabase/realtime               -> WebSocket
supabase/storage-api            -> File Storage
Kong/kong                       -> API Gateway
supabase/edge-runtime           -> Edge Functions
supabase/postgres               -> Database Image
supabase/supavisor              -> Connection Pooler
moby/moby (Docker Engine)       -> Container Runtime

Three check layers:

Layer 1: GitHub Releases
  -> Is there a new version?
  -> Do the release notes contain "security"/"CVE"?

Layer 2: Trivy Container Scan
  -> Scans every running Docker image against NVD/GitHub Advisories
  -> Finds CVEs in all dependencies (OS packages, libraries)

Layer 3: OSV API
  -> Checks application-level vulnerabilities for GoTrue, PostgREST etc.
  -> Complements Trivy with package-specific CVEs
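Layers 2 and 3 can be invoked roughly like this. The image tag, package name and ecosystem are illustrative assumptions - look up the exact OSV identifiers for each component on osv.dev.

```shell
# Layer 2: Trivy scan - exits non-zero when HIGH/CRITICAL CVEs are found
scan_image() {
  trivy image --severity HIGH,CRITICAL --exit-code 1 "$1"
}

# Layer 3: OSV API query for one package version (prints number of known vulns)
osv_query() {
  local payload
  payload=$(printf '{"package":{"name":"%s","ecosystem":"Go"},"version":"%s"}' "$1" "$2")
  curl -s -X POST https://api.osv.dev/v1/query -d "$payload" | jq '.vulns // [] | length'
}

# Example calls (tags/names are placeholders - take them from docker-compose.yml):
# scan_image supabase/gotrue:v2.158.1
# osv_query github.com/supabase/auth 2.158.1
```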

How it works:

Daily 07:00 (audit-runner cron)
   |
   +-- Fetch current versions from prod server
   +-- GitHub API: Check latest releases for each component
   +-- Scan release notes for "security", "CVE", "vulnerability"
   |
   +-- Security release found?
   |     -> IMMEDIATE email to ops@
   |     -> "Action required within 24h"
   |
   +-- Normal release found?
         -> Weekly summary (Monday)

Cache mechanism: The script remembers reported releases so the same email does not arrive every day. A new alert is only sent when a NEW release is detected.
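The release check plus cache can be sketched as a single function. Repository names come from the component list above; the cache path and mail address are assumptions.

```shell
# Alert once per new release; escalate when the notes mention security keywords.
# CACHE_DIR and MAIL_TO are assumptions - adapt to your audit-runner layout.
CACHE_DIR=/opt/audit/cache
MAIL_TO=ops@example.com

check_component() {
  local name="$1" repo="$2"
  local cache="${CACHE_DIR}/${name}-last-seen.txt"
  local release tag notes
  release=$(curl -s "https://api.github.com/repos/${repo}/releases/latest")
  tag=$(echo "$release" | jq -r '.tag_name // empty')
  [ -z "$tag" ] && return 0

  # Cache: skip releases we have already reported
  [ -f "$cache" ] && [ "$(cat "$cache")" = "$tag" ] && return 0
  echo "$tag" > "$cache"

  notes=$(echo "$release" | jq -r '.body // ""')
  if echo "$notes" | grep -qiE 'security|CVE|vulnerab'; then
    echo "Security release: ${repo} ${tag}" \
      | mail -s "ACTION REQUIRED within 24h: ${name} ${tag}" "$MAIL_TO"
  fi
}

# Example: check_component gotrue supabase/auth
```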

Verifiable Condition:

# Security release monitor active on audit-runner?
ssh deploy@10.0.1.11 "crontab -l | grep security-release"
# Expected: daily cron job

# Last check log
ssh deploy@10.0.1.11 "tail -5 /var/log/security-releases.log"
# Expected: entry from today

# Cache present (script has already run)?
ssh deploy@10.0.1.11 "ls /opt/audit/cache/*-last-seen.txt 2>/dev/null | wc -l"

Failure Scenario: In November 2023, CVE-2023-5869 was published for PostgreSQL (an integer overflow in array modification that lets authenticated users write arbitrary bytes to server memory, potentially leading to code execution). Anyone without a release monitor who only checked for updates monthly was vulnerable for 3-4 weeks. With the daily monitor, the email would have arrived the day after the release.

E6 - Audit Runner as Update Watchdog

The audit runner monitors whether updates are being performed and informs the team when something is overdue.

A weekly script runs on the audit runner that checks the following:

#!/bin/bash
# scripts/check-maintenance.sh (runs on audit-runner)

PROD_HOST="10.0.1.10"
REPORT=""

echo "=== Maintenance Check $(date) ==="

# 1. OS updates overdue?
LAST_APT=$(ssh deploy@${PROD_HOST} "stat -c %Y /var/cache/apt/pkgcache.bin")
APT_AGE=$(( ($(date +%s) - $LAST_APT) / 86400 ))
if [ "$APT_AGE" -gt 7 ]; then
  REPORT+="WARNING: apt update is ${APT_AGE} days old (max. 7)\n"
fi

# 2. Unattended Upgrades active?
UA_STATUS=$(ssh deploy@${PROD_HOST} "systemctl is-active unattended-upgrades 2>/dev/null")
if [ "$UA_STATUS" != "active" ]; then
  REPORT+="CRITICAL: Unattended Upgrades not active\n"
fi

# 3. Pending security updates?
SEC_UPDATES=$(ssh deploy@${PROD_HOST} "apt list --upgradable 2>/dev/null | grep -ci security")
if [ "$SEC_UPDATES" -gt 0 ]; then
  REPORT+="WARNING: ${SEC_UPDATES} pending security updates\n"
fi

# 4. Reboot required?
REBOOT=$(ssh deploy@${PROD_HOST} "test -f /var/run/reboot-required && echo yes || echo no")
if [ "$REBOOT" = "yes" ]; then
  REPORT+="WARNING: Server reboot required (kernel update)\n"
fi

# 5. Supabase image age (process substitution so REPORT survives the loop;
#    a piped while-loop would run in a subshell and discard REPORT)
while read -r image; do
  CREATED=$(ssh deploy@${PROD_HOST} "docker inspect --format='{{.Created}}' '$image' 2>/dev/null" | cut -dT -f1)
  if [ -n "$CREATED" ]; then
    AGE_DAYS=$(( ($(date +%s) - $(date -d "$CREATED" +%s 2>/dev/null || echo 0)) / 86400 ))
    if [ "$AGE_DAYS" -gt 90 ]; then
      REPORT+="WARNING: $image is ${AGE_DAYS} days old (max. 90)\n"
    fi
  fi
done < <(ssh deploy@${PROD_HOST} "cd /opt/supabase && docker compose config --images")

# 6. Last Supabase update?
LAST_UPDATE=$(ssh deploy@${PROD_HOST} "cd /opt/supabase && git log --format='%ai' -i --grep='update\|upgrade' -1 2>/dev/null" | cut -d' ' -f1)
if [ -n "$LAST_UPDATE" ]; then
  UPDATE_AGE=$(( ($(date +%s) - $(date -d "$LAST_UPDATE" +%s)) / 86400 ))
  if [ "$UPDATE_AGE" -gt 45 ]; then
    REPORT+="WARNING: Last Supabase update ${UPDATE_AGE} days ago (max. 45)\n"
  fi
else
  REPORT+="WARNING: No update commit found in Git\n"
fi

# 7. Docker Engine version
DOCKER_VERSION=$(ssh deploy@${PROD_HOST} "docker version --format '{{.Server.Version}}' 2>/dev/null")
echo "Docker Engine: $DOCKER_VERSION"

# 8. Caddy version
CADDY_VERSION=$(ssh deploy@${PROD_HOST} "caddy version 2>/dev/null")
echo "Caddy: $CADDY_VERSION"

# 9. TLS certificate remaining validity
CERT_END=$(ssh deploy@${PROD_HOST} "echo | openssl s_client -connect localhost:443 2>/dev/null | openssl x509 -noout -enddate 2>/dev/null | cut -d= -f2")
if [ -n "$CERT_END" ]; then
  DAYS_LEFT=$(( ($(date -d "$CERT_END" +%s) - $(date +%s)) / 86400 ))
  [ "$DAYS_LEFT" -lt 14 ] && REPORT+="CRITICAL: TLS certificate expires in ${DAYS_LEFT} days\n"
fi

# Result
if [ -n "$REPORT" ]; then
  echo -e "\n$REPORT"
  echo -e "$REPORT" | mail -s "Maintenance Check: Action required" ops@example.com
else
  echo "All maintenance checks passed"
fi

Cron on the audit-runner:

# Weekly Monday 08:00 (after the Sunday audit)
0 8 * * 1 /opt/audit/scripts/check-maintenance.sh >> /var/log/maintenance-check.log 2>&1

When Claude notifies:

Claude Code on the audit-runner sends notifications in three cases:

IMMEDIATE (email to ops@):
  - Unattended Upgrades not active
  - TLS certificate < 14 days
  - Security update pending and > 3 days old

WEEKLY (Maintenance Report):
  - Reboot required
  - apt update overdue
  - Supabase images > 60 days old

MONTHLY (Update Reminder):
  - Supabase release notes not reviewed (no update commit > 45 days)
  - Quarterly review due

Conclusion

Self-hosting Supabase is relatively straightforward. Operating Supabase securely requires clear architecture rules and automated controls.

This runbook separates infrastructure decisions (Part A), service architecture and hardening (Part B), stack configuration (Part C), ongoing monitoring (Part D), and update processes (Part E). The combination of deterministic checks and contextual Claude Code analysis covers both known patterns and unexpected risks.

Teams that follow these principles from the start build a cert-ready-by-design architecture and avoid retroactive audit cycles.

Reminder: The Hetzner-specific configurations (vSwitch, Cloud Firewall, interface names) translate directly to other EU hosting providers such as OVH (EU/UK), IONOS (EU/UK), and Scaleway (EU). The architecture principles are provider-agnostic.

Audit Checklist Download

Prepared prompt for Claude Code. Upload the file to your server and start Claude Code in your Supabase stack's project directory. Claude Code will automatically check all security points from this runbook and report PASS, WARNING, or CRITICAL.

claude -p "$(cat claude-check-artikel-1-supabase-en.md)" --allowedTools Read,Grep,Glob,Bash


Series Table of Contents

This article is part of our DevOps series for self-hosted app stacks.

  1. Supabase Self-Hosting Runbook - this article
  2. Running Next.js on Supabase Securely
  3. Deploying Supabase Edge Functions Securely
  4. Running Trigger.dev Background Jobs Securely
  5. Claude Code as Security Control in DevOps Workflows
  6. Security Baseline for the Entire Stack

The next article covers how to run Next.js securely on Supabase without making common mistakes with Server Actions, auth handling, and API access.

Bert Gogolin

CEO & Founder, Gosign


Frequently Asked Questions

What components make up the Supabase self-hosting stack?

The Supabase stack consists of PostgreSQL, PostgREST API, GoTrue Auth, Realtime Server, Storage, Kong API Gateway, and Supabase Studio. Edge Functions are optional. This means you are running a complete backend platform, not just a database.

Why do you need two servers for Supabase self-hosting?

Separating the production system from the audit system prevents a compromised server from also manipulating its own security checks. The audit system stays independent and can reliably detect configuration drift.

Is this article part of a series?

Yes. This runbook is part 1 of a six-part DevOps series for self-hosted app stacks. The series covers Supabase, Next.js, Edge Functions, Trigger.dev, Claude Code as a security control, and a security baseline.