
Security Baseline for the Full Stack

DevOps runbook: A YAML-based security baseline for Supabase, Next.js, Edge Functions, and Trigger.dev with automated checks and Claude Code integration.

Mansoor Ahmed
Head of Engineering

The previous runbooks described the individual components: Supabase Platform (Article 1), Next.js App Layer (Article 2), Edge Functions (Article 3), Trigger.dev Jobs (Article 4), and Claude Code as a Security Control (Article 5).

This final article consolidates all rules into a single, machine-readable file: the security-baseline.yml. This file defines the target state of the entire stack. All deployments, reviews, and audits are checked against this baseline, both by deterministic scripts and by Claude Code.

The article delivers three things:

  1. The complete security-baseline.yml
  2. The script that checks it automatically
  3. The integration with the Claude Code audit from Article 5

At a Glance - Part 6 of 6 in the DevOps Runbook Series

  • A single YAML file (security-baseline.yml) defines the desired state of the entire stack
  • Daily automated check validates all rules deterministically
  • Weekly Claude Code analysis interprets results and prioritizes findings
  • Deployment gate blocks releases on critical baseline violations
  • Living document: baseline updated with every stack change

Series Table of Contents

This guide is part of our DevOps runbook series for self-hosted app stacks.

  1. Supabase Self-Hosting Runbook
  2. Running Next.js on Supabase Securely
  3. Deploying Supabase Edge Functions Securely
  4. Running Trigger.dev Background Jobs Securely
  5. Claude Code as a Security Control in the DevOps Workflow
  6. Security Baseline for the Full Stack - this article

This article connects all previous runbooks - from Supabase infrastructure to Edge Function webhook security - into a verifiable security strategy.

Architecture Overview

security-baseline.yml
   |
   +---> scripts/check-baseline.sh     (deterministic, daily)
   |       |
   |       +---> baseline-report.md    (facts: passed/failed)
   |
   +---> Claude Code Audit             (contextual, weekly)
   |       |
   |       +---> claude-review.md      (interpretation + priorities)
   |
   +---> Deployment Gate               (CI/CD, on every deploy)
           |
           +---> Deploy blocked when critical rules are violated

The baseline is the Single Source of Truth for the security state. Everything else - the check scripts, the Claude Code contextual analysis layer, the deployment checklists from Articles 1-5 - derives from it.
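The deployment gate in the diagram needs nothing more than the check script's exit code. A minimal sketch of that CI wiring, using a hypothetical stand-in check (in a real pipeline you would call `scripts/check-baseline.sh` instead):

```shell
#!/bin/bash
# Hypothetical CI deployment gate: the gate only consumes the check script's
# exit code, so any CI system can call it the same way. The stand-in check
# below simulates a critical finding.
set -euo pipefail

run_gate() {
  # Returns 0 (deploy allowed) or 1 (deploy blocked) based on the check's exit code
  if "$1" > /dev/null 2>&1; then
    echo "GATE: PASS"
  else
    echo "GATE: BLOCKED"
    return 1
  fi
}

FAKE_CHECK=$(mktemp)
printf '#!/bin/bash\necho "FAIL [KRITISCH] example rule"\nexit 1\n' > "$FAKE_CHECK"
chmod +x "$FAKE_CHECK"

GATE_OUTPUT=$(run_gate "$FAKE_CHECK" || true)
echo "$GATE_OUTPUT"
rm -f "$FAKE_CHECK"
```

Because the contract is just "exit 0 or exit 1", the same gate works in GitHub Actions, GitLab CI, or a plain deploy script.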

The security-baseline.yml

This file belongs in the root of the infrastructure repository. It is machine-readable (YAML), human-readable (comments), and interpretable by Claude Code (context).

Organizations with a machine-readable security baseline detect configuration drift on average 14x faster than teams without a baseline (SANS Institute 2024).
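Machine-readable here means scripts can pull values out of the file directly. A minimal sketch for flat `key: value` lines, using only awk (for nested lookups you would use a real YAML parser such as yq); the demo file stands in for the real baseline:

```shell
#!/bin/bash
# Minimal sketch: read single scalar values out of the baseline without extra
# tooling. Only handles flat "key: value" lines; use yq for anything nested.
set -euo pipefail

BASELINE=$(mktemp)
cat > "$BASELINE" <<'EOF'
version: "1.0"
stack: "supabase-nextjs-trigger"
last_reviewed: "2026-03-12"
EOF

baseline_get() {
  # First "key:" match at any indent; strips quotes and trailing comments
  awk -F': ' -v k="$2" '
    $1 ~ ("^[[:space:]]*" k "$") {
      sub(/[[:space:]]*#.*$/, "", $2)
      gsub(/"/, "", $2)
      print $2
      exit
    }' "$1"
}

LAST_REVIEWED=$(baseline_get "$BASELINE" "last_reviewed")
STACK=$(baseline_get "$BASELINE" "stack")
echo "stack=${STACK} last_reviewed=${LAST_REVIEWED}"
rm -f "$BASELINE"
```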

Baseline Category Overview

Category                                     Rules  Check Frequency  Escalation on Violation
Infrastructure (Network, Firewall, SSH)      8      Daily            CRITICAL: immediate
Supabase (Containers, RLS, Secrets)          7      Daily            CRITICAL: immediate
Next.js (Auth, Headers, Env Vars)            6      Daily            WARNING: this week
Edge Functions (Signatures, CORS, Secrets)   5      Daily            CRITICAL: immediate
Trigger.dev (Tasks, Concurrency, Secrets)    6      Daily            WARNING: this week
Compliance (Git, Data Residency)             3      Weekly           WARNING: this week

# security-baseline.yml
# Security baseline for the self-hosted stack
# Last updated: 2026-03-12
# Owner: DevOps team
#
# This file defines the target state of the entire stack.
# It is checked automatically every day (scripts/check-baseline.sh)
# and analyzed contextually by Claude Code every week.

version: "1.0"
stack: "supabase-nextjs-trigger"
last_reviewed: "2026-03-12"

# =============================================
# INFRASTRUCTURE (Article 1)
# =============================================

infrastructure:
  servers:
    production:
      host: "10.0.1.10"
      external_hostname: "app.example.com"
      provider: "hetzner"    # not subject to the US CLOUD Act
    audit:
      host: "10.0.1.11"
      purpose: "security checks, monitoring, claude code"

  network:
    private_subnet: "10.0.1.0/24"
    # Only these ports may be reachable from outside
    allowed_external_ports:
      - 443    # HTTPS (über Reverse Proxy)
    # These ports may ONLY be reachable internally
    internal_only_ports:
      - 5432   # PostgreSQL (Supabase)
      - 5433   # PostgreSQL (Trigger.dev)
      - 8000   # Kong API Gateway
      - 3000   # Next.js (behind reverse proxy)
      - 3040   # Trigger.dev Dashboard
      - 9000   # Supabase Studio

  firewall:
    layers: 2    # Cloud Firewall + Host Firewall
    baseline_file: "/opt/baselines/firewall-baseline.txt"
    drift_check: "daily"

  tls:
    provider: "letsencrypt"
    min_days_before_expiry: 14
    headers_required:
      - "Strict-Transport-Security"
      - "X-Frame-Options"
      - "X-Content-Type-Options"
      - "Content-Security-Policy"

  ssh:
    password_auth: false
    root_login: false
    pubkey_only: true
    allowed_users: ["deploy"]

  backups:
    frequency: "daily"
    max_age_hours: 26
    encryption: "gpg"
    external_storage: true    # on the audit-runner, not only local
    restore_test: "monthly"

# =============================================
# SUPABASE PLATFORM (Article 1)
# =============================================

supabase:
  images:
    pinned: true              # no :latest
    # Expected images and versions (update on upgrades)
    postgres: "supabase/postgres:15.6.1.143"
    kong: "kong:2.8.1"
    gotrue: "supabase/gotrue:v2.164.0"

  postgres:
    listen_address: "10.0.1.10"    # internal interface ONLY
    rls:
      required_on_all_public_tables: true
      # Tables deliberately without RLS (with justification)
      exceptions: []

  gotrue:
    jwt_exp: 3600               # max. 1 hour
    mailer_autoconfirm: false
    refresh_token_rotation: true

  studio:
    external_access: false      # only via SSH tunnel or VPN

  secrets:
    env_file: ".env"
    env_file_permissions: "600"
    in_git: false
    min_length: 16
    # Expected secrets (without values)
    required:
      - "JWT_SECRET"
      - "ANON_KEY"
      - "SERVICE_ROLE_KEY"
      - "POSTGRES_PASSWORD"
      - "DASHBOARD_USERNAME"
      - "DASHBOARD_PASSWORD"

# =============================================
# NEXT.JS APP LAYER (Article 2)
# =============================================

nextjs:
  service_role:
    # Files in which service_role is allowed
    allowed_files:
      - "lib/supabase/admin.ts"
    # Must NEVER appear in:
    forbidden_patterns:
      - "NEXT_PUBLIC_*"
      - "app/**/*.tsx"           # Client Components
      - ".next/static/**"        # Client Bundle

  auth:
    middleware_required: true
    middleware_file: "middleware.ts"
    middleware_uses: "getUser"     # NOT getSession
    server_actions_require_auth: true
    route_handlers_require_auth: true

  security_headers:
    required:
      - "Strict-Transport-Security"
      - "X-Frame-Options"
      - "Content-Security-Policy"
      - "X-Content-Type-Options"

  env_vars:
    # Only these may be NEXT_PUBLIC
    allowed_public:
      - "NEXT_PUBLIC_SUPABASE_URL"
      - "NEXT_PUBLIC_SUPABASE_ANON_KEY"

  rate_limiting:
    required_endpoints:
      - "/api/auth/login"
      - "/api/auth/signup"
      - "/api/auth/reset"

# =============================================
# EDGE FUNCTIONS (Article 3)
# =============================================

edge_functions:
  purpose: "integrations_only"     # NOT for business logic
  runtime: "deno"

  requirements:
    webhook_signature_check: true
    input_validation: true
    cors_no_wildcard_in_production: true
    no_hardcoded_secrets: true
    max_duration_seconds: 60

  secrets:
    env_file: ".env.functions"
    in_git: false

  deployment:
    method: "volume_copy"          # copy files into the volume, restart the container

# =============================================
# TRIGGER.DEV (Article 4)
# =============================================

trigger_dev:
  version: "v3"
  hosting: "self-hosted"
  separate_database: true           # separate PostgreSQL, not the Supabase DB

  dashboard:
    external_access: false          # SSH tunnel or VPN only

  task_requirements:
    max_duration_required: true
    concurrency_limit_required: true
    retry_config_required: true
    idempotency_on_external_calls: true
    exported: true                  # all tasks must be exported

  logging:
    use_trigger_logger: true        # not console.log
    no_sensitive_data: true

  db_access:
    # Files in which service_role / DATABASE_URL is allowed
    allowed_files:
      - "trigger/lib/supabase.ts"
      - "trigger/lib/db.ts"
    connection_pool_max: 10

# =============================================
# CLAUDE CODE AUDIT (Article 5)
# =============================================

claude_code:
  runs_on: "audit-runner"           # NOT on production
  allowed_tools: "Read,Grep,Glob"   # no Bash
  max_turns: 10
  schedule: "weekly"                # Sunday, after the daily baseline check
  alert_on: "KRITISCH"              # level label emitted by check-baseline.sh

  api_key:
    separate_ci_key: true
    budget_limit_monthly_usd: 50

# =============================================
# COMPLIANCE
# =============================================

compliance:
  cloud_act:
    us_providers_allowed: false
    reason: "CLOUD Act risk: US authorities can compel access to data"
    allowed_jurisdictions:
      - "EU"
      - "BR"

  data_residency:
    required: true
    allowed_countries:
      - "DE"     # Germany (Hetzner)
      - "BR"     # Brazil

  git:
    no_secrets_in_repo: true
    infrastructure_as_code: true
    no_manual_server_changes: true
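Before the daily job consumes this file, it is worth verifying that it is still structurally sound. A hedged sketch of such a pre-flight check: YAML forbids tab indentation, and the checks assume a fixed set of top-level sections. The demo file stands in for the real `security-baseline.yml`:

```shell
#!/bin/bash
# Sanity-check sketch for the baseline file before the daily run consumes it.
set -euo pipefail

BASELINE=$(mktemp)
cat > "$BASELINE" <<'EOF'
version: "1.0"
infrastructure: {}
supabase: {}
nextjs: {}
edge_functions: {}
trigger_dev: {}
claude_code: {}
compliance: {}
EOF

ERRORS=0

# 1. Tab indentation breaks YAML parsers
if grep -q "$(printf '\t')" "$BASELINE"; then
  echo "FAIL: tab indentation found"
  ERRORS=$((ERRORS + 1))
fi

# 2. Every top-level section the check script relies on must be present
for key in infrastructure supabase nextjs edge_functions trigger_dev claude_code compliance; do
  if ! grep -q "^${key}:" "$BASELINE"; then
    echo "FAIL: missing section ${key}"
    ERRORS=$((ERRORS + 1))
  fi
done

echo "baseline sanity errors: ${ERRORS}"
rm -f "$BASELINE"
```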

The Baseline Check Script

This script reads the security-baseline.yml and checks every rule automatically. It runs daily on the audit server.

#!/bin/bash
# scripts/check-baseline.sh
# Checks the production server against security-baseline.yml
# Runs on the audit-runner (10.0.1.11)

set -euo pipefail

PROD_HOST="10.0.1.10"
EXTERNAL_HOST="app.example.com"
REPORT=""
CRITICAL=0
WARNING=0
PASS=0

check() {
  local level="$1"    # KRITISCH (critical), WARNUNG (warning), INFO
  local name="$2"
  local result="$3"   # PASS or FAIL
  local detail="$4"

  if [ "$result" = "PASS" ]; then
    REPORT+="  OK ${name}\n"
    PASS=$((PASS + 1))
  else
    REPORT+="  FAIL [${level}] ${name}: ${detail}\n"
    # Plain assignments instead of ((var++)): under set -e, ((var++))
    # returns non-zero when the old value is 0 and would abort the script.
    if [ "$level" = "KRITISCH" ]; then CRITICAL=$((CRITICAL + 1)); fi
    if [ "$level" = "WARNUNG" ]; then WARNING=$((WARNING + 1)); fi
  fi
}

echo "=== Security Baseline Check $(date +%Y-%m-%d) ==="

# -------------------------------------------
# INFRASTRUCTURE
# -------------------------------------------
REPORT+="\n## Infrastructure\n\n"

# Externally reachable ports
OPEN_PORTS=$(nmap -p 22,80,443,3000,3040,5432,5433,8000,9000 "$EXTERNAL_HOST" \
  -oG - 2>/dev/null | grep -oP '\d+/open' | grep -v "443" || true)
if [ -z "$OPEN_PORTS" ]; then
  check "KRITISCH" "Only port 443 open externally" "PASS" ""
else
  check "KRITISCH" "Only port 443 open externally" "FAIL" "Open: $OPEN_PORTS"
fi

# Firewall drift
FW_DIFF=$(ssh deploy@${PROD_HOST} "iptables-save" 2>/dev/null | \
  diff /opt/baselines/firewall-baseline.txt - 2>&1 || true)
if [ -z "$FW_DIFF" ]; then
  check "WARNUNG" "Firewall unchanged" "PASS" ""
else
  check "WARNUNG" "Firewall unchanged" "FAIL" "Firewall rules have changed"
fi

# SSH configuration
# Note: sshd -T usually requires root; if the read fails, the check reports FAIL (fail-closed)
SSH_PW=$(ssh deploy@${PROD_HOST} "sshd -T 2>/dev/null | grep passwordauthentication" || true)
if echo "$SSH_PW" | grep -q "no"; then
  check "KRITISCH" "SSH password login disabled" "PASS" ""
else
  check "KRITISCH" "SSH password login disabled" "FAIL" "Password auth still active"
fi

SSH_ROOT=$(ssh deploy@${PROD_HOST} "sshd -T 2>/dev/null | grep permitrootlogin" || true)
if echo "$SSH_ROOT" | grep -q "no"; then
  check "KRITISCH" "SSH root login disabled" "PASS" ""
else
  check "KRITISCH" "SSH root login disabled" "FAIL" "Root login still possible"
fi

# TLS certificate
CERT_END=$(echo | openssl s_client -connect ${EXTERNAL_HOST}:443 2>/dev/null | \
  openssl x509 -noout -enddate 2>/dev/null | cut -d= -f2 || true)
if [ -n "$CERT_END" ]; then
  EPOCH_CERT=$(date -d "$CERT_END" +%s 2>/dev/null || echo 0)
  EPOCH_NOW=$(date +%s)
  DAYS_LEFT=$(( (EPOCH_CERT - EPOCH_NOW) / 86400 ))
  if [ "$DAYS_LEFT" -ge 14 ]; then
    check "WARNUNG" "TLS certificate valid (${DAYS_LEFT} days)" "PASS" ""
  else
    check "WARNUNG" "TLS certificate valid" "FAIL" "Only ${DAYS_LEFT} days left"
  fi
fi

# Security headers (Content-Security-Policy is also required by the baseline)
for HEADER in "Strict-Transport-Security" "X-Frame-Options" "X-Content-Type-Options" "Content-Security-Policy"; do
  if curl -sI "https://${EXTERNAL_HOST}" | grep -qi "$HEADER"; then
    check "WARNUNG" "Header: ${HEADER}" "PASS" ""
  else
    check "WARNUNG" "Header: ${HEADER}" "FAIL" "Header missing"
  fi
done

# Backup freshness
LAST_BACKUP=$(ssh deploy@${PROD_HOST} "ls -t /opt/backups/*.gpg 2>/dev/null | head -1" || true)
if [ -n "$LAST_BACKUP" ]; then
  BACKUP_AGE=$(ssh deploy@${PROD_HOST} \
    "echo \$(( (\$(date +%s) - \$(stat -c %Y ${LAST_BACKUP})) / 3600 ))" 2>/dev/null || echo 999)
  if [ "$BACKUP_AGE" -le 26 ]; then
    check "KRITISCH" "Backup current (${BACKUP_AGE}h old)" "PASS" ""
  else
    check "KRITISCH" "Backup current" "FAIL" "${BACKUP_AGE} hours old"
  fi
else
  check "KRITISCH" "Backup current" "FAIL" "No backup found"
fi

# Disk space
DISK=$(ssh deploy@${PROD_HOST} "df -h / | tail -1 | awk '{print \$5}' | tr -d '%'" 2>/dev/null || echo 99)
if [ "$DISK" -le 85 ]; then
  check "WARNUNG" "Disk usage (${DISK}%)" "PASS" ""
else
  check "WARNUNG" "Disk usage" "FAIL" "${DISK}% used"
fi

# -------------------------------------------
# SUPABASE
# -------------------------------------------
REPORT+="\n## Supabase\n\n"

# Containers running
STOPPED=$(ssh deploy@${PROD_HOST} "cd /opt/supabase && docker compose ps --format json 2>/dev/null" | \
  jq -r 'select(.State != "running") | .Name' 2>/dev/null || true)
if [ -z "$STOPPED" ]; then
  check "KRITISCH" "All Supabase containers running" "PASS" ""
else
  check "KRITISCH" "All Supabase containers running" "FAIL" "Stopped: $STOPPED"
fi

# Images pinned (no :latest)
LATEST_IMAGES=$(ssh deploy@${PROD_HOST} "cd /opt/supabase && grep 'image:' docker-compose.yml | grep 'latest'" 2>/dev/null || true)
if [ -z "$LATEST_IMAGES" ]; then
  check "WARNUNG" "All images pinned (no :latest)" "PASS" ""
else
  check "WARNUNG" "All images pinned" "FAIL" ":latest found"
fi

# RLS on all public tables
UNPROTECTED=$(ssh deploy@${PROD_HOST} "cd /opt/supabase && docker compose exec -T postgres psql -U postgres -t -c \
  \"SELECT count(*) FROM pg_tables WHERE schemaname = 'public' AND rowsecurity = false;\"" 2>/dev/null | tr -d ' ' || echo "?")
if [ "$UNPROTECTED" = "0" ]; then
  check "KRITISCH" "RLS active on all public tables" "PASS" ""
elif [ "$UNPROTECTED" != "?" ]; then
  check "KRITISCH" "RLS active on all public tables" "FAIL" "${UNPROTECTED} tables without RLS"
fi

# Postgres internal only
PG_LISTEN=$(ssh deploy@${PROD_HOST} "ss -tlnp | grep 5432" 2>/dev/null || true)
if echo "$PG_LISTEN" | grep -q "0.0.0.0:5432"; then
  check "KRITISCH" "Postgres on internal interface only" "FAIL" "Listening on 0.0.0.0"
else
  check "KRITISCH" "Postgres on internal interface only" "PASS" ""
fi

# Studio not reachable externally
# curl prints "000" via -w even on connection failure, so no extra echo is needed
STUDIO=$(curl -s -o /dev/null -w "%{http_code}" --connect-timeout 3 "https://${EXTERNAL_HOST}:9000" 2>/dev/null || true)
if [ "${STUDIO:-000}" = "000" ]; then
  check "KRITISCH" "Supabase Studio not reachable externally" "PASS" ""
else
  check "KRITISCH" "Supabase Studio not reachable externally" "FAIL" "Status: $STUDIO"
fi

# -------------------------------------------
# NEXT.JS
# -------------------------------------------
REPORT+="\n## Next.js\n\n"

# service_role in client code
SR_LEAK=$(ssh deploy@${PROD_HOST} "cd /opt/app && grep -rn 'SERVICE_ROLE\|service_role' app/ \
  --include='*.ts' --include='*.tsx' 2>/dev/null | grep -v 'lib/supabase/admin.ts' | grep -v node_modules" 2>/dev/null || true)
if [ -z "$SR_LEAK" ]; then
  check "KRITISCH" "service_role not in client code" "PASS" ""
else
  check "KRITISCH" "service_role not in client code" "FAIL" "Found in: $(echo "$SR_LEAK" | head -3)"
fi

# Secrets in NEXT_PUBLIC variables
PUB_SECRET=$(ssh deploy@${PROD_HOST} "cd /opt/app && grep 'NEXT_PUBLIC_' .env* 2>/dev/null | \
  grep -iE 'service_role|secret|private|database'" 2>/dev/null || true)
if [ -z "$PUB_SECRET" ]; then
  check "KRITISCH" "No secrets in NEXT_PUBLIC variables" "PASS" ""
else
  check "KRITISCH" "No secrets in NEXT_PUBLIC variables" "FAIL" "$PUB_SECRET"
fi

# middleware.ts exists and uses getUser
MW_EXISTS=$(ssh deploy@${PROD_HOST} "test -f /opt/app/middleware.ts && echo yes || echo no" 2>/dev/null || echo "no")
if [ "$MW_EXISTS" = "yes" ]; then
  # grep -c exits non-zero on zero matches; "|| true" inside the remote command
  # keeps the "0" count from being overwritten with a second line
  MW_GETUSER=$(ssh deploy@${PROD_HOST} "grep -c 'getUser' /opt/app/middleware.ts || true" 2>/dev/null || echo "0")
  if [ "${MW_GETUSER:-0}" -gt 0 ]; then
    check "KRITISCH" "middleware.ts exists and uses getUser()" "PASS" ""
  else
    check "KRITISCH" "middleware.ts uses getUser()" "FAIL" "getUser() missing"
  fi
else
  check "KRITISCH" "middleware.ts exists" "FAIL" "File missing"
fi

# Server Actions without auth check
SA_NO_AUTH=$(ssh deploy@${PROD_HOST} "cd /opt/app && for file in \$(grep -rl \"'use server'\" app/ --include='*.ts' 2>/dev/null); do
  if ! grep -q 'getUser' \"\$file\"; then echo \"\$file\"; fi
done" 2>/dev/null || true)
if [ -z "$SA_NO_AUTH" ]; then
  check "WARNUNG" "All Server Actions have an auth check" "PASS" ""
else
  check "WARNUNG" "Server Actions without auth" "FAIL" "$SA_NO_AUTH"
fi

# npm audit
NPM_HIGH=$(ssh deploy@${PROD_HOST} "cd /opt/app && npm audit --audit-level=high 2>&1 | \
  grep -c 'high\|critical' || true" 2>/dev/null || echo "0")
if [ "${NPM_HIGH:-0}" = "0" ]; then
  check "WARNUNG" "npm audit: no high/critical findings" "PASS" ""
else
  check "WARNUNG" "npm audit" "FAIL" "${NPM_HIGH} high/critical findings"
fi

# -------------------------------------------
# EDGE FUNCTIONS
# -------------------------------------------
REPORT+="\n## Edge Functions\n\n"

# Webhook functions without signature verification
WH_NO_SIG=$(ssh deploy@${PROD_HOST} "for dir in /opt/supabase/volumes/functions/*-webhook/; do
  [ -d \"\$dir\" ] || continue
  name=\$(basename \"\$dir\")
  if ! grep -qE 'signature|verify|hmac|crypto' \"\$dir/index.ts\" 2>/dev/null; then
    echo \"\$name\"
  fi
done" 2>/dev/null || true)
if [ -z "$WH_NO_SIG" ]; then
  check "KRITISCH" "All webhooks verify signatures" "PASS" ""
else
  check "KRITISCH" "Webhooks without signature verification" "FAIL" "$WH_NO_SIG"
fi

# Hardcoded secrets
FN_SECRETS=$(ssh deploy@${PROD_HOST} "grep -rn 'sk_live\|sk_test\|whsec_\|Bearer ey' \
  /opt/supabase/volumes/functions/ --include='*.ts' 2>/dev/null" || true)
if [ -z "$FN_SECRETS" ]; then
  check "KRITISCH" "No hardcoded secrets in functions" "PASS" ""
else
  check "KRITISCH" "Hardcoded secrets in functions" "FAIL" "$FN_SECRETS"
fi

# -------------------------------------------
# TRIGGER.DEV
# -------------------------------------------
REPORT+="\n## Trigger.dev\n\n"

# Tasks without maxDuration
# (redirections are not valid inside a for-loop word list; the -f guard
#  below already handles an unmatched glob)
TD_NO_DUR=$(ssh deploy@${PROD_HOST} "cd /opt/app && for file in trigger/tasks/*.ts; do
  [ -f \"\$file\" ] || continue
  name=\$(basename \"\$file\" .ts)
  if ! grep -q 'maxDuration' \"\$file\"; then echo \"\$name\"; fi
done" 2>/dev/null || true)
if [ -z "$TD_NO_DUR" ]; then
  check "WARNUNG" "All tasks have maxDuration" "PASS" ""
else
  check "WARNUNG" "Tasks without maxDuration" "FAIL" "$TD_NO_DUR"
fi

# Tasks without concurrency limit
TD_NO_CC=$(ssh deploy@${PROD_HOST} "cd /opt/app && for file in trigger/tasks/*.ts; do
  [ -f \"\$file\" ] || continue
  name=\$(basename \"\$file\" .ts)
  if ! grep -qE 'concurrencyLimit|queue:' \"\$file\"; then echo \"\$name\"; fi
done" 2>/dev/null || true)
if [ -z "$TD_NO_CC" ]; then
  check "WARNUNG" "All tasks have a concurrency limit" "PASS" ""
else
  check "WARNUNG" "Tasks without concurrency limit" "FAIL" "$TD_NO_CC"
fi

# Hardcoded secrets in tasks
TD_SECRETS=$(ssh deploy@${PROD_HOST} "grep -rn 'sk_live\|sk_test\|SG\.\|sk-' \
  /opt/app/trigger/ --include='*.ts' 2>/dev/null" || true)
if [ -z "$TD_SECRETS" ]; then
  check "KRITISCH" "No hardcoded secrets in tasks" "PASS" ""
else
  check "KRITISCH" "Hardcoded secrets in tasks" "FAIL" "$TD_SECRETS"
fi

# Trigger.dev dashboard not reachable externally
TD_DASH=$(curl -s -o /dev/null -w "%{http_code}" --connect-timeout 3 "https://${EXTERNAL_HOST}:3040" 2>/dev/null || true)
if [ "${TD_DASH:-000}" = "000" ]; then
  check "KRITISCH" "Trigger.dev dashboard not reachable externally" "PASS" ""
else
  check "KRITISCH" "Trigger.dev dashboard not reachable externally" "FAIL" "Status: $TD_DASH"
fi

# -------------------------------------------
# COMPLIANCE
# -------------------------------------------
REPORT+="\n## Compliance\n\n"

# Git: no secrets in the repo
GIT_SECRETS=$(ssh deploy@${PROD_HOST} "cd /opt/app && git ls-files .env .env.functions .env.trigger 2>/dev/null" || true)
if [ -z "$GIT_SECRETS" ]; then
  check "KRITISCH" ".env files not tracked in Git" "PASS" ""
else
  check "KRITISCH" ".env files tracked in Git" "FAIL" "Committed: $GIT_SECRETS"
fi

# Git: no uncommitted changes on the server
GIT_DIRTY=$(ssh deploy@${PROD_HOST} "cd /opt/app && git status --porcelain 2>/dev/null" || true)
if [ -z "$GIT_DIRTY" ]; then
  check "WARNUNG" "No manual changes on the server" "PASS" ""
else
  check "WARNUNG" "Manual changes on the server" "FAIL" "$(echo "$GIT_DIRTY" | wc -l) files"
fi

# -------------------------------------------
# SUMMARY
# -------------------------------------------

SUMMARY="\n## Summary\n\n"
SUMMARY+="Passed:   ${PASS}\n"
SUMMARY+="Warnings: ${WARNING}\n"
SUMMARY+="Critical: ${CRITICAL}\n"

TOTAL_REPORT="${SUMMARY}\n${REPORT}"

# Save the report
REPORT_FILE="/opt/audit/reports/baseline-$(date +%Y-%m-%d).md"
echo -e "# Security Baseline Check $(date +%Y-%m-%d)\n${TOTAL_REPORT}" > "$REPORT_FILE"

echo -e "$TOTAL_REPORT"

# Alert on critical findings (subject keeps the KRITISCH label the team monitors)
if [ "$CRITICAL" -gt 0 ]; then
  echo ""
  echo "!!! ${CRITICAL} CRITICAL FINDINGS !!!"
  echo -e "$TOTAL_REPORT" | mail -s "KRITISCH: Baseline Check $(date)" ops@example.com
  exit 1
fi

exit 0

Integration with Claude Code (Article 5)

The baseline check script delivers facts. Claude Code interprets them. The weekly audit from Article 5 uses the baseline report as input:

# In the overall audit script (scripts/full-security-audit.sh from Article 5)
# After the deterministic checks:

# Read today's baseline report
BASELINE_REPORT=$(cat /opt/audit/reports/baseline-$(date +%Y-%m-%d).md 2>/dev/null || echo "No baseline report available")

# Hand over to Claude Code
echo "$BASELINE_REPORT" | claude -p \
  "You receive the baseline check report of our self-hosted stack.
The baseline is defined in security-baseline.yml.

Analyze:
1. Which KRITISCH findings need immediate attention?
2. Do the WARNUNG findings show patterns that point to systematic problems?
3. Have findings changed compared to last week? (Compare with the previous report)
4. Which priorities do you recommend for this week?

Answer in German. Be specific." \
  --allowedTools "Read,Grep,Glob" \
  --output-format text \
  --max-turns 5
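The prompt asks Claude to compare against the previous week, so the audit wrapper has to locate last week's report as well. A hedged sketch of that lookup: since reports follow the `baseline-YYYY-MM-DD.md` convention, a reverse lexicographic sort is also a reverse chronological sort (demo files included so the sketch runs standalone):

```shell
#!/bin/bash
# Sketch: find the current and previous baseline report for the weekly comparison.
set -euo pipefail

REPORT_DIR=$(mktemp -d)   # stands in for /opt/audit/reports
echo "old findings" > "$REPORT_DIR/baseline-2026-03-05.md"
echo "new findings" > "$REPORT_DIR/baseline-2026-03-12.md"

TODAY_REPORT=$(ls "$REPORT_DIR"/baseline-*.md | sort -r | head -1)
PREV_REPORT=$(ls "$REPORT_DIR"/baseline-*.md | sort -r | sed -n '2p')

echo "current:  $(basename "$TODAY_REPORT")"
echo "previous: $(basename "${PREV_REPORT:-none}")"

# The audit wrapper would then concatenate both for the prompt, e.g.:
# { echo "## This week"; cat "$TODAY_REPORT"; echo "## Last week"; cat "$PREV_REPORT"; } | claude -p "..."
```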

Maintaining the Baseline

The security-baseline.yml is not a static document. It evolves with the stack.

When to update:

New table in Supabase            -> review rls.exceptions
New service in the Docker stack  -> extend internal_only_ports
New NEXT_PUBLIC env var          -> extend allowed_public
New secret                       -> extend required secrets
New Edge Function                -> existing rules apply automatically
New Trigger.dev task             -> existing rules apply automatically
Hosting provider change          -> update server/provider

Workflow for baseline changes:

1. Change security-baseline.yml in Git
2. PR review (including justification for why the rule changes)
3. Claude Code reviews the baseline diff itself:
   "Is this relaxation of the rules justified?"
4. Merge after approval
5. Next baseline check uses the new rules

Cron Configuration

# crontab on the audit-runner

# Daily baseline check (06:00)
0 6 * * * /opt/audit/scripts/check-baseline.sh >> /var/log/baseline-check.log 2>&1

# Weekly Claude Code audit (Sunday 07:00, after the baseline check)
0 7 * * 0 /opt/audit/scripts/full-security-audit.sh >> /var/log/security-audit.log 2>&1
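A cron job that silently stops running looks exactly like "no findings". A hedged watchdog sketch guards against that by checking that a fresh report actually exists; the temp directory stands in for `/opt/audit/reports`, and the 30-hour threshold (daily job plus slack) is an assumption:

```shell
#!/bin/bash
# Watchdog sketch for the audit-runner: verify a recent baseline report exists.
set -euo pipefail

REPORT_DIR=$(mktemp -d)                       # stands in for /opt/audit/reports
MAX_AGE_HOURS=30                              # daily job + slack
touch "$REPORT_DIR/baseline-2026-03-12.md"    # demo report, just created

NEWEST=$(ls -t "$REPORT_DIR"/baseline-*.md 2>/dev/null | head -1 || true)
if [ -z "$NEWEST" ]; then
  STATUS="FAIL: no baseline reports found"
else
  AGE_H=$(( ( $(date +%s) - $(stat -c %Y "$NEWEST") ) / 3600 ))
  if [ "$AGE_H" -le "$MAX_AGE_HOURS" ]; then
    STATUS="OK: newest report is ${AGE_H}h old"
  else
    STATUS="FAIL: newest report is ${AGE_H}h old"
  fi
fi
echo "$STATUS"
```

On FAIL, the same alert channel as the baseline check (mail to ops) would be the natural escalation path.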

Deployment Checklist

Baseline File
  [ ] security-baseline.yml present in the repository
  [ ] All server IPs and hostnames up to date
  [ ] All expected secrets listed (without values)
  [ ] All allowed NEXT_PUBLIC variables listed
  [ ] All service_role-permitted files listed
  [ ] RLS exceptions documented (with justification)
  [ ] Compliance rules (CLOUD Act, data residency) defined

Check Script
  [ ] check-baseline.sh executable on audit-runner
  [ ] SSH key from audit-runner to prod server configured
  [ ] Firewall baseline saved (/opt/baselines/)
  [ ] Daily cron job active
  [ ] Alert delivery configured (email on CRITICAL)

Integration
  [ ] Claude Code installed on audit-runner
  [ ] Weekly audit cron active
  [ ] CLAUDE.md present in the repository (Article 5)
  [ ] .claude/commands/security-review.md present (Article 5)
  [ ] Reports are archived (/opt/audit/reports/)

Conclusion

The security baseline is the connecting element of the entire runbook series. Articles 1-4 define what needs to be securely configured. Article 5 defines how Claude Code checks those rules. Article 6 consolidates everything into a single YAML file that can be read by both scripts and Claude Code.

The daily baseline check delivers facts: passed or failed. The weekly Claude Code audit interprets those facts in context. A human decides what gets prioritized. This three-layer model covers both known and unexpected risks without any single layer being solely responsible for security.

Teams that pair this baseline with a Cert-Ready-by-Design architecture build verifiable security instead of retroactive audits.

With this, the DevOps runbook series is complete.

Series Audit Checklists

Prepared prompts for Claude Code. Each checklist automatically verifies the security points from its respective runbook and reports PASS, WARNING, or CRITICAL.

Bert Gogolin
CEO & Founder, Gosign

Frequently Asked Questions

What is the security baseline?

The security baseline is a YAML file in the infrastructure repository that defines the target state of the entire stack. It is checked automatically every day and analyzed contextually by Claude Code every week.

Does the baseline replace the individual runbooks?

No. The baseline consolidates the rules from articles 1 through 5 into a single machine-readable file. The runbooks remain as reference documentation.

What happens when the baseline check finds a critical violation?

The check script exits with a non-zero code and sends an alert email to the ops team. In CI/CD pipelines, the deployment gate blocks the release until the critical finding is resolved. Weekly Claude Code analysis then prioritizes the finding alongside other results.