Production-ready Helm chart for PgDog with high availability, security, and resource management features.
✅ Resource limits with guaranteed QoS (1GB:1CPU ratio)
✅ PodDisruptionBudget for high availability
✅ Pod anti-affinity for spreading across nodes
✅ ExternalSecrets integration for secure credential management
✅ ServiceAccount and RBAC with minimal permissions
✅ Pinned image versions for production deployments
- Install Helm
- Configure `kubectl` to point to your K8s cluster
- Add our Helm repository: `helm repo add pgdogdev https://helm.pgdog.dev`
- Configure databases and users in `values.yaml` (a minimal example is shown below)
- Install: `helm install <name> pgdogdev/pgdog -f values.yaml`

All resources will be created in the `<name>` namespace.
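For example, a minimal `values.yaml` for the last two steps might look like the sketch below. The host and credentials are placeholders; the individual settings are described in the sections that follow.

```yaml
# Minimal example values.yaml (placeholder values) -- see the sections
# below for the full set of supported settings.
databases:
  - name: "prod"
    host: "10.0.0.1"

users:
  - name: "alice"
    database: "prod"
    password: "hunter2" # Prefer ExternalSecrets for real deployments
```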
Configuration is done via `values.yaml`. All PgDog settings from `pgdog.toml` and `users.toml` are supported. General settings (the `[general]` section) are top level. Use camelCase instead of snake_case, for example: `checkout_timeout` becomes `checkoutTimeout`.
```yaml
workers: 2
defaultPoolSize: 15
openMetricsPort: 9090
```
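To make the naming convention concrete, a `[general]` option like `checkout_timeout` from `pgdog.toml` is written as a top-level camelCase key in `values.yaml`; the value below is only illustrative.

```yaml
# pgdog.toml: checkout_timeout = 5000 (illustrative value)
# values.yaml equivalent:
checkoutTimeout: 5000
```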
Pin to a specific version for production deployments:

```yaml
image:
  repository: ghcr.io/pgdogdev/pgdog
  tag: "v1.2.3" # Pin to specific version
  pullPolicy: IfNotPresent
```
Legacy format (still supported for backward compatibility):

```yaml
image:
  name: ghcr.io/pgdogdev/pgdog:main
  pullPolicy: Always
```
Add databases to the `databases` list:

```yaml
databases:
  - name: "prod"
    host: "10.0.0.1"
```
Add users to the `users` list:

```yaml
users:
  - name: "alice"
    database: "prod"
    password: "hunter2" # See ExternalSecrets for secure storage
```
Add mirrors to the `mirrors` list. For example:

```yaml
mirrors:
  - sourceDb: "prod"
    destinationDb: "staging"
```
The PodDisruptionBudget ensures a minimum number of pods stays available during voluntary disruptions (enabled by default):

```yaml
podDisruptionBudget:
  enabled: true
  minAvailable: 1 # At least 1 pod always available
```
Pod anti-affinity spreads pods across nodes for better reliability (enabled by default):

```yaml
podAntiAffinity:
  enabled: true
  type: soft # "soft" (preferred) or "hard" (required)
```
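With `soft`, the scheduler prefers separate nodes but will still co-locate pods if it has to. `hard` turns the spread into a requirement, so pods stay Pending when not enough separate nodes are available. For example:

```yaml
# Require each PgDog pod to land on a different node.
# Pods remain Pending if no eligible node is available.
podAntiAffinity:
  enabled: true
  type: hard
```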
Securely manage credentials using the ExternalSecrets operator:

Option 1: Create an ExternalSecret with the chart:

```yaml
externalSecrets:
  enabled: true
  create: true
  secretStoreRef:
    name: aws-secrets-manager
    kind: SecretStore
  remoteRefs:
    - secretKey: users.toml
      remoteRef:
        key: pgdog/users
```
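The `secretStoreRef` above points at a `SecretStore` managed by the ExternalSecrets operator, which is created outside this chart. A minimal sketch, assuming AWS Secrets Manager with service-account (JWT) authentication; the store name, region, service account, and API version are placeholders that depend on your operator release:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: aws-secrets-manager
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1
      auth:
        jwt:
          serviceAccountRef:
            name: pgdog
```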
Option 2: Use an existing ExternalSecret:

```yaml
externalSecrets:
  enabled: true
  create: false
  name: "platform-managed-secret"
  secretName: "my-secret" # Name of the Secret it creates
```
A ServiceAccount and RBAC with minimal permissions are enabled by default:

```yaml
serviceAccount:
  create: true
  annotations: {}
rbac:
  create: true
```
Default resources use Guaranteed QoS with 1GB:1CPU ratio:

```yaml
resources:
  requests:
    cpu: 1000m # 1 CPU
    memory: 1Gi # 1GB
  limits:
    cpu: 1000m
    memory: 1Gi
```
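To give PgDog more headroom while keeping Guaranteed QoS, scale requests and limits together and keep them equal; the sizes below are only an example.

```yaml
# Example: a larger Guaranteed-QoS footprint (values are illustrative).
resources:
  requests:
    cpu: 2000m
    memory: 2Gi
  limits:
    cpu: 2000m   # equal requests and limits keep the pod in Guaranteed QoS
    memory: 2Gi
```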
Prometheus metrics can be collected with a sidecar. Enable it by configuring `prometheusPort`:

```yaml
prometheusPort: 9091

# Resources for Prometheus sidecar
prometheusResources:
  requests:
    cpu: 100m
    memory: 100Mi
  limits:
    cpu: 100m
    memory: 100Mi
```

Make sure it's different from `openMetricsPort`. You can configure Prometheus in `templates/prom/config.yaml`.
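As an illustration only, the sidecar's scrape configuration would typically point Prometheus at PgDog's OpenMetrics endpoint on `openMetricsPort` (9090 in the example above); the template shipped in `templates/prom/config.yaml` is authoritative and may differ.

```yaml
# Hypothetical Prometheus scrape config fragment for the PgDog sidecar.
scrape_configs:
  - job_name: pgdog
    static_configs:
      - targets: ["localhost:9090"] # openMetricsPort
```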
To send metrics to Grafana Cloud or a Grafana instance, configure the remote write settings:
```yaml
grafanaRemoteWrite:
  url: "https://prometheus-prod-XX-XXX.grafana.net/api/prom/push"
  basicAuth:
    username: "123456" # Grafana Cloud user ID
    password: "your-api-key" # Grafana Cloud API key
  queueConfig:
    capacity: 10000
    maxShards: 50
    minShards: 1
    maxSamplesPerSend: 5000
    batchSendDeadline: 5s
    minBackoff: 30ms
    maxBackoff: 5s
```

The `queueConfig` settings use Prometheus defaults and can be tuned for performance. Remote write is automatically enabled when `url` is set.
Contributions are welcome. Please open a pull request or an issue with your proposed changes.
MIT