# Zitadel AKS Deployment Guide

Deploy Zitadel on Azure Kubernetes Service (AKS) using an existing Azure PostgreSQL database.
## Prerequisites

- AKS cluster with `kubectl` access
- Azure PostgreSQL Flexible Server (v14+) running and accessible from the AKS cluster
- A domain for Zitadel (e.g. `auth-dev.exto360.com`)
- A TLS certificate (wildcard or domain-specific) for the domain
- Helm 3 installed
- The Azure PostgreSQL server admin credentials (used once to bootstrap the `zitadel` role)
## 1. Add AKS firewall rule to Azure PostgreSQL
Azure PostgreSQL blocks external connections by default. Allow your AKS cluster's outbound IP:
```bash
# Get AKS outbound IP
az aks show -g <your-rg> -n <your-aks-cluster> \
  --query "networkProfile.loadBalancerProfile.effectiveOutboundIPs[].id" -o tsv \
  | xargs -I {} az network public-ip show --ids {} --query ipAddress -o tsv

# Add firewall rule
az postgres flexible-server firewall-rule create \
  --resource-group <your-rg> \
  --name <your-pg-server> \
  --rule-name AllowAKS \
  --start-ip-address <aks-outbound-ip> \
  --end-ip-address <aks-outbound-ip>
```

## 2. Prepare Azure PostgreSQL
Run a one-time pod to create the `zitadel` role and database using the server admin credentials:
```bash
kubectl -n <namespace> run pg-setup --rm -it --restart=Never \
  --image=postgres:16 -- psql \
  "postgresql://<admin-user>:<admin-password>@<server>.postgres.database.azure.com:5432/postgres?sslmode=require" \
  -c "CREATE ROLE zitadel LOGIN PASSWORD '<strong-password>';" \
  -c "CREATE DATABASE zitadel OWNER zitadel;" \
  -c "GRANT ALL PRIVILEGES ON DATABASE zitadel TO zitadel;"
```

The server admin credentials are only used here. Zitadel runs with the dedicated `zitadel` role.
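If the setup pod hangs instead of connecting, the firewall rule from step 1 is the usual suspect. A minimal TCP reachability probe you can run from any pod with Python (the `can_reach` helper is illustrative, not part of Zitadel or the chart):

```python
import socket

def can_reach(host: str, port: int, timeout: float = 5.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Probe the Azure PostgreSQL endpoint (placeholder hostname):
# can_reach("<server>.postgres.database.azure.com", 5432)
```

A `False` result means the connection is blocked (firewall, NSG, or DNS) before credentials are even checked.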
## 3. Create Kubernetes namespace
```bash
kubectl create namespace console-dev
# or console-prod for production
```

## 4. Create secrets
Generate a masterkey (32 characters, used for encryption at rest — save it; it cannot be changed later):
```bash
head -c 16 /dev/urandom | xxd -p
```

Create the masterkey secret:
```bash
kubectl -n console-dev create secret generic zitadel-masterkey \
  --from-literal=masterkey="<generated-masterkey>"
```

Create a TLS secret from your certificate:
```bash
kubectl -n console-dev create secret tls <your-tls-secret-name> \
  --cert=/path/to/fullchain.pem \
  --key=/path/to/privkey.pem
```

## 5. Create values file
Create `zitadel-dev-values.yaml`:
```yaml
replicaCount: 1 # increase to 2 for prod

zitadel:
  masterkeySecretName: zitadel-masterkey

  # Database credentials — Helm creates a K8s Secret from this
  secretConfig:
    Database:
      Postgres:
        Admin:
          Username: zitadel
          Password: <zitadel-db-password>
        User:
          Username: zitadel
          Password: <zitadel-db-password>

  configmapConfig:
    DefaultInstance:
      InstanceAggregateTimeout: 120s # increase for slow Azure PG
    ExternalDomain: "auth-dev.exto360.com" # your Zitadel domain
    ExternalSecure: true
    ExternalPort: 443
    TLS:
      Enabled: false # TLS terminated at ingress
    Database:
      Postgres:
        Host: <your-server>.postgres.database.azure.com
        Port: 5432
        Database: zitadel
        MaxOpenConns: 15
        MaxIdleConns: 10
        MaxConnLifetime: 1h
        MaxConnIdleTime: 5m
        Admin:
          ExistingDatabase: zitadel # skip CREATE DATABASE (we pre-created it)
          SSL:
            Mode: require
        User:
          SSL:
            Mode: require
    FirstInstance:
      Org:
        Human:
          UserName: "admin"
          Email: "admin@exto360.com"
          Password: "ChangeMe123!"
          PasswordChangeRequired: true

# Allow setup job retries (Azure PG can be slow)
setupJob:
  backoffLimit: 5

# Disable the bundled PostgreSQL — we use Azure PostgreSQL
postgresql:
  enabled: false

ingress:
  enabled: true
  className: nginx
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
  hosts:
    - host: auth-dev.exto360.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: <your-tls-secret-name>
      hosts:
        - auth-dev.exto360.com

login:
  ingress:
    enabled: true
    className: nginx
    hosts:
      - host: auth-dev.exto360.com
        paths:
          - path: /ui/v2/login
            pathType: Prefix
    tls:
      - secretName: <your-tls-secret-name>
        hosts:
          - auth-dev.exto360.com

podDisruptionBudget:
  enabled: true
  minAvailable: 1
```

### Key configuration notes
- `secretConfig` — not env vars or a DSN. The Helm chart's init job reads from the merged config, not from env vars. The DSN approach (`ZITADEL_DATABASE_POSTGRES_DSN`) does not work for the init job.
- `Admin.ExistingDatabase: zitadel` — tells the init job the database already exists, skipping `CREATE DATABASE` (which requires the `CREATEDB` privilege that Azure PG doesn't grant to regular roles).
- `InstanceAggregateTimeout: 120s` — Azure PG latency can cause the default instance creation to time out. Increase this for remote databases.
- `setupJob.backoffLimit: 5` — allows the setup job to retry if it times out.
- No `backend-protocol: "GRPC"` annotation — Zitadel's console uses gRPC-web over HTTP/1.1. The `GRPC` annotation causes CSP errors by leaking nginx's internal `upstream_balancer` hostname into response headers.
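As a lightweight guard against typos in the values file, the key paths called out above can be checked before installing. A sketch assuming you have already loaded the YAML into a dict (for example with `yaml.safe_load`); the helper and path list are my own, not part of the chart:

```python
# Key paths (mirroring this guide's values file) that the init/setup jobs
# depend on. Each entry is a tuple of nested mapping keys.
REQUIRED_PATHS = [
    ("zitadel", "masterkeySecretName"),
    ("zitadel", "secretConfig", "Database", "Postgres", "User", "Username"),
    ("zitadel", "configmapConfig", "ExternalDomain"),
    ("zitadel", "configmapConfig", "Database", "Postgres", "Host"),
]

def missing_paths(values: dict) -> list:
    """Return the dotted key paths from REQUIRED_PATHS absent in `values`."""
    missing = []
    for path in REQUIRED_PATHS:
        node = values
        for key in path:
            if not isinstance(node, dict) or key not in node:
                missing.append(".".join(path))
                break
            node = node[key]
    return missing
```

An empty return value means the paths exist; it does not validate the values themselves.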
### Ingress notes
- **Traefik**: Replace `className: nginx` with `className: traefik` and use Traefik-specific annotations:

  ```yaml
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
  ```

- **cert-manager**: If using cert-manager instead of a pre-existing TLS cert, add `cert-manager.io/cluster-issuer: <issuer-name>` to the ingress annotations.
## 6. Install Zitadel
```bash
helm repo add zitadel https://charts.zitadel.com
helm repo update

helm install zitadel zitadel/zitadel \
  --namespace console-dev \
  --values zitadel-dev-values.yaml \
  --wait --timeout 15m
```

The `--wait` flag ensures Helm waits for the init and setup jobs to complete before returning.
## 7. Verify
```bash
# Watch pods come up
kubectl -n console-dev get pods -w

# Check init and setup jobs completed
kubectl -n console-dev get jobs

# Verify setup created the admin user (should return rows)
kubectl -n console-dev logs job/zitadel-setup -c zitadel-setup | grep "03_default_instance"
```

Open https://auth-dev.exto360.com/ui/console in your browser.
Login: `admin@zitadel.auth-dev.exto360.com` / `ChangeMe123!`

Zitadel creates a default org named `zitadel` with domain `zitadel.<ExternalDomain>`. The admin login name uses that org domain.

**Change the admin password immediately.**
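The login name is not arbitrary: it can be derived from `ExternalDomain` and the default org. A trivial sketch of that rule (helper name is illustrative):

```python
def admin_login_name(username: str, external_domain: str,
                     default_org: str = "zitadel") -> str:
    """Zitadel login names are <username>@<org-domain>; the default org's
    domain is <default_org>.<ExternalDomain>."""
    return f"{username}@{default_org}.{external_domain}"

print(admin_login_name("admin", "auth-dev.exto360.com"))
# admin@zitadel.auth-dev.exto360.com
```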
## 8. Run Console bootstrap
Once Zitadel is running, follow the Bootstrap Guide to create the Console OIDC apps and service account:

- Create a `console-admin` machine user in the Zitadel Console with the `IAM_OWNER` role
- Generate a PAT for it
- Create `.bootstrap-input.env`:

  ```env
  ZITADEL_URL=https://auth-dev.exto360.com
  ZITADEL_PAT=<the-pat>
  CONSOLE_URL=https://console-dev.exto360.com
  DEVELOPMENT_MODE=false
  ```

- Run `make bootstrap`
- Copy values from `.bootstrap.env` into Console's Kustomize overlay patches
- Delete the `console-admin` machine user (the PAT is not needed at runtime)
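`.bootstrap-input.env` and `.bootstrap.env` use plain `KEY=VALUE` lines. If you script the copy into Kustomize patches, a minimal parser is enough; a sketch assuming no quoting or multiline values (the helper is my own, not part of the bootstrap tooling):

```python
def parse_env_file(text: str) -> dict:
    """Parse KEY=VALUE lines; blank lines and # comments are ignored.
    Values may contain '=' (only the first '=' splits key from value)."""
    result = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        result[key.strip()] = value.strip()
    return result

demo = parse_env_file("ZITADEL_URL=https://auth-dev.exto360.com\nDEVELOPMENT_MODE=false\n")
print(demo["DEVELOPMENT_MODE"])  # false
```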
## Production differences

| Setting | Dev | Prod |
|---|---|---|
| Namespace | `console-dev` | `console-prod` |
| `ExternalDomain` | `auth-dev.exto360.com` | `auth.exto360.com` |
| `replicaCount` | 1 | 2 |
| `SSL.Mode` | `require` | `verify-full` |
| Admin password | temporary | strong, change immediately |
For prod, create a separate `zitadel-values-prod.yaml` with the production domain and replica count.
## Upgrading Zitadel
```bash
helm repo update
helm upgrade zitadel zitadel/zitadel \
  --namespace console-prod \
  --values zitadel-values-prod.yaml \
  --wait --timeout 15m
```

Zitadel handles database migrations automatically on startup.
## Clean reinstall
If you need to start fresh (e.g. after a failed setup), drop all Zitadel schemas and reinstall:
```bash
# Drop all schemas from the zitadel database
kubectl -n console-dev run pg-reset --rm -it --restart=Never \
  --image=postgres:16 -- psql \
  "postgresql://zitadel:<password>@<server>.postgres.database.azure.com:5432/zitadel?sslmode=require" \
  -c "DROP SCHEMA IF EXISTS eventstore CASCADE;" \
  -c "DROP SCHEMA IF EXISTS projections CASCADE;" \
  -c "DROP SCHEMA IF EXISTS system CASCADE;" \
  -c "DROP SCHEMA IF EXISTS auth CASCADE;" \
  -c "DROP SCHEMA IF EXISTS adminapi CASCADE;" \
  -c "DROP SCHEMA IF EXISTS cache CASCADE;" \
  -c "DROP SCHEMA IF EXISTS logstore CASCADE;" \
  -c "DROP SCHEMA IF EXISTS queue CASCADE;" \
  -c "DROP SCHEMA IF EXISTS public CASCADE;" \
  -c "CREATE SCHEMA public;" \
  -c "GRANT ALL ON SCHEMA public TO zitadel;"

# Reinstall
helm uninstall zitadel --namespace console-dev
helm install zitadel zitadel/zitadel \
  --namespace console-dev \
  --values zitadel-dev-values.yaml \
  --wait --timeout 15m
```

## Troubleshooting
| Issue | Fix |
|---|---|
| `permission denied to create database` | Set `Admin.ExistingDatabase: zitadel` in config — Azure PG doesn't grant `CREATEDB` to regular roles |
| `failed SASL auth` / `no pg_hba.conf entry` | SSL not configured. Add `Admin.SSL.Mode: require` and `User.SSL.Mode: require` |
| `hostname resolving error: lookup port=5432` | Database `Host` is empty in config. Don't rely on a DSN — set `Host` explicitly in `configmapConfig.Database.Postgres` |
| `user=postgres database=postgres` in error | Credentials not reaching the init job. Use `secretConfig` (not env vars) for database credentials |
| `duplicate key ... unique_constraints_pkey` | Previous partial setup left data. Drop schemas and reinstall (see Clean reinstall above) |
| `context deadline exceeded` on `03_default_instance` | Azure PG too slow. Increase `DefaultInstance.InstanceAggregateTimeout` (e.g. 120s) and `setupJob.backoffLimit` |
| CSP errors / `upstream_balancer` in console | Remove the `nginx.ingress.kubernetes.io/backend-protocol: "GRPC"` annotation — gRPC-web works over HTTP/1.1 |
| `relation "eventstore.events" does not exist` | Init job didn't run. Check `kubectl get jobs` and the job logs |
| Pods stuck in `CrashLoopBackOff` | Check logs — usually a DB connection issue. Verify firewall rules allow AKS to reach Azure PostgreSQL |
| Console loads but login fails | Check `ExternalDomain` and `ExternalSecure` match your ingress. Zitadel is strict about issuer URL matching |
| Admin user not found after setup | Check `eventstore.unique_constraints` for `user_name` entries. If empty, the `03_default_instance` migration failed — clean and reinstall |
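On the issuer-mismatch row: OIDC clients reject tokens whose `iss` claim differs from the issuer they discovered. Roughly, Zitadel's issuer follows from `ExternalSecure`, `ExternalDomain`, and `ExternalPort`; a simplified approximation (not Zitadel's actual code) that shows why all three must match the ingress:

```python
def expected_issuer(external_domain: str, external_secure: bool = True,
                    external_port: int = 443) -> str:
    """Approximate issuer URL; default ports (443/80) are omitted."""
    scheme = "https" if external_secure else "http"
    default_port = 443 if external_secure else 80
    if external_port == default_port:
        return f"{scheme}://{external_domain}"
    return f"{scheme}://{external_domain}:{external_port}"

print(expected_issuer("auth-dev.exto360.com"))  # https://auth-dev.exto360.com
```

Compare this against the `issuer` field of `https://<ExternalDomain>/.well-known/openid-configuration` when debugging.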

