Console — Azure Deployment Guide
Deploy Console (API + Worker + Web SPA) on Azure Container Apps with Bicep.
Architecture
```
┌──────────────────────────────────────────────┐
│               Azure Front Door               │
│                                              │
│  ┌─ console.exto360.com ─────────────────┐  │
│  │ /api/*  → console-api (Container App) │  │
│  │ /*      → console-web (Storage Blob)  │  │
│  └────────────────────────────────────────┘  │
│                                              │
│  ┌─ id.exto360.com ──────────────────────┐  │
│  │ /ui/v2/login/*  → zitadel-login (CA)  │  │
│  │ /*              → zitadel (CA)        │  │
│  └────────────────────────────────────────┘  │
└──────┬──────────────────────────┬────────────┘
       │                          │
   ┌───┘                          └──────┐
   ▼                                      ▼
┌─────────────────────┐         ┌───────────────────┐
│ Container Apps Env  │         │  Storage Account  │
│                     │         │ (Static Website)  │
│  ┌──────────────┐   │         │                   │
│  │ console-api  │   │         │  Web SPA (React)  │
│  └──────────────┘   │         └───────────────────┘
│  ┌──────────────┐   │
│  │console-worker│   │
│  └──────────────┘   │
│  ┌──────────────┐   │
│  │   zitadel    │   │
│  └──────────────┘   │
│  ┌──────────────┐   │
│  │zitadel-login │   │
│  └──────────────┘   │
└─────────┬───────────┘
          │
┌─────────▼─────────────┐     ┌──────────────┐
│  PostgreSQL Flexible  │     │  Key Vault   │
│  Server               │     │              │
│  ├─ zitadel_auth      │     │  (instance   │
│  └─ console           │     │   tokens)    │
└───────────────────────┘     └──────────────┘
```

Key: Each custom domain has its own Front Door endpoint. `id.exto360.com` and `console.exto360.com` resolve to different `*.azurefd.net` hostnames. Front Door uses host-based routing to direct traffic to the correct origin group.
Prerequisites
- Azure CLI (`az`) installed and logged in
- Go 1.22+ (for the bootstrap script)
- Docker (for building images and running Zitadel init)
- Access to the `gaeadev` ACR (in the GaeaGlobal subscription)
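A quick sanity check of the tooling before starting (a minimal sketch — the ACR name comes from the prerequisites above; adjust if your setup differs):

```bash
# Verify the CLI tooling and Azure access used throughout this guide
az account show --query user.name -o tsv           # logged-in identity
go version                                         # expect go1.22 or newer
docker version --format '{{.Server.Version}}'      # Docker daemon reachable
az acr show -n gaeadev --query loginServer -o tsv  # ACR visible from your current context
```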
Phase 1 — Build and Push Docker Images
Build images for linux/amd64 before starting infrastructure deployment:
```bash
make docker-all REGISTRY=gaeadev.azurecr.io TAG=dev-latest

# Or individually
make docker-api REGISTRY=gaeadev.azurecr.io TAG=dev-latest
make docker-worker REGISTRY=gaeadev.azurecr.io TAG=dev-latest
make docker-web REGISTRY=gaeadev.azurecr.io TAG=dev-latest
```

Apple Silicon: Add `--platform linux/amd64` or set `DOCKER_DEFAULT_PLATFORM=linux/amd64`.
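For example, the platform override can be exported once for the whole build session (a small sketch using the same make targets as above):

```bash
# Force amd64 image builds for every docker build in this shell
export DOCKER_DEFAULT_PLATFORM=linux/amd64
make docker-all REGISTRY=gaeadev.azurecr.io TAG=dev-latest
```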
Push to ACR:
```bash
az acr login -n gaeadev --subscription <GaeaGlobal-subscription-id>
docker push gaeadev.azurecr.io/console-api:dev-latest
docker push gaeadev.azurecr.io/console-worker:dev-latest
docker push gaeadev.azurecr.io/console-web:dev-latest
```

Phase 2 — Infrastructure Deployment
2.1 Prepare secrets file
Copy the example and fill in secrets:
```bash
cp infra/.env.prod.example infra/.env.prod
```

Required variables for the infra phase:
| Variable | Description |
|---|---|
| `AZURE_TENANT_ID` | Azure AD tenant ID |
| `ACR_RESOURCE_ID` | Full resource ID of the ACR (cross-subscription) |
| `POSTGRES_ADMIN_PASSWORD` | PostgreSQL admin password |
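A minimal sketch of what `infra/.env.prod` looks like at this stage — placeholder values only, and the resource group segment of the ACR resource ID is an assumption:

```bash
# infra/.env.prod (infra phase) — placeholders, never commit this file
AZURE_TENANT_ID=00000000-0000-0000-0000-000000000000
ACR_RESOURCE_ID=/subscriptions/<GaeaGlobal-subscription-id>/resourceGroups/<acr-rg>/providers/Microsoft.ContainerRegistry/registries/gaeadev
POSTGRES_ADMIN_PASSWORD=<strong-random-password>
```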
2.2 Deploy infrastructure
```bash
./infra/deploy.sh prod --phase infra
```

This creates: Resource Group, PostgreSQL Flexible Server, VNet, Key Vault, Storage Account, Log Analytics, Managed Identity, Container Apps Environment, and ACR Pull role assignment.
2.3 Pre-configure DNS validation records
After the infra phase completes, retrieve DNS values and create records before deploying Zitadel or Console. This prevents custom domain validation failures in later phases.
```bash
# ── Container Apps domain verification (asuid TXT — same for both domains) ──
ASUID=$(az containerapp env show -n console-prod-env -g gg-ex-prod-console \
--query "properties.customDomainConfiguration.customDomainVerificationId" -o tsv)
# ── Front Door endpoints (each domain has its own endpoint) ──
FD_ID_HOST=$(az afd endpoint list --profile-name console-prod-fd -g gg-ex-prod-console \
--query "[?starts_with(name,'id-')].hostName" -o tsv 2>/dev/null || echo "N/A — deploy zitadel_frontdoor first")
FD_CONSOLE_HOST=$(az afd endpoint list --profile-name console-prod-fd -g gg-ex-prod-console \
--query "[?starts_with(name,'console-')].hostName" -o tsv 2>/dev/null || echo "N/A — deploy zitadel_frontdoor first")
# ── Front Door domain validation tokens (_dnsauth TXT) ──
DNSAUTH_ID=$(az afd custom-domain show --profile-name console-prod-fd -g gg-ex-prod-console \
--custom-domain-name id-exto360-com \
--query "validationProperties.validationToken" -o tsv 2>/dev/null || echo "N/A — deploy zitadel_frontdoor first")
DNSAUTH_CONSOLE=$(az afd custom-domain show --profile-name console-prod-fd -g gg-ex-prod-console \
--custom-domain-name console-exto360-com \
--query "validationProperties.validationToken" -o tsv 2>/dev/null || echo "N/A — deploy 'all' phase first")
echo ""
echo "=== id.exto360.com ==="
echo "CNAME id.exto360.com → $FD_ID_HOST"
echo "TXT _dnsauth.id.exto360.com → $DNSAUTH_ID"
echo "TXT asuid.id.exto360.com → $ASUID"
echo ""
echo "=== console.exto360.com ==="
echo "CNAME console.exto360.com → $FD_CONSOLE_HOST"
echo "TXT _dnsauth.console.exto360.com → $DNSAUTH_CONSOLE"
echo "TXT asuid.console.exto360.com → $ASUID"Each domain requires three DNS records. Not all are available immediately — they become available as you progress through deployment phases:
| Type | Name | Value | Available after |
|---|---|---|---|
| TXT | asuid.id.exto360.com | <customDomainVerificationId> | --phase infra |
| TXT | asuid.console.exto360.com | <customDomainVerificationId> | --phase infra |
| TXT | _dnsauth.id.exto360.com | <validationToken> | --phase zitadel_frontdoor |
| CNAME | id.exto360.com | <id-endpoint>.azurefd.net | --phase zitadel_frontdoor |
| CNAME | console.exto360.com | <console-endpoint>.azurefd.net | --phase zitadel_frontdoor |
| TXT | _dnsauth.console.exto360.com | <validationToken> | --phase all (1st attempt) |
Note: The `_dnsauth` tokens are only available after the Front Door custom domain resource is created. For `console.exto360.com`, the first `--phase all` attempt will fail with the required token in the error message — create the `_dnsauth` TXT record and retry.
Tip: The `asuid` value is the same for all custom domains on the same Container Apps environment. Each domain has its own Front Door endpoint — `id.exto360.com` and `console.exto360.com` point to different `*.azurefd.net` hostnames. Create all records as early as possible to avoid deploy failures.
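If the `exto360.com` zone happens to be hosted in Azure DNS, the records can be created from the same shell session (a sketch — `<dns-rg>` is a placeholder for the zone's resource group; skip this if DNS is managed elsewhere):

```bash
# asuid TXT records (same value for both domains)
az network dns record-set txt add-record -g <dns-rg> -z exto360.com -n asuid.id -v "$ASUID"
az network dns record-set txt add-record -g <dns-rg> -z exto360.com -n asuid.console -v "$ASUID"

# CNAMEs to the per-domain Front Door endpoints
az network dns record-set cname set-record -g <dns-rg> -z exto360.com -n id -c "$FD_ID_HOST"
az network dns record-set cname set-record -g <dns-rg> -z exto360.com -n console -c "$FD_CONSOLE_HOST"

# _dnsauth validation tokens (create these once the tokens are available)
az network dns record-set txt add-record -g <dns-rg> -z exto360.com -n _dnsauth.id -v "$DNSAUTH_ID"
az network dns record-set txt add-record -g <dns-rg> -z exto360.com -n _dnsauth.console -v "$DNSAUTH_CONSOLE"
```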
Verify DNS propagation:
```bash
dig TXT asuid.id.exto360.com +short
dig TXT asuid.console.exto360.com +short
dig TXT _dnsauth.id.exto360.com +short
dig TXT _dnsauth.console.exto360.com +short
dig CNAME id.exto360.com +short
dig CNAME console.exto360.com +short
```

2.4 ACR Pull role (cross-subscription)
The Bicep template automatically assigns AcrPull on the ACR. If the managed identity is in a different subscription than the ACR, verify the role exists:
```bash
az role assignment list \
--assignee $(az identity show -g gg-ex-prod-console -n console-prod-id --query principalId -o tsv) \
--scope <ACR_RESOURCE_ID> \
--role AcrPull -o table
```

If missing, create manually:
```bash
az role assignment create \
--assignee-object-id $(az identity show -g gg-ex-prod-console -n console-prod-id --query principalId -o tsv) \
--assignee-principal-type ServicePrincipal \
--role AcrPull \
--scope <ACR_RESOURCE_ID>
```

Phase 3 — Database Setup
3.1 Allow your IP through the PostgreSQL firewall
```bash
az postgres flexible-server firewall-rule create \
--resource-group gg-ex-prod-console \
--name console-prod-pg \
--rule-name AllowMyIP \
--start-ip-address <YOUR_IP> \
--end-ip-address <YOUR_IP>
```

3.2 Connect and create roles/databases
Note: The `azure.extensions` allow-list (for `pgcrypto`) is configured automatically by the Bicep template during infra deployment.
```bash
psql "postgresql://consoleadmin:<POSTGRES_ADMIN_PASSWORD>@console-prod-pg.postgres.database.azure.com:5432/postgres?sslmode=require"
```

Run the following SQL:
```sql
-- Zitadel database (created by Bicep, just set owner)
CREATE ROLE zitadel LOGIN PASSWORD '<ZITADEL_DB_PASSWORD>';
ALTER DATABASE zitadel_auth OWNER TO zitadel;
-- Console database (schema is auto-migrated on first startup via goose)
CREATE ROLE console_app LOGIN PASSWORD '<CONSOLE_DB_PASSWORD>';
GRANT azure_pg_admin TO console_app;
CREATE DATABASE console OWNER console_app;
\c console
GRANT ALL PRIVILEGES ON SCHEMA public TO console_app;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON TABLES TO console_app;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON SEQUENCES TO console_app;
\q
```

3.3 Remove firewall rule
```bash
az postgres flexible-server firewall-rule delete \
--resource-group gg-ex-prod-console \
--name console-prod-pg \
--rule-name AllowMyIP --yes
```

Phase 4 — Zitadel Init (Local Docker)
Initialize Zitadel's database using a local Docker container. This creates the first instance, admin user, and machine user PATs.
4.1 Generate masterkey
```bash
head -c 16 /dev/urandom | xxd -p
```

Save this value — you'll need it in `.env.prod` as `ZITADEL_MASTERKEY`.
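For example, the key can be generated and recorded in one step (a small sketch; assumes `infra/.env.prod` already exists):

```bash
# Generate a 32-character hex masterkey and append it to the secrets file
echo "ZITADEL_MASTERKEY=$(head -c 16 /dev/urandom | xxd -p)" >> infra/.env.prod
```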
4.2 Allow your IP through the PostgreSQL firewall
Re-add if you removed it in Phase 3.3:
```bash
az postgres flexible-server firewall-rule create \
--resource-group gg-ex-prod-console \
--name console-prod-pg \
--rule-name AllowMyIP \
--start-ip-address <YOUR_IP> \
--end-ip-address <YOUR_IP>
```

4.3 Run Zitadel init
```bash
docker run --rm \
-e ZITADEL_EVENTSTORE_PUSHTIMEOUT=300s \
-e ZITADEL_EXTERNALSCHEME=https \
-e ZITADEL_EXTERNALDOMAIN=id.exto360.com \
-e ZITADEL_EXTERNALSECURE=true \
-e ZITADEL_EXTERNALPORT=443 \
-e ZITADEL_TLS_ENABLED=false \
-e ZITADEL_DATABASE_POSTGRES_HOST=console-prod-pg.postgres.database.azure.com \
-e ZITADEL_DATABASE_POSTGRES_PORT=5432 \
-e ZITADEL_DATABASE_POSTGRES_DATABASE=zitadel_auth \
-e ZITADEL_DATABASE_POSTGRES_USER_USERNAME=zitadel \
-e ZITADEL_DATABASE_POSTGRES_USER_PASSWORD=<ZITADEL_DB_PASSWORD> \
-e ZITADEL_DATABASE_POSTGRES_USER_SSL_MODE=require \
-e ZITADEL_DATABASE_POSTGRES_ADMIN_USERNAME=zitadel \
-e ZITADEL_DATABASE_POSTGRES_ADMIN_PASSWORD=<ZITADEL_DB_PASSWORD> \
-e ZITADEL_DATABASE_POSTGRES_ADMIN_SSL_MODE=require \
-e ZITADEL_DATABASE_POSTGRES_ADMIN_EXISTINGDATABASE=zitadel_auth \
-e ZITADEL_MASTERKEY=<ZITADEL_MASTERKEY> \
-e ZITADEL_FIRSTINSTANCE_ORG_HUMAN_USERNAME=admin \
-e ZITADEL_FIRSTINSTANCE_ORG_HUMAN_EMAIL_ADDRESS=admin@exto360.com \
-e ZITADEL_FIRSTINSTANCE_ORG_HUMAN_PASSWORD=<INITIAL_ADMIN_PASSWORD> \
-e ZITADEL_FIRSTINSTANCE_ORG_HUMAN_PASSWORDCHANGEREQUIRED=true \
-e ZITADEL_FIRSTINSTANCE_ORG_LOGINCLIENT_MACHINE_USERNAME=login-service \
-e ZITADEL_FIRSTINSTANCE_ORG_LOGINCLIENT_MACHINE_NAME='Login Service' \
-e ZITADEL_FIRSTINSTANCE_ORG_LOGINCLIENT_PAT_EXPIRATIONDATE=2030-01-01T00:00:00Z \
-e ZITADEL_FIRSTINSTANCE_ORG_MACHINE_MACHINE_USERNAME=console-service \
-e ZITADEL_FIRSTINSTANCE_ORG_MACHINE_MACHINE_NAME='Console Service' \
-e ZITADEL_FIRSTINSTANCE_ORG_MACHINE_PAT_EXPIRATIONDATE=2030-01-01T00:00:00Z \
-e ZITADEL_MACHINE_IDENTIFICATION_HOSTNAME_ENABLED=true \
-e ZITADEL_MACHINE_IDENTIFICATION_WEBHOOK_ENABLED=false \
-e ZITADEL_DEFAULTINSTANCE_SMTPCONFIGURATION_SMTP_HOST='sandbox.smtp.mailtrap.io:587' \
-e ZITADEL_DEFAULTINSTANCE_SMTPCONFIGURATION_SMTP_USER='<MAILTRAP_USER>' \
-e ZITADEL_DEFAULTINSTANCE_SMTPCONFIGURATION_SMTP_PASSWORD='<MAILTRAP_PASSWORD>' \
-e ZITADEL_DEFAULTINSTANCE_SMTPCONFIGURATION_FROM=console-notifications@exto360.com \
-e ZITADEL_DEFAULTINSTANCE_SMTPCONFIGURATION_FROMNAME='Exto Console' \
-e ZITADEL_DEFAULTINSTANCE_SMTPCONFIGURATION_REPLYTOADDRESS='vimal@exto360.com' \
-p 8090:8080 \
ghcr.io/zitadel/zitadel:latest start-from-init --masterkeyFromEnv 2>&1 | tee /tmp/zitadel-init.log
```

Important: Your local IP must be allowed through the PostgreSQL firewall for this step. Ensure every `-e` line ends with `\` (backslash) for proper shell continuation.

Admin loginname: `admin@zitadel.id.exto360.com` — Zitadel expects the loginname as `<username>@<org-primary-domain>`. The first-instance org name defaults to `ZITADEL`, so its primary domain becomes `zitadel.<ZITADEL_EXTERNALDOMAIN>`. The initial password is the `<INITIAL_ADMIN_PASSWORD>` passed above (it must be changed on first login).
4.4 Extract PAT tokens
After the 03_default_instance migration completes, PAT tokens are logged:
```bash
grep -A5 '03_default_instance' /tmp/zitadel-init.log
```

The output contains two PAT tokens (long alphanumeric strings on their own lines):

- First token — admin human user PAT
- Second token — login-service machine user PAT (this is the one used by `zitadel-login`)
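To pull candidate tokens out programmatically, a rough sketch — it assumes the PATs are the only long standalone base64-like strings near that migration's output, so verify against the log before trusting it:

```bash
# List candidate PAT tokens printed around the default-instance migration
grep -A5 '03_default_instance' /tmp/zitadel-init.log | grep -Eo '[A-Za-z0-9_-]{40,}'
```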
Add to `.env.prod`:

```
ZITADEL_LOGIN_SERVICE_TOKEN=<second-pat-token>
ZITADEL_FIRST_INSTANCE_PASSWORD=<INITIAL_ADMIN_PASSWORD>
```

Critical: The PATs are encrypted with `ZITADEL_MASTERKEY`. The deployed Zitadel container must use the exact same masterkey, or all tokens become invalid. If you see `Errors.Token.Invalid (AUTH-7fs1e)`, verify the masterkey matches.
4.5 Remove firewall rule
```bash
az postgres flexible-server firewall-rule delete \
--resource-group gg-ex-prod-console \
--name console-prod-pg \
--rule-name AllowMyIP --yes
```

Phase 5 — Deploy Zitadel + Front Door
WARNING: Do NOT run `--phase zitadel` (without `_frontdoor`). That phase deploys Zitadel without Front Door, and the container will attempt `start-from-init` with wrong external domain/port settings, causing the migration to hang. If this happens, drop and recreate the database:

```sql
-- Connect as admin to the postgres database (not zitadel_auth)
SELECT pg_terminate_backend(pid) FROM pg_stat_activity
WHERE datname = 'zitadel_auth' AND pid <> pg_backend_pid();
DROP DATABASE zitadel_auth;
CREATE DATABASE zitadel_auth OWNER zitadel;
GRANT ALL PRIVILEGES ON DATABASE zitadel_auth TO zitadel;
```

Then re-run from Phase 4.2.
5.1 Deploy Zitadel container app and Front Door
```bash
./infra/deploy.sh prod --phase zitadel_frontdoor
```

5.2 Configure DNS for id.exto360.com
Run the DNS retrieval script and create the following records:
| Type | Name | Value |
|---|---|---|
| CNAME | id.exto360.com | <id-endpoint>.azurefd.net |
| TXT | _dnsauth.id.exto360.com | <validationToken> |
The `asuid.id.exto360.com` TXT record should already exist from Phase 2.3.
5.3 Verify Zitadel is accessible
```bash
# Health check
curl -s https://id.exto360.com/debug/ready
# Expected: ok

# OIDC discovery
curl -s https://id.exto360.com/.well-known/openid-configuration | jq .issuer
# Expected: "https://id.exto360.com"
```

5.4 Configure SMTP (if not set during init)
If SMTP was not configured during Phase 4.3 (e.g. due to a missing \ in the Docker command), configure it via the Zitadel console UI:
- Go to `https://id.exto360.com/ui/console/instance?id=smtpprovider`
- Click Generic SMTP and fill in the details
- Activate the provider after saving
5.5 Verify PAT tokens work
```bash
# Test with the admin human user PAT (first token from init)
curl -s -H "Authorization: Bearer <admin-PAT>" \
https://id.exto360.com/auth/v1/users/me | jq .user.userName
```

If you see `Errors.Token.Invalid`, the `ZITADEL_MASTERKEY` in the container doesn't match the one used during init. Update the secret and restart the container (changing secrets alone does NOT create a new revision):
```bash
az containerapp update -n zitadel -g gg-ex-prod-console \
--set-env-vars "RESTART_TRIGGER=$(date +%s)"
```

Phase 6 — Bootstrap Console Resources in Zitadel
6.1 Create bootstrap input file
```bash
cat > .bootstrap-input.env << 'EOF'
ZITADEL_URL=https://id.exto360.com
ZITADEL_PAT=<admin-human-user-pat>
CONSOLE_URL=https://console.exto360.com
DEVELOPMENT_MODE=false
EOF
```

6.2 Run bootstrap
```bash
go run ./scripts/bootstrap/
```

The script creates:
- "Exto Instances" project
- "console-worker" machine user with IAM Owner + client credentials
- "Console Portal" OIDC app (PKCE, web code flow)
- Light theme branding
- Webhook target for user profile sync
Output is written to .bootstrap.env.
Note: Webhook target creation may fail with `Errors.Target.DeniedURL` if `console.exto360.com` doesn't resolve yet. This is expected — re-run bootstrap after Phase 9.
Note: If `console-worker` already exists from a previous run, the script does NOT regenerate client credentials. Reuse the existing values.
6.3 Copy bootstrap output to .env.prod
```bash
# From .bootstrap.env, add these to infra/.env.prod:
ZITADEL_ADMIN_ORG_ID=...
EXTOID_PROJECT_ID=...
CONSOLE_PORTAL_CLIENT_ID=...
CONSOLE_SERVICE_CLIENT_ID=...
CONSOLE_SERVICE_CLIENT_SECRET=...
ZITADEL_PAT=<reuse-the-login-service-pat>
# Generate webhook secret
ZITADEL_WEBHOOK_SECRET=$(openssl rand -hex 32)
```

Phase 7 — Full Stack Deployment
7.1 Verify .env.prod has all required variables
```
AZURE_TENANT_ID=...
ACR_RESOURCE_ID=...
POSTGRES_ADMIN_PASSWORD=...
CONSOLE_DB_PASSWORD=...
ZITADEL_MASTERKEY=...
ZITADEL_DB_PASSWORD=...
ZITADEL_FIRST_INSTANCE_PASSWORD=...
ZITADEL_LOGIN_SERVICE_TOKEN=...
ZITADEL_ADMIN_ORG_ID=...
EXTOID_PROJECT_ID=...
CONSOLE_PORTAL_CLIENT_ID=...
CONSOLE_SERVICE_CLIENT_ID=...
CONSOLE_SERVICE_CLIENT_SECRET=...
ZITADEL_PAT=...
ZITADEL_WEBHOOK_SECRET=...
EMAIL_API_KEY=...
```
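A quick way to catch missing values before deploying (a small sketch; the variable list mirrors the block above):

```bash
# Fail fast if any required variable is missing or empty in infra/.env.prod
required="AZURE_TENANT_ID ACR_RESOURCE_ID POSTGRES_ADMIN_PASSWORD CONSOLE_DB_PASSWORD \
ZITADEL_MASTERKEY ZITADEL_DB_PASSWORD ZITADEL_FIRST_INSTANCE_PASSWORD ZITADEL_LOGIN_SERVICE_TOKEN \
ZITADEL_ADMIN_ORG_ID EXTOID_PROJECT_ID CONSOLE_PORTAL_CLIENT_ID CONSOLE_SERVICE_CLIENT_ID \
CONSOLE_SERVICE_CLIENT_SECRET ZITADEL_PAT ZITADEL_WEBHOOK_SECRET EMAIL_API_KEY"
for v in $required; do
  grep -Eq "^${v}=.+" infra/.env.prod || echo "MISSING: $v"
done
```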
7.2 Deploy

```bash
./infra/deploy.sh prod
```

This deploys: Console API, Console Worker, Zitadel, Zitadel Login, and the full Front Door configuration (both console and id endpoints with all routes).
7.3 Configure DNS for console.exto360.com
Run the DNS retrieval script to get the correct values:
| Type | Name | Value |
|---|---|---|
| CNAME | console.exto360.com | <console-endpoint>.azurefd.net |
| TXT | _dnsauth.console.exto360.com | <validationToken> |
Important: The CNAME for `console.exto360.com` points to a different Front Door endpoint than `id.exto360.com`. Do not reuse the same CNAME value.

The `asuid.console.exto360.com` TXT record should already exist from Phase 2.3.
7.4 Verify Console API is accessible
```bash
curl -s https://console.exto360.com/api/healthz
```

Phase 8 — Deploy Web SPA
8.1 Build and deploy SPA
```bash
cd web && npm run build && cd ..
./infra/deploy.sh prod --deploy-web-only
```

The deploy script automatically assigns the Storage Blob Data Contributor role to the current user if missing.
Or deploy SPA together with infrastructure:
```bash
./infra/deploy.sh prod --deploy-web
```

8.2 Verify
```bash
curl -s -o /dev/null -w "%{http_code}" https://console.exto360.com/
# Expected: 200
```

Phase 9 — Update Webhook Target
The webhook target creation likely failed in Phase 6 because console.exto360.com wasn't reachable yet. Now that everything is deployed, re-run bootstrap:
```bash
go run ./scripts/bootstrap/
```

Then redeploy to pick up the webhook secret:
```bash
# Copy ZITADEL_WEBHOOK_SECRET from .bootstrap.env to infra/.env.prod (if changed)
./infra/deploy.sh prod
```

Updating
Rebuild and push images
```bash
make docker-all REGISTRY=gaeadev.azurecr.io TAG=dev-latest
docker push gaeadev.azurecr.io/console-api:dev-latest
docker push gaeadev.azurecr.io/console-worker:dev-latest
```

Then redeploy:
```bash
./infra/deploy.sh prod
```

Deploy with a specific image tag
```bash
./infra/deploy.sh prod --image-tag git-48d64e3
```

SPA-only update
```bash
cd web && npm run build && cd ..
./infra/deploy.sh prod --deploy-web-only
```

Preview changes (dry run)
```bash
./infra/deploy.sh prod --what-if
```

Troubleshooting
Zitadel: Errors.Token.Invalid (AUTH-7fs1e)
Cause: ZITADEL_MASTERKEY mismatch between local Docker init and deployed container, or the container hasn't been restarted after updating the secret.
Fix:
- Verify `.env.prod` `ZITADEL_MASTERKEY` matches the value used in Phase 4.3
- Redeploy or force a restart (secret changes don't create new revisions):

```bash
az containerapp update -n zitadel -g gg-ex-prod-console \
--set-env-vars "RESTART_TRIGGER=$(date +%s)"
```
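To compare the deployed value against `.env.prod` (a sketch — the secret name `zitadel-masterkey` is an assumption; check the Bicep template for the actual name):

```bash
# Show the masterkey secret currently configured on the container app (secret name assumed)
az containerapp secret show -n zitadel -g gg-ex-prod-console \
  --secret-name zitadel-masterkey --query value -o tsv

# Compare against the local value
grep '^ZITADEL_MASTERKEY=' infra/.env.prod
```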
Zitadel: Errors.Target.DeniedURL (webhook)
Cause: console.exto360.com is not reachable yet when bootstrap tries to create the webhook target.
Fix: Deploy Console first (Phase 7-8), then re-run go run ./scripts/bootstrap/.
ACR pull fails: "unable to pull image using Managed identity"
The managed identity needs AcrPull on the ACR resource. If the ACR is in a different subscription, the Bicep acrResourceId param must be set to the full resource ID. See Phase 2.4.
PostgreSQL: "extension uuid-ossp is not allow-listed"
Use pgcrypto instead of uuid-ossp. The migration uses CREATE EXTENSION IF NOT EXISTS "pgcrypto".
Console API: "preflight failed — dial tcp 127.0.0.1:5432: connection refused"
DATABASE_URL is not set. Ensure the Bicep template includes the database-url secret in the container app config and that CONSOLE_DB_PASSWORD is in .env.prod.
Zitadel: "CreateCallback (AUTH-AWfge): No matching permissions found"
The login-service machine user needs IAM_LOGIN_CLIENT role. This is automatically assigned during Zitadel init, but if lost:
```bash
# Find the login-service user ID
curl -s -H "Authorization: Bearer <admin-PAT>" \
https://id.exto360.com/management/v1/users/_search \
-d '{"queries":[{"typeQuery":{"type":"TYPE_MACHINE"}}]}' | jq '.result[] | {id, userName}'
# Grant IAM_LOGIN_CLIENT
curl -X PUT "https://id.exto360.com/admin/v1/members/<user-id>" \
-H "Authorization: Bearer <admin-PAT>" \
-H "Content-Type: application/json" \
-d '{"roles": ["IAM_LOGIN_CLIENT"]}'Storage upload: "You do not have the required permissions"
The deploy script (--deploy-web / --deploy-web-only) automatically assigns Storage Blob Data Contributor. If it fails, assign manually:
```bash
az role assignment create \
--assignee $(az ad signed-in-user show --query id -o tsv) \
--role "Storage Blob Data Contributor" \
--scope $(az storage account show -n extoconsoleweb -g gg-ex-prod-console --query id -o tsv)
```

Role assignment can take 1-2 minutes to propagate.
DNS validation: InvalidCustomHostNameValidation
There are two different DNS validation mechanisms:
| Prefix | Purpose | Record type |
|---|---|---|
| `asuid.*` | Container Apps custom domain | TXT |
| `_dnsauth.*` | Front Door custom domain | TXT |
Both are required. Run the DNS retrieval script and ensure all records are created.
Resilience (Mission-Critical)
The following settings in prod.bicepparam enable a fully resilient deployment:
What's enabled
| Layer | Feature | Parameter | Effect |
|---|---|---|---|
| PostgreSQL | Zone-redundant HA | postgresHaEnabled = true | Standby replica in a different AZ; automatic failover (~30s) |
| PostgreSQL | GeneralPurpose SKU | postgresSkuName = 'Standard_D2ds_v4' | Required for HA (Burstable does not support zone-redundant HA) |
| PostgreSQL | 35-day backup retention | postgresBackupRetentionDays = 35 | Point-in-time restore up to 35 days |
| PostgreSQL | Geo-redundant backup | postgresGeoRedundantBackup = true | Backup replicated to paired Azure region |
| Container Apps | Zone redundancy | zoneRedundant = true | Replicas distributed across availability zones |
| Console API | Min 2 replicas | consoleApiMinReplicas = 2 | Zero-downtime deployments and AZ failure tolerance |
| Zitadel | Min 2 replicas | zitadelMinReplicas = 2 | Auth service stays up during single-AZ outage |
| Front Door | Built-in | Always on | Global edge caching, health probes, automatic origin failover |
Prerequisites
- VNet integration must be enabled (`enableVnet = true`) — zone redundancy for Container Apps requires a VNet-integrated environment.
- PostgreSQL HA requires the GeneralPurpose or MemoryOptimized tier — Burstable SKUs do not support zone-redundant HA.
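To confirm the resilient values are actually set before redeploying, a quick spot-check (a sketch — `infra/prod.bicepparam` is the assumed parameter file path):

```bash
# Spot-check the resilience-related parameters in the prod parameter file
grep -E 'postgresHaEnabled|postgresSkuName|postgresBackupRetentionDays|postgresGeoRedundantBackup|zoneRedundant|MinReplicas|enableVnet' \
  infra/prod.bicepparam
```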
Cost impact
| Component | Non-resilient | Resilient | Approx. delta |
|---|---|---|---|
| PostgreSQL | Burstable B2s | GeneralPurpose D2ds_v4 + standby | ~3-4x |
| Container Apps | 3 replicas total | 5 replicas total (2 API + 2 Zitadel + 1 Worker) | ~1.7x |
| Backup | 7-day local | 35-day geo-redundant | ~2x storage |
Enabling resilience on an existing deployment
```bash
# 1. Update prod.bicepparam with the resilient values shown above
# 2. Redeploy infrastructure (PostgreSQL SKU change + HA may take 10-15 min)
./infra/deploy.sh prod --phase infra
# 3. Redeploy full stack (picks up zone-redundant environment + replica counts)
./infra/deploy.sh prod
```

Note: Changing PostgreSQL from Burstable to GeneralPurpose triggers a server restart (~2-5 min downtime). Plan a maintenance window.
Secrets Strategy
| Secret | Location | Purpose |
|---|---|---|
| `.env.prod` | Local (gitignored) | All deployment secrets passed to Bicep |
| `DATABASE_URL` | Container App secret | Console PostgreSQL connection string |
| Instance tokens | Azure Key Vault | Written by Console API at runtime |
| `ZITADEL_MASTERKEY` | Container App secret | Zitadel encryption key |
No secrets are stored in the Git repo. All secrets are passed via infra/.env.<env> files (gitignored) or environment variables.
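A quick way to confirm the secrets file really is ignored before committing anything (a small sketch):

```bash
# Prints the matching .gitignore rule if the file is ignored; no output means it would be committed
git check-ignore -v infra/.env.prod
```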

