# Troubleshooting

Common issues and fixes across all Exto Console components.

## Zitadel

```bash
# Pod status
kubectl -n zitadel get pods

# Setup job logs (first install)
kubectl -n zitadel logs job/zitadel-setup

# Runtime logs
kubectl -n zitadel logs deploy/zitadel --tail=50
```

Verify OIDC discovery:

```bash
curl -s https://auth-dev.exto360.com/.well-known/openid-configuration | jq .issuer
# Should return: "https://auth-dev.exto360.com"
```
| Issue | Fix |
| --- | --- |
| `permission denied to create database` | Set `Admin.ExistingDatabase: zitadel` — Azure PG doesn't grant `CREATEDB` to regular roles |
| `failed SASL auth` / `no pg_hba.conf entry` | Add `Admin.SSL.Mode: require` and `User.SSL.Mode: require` |
| `hostname resolving error: lookup port=5432` | `Host` is empty in config. Set it explicitly in `configmapConfig.Database.Postgres` |
| `user=postgres database=postgres` in error | Credentials not reaching the init job. Use `secretConfig` (not env vars) |
| `duplicate key ... unique_constraints_pkey` | Previous partial setup left data behind. Drop the schemas and reinstall (see Zitadel AKS Setup) |
| `context deadline exceeded` on setup | Increase `InstanceAggregateTimeout` (e.g. `120s`) and `setupJob.backoffLimit` |
| CSP errors / `upstream_balancer` in console | Remove the `backend-protocol: "GRPC"` annotation — gRPC-web works over HTTP/1.1 |
| Ingress 502 | Zitadel not ready yet, or TLS misconfigured (`ExternalSecure` vs `TLS.Enabled`) |
| Login page not loading | Check the `login.ingress` config in Helm values |
| Console loads but login fails | `ExternalDomain` and `ExternalSecure` must match your ingress. Zitadel is strict about the issuer URL |
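
The database-related fixes above all map to keys under `configmapConfig.Database.Postgres` in the Helm values. A sketch of the relevant nesting (host and database names are placeholders; verify key names against your chart's values file):

```yaml
zitadel:
  configmapConfig:
    Database:
      Postgres:
        Host: mypg.postgres.database.azure.com   # must not be empty
        Port: 5432
        Database: zitadel
        Admin:
          ExistingDatabase: zitadel   # Azure PG: admin role lacks CREATEDB
          SSL:
            Mode: require
        User:
          SSL:
            Mode: require
```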

## Console API

```bash
kubectl -n console-dev logs deploy/console-api --tail=50
```
| Issue | Fix |
| --- | --- |
| `preflight: db connection failed` | Check `DATABASE_URL` in `console-secrets` |
| `preflight: zitadel unreachable` | Check the `ZITADEL_ISSUER` env var, verify the Zitadel pod is running, check network policies |
| `preflight: redis connection failed` | Check `REDIS_URL`, verify the Redis pod is running (`kubectl -n console-dev get pods`) |
| Pod in `CrashLoopBackOff` | Check the preflight logs — usually a missing secret or a bad connection string |
| 401 on API calls | `CONSOLE_SERVICE_CLIENT_ID`/`SECRET` mismatch — re-run bootstrap if the secret was rotated |
| `webhook: invalid signature` | `ZITADEL_WEBHOOK_SECRET` in `console-secrets` doesn't match the signing key from bootstrap |
| Seed not running | Check that `SEED_ADMIN_EMAIL` is set in `patch-api.yaml` |
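
When `preflight: db connection failed` turns out to be a malformed connection string rather than a network problem, a local shape check can rule that out before touching the cluster. This is a sketch assuming the common `postgres://user:pass@host:port/db` URL format; `check_database_url` is a hypothetical helper, not part of the Console tooling:

```shell
# Hypothetical sanity check for a DATABASE_URL before writing it into
# console-secrets. Only validates the postgres://user:pass@host:port/db shape.
check_database_url() {
  case "$1" in
    postgres://*@*:*/*) echo "looks valid" ;;
    *)                  echo "malformed" ;;
  esac
}

check_database_url 'postgres://console:s3cret@pg.internal:5432/console'  # → looks valid
check_database_url 'pg.internal:5432/console'                            # → malformed
```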

### Updating a secret value

```bash
# Patch a single key in console-secrets
kubectl -n console-dev patch secret console-secrets --type merge \
  -p '{"data":{"ZITADEL_WEBHOOK_SECRET":"'$(echo -n '<new-value>' | base64)'"}}'

# Restart API to pick up the new secret
kubectl -n console-dev rollout restart deploy/console-api
```
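
One pitfall with the patch above: the value must be base64-encoded without a trailing newline. A plain `echo` appends one, which silently ends up inside the decoded secret. A quick demonstration (the value `hunter2` is just an example):

```shell
# echo appends a trailing newline, so the encoded value differs:
echo 'hunter2' | base64      # → aHVudGVyMgo=  (encodes "hunter2\n")
echo -n 'hunter2' | base64   # → aHVudGVyMg==  (encodes "hunter2")

# Round-trip check: only the -n variant decodes to the exact original value.
[ "$(echo -n 'aHVudGVyMg==' | base64 -d)" = 'hunter2' ] && echo 'round-trip OK'
```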

## Console Web

```bash
kubectl -n console-dev logs deploy/console-web --tail=20
```
| Issue | Fix |
| --- | --- |
| Blank page | Check the browser console for CORS or auth errors |
| Login redirect loop | `CONSOLE_PORTAL_CLIENT_ID` mismatch, or the Zitadel OIDC app's redirect URI is wrong |
| 404 on page refresh | nginx SPA fallback not working — check `deploy/docker/nginx.conf` |
| Assets not loading | Check that the web pod is running and the image was built correctly |
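
On the 404-on-refresh row: an SPA needs unknown paths to fall back to `index.html`, since client-side routes don't exist as files on disk. A minimal sketch of what that fallback in `deploy/docker/nginx.conf` typically looks like (port and root path are placeholders; check the actual file):

```nginx
server {
    listen 8080;
    root /usr/share/nginx/html;

    location / {
        # Serve the file if it exists, otherwise fall back to the SPA
        # entry point so client-side routes survive a hard refresh.
        try_files $uri $uri/ /index.html;
    }
}
```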

## Console Worker

```bash
kubectl -n console-dev logs deploy/console-worker --tail=50
```
| Issue | Fix |
| --- | --- |
| Not processing jobs | Check the Redis connection (`REDIS_URL`) |
| `zitadel: token exchange failed` | Check `CONSOLE_SERVICE_CLIENT_ID`/`SECRET` |
| Health poll returns 401 on instances | The `console-worker` machine user's Access Token Type must be JWT in Zitadel (Users → console-worker → Settings). Opaque tokens cannot be validated by the instance JWKS middleware. |
| Health poll returns `invalid character '<'` | The instance ingress is routing `/internal/*` to the web frontend (SPA) instead of the Go backend. Add a dedicated ingress rule for `/internal` pointing to the Go service without `rewrite-target`. |
| Health poll returns 404 | The instance's exto-go build does not include the `/internal/health` route. Ensure the `feat/console` branch (or its changes) is merged and deployed. |
| Key Vault write failures | Check Azure Key Vault RBAC — the kubelet identity needs Key Vault Secrets Officer |
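
On the 401 row: a quick way to tell locally whether the token the worker obtained is a JWT (which the instance can validate against JWKS) or an opaque token is to look at its shape, since a JWT is always three dot-separated base64url segments. `looks_like_jwt` is a hypothetical helper for illustration, and the heuristic just counts dots:

```shell
# Heuristic: a JWT has three dot-separated segments (header.payload.signature);
# this cheap check only looks for at least two dots, which opaque tokens
# typically lack.
looks_like_jwt() {
  case "$1" in
    *.*.*) echo "jwt" ;;
    *)     echo "opaque" ;;
  esac
}

looks_like_jwt 'eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJ3b3JrZXIifQ.c2lnbmF0dXJl'  # → jwt
looks_like_jwt 'VWXv9a2opaque-session-token'                               # → opaque
```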

## Console-Agent

```bash
kubectl -n <instance-namespace> logs deploy/console-agent --tail=30
```
| Issue | Fix |
| --- | --- |
| `console api: unreachable` | Check `CONSOLE_API_URL`, verify network connectivity between clusters |
| `auth failed` | Instance token mismatch — recreate `console-agent-secrets` with the correct token |
| `RBAC: forbidden` | Role/RoleBinding not applied — re-run `kubectl apply -k` for the agent overlay |
| Pod not starting | Check that the image pull secret exists and the image is in the registry |
| `ImagePullBackOff` | Wrong registry URL, missing pull secret, or ACR not attached to the cluster |
| Agent running but instance shows disconnected in Console | Check that `CONSOLE_INSTANCE_ID` matches the instance ID in Console |

### Recreating the agent secret

If the instance token was lost or needs rotation:

1. Delete the old instance in the Console UI.
2. Create a new instance — note the new Instance ID and Token.
3. Update the agent secret:

   ```bash
   kubectl -n <namespace> delete secret console-agent-secrets
   kubectl -n <namespace> create secret generic console-agent-secrets \
     --from-literal=instance-token='<new-token>'
   ```

4. Update `CONSOLE_INSTANCE_ID` in the agent overlay's `patch-agent.yaml`.
5. Re-apply: `kubectl apply -k deploy/console-agent/overlays/<name>/`
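
Step 4 edits the overlay patch. A sketch of what that `patch-agent.yaml` might contain — the deployment and container names are assumptions based on the commands above, and the instance ID value comes from the Console UI:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: console-agent
spec:
  template:
    spec:
      containers:
        - name: console-agent
          env:
            - name: CONSOLE_INSTANCE_ID
              value: "<new-instance-id>"
```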

## General

### Pod stuck in Pending

```bash
kubectl -n <namespace> describe pod <pod-name>
# Look at the Events section for scheduling failures
```

Common causes: insufficient CPU/memory, node selector mismatch, image pull secret missing.

### Checking all console components at once

```bash
# Console cluster
kubectl -n console-dev get pods
kubectl -n console-dev get ingress
kubectl -n console-dev get svc

# Instance cluster
kubectl -n <namespace> get pods
kubectl -n <namespace> get sa
kubectl -n <namespace> get role,rolebinding
```