# Troubleshooting
Common issues and fixes across all Exto Console components.
## Zitadel
```bash
# Pod status
kubectl -n zitadel get pods

# Setup job logs (first install)
kubectl -n zitadel logs job/zitadel-setup

# Runtime logs
kubectl -n zitadel logs deploy/zitadel --tail=50
```

Verify OIDC discovery:
```bash
curl -s https://auth-dev.exto360.com/.well-known/openid-configuration | jq .issuer
# Should return: "https://auth-dev.exto360.com"
```

| Issue | Fix |
|---|---|
| `permission denied to create database` | Set `Admin.ExistingDatabase: zitadel` — Azure PG doesn't grant `CREATEDB` to regular roles |
| `failed SASL auth` / `no pg_hba.conf entry` | Add `Admin.SSL.Mode: require` and `User.SSL.Mode: require` |
| `hostname resolving error: lookup port=5432` | `Host` is empty in the config. Set it explicitly in `configmapConfig.Database.Postgres` |
| `user=postgres database=postgres` in error | Credentials not reaching the init job. Use `secretConfig` (not env vars) |
| `duplicate key ... unique_constraints_pkey` | A previous partial setup left data behind. Drop the schemas and reinstall (see Zitadel AKS Setup) |
| `context deadline exceeded` on setup | Increase `InstanceAggregateTimeout` (e.g. `120s`) and `setupJob.backoffLimit` |
| CSP errors / `upstream_balancer` in console | Remove the `backend-protocol: "GRPC"` annotation — gRPC-web works over HTTP/1.1 |
| Ingress 502 | Zitadel not ready yet, or TLS misconfigured (`ExternalSecure` vs `TLS.Enabled`) |
| Login page not loading | Check the `login.ingress` config in Helm values |
| Console loads but login fails | `ExternalDomain` and `ExternalSecure` must match your ingress. Zitadel is strict about the issuer URL |
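When login fails despite a healthy pod, the issuer string is the usual culprit. Below is a minimal sketch of the comparison the `curl` check above performs; the embedded JSON is a stand-in for the live discovery document, and the expected URL is the dev domain used throughout this page.

```bash
#!/bin/sh
# DISCOVERY is a stand-in for:
#   curl -s https://auth-dev.exto360.com/.well-known/openid-configuration
EXPECTED="https://auth-dev.exto360.com"
DISCOVERY='{"issuer":"https://auth-dev.exto360.com","authorization_endpoint":"https://auth-dev.exto360.com/oauth/v2/authorize"}'

# Extract the issuer field without depending on jq.
ISSUER=$(printf '%s' "$DISCOVERY" | sed -n 's/.*"issuer":"\([^"]*\)".*/\1/p')

if [ "$ISSUER" = "$EXPECTED" ]; then
  echo "issuer OK: $ISSUER"
else
  echo "issuer MISMATCH: got '$ISSUER', want '$EXPECTED'" >&2
fi
```

The key point: the issuer must match `ExternalDomain` character for character, including the scheme.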
## Console API
```bash
kubectl -n console-dev logs deploy/console-api --tail=50
```

| Issue | Fix |
|---|---|
| `preflight: db connection failed` | Check `DATABASE_URL` in `console-secrets` |
| `preflight: zitadel unreachable` | Check the `ZITADEL_ISSUER` env var, verify the Zitadel pod is running, check network policies |
| `preflight: redis connection failed` | Check `REDIS_URL`, verify the Redis pod is running (`kubectl -n console-dev get pods`) |
| Pod in CrashLoopBackOff | Check the preflight logs — usually a missing secret or bad connection string |
| 401 on API calls | `CONSOLE_SERVICE_CLIENT_ID`/`SECRET` mismatch — re-run bootstrap if the secret was rotated |
| `webhook: invalid signature` | `ZITADEL_WEBHOOK_SECRET` in `console-secrets` doesn't match the signing key from bootstrap |
| Seed not running | Check `SEED_ADMIN_EMAIL` is set in `patch-api.yaml` |
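Most preflight failures trace back to a key missing from `console-secrets`. A sketch of a quick presence check follows; the `DATA` variable stands in for the output of `kubectl -n console-dev get secret console-secrets -o jsonpath='{.data}'`, and the sample deliberately omits one key to show the failure output. Adjust `REQUIRED` to the keys your deployment actually expects.

```bash
#!/bin/sh
# DATA stands in for:
#   kubectl -n console-dev get secret console-secrets -o jsonpath='{.data}'
# (values are base64 and irrelevant here; only the key names matter)
DATA='{"DATABASE_URL":"eA==","REDIS_URL":"eA=="}'

# Key names assumed from the table above.
REQUIRED="DATABASE_URL REDIS_URL ZITADEL_WEBHOOK_SECRET"

# Pull out just the key names from the JSON-ish jsonpath output.
PRESENT=$(printf '%s' "$DATA" | tr ',{}' '\n\n\n' | sed -n 's/^"\([^"]*\)":.*/\1/p')

RESULT=ok
for key in $REQUIRED; do
  if printf '%s\n' "$PRESENT" | grep -qx "$key"; then
    echo "ok:      $key"
  else
    echo "MISSING: $key"
    RESULT=fail
  fi
done
```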
### Updating a secret value
```bash
# Patch a single key in console-secrets
kubectl -n console-dev patch secret console-secrets --type merge \
  -p '{"data":{"ZITADEL_WEBHOOK_SECRET":"'$(echo -n '<new-value>' | base64)'"}}'

# Restart the API to pick up the new secret
kubectl -n console-dev rollout restart deploy/console-api
```

## Console Web
```bash
kubectl -n console-dev logs deploy/console-web --tail=20
```

| Issue | Fix |
|---|---|
| Blank page | Check the browser console for CORS or auth errors |
| Login redirect loop | `CONSOLE_PORTAL_CLIENT_ID` mismatch, or the Zitadel OIDC app redirect URI is wrong |
| 404 on page refresh | nginx SPA fallback not working — check `deploy/docker/nginx.conf` |
| Assets not loading | Check the web pod is running and the image was built correctly |
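For the 404-on-refresh case, the SPA fallback in `deploy/docker/nginx.conf` normally boils down to a `try_files` directive like this (a sketch; the real file may differ):

```nginx
location / {
    # Serve the file if it exists, otherwise fall back to the SPA entry
    # point so client-side routing can handle the path.
    try_files $uri $uri/ /index.html;
}
```

If that directive is missing or the location block doesn't match, refreshing any deep link returns nginx's own 404 instead of the app shell.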
## Console Worker
```bash
kubectl -n console-dev logs deploy/console-worker --tail=50
```

| Issue | Fix |
|---|---|
| Not processing jobs | Check the Redis connection (`REDIS_URL`) |
| `zitadel: token exchange failed` | Check `CONSOLE_SERVICE_CLIENT_ID`/`SECRET` |
| Health poll returns 401 on instances | The console-worker machine user's Access Token Type must be JWT in Zitadel (Users → console-worker → Settings). Opaque tokens cannot be validated by the instance JWKS middleware. |
| Health poll returns `invalid character '<'` | The instance ingress is routing `/internal/*` to the web frontend (SPA) instead of the Go backend. Add a dedicated ingress rule for `/internal` pointing to the Go service without `rewrite-target`. |
| Health poll returns 404 | The instance's exto-go build does not include the `/internal/health` route. Ensure the `feat/console` branch (or its changes) is merged and deployed. |
| Key Vault write failures | Check Azure Key Vault RBAC — the kubelet identity needs Key Vault Secrets Officer |
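For the 401 row, a quick structural check distinguishes a JWT from an opaque token: a JWT has three dot-separated base64url segments. This is only a heuristic, and the sample token below is a made-up placeholder, not a real credential.

```bash
#!/bin/sh
# TOKEN is a hypothetical placeholder; substitute the worker's actual
# access token when debugging.
TOKEN='eyJhbGciOiJSUzI1NiJ9.eyJpc3MiOiJleGFtcGxlIn0.c2ln'

# Heuristic: three dot-separated segments looks like a JWT.
case "$TOKEN" in
  *.*.*) KIND="jwt" ;;
  *)     KIND="opaque" ;;
esac
echo "token kind: $KIND"
# "opaque" here means the machine user's Access Token Type is not JWT,
# so the instance JWKS middleware cannot validate it.
```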
## Console-Agent
```bash
kubectl -n <instance-namespace> logs deploy/console-agent --tail=30
```

| Issue | Fix |
|---|---|
| `console api: unreachable` | Check `CONSOLE_API_URL`, verify network connectivity between clusters |
| `auth failed` | Instance token mismatch — recreate `console-agent-secrets` with the correct token |
| `RBAC: forbidden` | Role/RoleBinding not applied — re-run `kubectl apply -k` for the agent overlay |
| Pod not starting | Check the image pull secret exists and the image is in the registry |
| ImagePullBackOff | Wrong registry URL, missing pull secret, or ACR not attached to the cluster |
| Agent running but instance shows disconnected in Console | Check `CONSOLE_INSTANCE_ID` matches the instance ID in Console |
### Recreating the agent secret
If the instance token was lost or needs rotation:
- Delete the old instance in the Console UI
- Create a new instance and note the new Instance ID and Token
- Update the agent secret:

  ```bash
  kubectl -n <namespace> delete secret console-agent-secrets
  kubectl -n <namespace> create secret generic console-agent-secrets \
    --from-literal=instance-token='<new-token>'
  ```

- Update `CONSOLE_INSTANCE_ID` in the agent overlay's `patch-agent.yaml`
- Re-apply:

  ```bash
  kubectl apply -k deploy/console-agent/overlays/<name>/
  ```
## General
### Pod stuck in Pending
```bash
kubectl -n <namespace> describe pod <pod-name>
# Look at the Events section for scheduling failures
```

Common causes: insufficient CPU/memory, node selector mismatch, missing image pull secret.
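For the insufficient CPU/memory case, the scheduler's arithmetic is easy to reproduce by hand. The numbers below are stand-ins for what `kubectl describe node` reports in its Allocatable and Allocated resources sections:

```bash
#!/bin/sh
# Stand-in figures in millicores; read the real ones from
# `kubectl describe node <node-name>`.
ALLOCATABLE=1900   # node allocatable CPU
REQUESTED=1500     # sum of requests from pods already on the node
POD_REQUEST=500    # the Pending pod's CPU request

NEEDED=$((REQUESTED + POD_REQUEST))
if [ "$NEEDED" -gt "$ALLOCATABLE" ]; then
  echo "will not schedule: needs ${NEEDED}m total, allocatable is ${ALLOCATABLE}m"
else
  echo "fits"
fi
```

The same sum-versus-allocatable logic applies to memory requests.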
### Checking all console components at once
```bash
# Console cluster
kubectl -n console-dev get pods
kubectl -n console-dev get ingress
kubectl -n console-dev get svc

# Instance cluster
kubectl -n <namespace> get pods
kubectl -n <namespace> get sa
kubectl -n <namespace> get role,rolebinding
```
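To cut that output down to problems only, a small `awk` filter over `kubectl get pods --no-headers` works. The here-doc below stands in for live output so the filter can be seen in isolation:

```bash
#!/bin/sh
# Print only pods whose STATUS column is not Running/Completed.
not_ready() {
  awk '$3 != "Running" && $3 != "Completed" { print $1, $3 }'
}

# Stand-in for: kubectl -n console-dev get pods --no-headers
OUT=$(not_ready <<'EOF'
console-api-6f7c9d 1/1 Running 0 2d
console-worker-x1z 0/1 CrashLoopBackOff 12 2d
zitadel-setup-abc1 0/1 Completed 0 9d
EOF
)
echo "$OUT"
```

Against a live cluster, pipe real output through it: `kubectl -n <namespace> get pods --no-headers | not_ready`.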
