
# Secrets Store Setup

The Console uses Azure Key Vault to store three secrets per instance, all keyed by `instanceId`:

| Secret key pattern | Content | Written by |
|---|---|---|
| `instance-{instanceId}` | Registration token (raw) | Console API on create/rotate |
| `instance-{instanceId}-dburi` | Database connection URI | Console API on connection upsert |
| `instance-{instanceId}-oidc-secret` | OIDC client secret | Console API on connection upsert |

PostgreSQL only stores key references (e.g. `instance-abc123-dburi`) — never the raw values.
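The write path can be sketched as follows. `SecretStore`, `InstanceRepo`, and `UpsertDBURI` are hypothetical names for illustration, not the Console's real types; the in-memory fakes stand in for Azure and the database:

```go
package main

import "fmt"

// SecretStore and InstanceRepo are hypothetical interfaces sketched for
// illustration; the real Console types may differ.
type SecretStore interface {
	Set(name, value string) error
}

type InstanceRepo interface {
	SaveDBURIRef(instanceID, ref string) error
}

// UpsertDBURI writes the raw URI to the vault and only the key
// reference to the relational database.
func UpsertDBURI(store SecretStore, repo InstanceRepo, instanceID, rawURI string) (string, error) {
	ref := fmt.Sprintf("instance-%s-dburi", instanceID)
	if err := store.Set(ref, rawURI); err != nil {
		return "", err
	}
	return ref, repo.SaveDBURIRef(instanceID, ref)
}

// In-memory fakes so the sketch runs without Azure or Postgres.
type memStore map[string]string

func (m memStore) Set(name, value string) error { m[name] = value; return nil }

type memRepo map[string]string

func (m memRepo) SaveDBURIRef(id, ref string) error { m[id] = ref; return nil }

func main() {
	vault, db := memStore{}, memRepo{}
	ref, _ := UpsertDBURI(vault, db, "abc123", "mongodb://localhost:27017/db")
	fmt.Println(ref)          // instance-abc123-dburi
	fmt.Println(db["abc123"]) // the DB row holds only the reference
}
```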

Authentication to the vault uses `DefaultAzureCredential`, which tries in order:

  1. Managed Identity — used in AKS (production)
  2. Environment variables — used in CI/CD (AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, AZURE_TENANT_ID)
  3. Azure CLI session — used in local dev (az login)

## Local Development

### Prerequisites

  • Azure CLI installed: `brew install azure-cli`
  • Access to the `exto-console-dev` key vault

### One-time setup

1. Log in with Azure CLI:

```bash
az login
```

2. Grant yourself Secrets Officer on the dev vault:

```bash
az role assignment create \
  --role "Key Vault Secrets Officer" \
  --assignee $(az ad signed-in-user show --query id -o tsv) \
  --scope $(az keyvault show --name exto-console-dev --query id -o tsv)
```

Role assignments can take a few minutes to propagate.

3. Add to your `.env`:

```env
SECRET_STORE_PROVIDER=azure-keyvault
AZURE_KEY_VAULT_URL=https://exto-console-dev.vault.azure.net/
```

4. Run as normal:

```bash
make api
make worker
```

On startup you should see:

```
secret store: azure key vault vault=https://exto-console-dev.vault.azure.net/
```

### Opting out (no vault needed)

Leave `SECRET_STORE_PROVIDER` unset or set it to `noop`. The app starts without vault access but logs a warning on every secret operation. Instance tokens and connection credentials are not persisted — fine for UI/routing work but breaks any flow that needs real DB connections or token auth.
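A minimal sketch of what a noop provider might look like — the real implementation lives in the Console codebase and may behave differently:

```go
package main

import (
	"fmt"
	"log"
)

// noopStore is an illustrative stand-in for the real noop provider:
// writes are dropped with a warning, reads always fail.
type noopStore struct{}

func (noopStore) Set(name, value string) error {
	log.Printf("warn: noop secret store, dropping write of %q", name)
	return nil // value is discarded, nothing persisted
}

func (noopStore) Get(name string) (string, error) {
	log.Printf("warn: noop secret store, no value for %q", name)
	return "", fmt.Errorf("secret %q not persisted by noop store", name)
}

func main() {
	var s noopStore
	_ = s.Set("instance-abc123", "token")
	_, err := s.Get("instance-abc123")
	fmt.Println(err != nil) // true — reads always fail
}
```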


## CI/CD (GitHub Actions)

CI uses a service principal with scoped vault access.

### One-time setup (done once per environment)

1. Create a service principal:

```bash
# Azure CLI >= 2.37 assigns no role by default; on older versions add --skip-assignment
SP=$(az ad sp create-for-rbac --name "console-ci-staging")
echo $SP  # save appId (CLIENT_ID), password (CLIENT_SECRET), tenant (TENANT_ID)
```

2. Grant it Secrets Officer on the target vault:

```bash
az role assignment create \
  --role "Key Vault Secrets Officer" \
  --assignee <CLIENT_ID from above> \
  --scope $(az keyvault show --name exto-console-<env> --query id -o tsv)
```

3. Add to GitHub repository secrets (Settings → Secrets and variables → Actions):

```
AZURE_TENANT_ID       = <tenantId from SP output>
AZURE_CLIENT_ID       = <clientId from SP output>
AZURE_CLIENT_SECRET   = <clientSecret from SP output>
AZURE_KEY_VAULT_URL   = https://exto-console-<env>.vault.azure.net/
```

### Workflow snippet

```yaml
- name: Run integration tests
  env:
    SECRET_STORE_PROVIDER: azure-keyvault
    AZURE_KEY_VAULT_URL: ${{ secrets.AZURE_KEY_VAULT_URL }}
    AZURE_TENANT_ID: ${{ secrets.AZURE_TENANT_ID }}
    AZURE_CLIENT_ID: ${{ secrets.AZURE_CLIENT_ID }}
    AZURE_CLIENT_SECRET: ${{ secrets.AZURE_CLIENT_SECRET }}
    MONGO_URI: ${{ secrets.MONGO_URI }}
  run: go test ./...
```

`DefaultAzureCredential` picks up the three `AZURE_*` env vars automatically — no code change needed.


## Production (AKS + Workload Identity)

In production, Console pods authenticate via Azure Workload Identity: no credentials are stored anywhere; the pod identity is granted vault access directly.

### One-time Azure setup

1. Enable OIDC issuer and Workload Identity on the AKS cluster:

```bash
az aks update \
  --name <cluster-name> \
  --resource-group <rg> \
  --enable-oidc-issuer \
  --enable-workload-identity
```

2. Create a managed identity for the Console:

```bash
az identity create \
  --name console-identity \
  --resource-group <rg>
```

3. Grant it Secrets Officer on the production vault:

```bash
az role assignment create \
  --role "Key Vault Secrets Officer" \
  --assignee $(az identity show --name console-identity --resource-group <rg> --query clientId -o tsv) \
  --scope $(az keyvault show --name exto-console-prod --query id -o tsv)
```

4. Create a federated credential linking the managed identity to the Console's Kubernetes service account:

```bash
AKS_OIDC=$(az aks show --name <cluster-name> --resource-group <rg> --query oidcIssuerProfile.issuerUrl -o tsv)

az identity federated-credential create \
  --name console-api-federated \
  --identity-name console-identity \
  --resource-group <rg> \
  --issuer "$AKS_OIDC" \
  --subject "system:serviceaccount:console:console-api" \
  --audience api://AzureADTokenExchange
```

### Kubernetes manifests

ServiceAccount (annotated with the managed identity):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: console-api
  namespace: console
  annotations:
    azure.workload.identity/client-id: "<managed-identity-client-id>"
```

Deployment (label the pod so the webhook injects the token):

```yaml
spec:
  template:
    metadata:
      labels:
        azure.workload.identity/use: "true"
    spec:
      serviceAccountName: console-api
      containers:
        - name: console-api
          env:
            - name: SECRET_STORE_PROVIDER
              value: azure-keyvault
            - name: AZURE_KEY_VAULT_URL
              value: https://exto-console-prod.vault.azure.net/
            # No AZURE_CLIENT_ID / AZURE_CLIENT_SECRET needed —
            # Workload Identity injects the token automatically.
```

### How it works at runtime

  1. The AKS Workload Identity webhook mounts a projected service account token into the pod.
  2. `DefaultAzureCredential` detects the `AZURE_FEDERATED_TOKEN_FILE` env var injected by the webhook and exchanges that token for a Key Vault access token.
  3. No credentials are stored in environment variables, Kubernetes secrets, or code.

## Instance Access to Key Vault

Instances read all three of their own secrets directly from the vault:

### What an instance reads

| Secret key | Content | When |
|---|---|---|
| `instance-{instanceId}` | Registration token (Bearer for Console calls) | On startup + periodic refresh |
| `instance-{instanceId}-dburi` | MongoDB connection URI | On startup, to connect to its own DB |
| `instance-{instanceId}-oidc-secret` | OIDC client secret (Zitadel app) | On startup, for OIDC auth config |

### Required RBAC role

Instances need Key Vault Secrets User (read-only) scoped to their own three secrets.

```bash
VAULT_ID=$(az keyvault show --name exto-console-prod --query id -o tsv)
IDENTITY=$(az identity show --name instance-<instanceId> --resource-group <rg> --query clientId -o tsv)

# Grant read access to all three secrets for this instance
for SECRET in \
  "instance-<instanceId>" \
  "instance-<instanceId>-dburi" \
  "instance-<instanceId>-oidc-secret"; do
  az role assignment create \
    --role "Key Vault Secrets User" \
    --assignee $IDENTITY \
    --scope "$VAULT_ID/secrets/$SECRET"
done
```

### Kubernetes setup (AKS Workload Identity)

Each instance pod needs its own managed identity and federated credential, just like the Console.

1. Create a managed identity per instance:

```bash
az identity create \
  --name instance-<instanceId> \
  --resource-group <rg>
```

2. Grant it Secrets User on each of its three secrets (same pattern as the loop under Required RBAC role):

```bash
# Repeat for each of the three secrets: instance-<instanceId>,
# instance-<instanceId>-dburi, instance-<instanceId>-oidc-secret
az role assignment create \
  --role "Key Vault Secrets User" \
  --assignee $(az identity show --name instance-<instanceId> --resource-group <rg> --query clientId -o tsv) \
  --scope "$(az keyvault show --name exto-console-prod --query id -o tsv)/secrets/instance-<instanceId>"
```

3. Create a federated credential linking it to the instance's Kubernetes service account:

```bash
AKS_OIDC=$(az aks show --name <cluster-name> --resource-group <rg> --query oidcIssuerProfile.issuerUrl -o tsv)

az identity federated-credential create \
  --name instance-<instanceId>-federated \
  --identity-name instance-<instanceId> \
  --resource-group <rg> \
  --issuer "$AKS_OIDC" \
  --subject "system:serviceaccount:<namespace>:<serviceaccount-name>" \
  --audience api://AzureADTokenExchange
```

4. Annotate the instance's ServiceAccount:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: <serviceaccount-name>
  namespace: <namespace>
  annotations:
    azure.workload.identity/client-id: "<instance-managed-identity-client-id>"
```

5. Label the instance pod:

```yaml
spec:
  template:
    metadata:
      labels:
        azure.workload.identity/use: "true"
    spec:
      serviceAccountName: <serviceaccount-name>
```

### How the instance reads its token at runtime

The instance uses the Azure SDK with `DefaultAzureCredential` (same as the Console). On AKS it resolves via Workload Identity automatically.

```
Secret name:  instance-{instanceId}
Vault URL:    https://exto-console-prod.vault.azure.net/
```

The instance should:

  1. Read the secret on startup.
  2. Cache the token in memory.
  3. Re-read the secret periodically (or on 401 from the Console) to pick up rotations without a restart.

The Console's rotate-token endpoint overwrites the same secret name — setting a new value creates a new secret version in Key Vault, so the instance picks up the new token on its next read with no coordination needed.
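The read-cache-refresh behavior above can be sketched with a small in-memory cache. `SecretReader`, `TokenCache`, and `fakeVault` are illustrative names; real instance code would wrap the Azure SDK secrets client:

```go
package main

import (
	"fmt"
	"sync"
)

// SecretReader abstracts the vault read; in production this would wrap
// the Azure SDK secrets client (assumed shape, not the real type).
type SecretReader interface {
	Read(name string) (string, error)
}

// TokenCache keeps the registration token in memory and re-reads the
// vault when the cached value is invalidated (e.g. a 401 after rotation).
type TokenCache struct {
	mu     sync.Mutex
	reader SecretReader
	name   string
	token  string
}

func (c *TokenCache) Token() (string, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.token == "" {
		t, err := c.reader.Read(c.name)
		if err != nil {
			return "", err
		}
		c.token = t
	}
	return c.token, nil
}

// Invalidate drops the cached value so the next Token() hits the vault.
func (c *TokenCache) Invalidate() { c.mu.Lock(); c.token = ""; c.mu.Unlock() }

// fakeVault simulates a rotation between reads.
type fakeVault struct{ values []string }

func (f *fakeVault) Read(string) (string, error) {
	v := f.values[0]
	if len(f.values) > 1 {
		f.values = f.values[1:]
	}
	return v, nil
}

func main() {
	c := &TokenCache{reader: &fakeVault{values: []string{"tok-v1", "tok-v2"}}, name: "instance-abc123"}
	t1, _ := c.Token()
	c.Invalidate() // pretend the Console returned 401
	t2, _ := c.Token()
	fmt.Println(t1, t2) // tok-v1 tok-v2
}
```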

## RBAC summary

| Principal | Role | Scope |
|---|---|---|
| Console API pod | Key Vault Secrets Officer | Entire vault |
| Console Worker pod | Key Vault Secrets Officer | Entire vault |
| Instance pod | Key Vault Secrets User | Its own three `instance-{instanceId}*` secrets only |

## Vault naming reference

All secret names use only alphanumeric characters and hyphens (an Azure Key Vault requirement).

| Type | Key format | Example |
|---|---|---|
| Instance token | `instance-{instanceId}` | `instance-507f1f77bcf86cd799439011` |
| DB URI | `instance-{instanceId}-dburi` | `instance-507f1f77bcf86cd799439011-dburi` |
| OIDC client secret | `instance-{instanceId}-oidc-secret` | `instance-507f1f77bcf86cd799439011-oidc-secret` |

The `{instanceId}` is the primary key of the `instances` row, set at insert time. The value stored in `db_uri_ref` and `oidc_client_secret_ref` is exactly this key name.


## Verifying secrets in the vault

```bash
# List all Console secrets
az keyvault secret list --vault-name exto-console-dev --query "[].name" -o tsv

# Read a specific secret
az keyvault secret show --vault-name exto-console-dev --name instance-<instanceId>-dburi --query value -o tsv

# Delete a secret (e.g. after decommissioning an instance).
# Note: this soft-deletes; follow with `az keyvault secret purge` to remove it permanently.
az keyvault secret delete --vault-name exto-console-dev --name instance-<instanceId>
```