Console-Agent Setup

Deploy console-agent to instance clusters. Each customer instance runs an agent that reports health back to Console and executes commands (rollouts, rollbacks).

Architecture

┌────────────────────────────────────────────────┐
│  Instance Cluster                              │
│                                                │
│  ┌───────────────┐    ┌─────────────────────┐  │
│  │ console-agent │───▶│ exto-go (app)       │  │
│  │               │    │ deployments, pods   │  │
│  └───────┬───────┘    └─────────────────────┘  │
│          │                                     │
│          │ HTTPS (reports + SSE commands)      │
└──────────┼─────────────────────────────────────┘
           │
           ▼
   Console API (console-dev / console-prod)

The agent uses a Kubernetes Role + RoleBinding (not ClusterRole) — it can only access resources in its own namespace.

Resource      Verbs                            Purpose
deployments   get, list, watch, update, patch  Observe state, execute rollouts
replicasets   get, list, watch                 Track rollout history
pods          get, list, watch                 Report pod health
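
The table above corresponds to a namespaced Role of roughly this shape. This is a sketch, not the actual deploy/console-agent/base/role.yaml — the name and metadata may differ in the repo:

```yaml
# Sketch of the namespaced Role implied by the verbs table above.
# Names/metadata are illustrative; check base/role.yaml for the real one.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: console-agent
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: ["apps"]
    resources: ["replicasets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
```

Because it is a Role (not a ClusterRole), the matching RoleBinding grants these permissions only inside the agent's own namespace.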

Prerequisites

  • Console is deployed and running — see Console Deployment
  • kubectl access to the instance cluster
  • ACR credentials or ACR-AKS attachment for image pull
  • An instance created in Console UI (provides Instance ID + Instance Token)

1. Create an instance in Console

  1. Open Console UI (https://console-dev.exto360.com)
  2. Navigate to Instances > Create Instance
  3. Fill in instance details (name, domain, etc.)
  4. After creation, Console displays:
    • Instance ID — e.g. 69c07e5a3d88632a232f24e0
    • Instance Token — a one-time-display token for agent authentication
  5. Copy both values immediately — the token is shown only once

2. Create namespace and secrets

On the instance cluster (which may be a different AKS cluster from the one running Console):

bash
# Create namespace (if not already present)
kubectl create namespace <instance-namespace>

# Create image pull secret (if ACR not attached)
kubectl -n <instance-namespace> create secret docker-registry <pull-secret-name> \
  --docker-server=gaeadev.azurecr.io \
  --docker-username=<acr-username> \
  --docker-password=<acr-password>

# Create the agent secret with the instance token from Console
kubectl -n <instance-namespace> create secret generic console-agent-secrets \
  --from-literal=instance-token='<instance-token-from-console>'

3. Build and push the agent image

bash
make docker-agent REGISTRY=gaeadev.azurecr.io TAG=dev-latest
docker push gaeadev.azurecr.io/console-agent:dev-latest

Apple Silicon note: Add --platform linux/amd64 if building on macOS ARM.

4. Prepare the agent overlay

Copy the example overlay:

bash
cp -r deploy/console-agent/overlays/example deploy/console-agent/overlays/<instance-name>

Edit the new overlay:

kustomization.yaml:

yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: <instance-namespace>

resources:
  - ../../base

patches:
  - path: patch-agent.yaml

images:
  - name: registry.io/exto/console-agent
    newName: gaeadev.azurecr.io/console-agent
    newTag: "dev-latest"

patch-agent.yaml:

yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: console-agent
spec:
  template:
    spec:
      serviceAccountName: console-agent
      imagePullSecrets:
        - name: <pull-secret-name>
      containers:
        - name: console-agent
          imagePullPolicy: Always
          env:
            - name: CONSOLE_API_URL
              value: "https://console-dev.exto360.com/api/v1"
            - name: CONSOLE_INSTANCE_ID
              value: "<instance-id-from-console>"
            - name: CONSOLE_INSTANCE_TOKEN
              valueFrom:
                secretKeyRef:
                  name: console-agent-secrets
                  key: instance-token
            - name: LOG_LEVEL
              value: "debug"
            - name: REPORT_INTERVAL
              value: "60s"

5. Deploy

Single namespace (manual)

Switch kubectl context to the instance cluster, then:

bash
# Preview
kubectl apply -k deploy/console-agent/overlays/<instance-name> --dry-run=client -o yaml

# Deploy
kubectl apply -k deploy/console-agent/overlays/<instance-name>

Multiple namespaces (scripted)

Use the deploy script with the shared overlay to deploy across multiple namespaces without creating per-namespace overlay directories.

Prerequisites per namespace

Each namespace needs a console-agent-secrets secret containing the instance credentials from Console:

bash
kubectl -n <namespace> create secret generic console-agent-secrets \
  --from-literal=console.api.url='https://your-console-api/api/v1' \
  --from-literal=console.instance.id='<instance-id>' \
  --from-literal=console.instance.token='<instance-token>'

Also ensure the image pull secret exists in each namespace. By default the script expects exto-<namespace>-pull-secret.
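
When seeding several namespaces at once, a small loop over namespace/credential triples can save typing. This is only a sketch, not part of the repo's tooling — the ns:id:token entries are placeholders, and remember that each namespace gets its own Instance ID and Token from Console:

```shell
# Hypothetical helper: print (not run) the per-namespace secret-creation
# commands so they can be reviewed before piping the output to `sh`.
# Each entry is <namespace>:<instance-id>:<instance-token> — placeholders.
gen_secret_cmds() {
  printf '%s\n' "$1" | while IFS=: read -r ns id token; do
    printf 'kubectl -n %s create secret generic console-agent-secrets \\\n' "$ns"
    printf "  --from-literal=console.api.url='https://your-console-api/api/v1' \\\\\n"
    printf "  --from-literal=console.instance.id='%s' \\\\\n" "$id"
    printf "  --from-literal=console.instance.token='%s'\n" "$token"
  done
}

# Placeholder entries — substitute real values copied from Console
gen_secret_cmds "qa1:id-qa1:token-qa1
qa2:id-qa2:token-qa2"
```

Review the printed commands, then pipe them to `sh` (or paste them) once the values look right.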

Deploy to namespaces

bash
# Deploy to multiple namespaces on one server (comma-separated)
./scripts/deploy-console-agent.sh <context> <ns1>,<ns2>,<ns3>

# Example: deploy to 5 namespaces on the dev cluster
./scripts/deploy-console-agent.sh gt-ex-dev-aks-01 qa1,qa2,qa3,staging1,staging2

Override pull secret name

By default the script uses exto-<namespace>-pull-secret. If a namespace has a different pull secret name, use the --pull-secret flag:

bash
./scripts/deploy-console-agent.sh gt-ex-dev-aks-01 special-ns --pull-secret my-custom-pull-secret

Note: --pull-secret applies to all namespaces in that invocation. If only one namespace has a different pull secret, run it separately.

Dry run

Preview the rendered manifests without applying:

bash
./scripts/deploy-console-agent.sh gt-ex-dev-aks-01 qa1,qa2 --dry-run

Deploying across multiple servers

Run the script once per server with that server's namespaces:

bash
./scripts/deploy-console-agent.sh gt-ex-dev-aks-01  qa1,qa2,qa3
./scripts/deploy-console-agent.sh gt-ex-dev-aks-02  demo1,demo2,demo3
./scripts/deploy-console-agent.sh gt-ex-prod-aks-01 prod1,prod2,prod3
./scripts/deploy-console-agent.sh gt-ex-prod-aks-02 prod4,prod5,prod6
./scripts/deploy-console-agent.sh gt-ex-prod-aks-03 prod7,prod8
./scripts/deploy-console-agent.sh gt-ex-prod-aks-04 prod9
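
The per-server invocations above can also be driven from a single list. A sketch (context/namespace pairs copied from the examples; the `echo` only prints the plan — drop it to actually run the script):

```shell
# Print the deploy plan, one invocation per server; remove `echo` once
# the list looks right. Pairs mirror the examples above.
plan="gt-ex-dev-aks-01 qa1,qa2,qa3
gt-ex-dev-aks-02 demo1,demo2,demo3
gt-ex-prod-aks-01 prod1,prod2,prod3"

printf '%s\n' "$plan" | while read -r ctx namespaces; do
  echo ./scripts/deploy-console-agent.sh "$ctx" "$namespaces"
done
```

Keeping the plan in a file under version control makes it easy to see which namespaces live on which server.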

6. Deploy exto-go

Each instance also needs the exto-go application deployed in the same namespace. The agent monitors exto-go's deployments:

  • Console-agent watches Deployment, ReplicaSet, and Pod resources in its namespace
  • It reports pod health and deployment status back to Console
  • Console can send commands (rollout, rollback) that the agent executes

Refer to the exto-go deployment docs for application-specific setup.

7. Verify

bash
# Agent pod should be Running
kubectl -n <instance-namespace> get pods

# Agent logs — look for: "console-agent starting", "console api: reachable"
kubectl -n <instance-namespace> logs deploy/console-agent --tail=20

# In Console UI: the instance should show as "connected" / healthy

End-to-end checklist

  1. Create instance in Console UI → copy Instance ID and Instance Token
  2. On instance cluster:
    • Create namespace
    • Create image pull secret
    • Create console-agent-secrets with the instance token
  3. Copy and configure agent overlay (deploy/console-agent/overlays/<name>/)
  4. kubectl apply -k deploy/console-agent/overlays/<name>/
  5. Deploy exto-go application in the same namespace
  6. Verify agent logs show "console api: reachable"
  7. Verify Console UI shows instance as connected

Directory Structure

deploy/console-agent/
├── base/
│   ├── kustomization.yaml
│   ├── deployment.yaml
│   ├── serviceaccount.yaml
│   ├── role.yaml               # apps/deployments, replicasets, pods
│   └── rolebinding.yaml
└── overlays/
    ├── shared/                 # Reusable overlay for multi-namespace deploy script
    │   ├── kustomization.yaml
    │   └── patch-agent.yaml
    ├── example/                # Copy for custom per-instance overlays
    ├── qa1/
    └── qa2/

scripts/
└── deploy-console-agent.sh    # Deploy to one or more namespaces