Alauda Build of Kiali

Using Alauda Build of Kiali

The Alauda Build of Kiali provides observability and visualization capabilities for applications deployed within the service mesh. After you add an application to the mesh, you can use the Alauda Build of Kiali to inspect traffic flow and monitor mesh behavior.

About Kiali

The Alauda Build of Kiali is derived from the open source Kiali project and serves as the management console for Alauda Service Mesh.

It provides:

  • Visualization of mesh topology and real-time traffic flow
  • Insight into application health status and performance metrics
  • Centralized access to configuration and validation tools
  • Integration with Grafana for metric dashboards
  • Support for distributed tracing via Jaeger or OpenTelemetry

These capabilities enable users to diagnose service behavior, identify potential issues, and optimize mesh configuration from a unified interface.

Installing Alauda Build of Kiali

The following steps show how to install the Alauda Build of Kiali.

Installing via the web console

Prerequisites

  • The Alauda Build of Kiali must be uploaded.
  • You are logged in to the Alauda Container Platform web console as cluster-admin.

Procedure

  1. In the Alauda Container Platform web console, navigate to Administrator.
  2. Select Marketplace > OperatorHub.
  3. Search for the Alauda Build of Kiali.
  4. Locate the Alauda Build of Kiali, and click to select it.
  5. Click Install.
  6. Click Install again, then click Confirm to install the Operator.

Verification

Verify that the Operator installation status is reported as Succeeded in the Installation Info section.
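
Optionally, confirm the same status from the CLI. A minimal check, assuming the ACP CLI context points at the cluster:

# The kiali-operator CSV should report its PHASE as Succeeded.
kubectl get csv --all-namespaces | grep kiali-operator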

Installing via the CLI

Prerequisites

  • The Alauda Build of Kiali must be uploaded.
  • An active ACP CLI (kubectl) session as a cluster administrator with the cluster-admin role.

Procedure

  1. Check available versions

    (
      echo -e "CHANNEL\tNAME\tVERSION"
      kubectl get packagemanifest kiali-operator -o json | jq -r '
        .status.channels[] |
        .name as $channel |
        .entries[] |
        [$channel, .name, .version] | @tsv
      '
    ) | column -t -s $'\t'

    Example output

    CHANNEL  NAME                       VERSION
    stable   kiali-operator.v2.17.1-r1  2.17.1-r1

    Fields:

    • CHANNEL: Operator channel name
    • NAME: CSV resource name
    • VERSION: Operator version
  2. Confirm catalogSource

    kubectl get packagemanifests kiali-operator -o jsonpath='{.status.catalogSource}'

    Example output

    platform

    This indicates the kiali-operator comes from the platform catalogSource.

  3. Create a namespace

    kubectl get namespace kiali-operator || kubectl create namespace kiali-operator
  4. Create a Subscription

    kubectl apply -f - <<EOF
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      annotations:
        cpaas.io/target-namespaces: ""
      labels:
        catalog: platform
      name: kiali-operator
      namespace: kiali-operator
    spec:
      channel: stable
      installPlanApproval: Manual
      name: kiali-operator
      source: platform
      sourceNamespace: cpaas-system
      startingCSV: kiali-operator.v2.17.1-r1
    EOF

    Field explanations

    • annotation cpaas.io/target-namespaces: Recommended to leave this empty; an empty value indicates cluster-wide installation.
    • .metadata.name: Subscription name (DNS-compliant, max 253 characters).
    • .metadata.namespace: Namespace where the Operator will be installed.
    • .spec.channel: Subscribed Operator channel.
    • .spec.installPlanApproval: Approval strategy (Manual or Automatic). Here, Manual requires manual approval for install/upgrade; an Automatic variant is sketched after this procedure.
    • .spec.source: Operator catalogSource.
    • .spec.sourceNamespace: Must be set to cpaas-system because all catalogSources provided by the platform are located in this namespace.
    • .spec.startingCSV: Specifies the version to install for Manual approval; defaults to the latest in the channel if empty. Not required for Automatic.
  5. Check Subscription status

    kubectl -n kiali-operator get subscriptions kiali-operator -o yaml

    Key output

    • .status.state: UpgradePending indicates the Operator is awaiting installation or upgrade.
    • Condition InstallPlanPending = True: Waiting for manual approval.
    • .status.currentCSV: Latest subscribed CSV.
    • .status.installPlanRef: Associated InstallPlan; must be approved before installation proceeds.

    Wait for the InstallPlanPending condition to be True:

    kubectl -n kiali-operator wait --for=condition=InstallPlanPending subscription kiali-operator --timeout=2m
  6. Approve InstallPlan

    kubectl -n kiali-operator get installplan \
      "$(kubectl -n kiali-operator get subscriptions kiali-operator -o jsonpath='{.status.installPlanRef.name}')"

    Example output

    NAME            CSV                         APPROVAL   APPROVED
    install-ddh84   kiali-operator.v2.17.1-r1   Manual     false

    Approve manually

    PLAN="$(kubectl -n kiali-operator get subscription kiali-operator -o jsonpath='{.status.installPlanRef.name}')"
    kubectl -n kiali-operator patch installplan "$PLAN" --type=json -p='[{"op": "replace", "path": "/spec/approved", "value": true}]'
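
If you prefer automatic approval, steps 5 and 6 are unnecessary. The following sketch, assembled from the field explanations above, swaps the approval strategy and omits startingCSV so that the latest version in the channel installs directly:

kubectl apply -f - <<EOF
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  annotations:
    cpaas.io/target-namespaces: ""
  labels:
    catalog: platform
  name: kiali-operator
  namespace: kiali-operator
spec:
  channel: stable
  installPlanApproval: Automatic
  name: kiali-operator
  source: platform
  sourceNamespace: cpaas-system
EOF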

Verification

Wait for CSV creation; Phase changes to Succeeded:

kubectl wait --for=jsonpath='{.status.phase}'=Succeeded csv --all -n kiali-operator --timeout=3m

Check CSV status:

kubectl -n kiali-operator get csv

Example output

NAME                        DISPLAY                 VERSION     REPLACES   PHASE
kiali-operator.v2.17.1-r1   Alauda Build of Kiali   2.17.1-r1              Succeeded

Fields

  • NAME: Installed CSV name
  • DISPLAY: Operator display name
  • VERSION: Operator version
  • REPLACES: CSV replaced during upgrade
  • PHASE: Installation status (Succeeded indicates success)
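
As a final check, you can confirm that the Operator workload itself is running; this assumes the default deployment name kiali-operator:

# The Operator deployment should report READY 1/1.
kubectl -n kiali-operator get deployment kiali-operator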

Configuring Monitoring with Kiali

The following steps show how to integrate the Alauda Build of Kiali with user-workload monitoring.

Prerequisites

  • The Alauda Build of Kiali Operator is installed.
  • Alauda Service Mesh is installed, so the istio-system namespace exists.
  • Platform monitoring is enabled in the cluster.

Procedure

Retrieve the CA certificate for Alauda Container Platform from the Global cluster:

NOTE

Run the following command in the Global cluster.

# CA certificate for ACP - base64-encoded
kubectl -ncpaas-system get secret dex.tls -o jsonpath='{.data.ca\.crt}'

The output is a base64-encoded certificate. Store this value for use in later steps.

Note: If the command returns empty output, contact your administrator to obtain the ACP CA certificate.
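
Before using the value, you can optionally decode it and inspect the certificate. A minimal sketch, assuming openssl is available and you are still on the Global cluster:

# Decode the base64 value and print the certificate subject and validity window.
kubectl -ncpaas-system get secret dex.tls -o jsonpath='{.data.ca\.crt}' \
  | base64 -d \
  | openssl x509 -noout -subject -dates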

Retrieve platform configuration from the business cluster:

export PLATFORM_URL=$(kubectl -nkube-public get configmap global-info -o jsonpath='{.data.platformURL}')
export CLUSTER_NAME=$(kubectl -nkube-public get configmap global-info -o jsonpath='{.data.clusterName}')
export ALB_CLASS_NAME=$(kubectl -nkube-public get configmap global-info -o jsonpath='{.data.systemAlbIngressClassName}')

export OIDC_ISSUER=$(kubectl -nkube-public get configmap global-info -o jsonpath='{.data.oidcIssuer}')
export OIDC_CLIENT_ID=$(kubectl -nkube-public get configmap global-info -o jsonpath='{.data.oidcClientID}')
export OIDC_CLIENT_SECRET=$(kubectl -nkube-public get configmap global-info -o jsonpath='{.data.oidcClientSecret}')

export MONITORING_URL=$(kubectl get feature monitoring -o jsonpath='{.spec.accessInfo.database.address}')
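
Optionally, verify that every variable was populated; an empty value means the global-info ConfigMap or the monitoring feature is missing that field. A small bash check:

# Each variable should report "set"; EMPTY indicates a missing field.
for v in PLATFORM_URL CLUSTER_NAME ALB_CLASS_NAME OIDC_ISSUER OIDC_CLIENT_ID OIDC_CLIENT_SECRET MONITORING_URL; do
  [ -n "${!v}" ] && echo "$v: set" || echo "$v: EMPTY"
done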

Create a Secret named kiali in the istio-system namespace for OpenID authentication:

kubectl create secret generic kiali --from-literal="oidc-secret=$OIDC_CLIENT_SECRET" -nistio-system

Example output:

secret/kiali created

Create a Secret for monitoring database credentials:

SECRET_NAME=$(kubectl get feature monitoring -o jsonpath='{.spec.accessInfo.database.basicAuth.secretName}')

AUTH_USERNAME=$(kubectl -ncpaas-system get secret "$SECRET_NAME" -o jsonpath="{.data.username}" | base64 -d)
AUTH_PASSWORD=$(kubectl -ncpaas-system get secret "$SECRET_NAME" -o jsonpath="{.data.password}" | base64 -d)

kubectl create secret generic "kiali-monitoring-basic-auth" \
  --from-literal="username=$AUTH_USERNAME" \
  --from-literal="password=$AUTH_PASSWORD" \
  -n istio-system

Example output:

secret/kiali-monitoring-basic-auth created
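
Optionally, test the credentials against the monitoring endpoint before wiring them into Kiali. This sketch assumes MONITORING_URL is reachable from your workstation and serves the Prometheus-compatible HTTP API (true for both Prometheus and VictoriaMetrics):

# A valid response is JSON starting with {"status":"success",...}.
curl -sk -u "$AUTH_USERNAME:$AUTH_PASSWORD" "$MONITORING_URL/api/v1/query?query=up" | head -c 200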

Create a file named kiali.yaml with the following content. Replace placeholder values as needed:

kiali.yaml
apiVersion: kiali.io/v1alpha1
kind: Kiali
metadata:
  name: kiali
  namespace: istio-system
spec:
  server:
    web_port: "443"
    web_root: /clusters/${CLUSTER_NAME}/kiali
  auth:
    openid:
      api_proxy: ${PLATFORM_URL}/kubernetes/${CLUSTER_NAME}
      api_proxy_ca_data: ${PLATFORM_CA}
      insecure_skip_verify_tls: true
      issuer_uri: ${OIDC_ISSUER}
      client_id: ${OIDC_CLIENT_ID}
      username_claim: email
    strategy: openid
  deployment:
    view_only_mode: false
    replicas: 1
    resources:
      requests:
        cpu: "100m"
        memory: "64Mi"
      limits:
        cpu: "2000m"
        memory: "1Gi"
    ingress:
      enabled: true
      class_name: ${ALB_CLASS_NAME}
  external_services:
    grafana:
      enabled: false  # Since Grafana is not bundled in ACP anymore, it is disabled by default
    prometheus:
      # query_scope only required in multi cluster
      # query_scope:
      #   mesh_id: <mesh_id>
      auth:
        type: basic
        username: secret:kiali-monitoring-basic-auth:username
        password: secret:kiali-monitoring-basic-auth:password
        insecure_skip_verify: true
      # Define thanos_proxy if Prometheus is to be queried through a Thanos proxy (it is required when using VictoriaMetrics)
      thanos_proxy:
        enabled: true
        retention_period: 7d
        scrape_interval: 60s
      url: ${MONITORING_URL}
  kiali_feature_flags:
    ui_defaults:
      i18n:
        language: en
        show_selector: true
Field explanations

  • web_port (string): the port for accessing the Kiali dashboard.
  • web_root: the path under the platform URL for accessing the Kiali dashboard.
  • api_proxy: points to erebus to map ACP user tokens to Kubernetes tokens.
  • api_proxy_ca_data: the base64-encoded CA certificate used by erebus.
  • issuer_uri: the OIDC issuer URL for dex.
  • client_id: the OIDC client ID for dex.
  • replicas: the number of replicas for the Kiali deployment; use at least 2 in production environments.
  • class_name: the ingress class name for the Kiali ingress.
  • query_scope: required only in a multi-cluster mesh; <mesh_id> must match .spec.values.global.meshId in the Istio resource.
  • username: references the monitoring basic-auth username stored in the kiali-monitoring-basic-auth Secret.
  • password: references the monitoring basic-auth password stored in the kiali-monitoring-basic-auth Secret.
  • thanos_proxy: define this block if Prometheus is queried through a Thanos proxy; required when using VictoriaMetrics.
  • url: the monitoring endpoint for Prometheus or VictoriaMetrics.
  • i18n: the default language and whether to show the language selector.

Render the manifest with envsubst and apply the configuration:

# Replace <platform-ca> with the real base64-encoded CA certificate saved previously.
export PLATFORM_CA=<platform-ca>
envsubst < kiali.yaml | kubectl apply -f -
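
You can then wait for the Kiali server to come up. This check assumes the server pod carries the upstream label app.kubernetes.io/name=kiali:

# Wait until the Kiali server pod reports Ready.
kubectl -n istio-system wait --for=condition=Ready pod -l app.kubernetes.io/name=kiali --timeout=5m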

Access the Kiali console:

When the Kiali resource is ready, access the Kiali dashboard at <platform-url>/clusters/<cluster>/kiali.

Integrating distributed tracing platform with Alauda Build of Kiali

After integration with a distributed tracing platform, the Alauda Build of Kiali enables visualization of request traces directly in the Kiali console. These traces provide insight into inter-service communication within the service mesh and can help identify latency, failures, or bottlenecks in request paths.

This capability supports the analysis of request flow behavior, aiding in root cause identification and performance optimization across services in the mesh.

Prerequisites

  • Alauda Service Mesh is installed.
  • A distributed tracing platform, such as the Alauda Build of Jaeger, is installed and configured.

Procedure

  1. Update the Kiali resource spec configuration for tracing:

    Example Kiali resource spec configuration for tracing

    spec:
      external_services:
        tracing:
          # query_scope only required in multi cluster
          # query_scope:
            # istio.mesh_id: <mesh_id>
          enabled: true
          provider: jaeger
          use_grpc: true
          internal_url: "http://jaeger-prod-query.istio-system:16685/jaeger"
          # (Optional) Public facing URL of Jaeger
          # external_url: "<platform-url>/clusters/<cluster>/istio/jaeger"
          # When external_url is not defined, disable_version_check should be set to true
          disable_version_check: true
    • query_scope: required only in a multi-cluster mesh; <mesh_id> must match .spec.values.global.meshId in the Istio resource.
    • enabled: specifies whether tracing is enabled.
    • provider: the tracing provider (jaeger or tempo).
    • internal_url: the internal URL for the Jaeger or Tempo API.
  2. Save the updated spec in kiali_cr.yaml.

  3. Run the following command to apply the configuration:

    kubectl -nistio-system patch kiali kiali --type merge -p "$(cat kiali_cr.yaml)"

    Example output:

    kiali.kiali.io/kiali patched
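
    To confirm the merge took effect, read the tracing settings back from the resource:

    # Prints the tracing block applied above.
    kubectl -n istio-system get kiali kiali -o jsonpath='{.spec.external_services.tracing}'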

Verification

  1. Navigate to the Kiali UI.
  2. Open a workload and select the Traces tab to see traces in the Kiali UI.