Installing Alauda Build of OpenTelemetry v2

Installing the Alauda Build of OpenTelemetry v2 consists of the following steps:

  • Installing the Alauda Build of OpenTelemetry v2 Operator
  • Creating a namespace for the OpenTelemetry Collector
  • Deploying the OpenTelemetry Collector instance

WARNING
  • Do not install Alauda Build of OpenTelemetry and Alauda Build of OpenTelemetry v2 in the same Kubernetes cluster, as this will result in functional conflicts.
  • Do not install Alauda Service Mesh and Alauda Build of OpenTelemetry v2 in the same Kubernetes cluster, as this will result in functional conflicts (Alauda Service Mesh v2 supports integration with Alauda Build of OpenTelemetry v2).
  • Do not deploy the OpenTelemetry Collector in the same namespace as the Operator. Create a separate namespace for the Collector instance.

Installing the Alauda Build of OpenTelemetry v2 Operator

Installing via the web console

Prerequisites

  • The Alauda Build of OpenTelemetry v2 package must be uploaded to the platform.
  • You are logged in to the Alauda Container Platform web console as cluster-admin.

Procedure

  1. In the Alauda Container Platform web console, navigate to Administrator.
  2. Select Marketplace > OperatorHub.
  3. Search for the Alauda Build of OpenTelemetry v2.
  4. Locate the Alauda Build of OpenTelemetry v2, and click to select it.
  5. Click Install.
  6. In the Install Alauda Build of OpenTelemetry v2 dialog, select the stable channel to install the latest stable version of the Alauda Build of OpenTelemetry v2 Operator.
  7. Click Install and Confirm to install the Operator.

Verification

Verify that the Operator installation status is reported as Succeeded in the Installation Info section.
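The same check can also be scripted from a terminal. The helper below is a sketch, assuming kubectl access and that the Operator was installed into the opentelemetry-operator2 namespace; adjust the namespace to match your installation.

```shell
# Sketch: print "<csv-name> <phase>" for every CSV in a namespace, so the
# Succeeded phase can be confirmed (or grepped) from the command line.
csv_phases() {
  kubectl -n "$1" get csv \
    -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.phase}{"\n"}{end}'
}

# Usage: csv_phases opentelemetry-operator2
```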

Installing via the CLI

Prerequisites

  • The Alauda Build of OpenTelemetry v2 package must be uploaded to the platform.
  • You have an active ACP CLI (kubectl) session with the cluster-admin role.

Procedure

  1. Check available versions

    (
      echo -e "CHANNEL\tNAME\tVERSION"
      kubectl get packagemanifest opentelemetry-operator2 -o json | jq -r '
        .status.channels[] |
        .name as $channel |
        .entries[] |
        [$channel, .name, .version] | @tsv
      '
    ) | column -t -s $'\t'

    Example output

    CHANNEL  NAME                                    VERSION
    stable   opentelemetry-operator2.v0.146.0-r0     0.146.0-r0

    Fields:

    • CHANNEL: Operator channel name
    • NAME: CSV resource name
    • VERSION: Operator version
  2. Confirm catalogSource

    kubectl get packagemanifests opentelemetry-operator2 -o jsonpath='{.status.catalogSource}'

    Example output

    platform

    This indicates that the opentelemetry-operator2 package comes from the platform catalogSource.

  3. Create a namespace

    kubectl get namespace opentelemetry-operator2 || kubectl create namespace opentelemetry-operator2
  4. Create a Subscription

    kubectl apply -f - <<EOF
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      annotations:
        cpaas.io/target-namespaces: ""
      labels:
        catalog: platform
      name: opentelemetry-operator2
      namespace: opentelemetry-operator2
    spec:
      channel: stable
      installPlanApproval: Manual
      name: opentelemetry-operator2
      source: platform
      sourceNamespace: cpaas-system
      startingCSV: opentelemetry-operator2.v0.146.0-r0
    EOF

    Field explanations

    • annotation cpaas.io/target-namespaces: It is recommended to leave this empty; an empty value indicates a cluster-wide installation.
    • .metadata.name: Subscription name (DNS-compliant, max 253 characters).
    • .metadata.namespace: Namespace where the Operator will be installed.
    • .spec.channel: Subscribed Operator channel.
    • .spec.installPlanApproval: Approval strategy (Manual or Automatic). Here, Manual requires manual approval for install/upgrade.
    • .spec.source: Operator catalogSource.
    • .spec.sourceNamespace: Must be set to cpaas-system because all catalogSources provided by the platform are located in this namespace.
    • .spec.startingCSV: Specifies the version to install for Manual approval; defaults to the latest in the channel if empty. Not required for Automatic.
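For comparison, with the Automatic approval strategy the same Subscription omits startingCSV, and OLM installs and upgrades the Operator without manual InstallPlan approval. The fragment below is a sketch of that variant; all other fields match the Subscription above.

```yaml
# Variant of the Subscription above using automatic approval. OLM installs
# the latest CSV in the stable channel and applies upgrades without waiting
# for a manual InstallPlan approval, so startingCSV is omitted.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  annotations:
    cpaas.io/target-namespaces: ""
  labels:
    catalog: platform
  name: opentelemetry-operator2
  namespace: opentelemetry-operator2
spec:
  channel: stable
  installPlanApproval: Automatic
  name: opentelemetry-operator2
  source: platform
  sourceNamespace: cpaas-system
```

With Automatic approval, steps 5 and 6 below (waiting for and approving the InstallPlan) are not needed.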
  5. Check Subscription status

    kubectl -n opentelemetry-operator2 get subscriptions opentelemetry-operator2 -o yaml

    Key output

    • .status.state: UpgradePending indicates the Operator is awaiting installation or upgrade.
    • Condition InstallPlanPending = True: Waiting for manual approval.
    • .status.currentCSV: Latest subscribed CSV.
    • .status.installPlanRef: Associated InstallPlan; must be approved before installation proceeds.

    Wait for the InstallPlanPending condition to be True:

    kubectl -n opentelemetry-operator2 wait --for=condition=InstallPlanPending subscription opentelemetry-operator2 --timeout=2m
  6. Approve InstallPlan

    kubectl -n opentelemetry-operator2 get installplan \
      "$(kubectl -n opentelemetry-operator2 get subscriptions opentelemetry-operator2 -o jsonpath='{.status.installPlanRef.name}')"

    Example output

    NAME            CSV                                     APPROVAL   APPROVED
    install-abc12   opentelemetry-operator2.v0.146.0-r0     Manual     false

    Approve manually

    PLAN="$(kubectl -n opentelemetry-operator2 get subscription opentelemetry-operator2 -o jsonpath='{.status.installPlanRef.name}')"
    kubectl -n opentelemetry-operator2 patch installplan "$PLAN" --type=json -p='[{"op": "replace", "path": "/spec/approved", "value": true}]'
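The lookup and the patch above can be combined into a small helper. This is a sketch, not part of the official procedure, and it assumes kubectl access to the cluster:

```shell
# Sketch: approve the InstallPlan currently referenced by a Subscription.
approve_installplan() {
  ns="$1"; sub="$2"
  # Resolve the InstallPlan name from the Subscription status.
  plan="$(kubectl -n "$ns" get subscription "$sub" \
    -o jsonpath='{.status.installPlanRef.name}')"
  if [ -z "$plan" ]; then
    echo "Subscription $sub does not reference an InstallPlan yet" >&2
    return 1
  fi
  kubectl -n "$ns" patch installplan "$plan" --type=json \
    -p='[{"op": "replace", "path": "/spec/approved", "value": true}]'
}

# Usage: approve_installplan opentelemetry-operator2 opentelemetry-operator2
```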

Verification

Wait for the CSV to be created and for its phase to change to Succeeded:

kubectl wait --for=jsonpath='{.status.phase}'=Succeeded csv --all -n opentelemetry-operator2 --timeout=3m

Check CSV status:

kubectl -n opentelemetry-operator2 get csv

Example output

NAME                                    DISPLAY                                  VERSION      REPLACES   PHASE
opentelemetry-operator2.v0.146.0-r0     Alauda Build of OpenTelemetry v2         0.146.0-r0              Succeeded

Fields

  • NAME: Installed CSV name
  • DISPLAY: Operator display name
  • VERSION: Operator version
  • REPLACES: CSV replaced during upgrade
  • PHASE: Installation status (Succeeded indicates success)

Deploying the OpenTelemetry Collector

After successfully installing the Alauda Build of OpenTelemetry v2 Operator, deploy the OpenTelemetry Collector by creating an OpenTelemetryCollector custom resource.

NOTE

Multiple OpenTelemetry Collector instances can coexist in separate namespaces. Each instance is independent and managed by the Operator.

Creating a namespace for the Collector

Before deploying the OpenTelemetry Collector, create a dedicated namespace for the Collector instance. The Collector must not be deployed in the same namespace as the Operator.

TIP

The opentelemetry-collector namespace used throughout this guide is an example. You can create and use a namespace with any name that suits your organization's naming conventions.

kubectl get namespace opentelemetry-collector || \
  kubectl create namespace opentelemetry-collector

Deploying via the web console

Prerequisites

  • The Alauda Build of OpenTelemetry v2 Operator must be installed.
  • You are logged in to the Alauda Container Platform web console as cluster-admin.
  • A dedicated namespace for the Collector instance has been created.

Procedure

  1. In the Alauda Container Platform web console, navigate to Administrator.

  2. Select Marketplace > OperatorHub.

  3. Search for the Alauda Build of OpenTelemetry v2.

  4. Locate the Alauda Build of OpenTelemetry v2, and click to select it.

  5. Click the All Instances tab.

  6. Click Create.

  7. Locate and select OpenTelemetryCollector, and then click Create.

  8. Select the Collector namespace from the Namespace drop-down.

  9. Click the YAML tab.

  10. Customize the OpenTelemetryCollector custom resource (CR) in the YAML code editor:

    Example OpenTelemetryCollector CR

    apiVersion: opentelemetry.io/v1beta1
    kind: OpenTelemetryCollector
    metadata:
      name: otel
      namespace: opentelemetry-collector
    spec:
      mode: deployment
      replicas: 1
      config:
        receivers:
          otlp:
            protocols:
              grpc: {}
              http: {}
          jaeger:
            protocols:
              grpc: {}
              thrift_binary: {}
              thrift_compact: {}
              thrift_http: {}
          zipkin: {}
        processors:
          batch: {}
          memory_limiter:
            check_interval: 1s
            limit_percentage: 80
            spike_limit_percentage: 20
        exporters:
          debug: {}
        service:
          pipelines:
            traces:
              receivers: [otlp, jaeger, zipkin]
              processors: [memory_limiter, batch]
              exporters: [debug]
    Field explanations

    • .metadata.namespace: The namespace for deploying the OpenTelemetry Collector instance. Here, opentelemetry-collector is used as an example; replace it with the namespace you created for the Collector. This namespace must be different from the Operator's namespace.
    • .spec.mode: The Collector deployment mode. Supported values: deployment (default), daemonset, statefulset, or sidecar.
    • .spec.config.receivers: Receivers define how telemetry data enters the Collector. This example configures OTLP, Jaeger, and Zipkin protocol receivers.
    • .spec.config.processors: Processors handle data processing between receiving and exporting. This example uses batch for batching telemetry data and memory_limiter for controlling memory usage.
    • .spec.config.exporters: Exporters define the telemetry data destinations. This example uses the debug exporter, which outputs data to the Collector logs.
  11. Click Create.
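As an illustration of the mode field, the sketch below runs the same kind of Collector as a DaemonSet, which schedules one agent pod per node. The name otel-agent is a hypothetical example, and replicas is dropped because it does not apply to this mode:

```yaml
# Sketch: an OpenTelemetryCollector in DaemonSet mode (one pod per node),
# reduced to an OTLP receiver and the debug exporter for brevity.
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel-agent
  namespace: opentelemetry-collector
spec:
  mode: daemonset
  config:
    receivers:
      otlp:
        protocols:
          grpc: {}
          http: {}
    processors:
      batch: {}
    exporters:
      debug: {}
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [debug]
```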

Verification

Wait until the pods of the OpenTelemetry Collector are running.

Deploying via the CLI

Prerequisites

  • You have an active ACP CLI (kubectl) session with the cluster-admin role.
  • The Alauda Build of OpenTelemetry v2 Operator must be installed.
  • A dedicated namespace for the Collector instance has been created.

Procedure

  1. Customize and apply the OpenTelemetryCollector custom resource (CR):

    kubectl apply -f - <<EOF
    apiVersion: opentelemetry.io/v1beta1
    kind: OpenTelemetryCollector
    metadata:
      name: otel
      namespace: opentelemetry-collector
    spec:
      mode: deployment
      replicas: 1
      config:
        receivers:
          otlp:
            protocols:
              grpc: {}
              http: {}
          jaeger:
            protocols:
              grpc: {}
              thrift_binary: {}
              thrift_compact: {}
              thrift_http: {}
          zipkin: {}
        processors:
          batch: {}
          memory_limiter:
            check_interval: 1s
            limit_percentage: 80
            spike_limit_percentage: 20
        exporters:
          debug: {}
        service:
          pipelines:
            traces:
              receivers: [otlp, jaeger, zipkin]
              processors: [memory_limiter, batch]
              exporters: [debug]
    EOF

    NOTE

    For detailed field descriptions of the OpenTelemetryCollector CR, see the CR example in the web console deployment section above.

  2. Wait for the Collector pods to be ready:

    kubectl wait --for=condition=Ready pod -l app.kubernetes.io/managed-by=opentelemetry-operator -n opentelemetry-collector --timeout=3m

Verification

Check that the Collector pods are running:

kubectl get pods -n opentelemetry-collector

Example output

NAME                              READY   STATUS    RESTARTS   AGE
otel-collector-7877b4cdcf-bj7br   1/1     Running   0          69s
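To smoke-test the pipeline end to end, you can post a test span to the Collector and look for it in the debug exporter's output. The following is a sketch, not part of the official procedure: the Service name otel-collector and OTLP/HTTP port 4318 are assumptions based on the Operator's default naming, and with the debug exporter's default verbosity the logs show only span counts (set verbosity: detailed on the exporter to see span names).

```shell
# Hypothetical smoke test: build a minimal OTLP/JSON trace body containing a
# single span named "smoke-test". Posting it to the Collector's OTLP/HTTP
# endpoint should make it show up in the debug exporter's log output.
otlp_test_payload() {
  cat <<'EOF'
{"resourceSpans":[{"resource":{"attributes":[{"key":"service.name","value":{"stringValue":"smoke"}}]},"scopeSpans":[{"spans":[{"traceId":"5b8efff798038103d269b633813fc60c","spanId":"eee19b7ec3c1b174","name":"smoke-test","kind":1,"startTimeUnixNano":"1700000000000000000","endTimeUnixNano":"1700000001000000000"}]}]}]}
EOF
}

# Usage (assumes a port-forward to the Collector Service):
#   kubectl -n opentelemetry-collector port-forward svc/otel-collector 4318:4318 &
#   otlp_test_payload | curl -sS -X POST -H 'Content-Type: application/json' \
#     --data-binary @- http://127.0.0.1:4318/v1/traces
#   kubectl -n opentelemetry-collector logs deploy/otel-collector | grep -i spans
```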