Installing Alauda Build of OpenTelemetry v2
Installing the Alauda Build of OpenTelemetry v2 consists of the following steps:
- Installing the Alauda Build of OpenTelemetry v2 Operator
- Creating a namespace for the OpenTelemetry Collector
- Deploying the OpenTelemetry Collector instance
WARNING:

- Do not install Alauda Build of OpenTelemetry and Alauda Build of OpenTelemetry v2 in the same Kubernetes cluster, as this will result in functional conflicts.
- Do not install Alauda Service Mesh and Alauda Build of OpenTelemetry v2 in the same Kubernetes cluster, as this will result in functional conflicts (Alauda Service Mesh v2 supports integration with Alauda Build of OpenTelemetry v2).
- Do not deploy the OpenTelemetry Collector in the same namespace as the Operator. Create a separate namespace for the Collector instance.
Installing the Alauda Build of OpenTelemetry v2 Operator
Installing via the web console
Prerequisites
- The Alauda Build of OpenTelemetry v2 must be uploaded.
- You are logged in to the Alauda Container Platform web console as cluster-admin.
Procedure
- In the Alauda Container Platform web console, navigate to Administrator.
- Select Marketplace > OperatorHub.
- Search for the Alauda Build of OpenTelemetry v2.
- Locate the Alauda Build of OpenTelemetry v2, and click to select it.
- Click Install.
- On the Install Alauda Build of OpenTelemetry v2 dialog, perform the following steps:
- Select the stable channel to install the latest stable version of the Alauda Build of OpenTelemetry v2 Operator.
- Click Install and Confirm to install the Operator.
Verification
Verify that the Operator installation status is reported as Succeeded in the Installation Info section.
Installing via the CLI
Prerequisites
- The Alauda Build of OpenTelemetry v2 must be uploaded.
- An active ACP CLI (`kubectl`) session by a cluster administrator with the `cluster-admin` role.
Procedure
- Check the available versions.

  Fields in the example output:

  - CHANNEL: Operator channel name
  - NAME: CSV resource name
  - VERSION: Operator version
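The version check above can be sketched with OLM's PackageManifest API. This is a minimal sketch, assuming the `opentelemetry-operator2` package name used later in this guide; adjust to what your catalog actually reports:

```shell
# Print one line per channel: CHANNEL, NAME (currentCSV), VERSION
kubectl get packagemanifest opentelemetry-operator2 \
  -o jsonpath='{range .status.channels[*]}{.name}{"\t"}{.currentCSV}{"\t"}{.currentCSVDesc.version}{"\n"}{end}'
```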
- Confirm the catalogSource.

  The example output indicates that the `opentelemetry-operator2` package comes from the `platform` catalogSource.

- Create a namespace for the Operator.
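Hedged sketches of the two steps above; the catalogSource is read from the PackageManifest status, and the Operator namespace name is an example:

```shell
# Confirm which catalogSource (and its namespace) provides the package;
# this is expected to report the platform catalogSource in cpaas-system:
kubectl get packagemanifest opentelemetry-operator2 \
  -o jsonpath='{.status.catalogSource}{"/"}{.status.catalogSourceNamespace}{"\n"}'

# Create a namespace for the Operator (name is an example):
kubectl create namespace opentelemetry-operator
```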
- Create a Subscription.

  Field explanations:

  - Annotation `cpaas.io/target-namespaces`: It is recommended to set this to empty; empty indicates cluster-wide installation.
  - .metadata.name: Subscription name (DNS-compliant, max 253 characters).
  - .metadata.namespace: Namespace where the Operator will be installed.
  - .spec.channel: Subscribed Operator channel.
  - .spec.installPlanApproval: Approval strategy (`Manual` or `Automatic`). Here, `Manual` requires manual approval for install/upgrade.
  - .spec.source: Operator catalogSource.
  - .spec.sourceNamespace: Must be set to cpaas-system because all catalogSources provided by the platform are located in this namespace.
  - .spec.startingCSV: Specifies the version to install for Manual approval; defaults to the latest in the channel if empty. Not required for Automatic.
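The Subscription described by the fields above might look like the following sketch; every name, the channel, and the namespace are examples that must match your environment and what the catalog reports:

```shell
kubectl apply -f - <<'EOF'
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: opentelemetry-operator2
  namespace: opentelemetry-operator    # namespace created for the Operator (example)
  annotations:
    cpaas.io/target-namespaces: ""     # empty indicates cluster-wide installation
spec:
  name: opentelemetry-operator2        # package name from the catalog
  channel: stable                      # subscribed Operator channel (example)
  installPlanApproval: Manual          # Manual or Automatic
  source: platform                     # catalogSource that provides the package
  sourceNamespace: cpaas-system        # platform catalogSources live in cpaas-system
  # startingCSV: <csv-name>           # optional: pin a version for Manual approval
EOF
```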
- Check the Subscription status.

  Key output:

  - .status.state: `UpgradePending` indicates the Operator is awaiting installation or upgrade.
  - Condition InstallPlanPending = True: Waiting for manual approval.
  - .status.currentCSV: Latest subscribed CSV.
  - .status.installPlanRef: Associated InstallPlan; must be approved before installation proceeds.

  Wait for the `InstallPlanPending` condition to be `True`.
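Possible commands for the status check above (Subscription name and namespace are examples):

```shell
# Inspect the Subscription state:
kubectl get subscription opentelemetry-operator2 -n opentelemetry-operator \
  -o jsonpath='{.status.state}{"\n"}'

# Block until the InstallPlanPending condition becomes True:
kubectl wait subscription/opentelemetry-operator2 -n opentelemetry-operator \
  --for=condition=InstallPlanPending=True --timeout=300s
```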
- Approve the InstallPlan.

  Because the approval strategy is `Manual`, the InstallPlan must be approved manually before the installation proceeds.
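A hedged sketch of the manual approval: look up the InstallPlan referenced by the Subscription, then set its `approved` field (names are examples):

```shell
# Resolve the InstallPlan name from the Subscription status:
IP=$(kubectl get subscription opentelemetry-operator2 -n opentelemetry-operator \
  -o jsonpath='{.status.installPlanRef.name}')

# Approve it so the installation can proceed:
kubectl patch installplan "$IP" -n opentelemetry-operator \
  --type merge -p '{"spec":{"approved":true}}'
```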
Verification
Wait for CSV creation; the Phase changes to Succeeded. Then check the CSV status.

Fields in the example output:
- NAME: Installed CSV name
- DISPLAY: Operator display name
- VERSION: Operator version
- REPLACES: CSV replaced during upgrade
- PHASE: Installation status (`Succeeded` indicates success)
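Possible verification commands for the step above (namespace is an example; the CSV name is resolved from the Subscription rather than typed by hand):

```shell
# List installed CSVs; the PHASE column should reach Succeeded:
kubectl get csv -n opentelemetry-operator

# Optionally block until the subscribed CSV reports Succeeded:
CSV=$(kubectl get subscription opentelemetry-operator2 -n opentelemetry-operator \
  -o jsonpath='{.status.currentCSV}')
kubectl wait csv/"$CSV" -n opentelemetry-operator \
  --for=jsonpath='{.status.phase}'=Succeeded --timeout=300s
```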
Deploying the OpenTelemetry Collector
After successfully installing the Alauda Build of OpenTelemetry v2 Operator, deploy the OpenTelemetry Collector by creating an OpenTelemetryCollector custom resource.
Multiple OpenTelemetry Collector instances can coexist in separate namespaces. Each instance is independent and managed by the Operator.
Creating a namespace for the Collector
Before deploying the OpenTelemetry Collector, create a dedicated namespace for the Collector instance. The Collector must not be deployed in the same namespace as the Operator.
The opentelemetry-collector namespace used throughout this guide is an example. You can create and use a namespace with any name that suits your organization's naming conventions.
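For example, the dedicated namespace can be created with a single command (the name `opentelemetry-collector` is the example used throughout this guide):

```shell
# Create a dedicated namespace for the Collector instance:
kubectl create namespace opentelemetry-collector
```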
Deploying via the web console
Prerequisites
- The Alauda Build of OpenTelemetry v2 Operator must be installed.
- You are logged in to the Alauda Container Platform web console as cluster-admin.
- A dedicated namespace for the Collector instance has been created.
Procedure
- In the Alauda Container Platform web console, navigate to Administrator.
- Select Marketplace > OperatorHub.
- Search for the Alauda Build of OpenTelemetry v2.
- Locate the Alauda Build of OpenTelemetry v2, and click to select it.
- Click the All Instances tab.
- Click Create.
- Locate and select OpenTelemetryCollector, and then click Create.
- Select the Collector namespace from the Namespace drop-down.
- Click the YAML tab.
- Customize the `OpenTelemetryCollector` custom resource (CR) in the YAML code editor.

  Example `OpenTelemetryCollector` CR callouts:

  - The namespace for deploying the OpenTelemetry Collector instance. Here, `opentelemetry-collector` is used as an example; replace it with the namespace you created for the Collector. This namespace must be different from the Operator's namespace.
  - The Collector deployment mode. Supported values: `deployment` (default), `daemonset`, `statefulset`, or `sidecar`.
  - Receivers define how telemetry data enters the Collector. This example configures OTLP, Jaeger, and Zipkin protocol receivers.
  - Processors handle data processing between receiving and exporting. This example uses `batch` for batching telemetry data and `memory_limiter` for controlling memory usage.
  - Exporters define the telemetry data destinations. This example uses the `debug` exporter, which outputs data to the Collector logs.

- Click Create.
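The callouts above correspond to a CR along these lines. This is a minimal sketch assuming the `opentelemetry.io/v1beta1` API, with example names and values; only a traces pipeline is wired up here:

```yaml
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: opentelemetry-collector   # namespace created for the Collector (example)
spec:
  mode: deployment                     # deployment (default) | daemonset | statefulset | sidecar
  config:
    receivers:
      otlp:
        protocols:
          grpc: {}
          http: {}
      jaeger:
        protocols:
          grpc: {}
      zipkin: {}
    processors:
      memory_limiter:                  # controls Collector memory usage
        check_interval: 1s
        limit_percentage: 75
        spike_limit_percentage: 15
      batch: {}                        # batches telemetry data
    exporters:
      debug: {}                        # writes data to the Collector logs
    service:
      pipelines:
        traces:
          receivers: [otlp, jaeger, zipkin]
          processors: [memory_limiter, batch]
          exporters: [debug]
```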
Verification
Wait until the pods of the OpenTelemetry Collector are running.
Deploying via the CLI
Prerequisites
- An active ACP CLI (`kubectl`) session by a cluster administrator with the `cluster-admin` role.
- The Alauda Build of OpenTelemetry v2 Operator must be installed.
- A dedicated namespace for the Collector instance has been created.
Procedure
- Customize and apply the `OpenTelemetryCollector` custom resource (CR).

  NOTE: For detailed field descriptions of the `OpenTelemetryCollector` CR, see the CR example in the web console deployment section above.

- Wait for the Collector pods to be ready.
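The two steps above can be sketched as follows; the file name, namespace, and the component label are examples/assumptions and may differ in your environment:

```shell
# Apply the customized CR from a local file:
kubectl apply -n opentelemetry-collector -f otel-collector.yaml

# Block until the Collector pods report Ready:
kubectl wait pod -n opentelemetry-collector \
  -l app.kubernetes.io/component=opentelemetry-collector \
  --for=condition=Ready --timeout=300s
```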
Verification
Check that the Collector pods are running.
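A possible verification command; the pods should be in the Running state (namespace is the example used throughout this guide):

```shell
kubectl get pods -n opentelemetry-collector
```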