Installing a multi-primary multi-network mesh

Install Istio in the multi-primary multi-network topology on two clusters.

NOTE

In this procedure, CLUSTER1 is the East cluster and CLUSTER2 is the West cluster.

You can adapt these instructions for a mesh spanning more than two clusters.

Topology

Service workloads in different clusters do not communicate directly; traffic between them flows through dedicated east-west gateways. The gateway in each cluster must be reachable from the other cluster.

Multi-Primary Multi-Network Topology

Prerequisites

  • You have installed the Alauda Container Platform Networking for Multus plugin on all of the clusters that comprise the mesh, and Kube-OVN is v4.1.5 or later.
  • You have access to two clusters with external load balancer support.
  • You have installed the Alauda Service Mesh v2 Operator on all of the clusters that comprise the mesh.
  • You have completed Creating certificates for a multi-cluster mesh.
  • You have completed Applying certificates to a multi-cluster topology.
  • You have istioctl installed locally so that you can run the commands in these instructions.

Procedure

Create an ISTIO_VERSION environment variable that defines the Istio version to install

export ISTIO_VERSION=1.28.1
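
The manifests below template this value as `v${ISTIO_VERSION}`, so an empty or malformed variable produces a confusing apply failure later. An optional guard (a minimal sketch; it repeats the export so it is self-contained):

```shell
# Optional guard (sketch): the Istio and IstioCNI specs below expect a
# plain X.Y.Z version string that is templated as "v${ISTIO_VERSION}".
export ISTIO_VERSION=1.28.1   # repeated here so the snippet is self-contained
if echo "${ISTIO_VERSION}" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+$'; then
  echo "using Istio v${ISTIO_VERSION}"
else
  echo "ISTIO_VERSION is not a plain X.Y.Z version: '${ISTIO_VERSION}'" >&2
  exit 1
fi
```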

Install IstioCNI on the East cluster

Install the IstioCNI resource on the East cluster by running the following command:

kubectl --context "${CTX_CLUSTER1}" create namespace istio-cni
cat <<EOF | kubectl --context "${CTX_CLUSTER1}" apply -f -
apiVersion: sailoperator.io/v1
kind: IstioCNI
metadata:
  name: default
spec:
  version: v${ISTIO_VERSION}
  namespace: istio-cni
  values:
    cni:
      cniConfDir: /etc/cni/multus/net.d # /etc/cni/net.d in ACP 4.0
      excludeNamespaces:
        - istio-cni
        - kube-system
EOF

Install Istio on the East cluster

  1. Create an Istio resource on the East cluster by running the following command:

    cat <<EOF | kubectl --context "${CTX_CLUSTER1}" apply -f -
    apiVersion: sailoperator.io/v1
    kind: Istio
    metadata:
      name: default
    spec:
      version: v${ISTIO_VERSION}
      namespace: istio-system
      values:
        global:
          meshID: mesh1
          network: network1
          multiCluster:
            clusterName: cluster1
    EOF
  2. Wait for the control plane to return the Ready status condition by running the following command:

    kubectl --context "${CTX_CLUSTER1}" wait --for condition=Ready istio/default --timeout=3m
  3. Create an East-West gateway on the East cluster by running the following command:

    WARNING

    For nodes running Linux kernel versions earlier than 4.11 (e.g., CentOS 7), additional configuration is required prior to gateway installation.

    kubectl --context "${CTX_CLUSTER1}" apply -f https://raw.githubusercontent.com/alauda-mesh/sail-operator/main/docs/deployment-models/resources/east-west-gateway-net1.yaml
    Optional: Deploy the East-West gateway to infra nodes

    Run the following command to patch the gateway deployment:

    kubectl --context "${CTX_CLUSTER1}" patch deployment istio-eastwestgateway -n istio-system \
      --type='merge' \
      --patch '{
        "spec": {
          "template": {
            "spec": {
              "nodeSelector": {
                "node-role.kubernetes.io/infra": ""
              },
              "tolerations": [
                {
                  "effect": "NoSchedule",
                  "key": "node-role.kubernetes.io/infra",
                  "value": "reserved",
                  "operator": "Equal"
                }
              ]
            }
          }
        }
      }'
  4. Expose the services through the gateway by running the following command:

    kubectl --context "${CTX_CLUSTER1}" apply -n istio-system -f https://raw.githubusercontent.com/alauda-mesh/sail-operator/main/docs/deployment-models/resources/expose-services.yaml

Install IstioCNI on the West cluster

Install the IstioCNI resource on the West cluster by running the following command:

kubectl --context "${CTX_CLUSTER2}" create namespace istio-cni
cat <<EOF | kubectl --context "${CTX_CLUSTER2}" apply -f -
apiVersion: sailoperator.io/v1
kind: IstioCNI
metadata:
  name: default
spec:
  version: v${ISTIO_VERSION}
  namespace: istio-cni
  values:
    cni:
      cniConfDir: /etc/cni/multus/net.d # /etc/cni/net.d in ACP 4.0
      excludeNamespaces:
        - istio-cni
        - kube-system
EOF

Install Istio on the West cluster

  1. Create an Istio resource on the West cluster by running the following command:

    cat <<EOF | kubectl --context "${CTX_CLUSTER2}" apply -f -
    apiVersion: sailoperator.io/v1
    kind: Istio
    metadata:
      name: default
    spec:
      version: v${ISTIO_VERSION}
      namespace: istio-system
      values:
        global:
          meshID: mesh1
          network: network2
          multiCluster:
            clusterName: cluster2
    EOF
  2. Wait for the control plane to return the Ready status condition by running the following command:

    kubectl --context "${CTX_CLUSTER2}" wait --for condition=Ready istio/default --timeout=3m
  3. Create an East-West gateway on the West cluster by running the following command:

    WARNING

    For nodes running Linux kernel versions earlier than 4.11 (e.g., CentOS 7), additional configuration is required prior to gateway installation.

    kubectl --context "${CTX_CLUSTER2}" apply -f https://raw.githubusercontent.com/alauda-mesh/sail-operator/main/docs/deployment-models/resources/east-west-gateway-net2.yaml
    Optional: Deploy the East-West gateway to infra nodes

    Run the following command to patch the gateway deployment:

    kubectl --context "${CTX_CLUSTER2}" patch deployment istio-eastwestgateway -n istio-system \
      --type='merge' \
      --patch '{
        "spec": {
          "template": {
            "spec": {
              "nodeSelector": {
                "node-role.kubernetes.io/infra": ""
              },
              "tolerations": [
                {
                  "effect": "NoSchedule",
                  "key": "node-role.kubernetes.io/infra",
                  "value": "reserved",
                  "operator": "Equal"
                }
              ]
            }
          }
        }
      }'
  4. Expose the services through the gateway by running the following command:

    kubectl --context "${CTX_CLUSTER2}" apply -n istio-system -f https://raw.githubusercontent.com/alauda-mesh/sail-operator/main/docs/deployment-models/resources/expose-services.yaml

Install a remote secret on the East cluster that provides access to the API server on the West cluster

istioctl create-remote-secret \
  --context="${CTX_CLUSTER2}" \
  --name=cluster2 \
  --create-service-account=false | \
  kubectl --context="${CTX_CLUSTER1}" apply -f -
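
istioctl writes the secret into the `istio-system` namespace of the East cluster; istiod watches for secrets carrying the `istio/multiCluster` label and uses the embedded kubeconfig to discover services and endpoints on the peer. The generated secret has roughly this shape (a sketch; exact fields may vary by istioctl version):

```yaml
# Sketch of the secret produced by istioctl create-remote-secret.
apiVersion: v1
kind: Secret
metadata:
  name: istio-remote-secret-cluster2
  namespace: istio-system
  labels:
    istio/multiCluster: "true"    # istiod watches for this label
  annotations:
    networking.istio.io/cluster: cluster2
type: Opaque
stringData:
  cluster2: |
    # kubeconfig granting read access to the West cluster's API server
```

You can confirm the secret was created with `kubectl --context="${CTX_CLUSTER1}" get secrets -n istio-system -l istio/multiCluster=true`.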

Install a remote secret on the West cluster that provides access to the API server on the East cluster

istioctl create-remote-secret \
  --context="${CTX_CLUSTER1}" \
  --name=cluster1 \
  --create-service-account=false | \
  kubectl --context="${CTX_CLUSTER2}" apply -f -

Verifying a multi-cluster topology

To confirm that your multi-cluster topology is functioning correctly, you will deploy sample applications onto two separate Alauda Container Platform clusters. The goal is to establish a baseline environment where cross-cluster traffic can be generated and observed.

Procedure

Begin by deploying the necessary sample applications onto the East cluster.

This cluster will host the v1 version of the helloworld service.

  1. Create a dedicated namespace for the applications on the East cluster.

    kubectl --context="${CTX_CLUSTER1}" create namespace sample
  2. Enable automatic Istio sidecar injection for the sample namespace by applying the required label.

    kubectl --context="${CTX_CLUSTER1}" label namespace sample istio-injection=enabled
  3. Deploy the helloworld application components.

    a. First, establish the helloworld service endpoint.

    kubectl --context="${CTX_CLUSTER1}" apply \
      -f https://raw.githubusercontent.com/istio/istio/refs/heads/master/samples/helloworld/helloworld.yaml \
      -l service=helloworld -n sample

    b. Then, deploy the v1 instance of the helloworld application.

    kubectl --context="${CTX_CLUSTER1}" apply \
      -f https://raw.githubusercontent.com/istio/istio/refs/heads/master/samples/helloworld/helloworld.yaml \
      -l version=v1 -n sample
  4. Deploy the sleep application, which will act as a client for sending test requests.

    kubectl --context="${CTX_CLUSTER1}" apply \
      -f https://raw.githubusercontent.com/istio/istio/refs/heads/master/samples/sleep/sleep.yaml -n sample
  5. Wait for the helloworld-v1 deployment to become fully available.

    kubectl --context="${CTX_CLUSTER1}" wait --for condition=available -n sample deployment/helloworld-v1
  6. Likewise, wait for the sleep deployment to report a Ready status.

    kubectl --context="${CTX_CLUSTER1}" wait --for condition=available -n sample deployment/sleep

Replicate the setup on the West cluster.

This cluster will host the v2 version of the helloworld service.

  1. Create the sample namespace on the West cluster.

    kubectl --context="${CTX_CLUSTER2}" create namespace sample
  2. Enable Istio sidecar injection for this namespace as well.

    kubectl --context="${CTX_CLUSTER2}" label namespace sample istio-injection=enabled
  3. Deploy the helloworld application components.

    a. Create the common helloworld service endpoint on the West cluster.

    kubectl --context="${CTX_CLUSTER2}" apply \
      -f https://raw.githubusercontent.com/istio/istio/refs/heads/master/samples/helloworld/helloworld.yaml \
      -l service=helloworld -n sample

    b. Deploy the v2 instance of the helloworld application.

    kubectl --context="${CTX_CLUSTER2}" apply \
      -f https://raw.githubusercontent.com/istio/istio/refs/heads/master/samples/helloworld/helloworld.yaml \
      -l version=v2 -n sample
  4. Deploy the client sleep application on the West cluster.

    kubectl --context="${CTX_CLUSTER2}" apply \
      -f https://raw.githubusercontent.com/istio/istio/refs/heads/master/samples/sleep/sleep.yaml -n sample
  5. Wait for the helloworld-v2 deployment to become fully available.

    kubectl --context="${CTX_CLUSTER2}" wait --for condition=available -n sample deployment/helloworld-v2
  6. Finally, ensure the sleep deployment on the West cluster is ready.

    kubectl --context="${CTX_CLUSTER2}" wait --for condition=available -n sample deployment/sleep

Verifying traffic flows between clusters

With the applications deployed and running on both clusters, the next step is to send requests and confirm that traffic is being correctly load-balanced across the entire service mesh.

  1. From a pod within the East cluster, send a series of 10 requests to the helloworld service.

    for i in {0..9}; do \
      kubectl --context="${CTX_CLUSTER1}" exec -n sample deploy/sleep -c sleep -- curl -sS helloworld.sample:5000/hello; \
    done

    The expected outcome is a mix of responses from both helloworld-v1 (East) and helloworld-v2 (West), proving that the service mesh is routing requests across cluster boundaries.

    Example output
    Hello version: v1, instance: helloworld-v1-644474db4b-7cwhz
    Hello version: v2, instance: helloworld-v2-645cb7fc46-lnbb7
    Hello version: v1, instance: helloworld-v1-644474db4b-7cwhz
    Hello version: v1, instance: helloworld-v1-644474db4b-7cwhz
    Hello version: v1, instance: helloworld-v1-644474db4b-7cwhz
    Hello version: v2, instance: helloworld-v2-645cb7fc46-lnbb7
    Hello version: v2, instance: helloworld-v2-645cb7fc46-lnbb7
    Hello version: v1, instance: helloworld-v1-644474db4b-7cwhz
    Hello version: v2, instance: helloworld-v2-645cb7fc46-lnbb7
    Hello version: v1, instance: helloworld-v1-644474db4b-7cwhz
  2. Perform the same test from the West cluster.

    for i in {0..9}; do \
      kubectl --context="${CTX_CLUSTER2}" exec -n sample deploy/sleep -c sleep -- curl -sS helloworld.sample:5000/hello; \
    done

    Again, you should observe responses from both v1 and v2 of the service, confirming that the multi-cluster load balancing is working correctly regardless of where the request originates.
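
Rather than eyeballing the raw lines, the loop output can be piped through a small summarizer (a sketch):

```shell
# Sketch: summarize the verification-loop output by counting how many
# replies came from each helloworld version. A missing version indicates
# that cross-cluster routing is not working in that direction.
count_versions() {
  sed -n 's/^Hello version: \(v[0-9][0-9]*\),.*/\1/p' | sort | uniq -c
}
```

Pipe the loop's output into `count_versions`; both v1 and v2 should appear with nonzero counts.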

Removing a multi-cluster topology from a development environment

After completing your verification and experimentation, you should dismantle the multi-cluster configuration to clean up the development environment and release resources.

Procedure

  1. Execute a single command to remove all Istio components and the sample applications from the East cluster.

    kubectl --context="${CTX_CLUSTER1}" delete istio/default istiocni/default ns/sample ns/istio-system ns/istio-cni
  2. Run the corresponding command to perform the same cleanup operation on the West cluster.

    kubectl --context="${CTX_CLUSTER2}" delete istio/default istiocni/default ns/sample ns/istio-system ns/istio-cni