Installing a primary-remote multi-network mesh

Install Istio in a primary-remote multi-network topology on two clusters.

NOTE

In this procedure, CLUSTER1 is the East cluster and CLUSTER2 is the West cluster. The East cluster is the primary cluster and the West cluster is the remote cluster.

You can adapt these instructions for a mesh spanning more than two clusters.

Topology

Service workloads across cluster boundaries communicate indirectly via dedicated gateways for east-west traffic. The gateway in each cluster must be reachable from the other cluster.

Services in cluster2 reach the control plane in cluster1 through the same east-west gateway.

Figure: Primary-Remote Multi-Network Topology
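The east-west gateway forwards cross-network traffic without terminating it. As a rough sketch of what the manifests applied later in this procedure configure (the applied manifests are authoritative), the standard Istio multi-network setup exposes cluster-local services through a Gateway along these lines, passing mTLS traffic straight through on port 15443:

apiVersion: networking.istio.io/v1
kind: Gateway
metadata:
  name: cross-network-gateway
  namespace: istio-system
spec:
  selector:
    istio: eastwestgateway # targets the dedicated east-west gateway workload
  servers:
    - port:
        number: 15443 # conventional port for cross-network mTLS traffic
        name: tls
        protocol: TLS
      tls:
        mode: AUTO_PASSTHROUGH # forward mTLS connections without terminating them
      hosts:
        - "*.local" # expose all cluster-local services across networks

Because the traffic stays mTLS end to end, workload identity and encryption are preserved across the network boundary.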

Prerequisites

  • You have installed the Alauda Container Platform Networking for Multus plugin on all of the clusters that comprise the mesh, and Kube-OVN must be v4.1.5 or later.
  • You have installed the Alauda Service Mesh v2 Operator on all of the clusters that comprise the mesh.
  • You have completed Creating certificates for a multi-cluster mesh.
  • You have completed Applying certificates to a multi-cluster topology.
  • You have istioctl installed locally so that you can run the commands in these instructions.

Procedure

Create the ISTIO_VERSION environment variable

Define the Istio version to install by running the following command:

export ISTIO_VERSION=1.26.3
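Optionally, confirm that the istioctl client installed locally is compatible with this version by printing its client version:

istioctl version --remote=false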

Install IstioCNI on the East cluster

Install the IstioCNI resource on the East cluster by running the following commands:

kubectl --context "${CTX_CLUSTER1}" create namespace istio-cni
cat <<EOF | kubectl --context "${CTX_CLUSTER1}" apply -f -
apiVersion: sailoperator.io/v1
kind: IstioCNI
metadata:
  name: default
spec:
  version: v${ISTIO_VERSION}
  namespace: istio-cni
  values:
    cni:
      cniConfDir: /etc/cni/multus/net.d # /etc/cni/net.d in ACP 4.0
      excludeNamespaces:
        - istio-cni
        - kube-system
EOF
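Optionally, wait for the IstioCNI resource to become ready before continuing. Like the Istio resource, it reports a Ready status condition:

kubectl --context "${CTX_CLUSTER1}" wait --for condition=Ready istiocni/default --timeout=3m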

Install Istio on the East cluster

  1. Create an Istio resource on the East cluster.

    Save the following Istio resource to istio-external.yaml:

    istio-external.yaml
    apiVersion: sailoperator.io/v1
    kind: Istio
    metadata:
      name: default
    spec:
      version: v${ISTIO_VERSION}
      namespace: istio-system
      values:
        global:
          meshID: mesh1
          multiCluster:
            clusterName: cluster1
          network: network1
          externalIstiod: true

    Setting externalIstiod: true enables the control plane installed on the East cluster to serve as an external control plane for other remote clusters.

    Substitute the ISTIO_VERSION variable and apply the Istio resource with kubectl:

    envsubst < istio-external.yaml | kubectl --context "${CTX_CLUSTER1}" apply -f -
  2. Wait for the control plane to return the "Ready" status condition by running the following command:

    kubectl --context "${CTX_CLUSTER1}" wait --for condition=Ready istio/default --timeout=3m
  3. Create an East-West gateway on the East cluster by running the following command:

    WARNING

    For nodes running Linux kernel versions earlier than 4.11 (e.g., CentOS 7), additional configuration is required prior to gateway installation.

    kubectl --context "${CTX_CLUSTER1}" apply -f https://raw.githubusercontent.com/alauda-mesh/sail-operator/main/docs/deployment-models/resources/east-west-gateway-net1.yaml
  4. Expose the control plane through the gateway so that services in the West cluster can reach it. Run the following command:

    kubectl --context "${CTX_CLUSTER1}" apply -n istio-system -f https://raw.githubusercontent.com/alauda-mesh/sail-operator/main/docs/deployment-models/resources/expose-istiod.yaml
  5. Expose the application services through the gateway by running the following command:

    kubectl --context "${CTX_CLUSTER1}" apply -n istio-system -f https://raw.githubusercontent.com/alauda-mesh/sail-operator/main/docs/deployment-models/resources/expose-services.yaml
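Before continuing, you can confirm that the East-West gateway service has received an external address; the EXTERNAL-IP column should be populated, since this address is used later as the discovery address for the West cluster:

kubectl --context "${CTX_CLUSTER1}" get svc istio-eastwestgateway -n istio-system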

Install IstioCNI on the West cluster

Install the IstioCNI resource on the West cluster by running the following commands:

kubectl --context "${CTX_CLUSTER2}" create namespace istio-cni
cat <<EOF | kubectl --context "${CTX_CLUSTER2}" apply -f -
apiVersion: sailoperator.io/v1
kind: IstioCNI
metadata:
  name: default
spec:
  version: v${ISTIO_VERSION}
  namespace: istio-cni
  values:
    cni:
      cniConfDir: /etc/cni/multus/net.d # /etc/cni/net.d in ACP 4.0
      excludeNamespaces:
        - istio-cni
        - kube-system
EOF
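As on the East cluster, you can verify that the CNI agent pods are running before continuing:

kubectl --context "${CTX_CLUSTER2}" get pods -n istio-cni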

Install Istio on the West cluster

  1. Save the IP address of the East-West gateway in the East cluster to an environment variable by running the following command. (Some load balancers publish a hostname rather than an IP address; in that case, read .status.loadBalancer.ingress[0].hostname instead.)

    export DISCOVERY_ADDRESS=$(kubectl --context="${CTX_CLUSTER1}" \
       -n istio-system get svc istio-eastwestgateway \
       -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  2. Create an Istio resource on the West cluster by running the following command:

    cat <<EOF | kubectl --context "${CTX_CLUSTER2}" apply -f -
    apiVersion: sailoperator.io/v1
    kind: Istio
    metadata:
      name: default
    spec:
      version: v${ISTIO_VERSION}
      namespace: istio-system
      profile: remote
      values:
        istiodRemote:
          injectionPath: /inject/cluster/cluster2/net/network2
        global:
          remotePilotAddress: ${DISCOVERY_ADDRESS}
    EOF
  3. Run the following command to annotate the istio-system namespace in the West cluster so that it is managed by the control plane in the East cluster:

    kubectl --context="${CTX_CLUSTER2}" annotate namespace istio-system topology.istio.io/controlPlaneClusters=cluster1
  4. Install a remote secret on the East cluster that provides access to the API server on the West cluster by running the following command. (You can verify this registration after completing the procedure, as shown below.)

    istioctl create-remote-secret \
      --context="${CTX_CLUSTER2}" \
      --name=cluster2 | \
      kubectl --context="${CTX_CLUSTER1}" apply -f -
  5. Wait for the Istio resource to return the "Ready" status condition by running the following command:

    kubectl --context "${CTX_CLUSTER2}" wait --for condition=Ready istio/default --timeout=3m
  6. Create an East-West gateway on the West cluster by running the following command:

    WARNING

    For nodes running Linux kernel versions earlier than 4.11 (e.g., CentOS 7), additional configuration is required prior to gateway installation.

    kubectl --context "${CTX_CLUSTER2}" apply -f https://raw.githubusercontent.com/alauda-mesh/sail-operator/main/docs/deployment-models/resources/east-west-gateway-net2.yaml
    NOTE

    Since the West cluster is installed with the remote profile, exposing the application services on the East cluster exposes them on the East-West gateways of both clusters.
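To confirm that the control plane on the East cluster has registered the West cluster through the remote secret created in step 4, you can list the remote clusters it knows about; the output should show cluster2 with a synced status. This assumes your istioctl provides the remote-clusters subcommand (available in recent Istio releases):

istioctl remote-clusters --context="${CTX_CLUSTER1}"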

Verifying a primary-remote topology

To confirm that your primary-remote topology is functioning correctly, you will deploy sample applications onto two separate Alauda Container Platform clusters. The goal is to establish a baseline environment where cross-cluster traffic can be generated and observed.

Procedure

Begin by deploying the necessary sample applications onto the East cluster.

This cluster will host the v1 version of the helloworld service.

  1. Create a dedicated namespace for the applications on the East cluster.

    kubectl --context="${CTX_CLUSTER1}" create namespace sample
  2. Enable automatic Istio sidecar injection for the sample namespace by applying the required label.

    kubectl --context="${CTX_CLUSTER1}" label namespace sample istio-injection=enabled
  3. Deploy the helloworld application components.

    a. First, establish the helloworld service endpoint.

    kubectl --context="${CTX_CLUSTER1}" apply \
      -f https://raw.githubusercontent.com/istio/istio/refs/heads/master/samples/helloworld/helloworld.yaml \
      -l service=helloworld -n sample

    b. Then, deploy the v1 instance of the helloworld application.

    kubectl --context="${CTX_CLUSTER1}" apply \
      -f https://raw.githubusercontent.com/istio/istio/refs/heads/master/samples/helloworld/helloworld.yaml \
      -l version=v1 -n sample
  4. Deploy the sleep application, which will act as a client for sending test requests.

    kubectl --context="${CTX_CLUSTER1}" apply \
      -f https://raw.githubusercontent.com/istio/istio/refs/heads/master/samples/sleep/sleep.yaml -n sample
  5. Wait until the helloworld-v1 deployment is fully available.

    kubectl --context="${CTX_CLUSTER1}" wait --for condition=available -n sample deployment/helloworld-v1
  6. Likewise, wait for the sleep deployment to become available.

    kubectl --context="${CTX_CLUSTER1}" wait --for condition=available -n sample deployment/sleep
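Before moving on, you can verify that sidecar injection took effect: each pod in the sample namespace should report two ready containers, the application container plus the istio-proxy sidecar:

kubectl --context="${CTX_CLUSTER1}" get pods -n sample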

Replicate the setup on the West cluster.

This cluster will host the v2 version of the helloworld service.

  1. Create the sample namespace on the West cluster.

    kubectl --context="${CTX_CLUSTER2}" create namespace sample
  2. Enable Istio sidecar injection for this namespace as well.

    kubectl --context="${CTX_CLUSTER2}" label namespace sample istio-injection=enabled
  3. Deploy the helloworld application components.

    a. Create the common helloworld service endpoint on the West cluster.

    kubectl --context="${CTX_CLUSTER2}" apply \
      -f https://raw.githubusercontent.com/istio/istio/refs/heads/master/samples/helloworld/helloworld.yaml \
      -l service=helloworld -n sample

    b. Deploy the v2 instance of the helloworld application.

    kubectl --context="${CTX_CLUSTER2}" apply \
      -f https://raw.githubusercontent.com/istio/istio/refs/heads/master/samples/helloworld/helloworld.yaml \
      -l version=v2 -n sample
  4. Deploy the client sleep application on the West cluster.

    kubectl --context="${CTX_CLUSTER2}" apply \
      -f https://raw.githubusercontent.com/istio/istio/refs/heads/master/samples/sleep/sleep.yaml -n sample
  5. Wait for the helloworld-v2 deployment to become fully available.

    kubectl --context="${CTX_CLUSTER2}" wait --for condition=available -n sample deployment/helloworld-v2
  6. Finally, ensure the sleep deployment on the West cluster is ready.

    kubectl --context="${CTX_CLUSTER2}" wait --for condition=available -n sample deployment/sleep
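With workloads running in both clusters, you can optionally inspect the Envoy endpoints of the sleep pod on the East cluster to confirm it sees helloworld endpoints in both networks; the remote endpoint should appear as the address of the West cluster's East-West gateway. This sketch uses standard istioctl proxy-config inspection (output format varies by Istio version):

istioctl --context="${CTX_CLUSTER1}" proxy-config endpoints \
  "$(kubectl --context="${CTX_CLUSTER1}" get pod -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}')" \
  -n sample | grep helloworld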

Verifying traffic flows between clusters

With the applications deployed and running on both clusters, the next step is to send requests and confirm that traffic is being correctly load-balanced across the entire service mesh.

  1. From a pod within the East cluster, send a series of 10 requests to the helloworld service.

    for i in {0..9}; do \
      kubectl --context="${CTX_CLUSTER1}" exec -n sample deploy/sleep -c sleep -- curl -sS helloworld.sample:5000/hello; \
    done

    The expected outcome is a mix of responses from both helloworld-v1 (East) and helloworld-v2 (West), proving that the service mesh is routing requests across cluster boundaries.

    Example output
    Hello version: v1, instance: helloworld-v1-644474db4b-7cwhz
    Hello version: v2, instance: helloworld-v2-645cb7fc46-lnbb7
    Hello version: v1, instance: helloworld-v1-644474db4b-7cwhz
    Hello version: v1, instance: helloworld-v1-644474db4b-7cwhz
    Hello version: v1, instance: helloworld-v1-644474db4b-7cwhz
    Hello version: v2, instance: helloworld-v2-645cb7fc46-lnbb7
    Hello version: v2, instance: helloworld-v2-645cb7fc46-lnbb7
    Hello version: v1, instance: helloworld-v1-644474db4b-7cwhz
    Hello version: v2, instance: helloworld-v2-645cb7fc46-lnbb7
    Hello version: v1, instance: helloworld-v1-644474db4b-7cwhz
  2. Perform the same test from the West cluster.

    for i in {0..9}; do \
      kubectl --context="${CTX_CLUSTER2}" exec -n sample deploy/sleep -c sleep -- curl -sS helloworld.sample:5000/hello; \
    done

    Again, you should observe responses from both v1 and v2 of the service, confirming that the primary-remote load balancing is working correctly regardless of where the request originates.

Removing a primary-remote topology from a development environment

After completing your verification and experimentation, you should dismantle the primary-remote configuration to clean up the development environment and release resources.

Procedure

  1. Execute a single command to remove all Istio components and the sample applications from the East cluster.

    kubectl --context="${CTX_CLUSTER1}" delete istio/default istiocni/default ns/sample ns/istio-system ns/istio-cni
  2. Run the corresponding command to perform the same cleanup operation on the West cluster.

    kubectl --context="${CTX_CLUSTER2}" delete istio/default istiocni/default ns/sample ns/istio-system ns/istio-cni