Generate and apply a cluster pair spec


In Kubernetes, you must define a trust object called a ClusterPair. Portworx requires this object to communicate between source and destination clusters. The ClusterPair object pairs the Portworx storage driver with the Kubernetes scheduler, allowing volumes and resources to be migrated between clusters.

IMPORTANT:

  • For IBM environments, skip this section and refer to Migration using service account, as you will need to use a service account to create the ClusterPair object.
  • In all examples, <migrationnamespace> is considered the admin namespace that will migrate all namespaces of your source cluster to the destination cluster. You can also specify a non-admin namespace, but only that namespace will be migrated. To learn how to set up an admin namespace, refer to the Set up a Cluster Admin namespace for Migration page.
  • You must run the pxctl commands in this document either on your worker nodes or from the Portworx containers on your Kubernetes control plane node.
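
For example, assuming your Portworx pods carry the name=portworx label (as the examples in this document do), you can run a pxctl command through one of those pods like this:

    PX_POD=$(kubectl get pods -l name=portworx -n <namespace> -o jsonpath='{.items[0].metadata.name}')
    kubectl exec $PX_POD -n <namespace> -- /opt/pwx/bin/pxctl status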

Create object store credentials for cloud clusters

Create object store credentials on your source and destination clusters. In cloud-based Kubernetes clusters, the options for creating object store credentials differ depending on which object store you are using.

IMPORTANT: You must create object store credentials on both the destination and source clusters before you create a cluster pair.

Create Amazon S3 or S3-compatible object store credentials

  1. Find the UUID of your destination cluster by running the following command. Save this ID for use in the next step:

      PX_POD=$(kubectl get pods -l name=portworx -n <namespace> -o jsonpath='{.items[0].metadata.name}')
      kubectl exec $PX_POD -n <namespace> --  /opt/pwx/bin/pxctl status | grep UUID | awk '{print $3}'
  2. Create the credentials by running the pxctl credentials create command with the following flags, as shown in the example:

    • The --provider flag with the name of the cloud provider (s3).
    • The --s3-access-key flag with your access key ID.
    • The --s3-secret-key flag with your secret access key.
    • The --s3-region flag with the name of your S3 region (for example, us-east-1).
    • The --s3-endpoint flag with the endpoint of your object store. For Amazon S3, use s3.amazonaws.com. For S3-compatible object stores, specify that object store's endpoint in the https://<your-end-point.com> format.
    • The optional --s3-storage-class flag with either the STANDARD or STANDARD-IA value, depending on which storage class you prefer.
    • clusterPair_<UUID-of-destination-cluster> with the UUID of your destination cluster retrieved in the previous step.

      /opt/pwx/bin/pxctl credentials create \
      --provider s3 \
      --s3-access-key <access-key-id> \
      --s3-secret-key <secret-access-key> \
      --s3-region us-east-1  \
      --s3-endpoint s3.amazonaws.com \
      --s3-storage-class STANDARD \
      clusterPair_<UUID-of-destination-cluster>
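
To confirm that the credentials were stored, you can list them with pxctl (this works for any provider, not just S3):

      /opt/pwx/bin/pxctl credentials list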

Create Microsoft Azure credentials

  1. Find the UUID of your destination cluster by running the following command. Save this ID for use in the next step:

      PX_POD=$(kubectl get pods -l name=portworx -n <namespace> -o jsonpath='{.items[0].metadata.name}')
      kubectl exec $PX_POD -n <namespace> --  /opt/pwx/bin/pxctl status | grep UUID | awk '{print $3}'
  2. Create the credentials by running the pxctl credentials create command with the following flags, as shown in the example:

    • --provider as azure.
    • --azure-account-name flag with the name of your Azure account.
    • --azure-account-key flag with your Azure account key.
    • clusterPair_<UUID-of-destination-cluster> with the UUID of your destination cluster retrieved in the previous step.

      /opt/pwx/bin/pxctl credentials create \
      --provider azure \
      --azure-account-name <your-azure-account-name> \
      --azure-account-key <your-azure-account-key> \
      clusterPair_<UUID-of-destination-cluster>
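
To check that Portworx can reach the object store with the credentials you created, you can validate them by name (this works for any provider):

      /opt/pwx/bin/pxctl credentials validate clusterPair_<UUID-of-destination-cluster>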

Create Google Cloud Platform credentials

  1. Find the UUID of your destination cluster by running the following command. Save this ID for use in the next step:

      PX_POD=$(kubectl get pods -l name=portworx -n <namespace> -o jsonpath='{.items[0].metadata.name}')
      kubectl exec $PX_POD -n <namespace> --  /opt/pwx/bin/pxctl status | grep UUID | awk '{print $3}'
  2. Create the credentials by running the pxctl credentials create command with the following flags, as shown in the example:

    • --provider as google
    • --google-project-id with the string of your Google project ID
    • --google-json-key-file with the filename of your GCP JSON key file
    • clusterPair_<UUID-of-destination-cluster> with the UUID of your destination cluster retrieved in the previous step.

      /opt/pwx/bin/pxctl credentials create \
      --provider google \
      --google-project-id <your-google-project-ID> \
      --google-json-key-file <your-GCP-JSON-key-file> \
      clusterPair_<UUID-of-destination-cluster>
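
If you made a mistake while creating credentials on any provider, you can delete them and run the create command again:

      /opt/pwx/bin/pxctl credentials delete clusterPair_<UUID-of-destination-cluster>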

Generate a ClusterPair spec

You can choose from two types of ClusterPairs:

  • Unidirectional
  • Bidirectional

You may choose to create a bidirectional ClusterPair in the following scenarios:

  • When both the source and destination clusters are in the same type of environment
  • When you want to set up failover and failback for the same namespace

For more complex scenarios, where you require greater control over namespaces or need to failover between clusters in different environments, you may create a unidirectional ClusterPair.

Choose whether you want to create a unidirectional or bidirectional ClusterPair, then proceed to the appropriate section below.

Generate a unidirectional ClusterPair on the destination cluster

Perform the following steps from your destination cluster:

  1. Run the following command, specifying <remotecluster> and <migrationnamespace>, to create a ClusterPair and save the resulting spec to a file named clusterpair.yaml:

    • <remotecluster>: the name of the ClusterPair object that will be created on the source cluster to represent the pair relationship.
    • <migrationnamespace>: the Kubernetes namespace of the source cluster that you want to migrate to the destination cluster.

      storkctl generate clusterpair -n <migrationnamespace> <remotecluster> -o yaml > clusterpair.yaml
  2. Run the following command on the destination cluster to get the cluster token. Save the token for use in the next step:

    PX_POD=$(kubectl get pods -l name=portworx -n <namespace> -o jsonpath='{.items[0].metadata.name}')
    kubectl exec $PX_POD -n <namespace> --  /opt/pwx/bin/pxctl cluster token show
  3. Specify the following fields in the options section of the clusterpair.yaml file that you saved previously to pair the storage:

    • ip: Specify the IP address of any remote Portworx node. If you are using an external load balancer, specify the IP address of the external load balancer.
    • port: Specify the port of the remote Portworx node mentioned in the Network Connectivity section on the Prerequisites page.
    • token: Specify the token of the destination cluster obtained from the previous step.
    • mode: Specify DisasterRecovery. By default, every seventh migration is a full migration. If you specify mode: DisasterRecovery, then every migration is incremental. When doing a one-time migration (and not DR), skip this option.
  4. (Optional) If you are using PX-Security on both source and destination clusters, you will need to add the following two annotations to the ClusterPair and MigrationSchedule specs:

    annotations:
      openstorage.io/auth-secret-namespace -> Points to the namespace where the Kubernetes secret holding the auth token resides.
      openstorage.io/auth-secret-name -> Points to the name of the Kubernetes secret that holds the auth token.

    NOTE: The values of all parameters in the options section of the updated specs should be in double quotes, as shown in the following example:

    options:
       token: "XXXX"
       ip: "192.0.2.0"
       port: "9001"
       mode: "DisasterRecovery"

Apply the ClusterPair spec on the source cluster

Apply the updated ClusterPair spec on your source cluster. Run the following command from a location where you have kubectl access to your source cluster:

kubectl apply -f clusterpair.yaml -n <migrationnamespace>
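
ClusterPair is a namespaced custom resource, so you can confirm that the object was created with kubectl:

kubectl get clusterpair -n <migrationnamespace>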

Create a bidirectional ClusterPair

CAUTION: In order to use a bidirectional ClusterPair:

  • Both the source and destination clusters must be in the same type of environment: either both in the cloud or both on-premises.
  • Failover and failback operations must apply to the same namespace in both clusters.

To create a bidirectional ClusterPair, perform the following steps:

  1. Create the object store credentials on both the source and destination clusters. Follow the instructions in the object store credentials section to find the UUIDs of both clusters, then run the credentials create command twice on each cluster: once with the clusterPair_<UUID-of-destination-cluster> string and once with the clusterPair_<UUID-of-source-cluster> string.

  2. Run the following command on your source and destination clusters to obtain the token for each cluster, and save this information for use in the next step:

    PX_POD=$(kubectl get pods -l name=portworx -n <namespace> -o jsonpath='{.items[0].metadata.name}')
    kubectl exec $PX_POD -n <namespace> --  /opt/pwx/bin/pxctl cluster token show
  3. Run the following command to create the bidirectional ClusterPair:

    storkctl create clusterpair <your-clusterpair-name>  \
    -n <migrationnamespace> \
    --src-kube-file <source-kubeconfig> \
    --dest-kube-file <destination-kubeconfig> \
    --src-ip <src-workernode-IP/loadbalancer-IP> \
    --dest-ip <dest-workernode-IP/loadbalancer-IP> \
    --src-token <source-token> \
    --dest-token <destination-token>
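
As a purely hypothetical example, with kubeconfig files named kubeconfig-source and kubeconfig-destination in the current directory and placeholder IPs, the invocation might look like this:

    storkctl create clusterpair remotecluster \
    -n <migrationnamespace> \
    --src-kube-file ./kubeconfig-source \
    --dest-kube-file ./kubeconfig-destination \
    --src-ip 192.0.2.10 \
    --dest-ip 192.0.2.20 \
    --src-token <source-token> \
    --dest-token <destination-token>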

Check the status of your ClusterPair

Verify that the ClusterPair has been created and is ready:

storkctl get clusterpair -n <migrationnamespace>
NAME            STORAGE-STATUS   SCHEDULER-STATUS   CREATED
remotecluster   Ready            Ready              09 Nov 22 00:22 UTC

On a successful pairing, both STORAGE-STATUS and SCHEDULER-STATUS show Ready.

If you see an error, run the following command to get more information:

kubectl describe clusterpair remotecluster -n <migrationnamespace>

