Synchronize your clusters or schedule migrations


Once the specified namespaces on your source and destination clusters are paired, create a schedule that migrates Kubernetes resources periodically to keep those namespaces in sync. Because a single Portworx cluster is stretched across your source and destination clusters, you migrate only Kubernetes resources, not volumes.

Create a schedule policy

You can create either a SchedulePolicy or a NamespacedSchedulePolicy. A SchedulePolicy is cluster-scoped, while a NamespacedSchedulePolicy is namespace-scoped.
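
For reference, a NamespacedSchedulePolicy uses the same policy fields but is created in a specific namespace. The following is a minimal sketch; the name and namespace are placeholders, and you should verify the supported fields against the Schedule Policy page for your Stork version:

    apiVersion: stork.libopenstorage.org/v1alpha1
    kind: NamespacedSchedulePolicy
    metadata:
      name: <your-namespaced-policy>
      namespace: <migrationnamespace>
    policy:
      interval:
        intervalMinutes: 30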

Perform the following steps from your source cluster to create a schedule policy:

  1. Create a SchedulePolicy spec file:

    apiVersion: stork.libopenstorage.org/v1alpha1
    kind: SchedulePolicy
    metadata:
      name: <your-schedule-policy>
    policy:
      interval:
        intervalMinutes: 30
        

    For a list of parameters that you can use to create a schedule policy, see the Schedule Policy page.

  2. Apply your policy:

    kubectl apply -f <your-schedule-policy>.yaml
  3. Verify that the policy has been created:

    storkctl get schedulepolicy <your-schedule-policy>
    NAME                      INTERVAL-MINUTES    DAILY     WEEKLY             MONTHLY
    <your-schedule-policy>          30             N/A       N/A                N/A

NOTE: You can also use the schedule policies that are installed by default. Run the storkctl get schedulepolicy command to list these policies, then specify one of their names when creating the migration schedule in the next section.
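
If a fixed interval does not fit your needs, the Schedule Policy page also documents daily, weekly, and monthly triggers. As an illustrative sketch only (verify the exact field names against the Schedule Policy page for your Stork version), a policy that runs every day at 10:00 PM could look like the following:

    apiVersion: stork.libopenstorage.org/v1alpha1
    kind: SchedulePolicy
    metadata:
      name: <your-daily-policy>
    policy:
      daily:
        time: "10:00PM"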

Create a migration schedule on your source cluster

  1. Copy and paste the following spec into a file called migrationschedule.yaml, modifying the migration schedule name and namespace as needed. Ensure that the clusterPair name is correct.

    apiVersion: stork.libopenstorage.org/v1alpha1
    kind: MigrationSchedule
    metadata:
      name: migrationschedule
      namespace: <migrationnamespace> 
    spec:
      template:
        spec:
          clusterPair: <your-clusterpair-name>
          includeResources: true
          startApplications: false
          includeVolumes: false
          namespaces:
          - <migrationnamespace>
      schedulePolicyName: <your-schedule-policy>
      suspend: false
      autoSuspend: true

    NOTE:

    • The option startApplications must be set to false in the spec to ensure that the application pods do not start on the remote cluster when the Kubernetes resources are being migrated.
    • The option includeVolumes is set to false because the volumes are already present on the destination cluster as this is a single Portworx cluster.

    • If you are running Stork version 23.2 or later, you can set autoSuspend to true, as shown in the above spec. In the event of a disaster, this automatically suspends the DR migration schedules on your source cluster so that you can migrate your application to an active Kubernetes cluster. If you are using an older version of Stork, refer to the Failover an application page to fail over your application. (You can also suspend the schedule manually at any time; see the sketch after these steps.)

  2. Apply your migrationschedule.yaml by entering the following command:

    kubectl apply -f migrationschedule.yaml

    If the policy name is missing or invalid, events are logged against the schedule object. Successes and failures of the migrations created by the schedule also result in events being logged against the object. You can view these events by running a kubectl describe command on the object.
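
If you need to pause the scheduled migrations manually (for example, during maintenance), you can toggle the suspend field shown in the spec above. A minimal sketch, using the schedule name and namespace from this example:

    kubectl patch migrationschedule migrationschedule -n <migrationnamespace> --type merge -p '{"spec":{"suspend":true}}'

Patch suspend back to false to resume scheduled migrations.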

Check your migration status

  1. Run the following command from your source cluster to check the status of your migration:

    kubectl describe migrationschedule migrationschedule -n <migrationnamespace>
    Name:         migrationschedule
    Namespace:    <migrationnamespace>
    Labels:       <none>
    Annotations:  <none>
    API Version:  stork.libopenstorage.org/v1alpha1
    Kind:         MigrationSchedule
    Metadata:
      Creation Timestamp:  2022-11-17T19:24:32Z
      Finalizers:
        stork.libopenstorage.org/finalizer-cleanup
      Generation:  43
      Managed Fields:
        API Version:  stork.libopenstorage.org/v1alpha1
        Fields Type:  FieldsV1
        fieldsV1:
          .......
          ..........
          ...............
        Manager:         stork
        Operation:       Update
        Time:            2022-11-17T19:24:33Z
      Resource Version:  23587269
      UID:               5ae0cdb9-e4ba-4562-8094-8d7763d99ed1
    Spec:
      Schedule Policy Name:  <your-schedule-policy>
      Suspend:               false
      Template:
        Spec:
          Admin Cluster Pair:
          Cluster Pair:                      <your-clusterpair-name>
          Include Network Policy With CIDR:  <nil>
          Include Optional Resource Types:   <nil>
          Include Resources:                 true
          Include Volumes:                   <nil>
          Namespaces:
            <migrationnamespace>
          Post Exec Rule:
          Pre Exec Rule:
          Purge Deleted Resources:  <nil>
          Selectors:                <nil>
          Skip Deleted Namespaces:  <nil>
          Skip Service Update:      <nil>
          Start Applications:       false
    Status:
      Application Activated:  false
      Items:
        Interval:
          Creation Timestamp:  2022-11-17T23:55:11Z
          Finish Timestamp:    2022-11-17T23:56:39Z
          Name:                migrationschedule-interval-2022-11-17-235511
          Status:              Successful
          Creation Timestamp:  2022-11-18T00:25:20Z
          Finish Timestamp:    <nil>
          Name:                migrationschedule-interval-2022-11-18-002520
          Status:              Successful
    Events:
      Type     Reason      Age               From   Message
      ----     ------      ----              ----   -------
      Normal   Successful  59m               stork  Scheduled migration (migrationschedule-interval-2022-11-17-232511) completed successfully
      Normal   Successful  30m               stork  Scheduled migration (migrationschedule-interval-2022-11-17-235511) completed successfully

    • On a successful migration, the migration of your namespace shows a Successful status, and parameters such as Schedule Policy Name and Suspend reflect the values you configured.
    • The output of kubectl describe also shows the status of the migrations triggered for each policy type, along with their start and finish times. Statuses are maintained for the last successful migration and for any Failed or InProgress migrations of each policy type.
  2. Run the following command to check whether your migration is in progress on your source cluster. This command lists the migration objects (whose names include a creation timestamp) associated with your migration schedule:

    storkctl get migration -n <migrationnamespace>
    NAME                                              AGE
    migrationschedule-interval-2022-11-18-002520      9m42s

    You can also run kubectl describe migration -n <migrationnamespace> to check details about specific migrations; see the sketch after these steps.

  3. Verify the status of your migration by running the following command on your destination cluster.

    kubectl get statefulset -n <migrationnamespace>
    NAME   READY   AGE
    zk     0/0     6m44s

    This confirms that the respective namespace has been created and the applications (for example, Zookeeper) are installed. However, the application pods are not running on the destination cluster because they are running on the source cluster.
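
As noted in step 2, you can also inspect an individual migration in more detail. For example, using the migration name from the listing above (run this command on your source cluster):

    kubectl describe migration migrationschedule-interval-2022-11-18-002520 -n <migrationnamespace>

The output shows the detailed status of that specific migration.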


