Install Portworx on Red Hat OpenShift Service on AWS


Prerequisites

  • You must have a Red Hat OpenShift Service on AWS (ROSA) cluster deployed on infrastructure that meets the minimum requirements for Portworx
  • Your cluster must be running OpenShift 4 or higher
  • Your cluster must have at least 3 compute nodes of instance size m5.xlarge or larger, spread across 3 availability zones
  • Your cluster must meet AWS prerequisites for ROSA
  • Ensure that the OCP service is enabled in your AWS console
  • The AWS CLI must be installed and configured
  • The ROSA CLI must be installed and configured (a quick verification sketch follows this list)
  • Ensure that any underlying nodes used for Portworx in OCP have Secure Boot disabled
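
Before you begin, you can sanity-check the CLI prerequisites from a terminal. The following is a minimal sketch; the ROSA API token is a placeholder:

# Confirm the AWS CLI is installed and has valid credentials
aws --version
aws sts get-caller-identity

# Confirm the ROSA CLI is installed and authenticated
rosa version
rosa login --token=<your-rosa-api-token>
rosa whoami

# Confirm your AWS account has sufficient quota for ROSA
rosa verify quota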

Configure your environment

Follow the instructions in this section to set up your environment before deploying Portworx on a Red Hat OpenShift Service on AWS (ROSA) cluster.

Create an IAM policy

Follow these instructions from your AWS IAM console to grant the required permissions to Portworx:

  1. From the IAM page, click Roles in the left pane.
  2. On the Roles page, type your cluster name in the search bar and press enter. Click your cluster’s worker role from the search results.
  3. From the worker role summary page, click Add permissions on the Permissions subpage, and then click Create inline policy from the dropdown menu.

  4. Copy and paste the following into the JSON tab text-box and click Review policy:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "",
                "Effect": "Allow",
                "Action": [
                  "ec2:AttachVolume",
                  "ec2:ModifyVolume",
                  "ec2:DetachVolume",
                  "ec2:CreateTags",
                  "ec2:CreateVolume",
                  "ec2:DeleteTags",
                  "ec2:DeleteVolume",
                  "ec2:DescribeTags",
                  "ec2:DescribeVolumeAttribute",
                  "ec2:DescribeVolumesModifications",
                  "ec2:DescribeVolumeStatus",
                  "ec2:DescribeVolumes",
                  "ec2:DescribeInstances",
                  "autoscaling:DescribeAutoScalingGroups"
                ],
                "Resource": [
                  "*"
                ]
            }
        ]
    }
  5. Provide a name and click Create policy. Once your policy is successfully created for your cluster’s worker role, it will be listed in the Permissions policies section. (A CLI alternative for attaching this policy follows these steps.)
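
If you prefer the command line to the IAM console, the same permissions can be attached as an inline policy with the AWS CLI. This is a sketch only: it assumes the JSON above is saved as portworx-policy.json, and the role name shown is a placeholder for your cluster’s actual worker role:

# Find the worker instance role attached to your ROSA cluster
aws iam list-roles --query "Roles[?contains(RoleName, 'worker-role')].RoleName" --output text

# Attach the Portworx permissions as an inline policy on that role
aws iam put-role-policy \
  --role-name <your-cluster-worker-role> \
  --policy-name portworx-volume-access \
  --policy-document file://portworx-policy.json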

Open ports for worker nodes

Perform the following steps to add inbound rules to your worker security group so that the AWS EC2 instances backing your worker nodes accept the traffic Portworx requires.

  1. From the EC2 page of your AWS console, click Security Groups, under Network & Security, in the left pane.

  2. On the Security Groups page, type your ROSA cluster name in the search bar and press enter. You will see a list of security groups associated with your cluster. Click the link under Security group ID of your cluster’s worker security group.

  3. From your security group page, click Actions in the upper-right corner, and choose Edit inbound rules from the dropdown menu.

  4. Click Add Rule at the bottom of the screen to add each of the following rules:

    • Allow inbound Custom TCP traffic with Protocol: TCP on ports 17001 - 17022
    • Allow inbound Custom TCP traffic with Protocol: TCP on port 20048
    • Allow inbound Custom TCP traffic with Protocol: TCP on port 111
    • Allow inbound Custom UDP traffic with Protocol: UDP on port 17002
    • Allow inbound NFS traffic with Protocol: TCP on port 2049

    For the source of each rule, specify the security group ID of the same worker security group mentioned in step 2.

  5. Click Save rules. (A CLI alternative for adding these rules follows these steps.)
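
The same inbound rules can also be added with the AWS CLI. The sketch below assumes the worker security group ID from step 2 is stored in SG_ID and uses the security group itself as the traffic source, matching the console steps above:

# Replace with the ID of your cluster's worker security group
SG_ID=<worker-security-group-id>

aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --ip-permissions \
  "IpProtocol=tcp,FromPort=17001,ToPort=17022,UserIdGroupPairs=[{GroupId=$SG_ID}]" \
  "IpProtocol=tcp,FromPort=20048,ToPort=20048,UserIdGroupPairs=[{GroupId=$SG_ID}]" \
  "IpProtocol=tcp,FromPort=111,ToPort=111,UserIdGroupPairs=[{GroupId=$SG_ID}]" \
  "IpProtocol=udp,FromPort=17002,ToPort=17002,UserIdGroupPairs=[{GroupId=$SG_ID}]" \
  "IpProtocol=tcp,FromPort=2049,ToPort=2049,UserIdGroupPairs=[{GroupId=$SG_ID}]"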

Install Portworx

Follow the instructions in this section to deploy Portworx.

Generate Portworx spec

  1. Navigate to PX-Central and log in, or create an account.

  2. Select Portworx Enterprise from the Product Catalog page.

  3. On the Product Line page, choose the option that matches the license you intend to use, then click Continue to start the spec generator.

  4. For Platform, choose AWS. Under Distribution Name, select Red Hat OpenShift Service on AWS (ROSA). Enter your cluster’s Kubernetes version, then click Save Spec to generate the spec. (An illustrative sketch of a generated spec follows these steps.)
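
The generated spec is a StorageCluster custom resource that you will apply in a later step. Its exact contents depend on your selections, so always use the spec produced by PX-Central; the following is only an illustrative sketch of its general shape, with placeholder values:

apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster                        # PX-Central generates a unique name
  namespace: portworx
spec:
  image: portworx/oci-monitor:<version>   # placeholder; use the image tag from your generated spec
  kvdb:
    internal: true                        # internal KVDB, the default in the generated spec
  cloudStorage:
    deviceSpecs:
    - type=gp3,size=150                   # EBS volume type and size; values are illustrative
  csi:
    enabled: true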

Log in to OpenShift UI

Log in to the OpenShift console by following the quick access instructions on the Accessing your cluster quickly page in the Red Hat OpenShift Service on AWS documentation.
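
If you prefer the command line, the ROSA CLI can create a temporary cluster-admin user and print a ready-to-use oc login command. A minimal sketch, with the cluster name and credentials as placeholders:

# Create a cluster-admin user; the command prints an `oc login` line with a generated password
rosa create admin --cluster=<your-cluster-name>

# Log in with the printed command, then confirm you are authenticated
oc login <api-server-url> --username cluster-admin --password <generated-password>
oc whoami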

Install Portworx Operator using the OpenShift UI

  1. From your OpenShift console, select OperatorHub in the left pane.

  2. On the OperatorHub page, search for Portworx and select the Portworx Enterprise or Portworx Essentials card.

  3. Click Install.

  4. The Portworx Operator begins to install and takes you to the Install Operator page. On this page, select the A specific namespace on the cluster option for Installation mode. Select the Create Project option from the Installed Namespace dropdown.

  5. On the Create Project window, enter portworx as the project name and click Create to create a namespace called portworx.

  6. Click Install to install the Portworx Operator in the portworx namespace. (A quick CLI check of the Operator rollout follows these steps.)
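
Before moving on, you can confirm the Operator rollout from the CLI. A quick check against the portworx namespace created above:

# The Operator deployment and its pod should be available and Running
oc -n portworx get deployment portworx-operator
oc -n portworx get pods | grep portworx-operator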

Apply Portworx spec using OpenShift UI

  1. Once the Operator is installed successfully, create a StorageCluster object from the same page by clicking Create StorageCluster.

  2. On the Create StorageCluster page, choose YAML view to configure a StorageCluster.

  3. Copy and paste the Portworx spec that you generated in PX-Central into the text editor, and click Create to deploy Portworx.

  4. Verify that Portworx has deployed successfully by navigating to the Storage Cluster tab of the Installed Operators page. Once Portworx has been fully deployed, the status will show as Online.

Verify your Portworx installation

Once you’ve installed Portworx, you can perform the following tasks to verify that Portworx has installed correctly.

Verify that all pods are running

Enter the following oc get pods command to list and filter the results for Portworx pods:

oc get pods -n portworx -o wide | grep -e portworx -e px
portworx-api-774c2                                      1/1     Running   0                2m55s   192.168.121.196   username-k8s1-node0    <none>           <none>
portworx-api-t4lf9                                      1/1     Running   0                2m55s   192.168.121.99    username-k8s1-node1    <none>           <none>
portworx-kvdb-94bpk                                     1/1     Running   0                4s      192.168.121.196   username-k8s1-node0    <none>           <none>
portworx-operator-58967ddd6d-kmz6c                      1/1     Running   0                4m1s    10.244.1.99       username-k8s1-node0    <none>           <none>
prometheus-px-prometheus-0                              2/2     Running   0                2m41s   10.244.1.105      username-k8s1-node0    <none>           <none>
px-cluster-1c3edc42-4541-48fc-b173-3e9bf3cd834d-9gs79   2/2     Running   0                2m55s   192.168.121.196   username-k8s1-node0    <none>           <none>
px-cluster-1c3edc42-4541-48fc-b173-3e9bf3cd834d-vpptx   1/2     Running   0                2m55s   192.168.121.99    username-k8s1-node1    <none>           <none>
px-csi-ext-868fcb9fc6-54bmc                             4/4     Running   0                3m5s    10.244.1.103      username-k8s1-node0    <none>           <none>
px-csi-ext-868fcb9fc6-8tk79                             4/4     Running   0                3m5s    10.244.1.102      username-k8s1-node0    <none>           <none>
px-csi-ext-868fcb9fc6-vbqzk                             4/4     Running   0                3m5s    10.244.3.107      username-k8s1-node1    <none>           <none>
px-prometheus-operator-59b98b5897-9nwfv                 1/1     Running   0                3m3s    10.244.1.104      username-k8s1-node0    <none>           <none>

Note the name of one of your px-cluster pods. You’ll run pxctl commands from this pod in the following steps.
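
Rather than copying the pod name by hand, you can capture one px-cluster pod name in a shell variable and reuse it in the commands below. A small sketch, assuming the Portworx node pods carry the name=portworx label:

# Store the name of one Portworx node pod
PX_POD=$(oc get pods -n portworx -l name=portworx -o jsonpath='{.items[0].metadata.name}')
echo "$PX_POD"

# Example reuse in the next section:
# oc exec $PX_POD -n portworx -- /opt/pwx/bin/pxctl status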

Verify Portworx cluster status

You can find the status of the Portworx cluster by running pxctl status commands from a pod. Enter the following oc exec command, specifying the pod name you retrieved in the previous section:

oc exec px-cluster-1c3edc42-4541-48fc-b173-3e9bf3cd834d-vpptx -n portworx -- /opt/pwx/bin/pxctl status
Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
Status: PX is operational
Telemetry: Disabled or Unhealthy
Metering: Disabled or Unhealthy
License: Trial (expires in 31 days)
Node ID: 788bf810-57c4-4df1-9a5a-70c31d0f478e
        IP: 192.168.121.99 
        Local Storage Pool: 1 pool
        POOL    IO_PRIORITY     RAID_LEVEL      USABLE  USED    STATUS  ZONE    REGION
        0       HIGH            raid0           3.0 TiB 10 GiB  Online  default default
        Local Storage Devices: 3 devices
        Device  Path            Media Type              Size            Last-Scan
        0:1     /dev/vdb        STORAGE_MEDIUM_MAGNETIC 1.0 TiB         14 Jul 22 22:03 UTC
        0:2     /dev/vdc        STORAGE_MEDIUM_MAGNETIC 1.0 TiB         14 Jul 22 22:03 UTC
        0:3     /dev/vdd        STORAGE_MEDIUM_MAGNETIC 1.0 TiB         14 Jul 22 22:03 UTC
        * Internal kvdb on this node is sharing this storage device /dev/vdc  to store its data.
        total           -       3.0 TiB
        Cache Devices:
         * No cache devices
Cluster Summary
        Cluster ID: px-cluster-1c3edc42-4541-48fc-b173-3e9bf3cd834d
        Cluster UUID: 33a82fe9-d93b-435b-943e-6f3fd5522eae
        Scheduler: kubernetes
        Nodes: 2 node(s) with storage (2 online)
        IP              ID                                      SchedulerNodeName       Auth            StorageNode     Used    Capacity        Status  StorageStatus       Version         Kernel                  OS
        192.168.121.196 f6d87392-81f4-459a-b3d4-fad8c65b8edc    username-k8s1-node0      Disabled        Yes             10 GiB  3.0 TiB         Online  Up 2.11.0-81faacc   3.10.0-1127.el7.x86_64  CentOS Linux 7 (Core)
        192.168.121.99  788bf810-57c4-4df1-9a5a-70c31d0f478e    username-k8s1-node1      Disabled        Yes             10 GiB  3.0 TiB         Online  Up (This node)      2.11.0-81faacc  3.10.0-1127.el7.x86_64  CentOS Linux 7 (Core)
Global Storage Pool
        Total Used      :  20 GiB
        Total Capacity  :  6.0 TiB

The Portworx status will display PX is operational if your cluster is running as intended.

Verify pxctl cluster provision status

  • Find the storage cluster; its status should show as Online:

    oc -n portworx get storagecluster
    NAME                                              CLUSTER UUID                           STATUS   VERSION   AGE
    px-cluster-1c3edc42-4541-48fc-b173-3e9bf3cd834d   33a82fe9-d93b-435b-943e-6f3fd5522eae   Online   2.11.0    10m
  • Find the storage nodes; their statuses should show as Online:

    oc -n portworx get storagenodes
    NAME                  ID                                     STATUS   VERSION          AGE
    username-k8s1-node0   f6d87392-81f4-459a-b3d4-fad8c65b8edc   Online   2.11.0-81faacc   11m
    username-k8s1-node1   788bf810-57c4-4df1-9a5a-70c31d0f478e   Online   2.11.0-81faacc   11m
  • Verify the Portworx cluster provision status. Enter the following oc exec command, specifying the pod name you retrieved in the previous section:

    oc exec px-cluster-1c3edc42-4541-48fc-b173-3e9bf3cd834d-vpptx -n portworx -- /opt/pwx/bin/pxctl cluster provision-status
    Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
    NODE                                    NODE STATUS     POOL                                            POOL STATUS     IO_PRIORITY     SIZE    AVAILABLE  USED     PROVISIONED     ZONE    REGION  RACK
    788bf810-57c4-4df1-9a5a-70c31d0f478e    Up              0 ( 96e7ff01-fcff-4715-b61b-4d74ecc7e159 )      Online          HIGH            3.0 TiB 3.0 TiB    10 GiB   0 B             default default default
    f6d87392-81f4-459a-b3d4-fad8c65b8edc    Up              0 ( e06386e7-b769-4ce0-b674-97e4359e57c0 )      Online          HIGH            3.0 TiB 3.0 TiB    10 GiB   0 B             default default default

Create your first PVC

For your apps to use persistent volumes powered by Portworx, you must use a StorageClass that references Portworx as the provisioner. Portworx includes a number of default StorageClasses, which you can reference with PersistentVolumeClaims (PVCs) you create. For a more general overview of how storage works within Kubernetes, refer to the Persistent Volumes section of the Kubernetes documentation.
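
You can list the StorageClasses that Portworx installed before choosing one for your PVC. A quick check; the grep pattern is only a convenience:

# StorageClasses backed by the Portworx CSI provisioner (pxd.portworx.com)
oc get storageclass | grep -e px- -e portworx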

Perform the following steps to create a PVC:

  1. Create a PVC referencing the px-csi-db default StorageClass and save the file:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
        name: px-check-pvc
    spec:
        storageClassName: px-csi-db
        accessModes:
            - ReadWriteOnce
        resources:
            requests:
                storage: 2Gi
  2. Run the oc apply command to create the PVC (a quick way to confirm that it binds follows these steps):

    oc apply -f <your-pvc-name>.yaml
    persistentvolumeclaim/px-check-pvc created
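
If the PVC does not bind right away, you can watch it and inspect its events. A quick check, using the PVC name from the manifest above:

# Watch the PVC until its status reports Bound
oc get pvc px-check-pvc -w

# If it stays Pending, the events show what the provisioner is waiting on
oc describe pvc px-check-pvc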

Verify your StorageClass and PVC

  1. Enter the following oc get storageclass command, specifying the name of the StorageClass referenced by your PVC:

    oc get storageclass <your-storageclass-name>
    NAME                   PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    example-storageclass   pxd.portworx.com   Delete          Immediate           false                  24m

    oc will return details about the StorageClass if it exists. Verify that the configuration details appear as you intended.

  2. Enter the oc get pvc command. If this is the only PVC you’ve created, you should see only one entry in the output:

    oc get pvc <your-pvc-name>
    NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    px-check-pvc   Bound    pvc-dce346e8-ff02-4dfb-935c-2377767c8ce0   2Gi        RWO            px-csi-db      3m7s

    oc will return details about your PVC if it was created correctly. Verify that the configuration details appear as you intended. To confirm that the volume mounts, a short test pod example follows this list.
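
To confirm that the volume can actually be attached and mounted, you can run a short-lived test pod that uses the PVC. A minimal sketch; the pod name and image are illustrative and not part of the Portworx installation:

cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: px-check-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo portworx-test > /data/hello && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: px-check-pvc
EOF

# The pod reaches Running once the Portworx volume is attached and mounted
oc get pod px-check-pod

# Clean up the test pod when you are done
oc delete pod px-check-pod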


