Install Portworx on air-gapped OpenShift Container Platform on vSphere


Follow the instructions on this page to deploy Portworx and its required packages on an air-gapped OpenShift Container Platform cluster on vSphere using the internal OpenShift cluster registry.

Prerequisites

  • You must have an OpenShift Container Platform cluster deployed with infrastructure that meets the minimum requirements for Portworx (such as having SecureBoot disabled).
  • During this procedure, your OpenShift cluster's internal registry must be temporarily reachable from outside the cluster; you can expose it using the procedure here.
  • You must also have a Linux host with internet access that has either Podman or Docker installed.

Configure your environment

  1. On your internet-connected host, set an environment variable for the Kubernetes version that you are using:

    KBVER=$(oc version | awk -F'[v+_-]' '/Kubernetes/ {print $2}')
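
    Confirm that the variable was populated; an empty value here would produce malformed download URLs in the following steps:

    echo "$KBVER"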
  2. Set an environment variable to the latest major version of Portworx:

    PXVER=<portworx-version>
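
    For example (the version shown here is hypothetical; use the current latest release):

    PXVER=2.13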
  3. Download the air-gapped-install bootstrap script for the Kubernetes and Portworx versions that you specified:

    curl -o px-ag-install.sh -L "https://install.portworx.com/$PXVER/air-gapped?kbver=$KBVER"
  4. Pull the container images required for the specified versions:

    sh px-ag-install.sh pull
  5. Log in to your OpenShift cluster; the token from this session is what authenticates you to the internal registry in the next step.

    For example:

    oc login -u admin -p password https://api.lab.ocp.lan:6443
    Login successful.
    [...]
    Using project "default".
  6. Log in to your registry, substituting docker for podman if you are not using Podman.

    For example:

    podman login -u admin -p $(oc whoami -t) default-route-openshift-image-registry.apps.lab.ocp.lan
    Login Succeeded!
    NOTE: If the host you're running Podman from does not have the cluster's certificate authority in its trust store, you must pass the --tls-verify=false flag to the login command.
  7. Push the container images to your internal OpenShift cluster registry (a quick verification sketch follows this list).

    For example:

    sh px-ag-install.sh push default-route-openshift-image-registry.apps.lab.ocp.lan/kube-system
  8. Create a secret containing the registry credentials for the Operator to use.

    For example:

    oc -n kube-system create secret docker-registry px-image-repository \
        --docker-server=image-registry.openshift-image-registry.svc:5000 \
        --docker-username=admin \
        --docker-password=$(oc whoami -t)
    secret/px-image-repository created
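
Pushing to the integrated OpenShift registry backs each image repository with an ImageStream in the target project. As a quick check that the push in step 7 succeeded (a sketch assuming the kube-system project used in the examples above), list the image streams and confirm that the Portworx repositories appear:

oc -n kube-system get imagestreams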

Create a version manifest configmap for Portworx Operator

  1. Download the Portworx version manifest:

    curl -o versions "https://install.portworx.com/$PXVER/version?kbver=$KBVER"
  2. Create a configmap from the downloaded version manifest:

    oc -n kube-system create configmap px-versions --from-file=versions
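
To confirm that the configmap carries the manifest you downloaded, print it back out:

oc -n kube-system get configmap px-versions -o yaml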

Deploy Portworx using the Operator

The Portworx Enterprise Operator takes a custom Kubernetes resource called StorageCluster as input. The StorageCluster is a representation of your Portworx cluster configuration. Once the StorageCluster object is created, the Operator will deploy a Portworx cluster corresponding to the specification in the StorageCluster object. The Operator will watch for changes on the StorageCluster and update your cluster according to the latest specifications.

For more information about the StorageCluster object and how the Operator manages changes, refer to the StorageCluster article.
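
For orientation, the following sketch shows the general shape of a StorageCluster object in an air-gapped install. It is illustrative only (the name and image tag are placeholders, and the field values are assumptions based on the registry and secret configured earlier); in this procedure, PX-Central generates the real spec for you:

apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
    name: px-cluster
    namespace: kube-system
spec:
    # Illustrative values; your generated spec will contain the real ones.
    image: portworx/oci-monitor:<PXVER>
    imagePullSecret: px-image-repository
    customImageRegistry: image-registry.openshift-image-registry.svc:5000/kube-system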

Create a vCenter user for Portworx

Using your vSphere console, provide Portworx with a vCenter server user that has the following minimum vSphere privileges:

  • Datastore
    • Allocate space
    • Browse datastore
    • Low level file operations
    • Remove file
  • Host
    • Local operations
      • Reconfigure virtual machine
  • Virtual machine
    • Change Configuration
      • Add existing disk
      • Add new disk
      • Add or remove device
      • Advanced configuration
      • Change Settings
      • Extend virtual disk
      • Modify device settings
      • Remove disk

If you create a custom role as above, make sure to select Propagate to children when assigning the user to the role.

Provide the vCenter user credentials

To grant Portworx the permissions it needs to manage the block storage devices that the storage nodes require, create a secret containing the vCenter user credentials.

  1. Create a secret using the credentials from your own environment for the vCenter user that has the required permissions (a verification command follows this list):

    oc -n kube-system create secret generic px-vsphere-secret \
        --from-literal='VSPHERE_USER=<yourusername@vsphere.local>' \
        --from-literal='VSPHERE_PASSWORD=<yourpasswordhere>'
  2. If you’re running a Portworx Essentials cluster, then create the following secret with your Essential Entitlement ID:

    oc -n kube-system create secret generic px-essential \
        --from-literal=px-essen-user-id=YOUR_ESSENTIAL_ENTITLEMENT_ID \
        --from-literal=px-osb-endpoint='https://pxessentials.portworx.com/osb/billing/v1/register'
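
To confirm that the px-vsphere-secret from step 1 holds the values you intended, you can decode a field back out of it:

oc -n kube-system get secret px-vsphere-secret -o jsonpath='{.data.VSPHERE_USER}' | base64 -d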

Generate a Portworx spec

  1. Navigate to PX-Central and log in, or create an account.

  2. Select Portworx Enterprise from the product catalog and click Continue.

  3. On the Product Line page, choose any option depending on which license you intend to use, then click Continue.

  4. For Platform, select vSphere, then click Customize at the bottom of the Summary section.

  5. On the Basic page, ensure that the Use the Portworx Operator and Built-in ETCD options are selected. For Portworx version, select from the dropdown the same version that you set for PXVER earlier, then click Next.

  6. On the Storage page, choose Cloud as your environment and vSphere as your cloud platform. Specify your values for vCenter Endpoint, vCenter Port, vCenter datastore prefix, and Kubernetes Secret Name, then click Next.

  7. Choose your network options and click Next.

  8. On the Customize page, select OpenShift 4+ for the Are you running on either of these? option. In the Registry And Image Settings section, enter the cluster-internal registry path and the px-image-repository secret you created earlier. Also, clear the Enable Telemetry option under Advanced Settings.

    NOTE: The registry path you specify here differs from the one you passed to the px-ag-install.sh script, because the images are now pulled from within the cluster. Following the earlier example, you would specify image-registry.openshift-image-registry.svc:5000/kube-system rather than the external FQDN used when pushing the images.

    Click the Finish button to generate your specs.

Apply specs

Install the Operator and apply the StorageCluster specs you generated in the section above by performing the following steps:

  1. Either install the Portworx Operator from the OpenShift OperatorHub as detailed here, or, if the Portworx Operator is not available in your cluster's OperatorHub, install it using the following command:

    oc apply -f 'https://install.portworx.com/<PXVER>?comp=pxoperator&reg=image-registry.openshift-image-registry.svc:5000/kube-system'
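
    NOTE: If you still have the PXVER environment variable set from the earlier steps, you can let the shell substitute it instead of editing the <PXVER> placeholder by hand; use double quotes so that the variable expands:

    oc apply -f "https://install.portworx.com/$PXVER?comp=pxoperator&reg=image-registry.openshift-image-registry.svc:5000/kube-system"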
  2. Deploy the StorageCluster using the command PX-Central provided, replacing kubectl with oc. The provided command will look similar to the following:

    kubectl apply -f '<storagecluster-deployment-URL>'
    storagecluster.core.libopenstorage.org/px-cluster-<randomUUID> created
  3. If you installed the Portworx Operator manually rather than through the OperatorHub, you must also annotate the newly created StorageCluster to make the Operator aware that this is an OpenShift environment (a verification sketch follows this list):

    oc -n kube-system annotate stc $(oc -n kube-system get stc -o jsonpath='{.items[0].metadata.name}') 'portworx.io/is-openshift=true'
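
You can confirm that the annotation was applied by reading it back with the same jsonpath mechanism (the dots in the annotation key are escaped with backslashes):

oc -n kube-system get stc $(oc -n kube-system get stc -o jsonpath='{.items[0].metadata.name}') -o jsonpath='{.metadata.annotations.portworx\.io/is-openshift}'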

Verify your Portworx installation

Once you’ve installed Portworx, you can perform the following tasks to verify that it installed correctly.

Verify if all pods are running

Enter the following oc get pods command to list and filter the results for Portworx pods:

oc get pods -n kube-system -o wide | grep -e portworx -e px
portworx-api-774c2                                      1/1     Running   0                2m55s   192.168.121.196   username-k8s1-node0    <none>           <none>
portworx-api-t4lf9                                      1/1     Running   0                2m55s   192.168.121.99    username-k8s1-node1    <none>           <none>
portworx-kvdb-94bpk                                     1/1     Running   0                4s      192.168.121.196   username-k8s1-node0    <none>           <none>
portworx-operator-58967ddd6d-kmz6c                      1/1     Running   0                4m1s    10.244.1.99       username-k8s1-node0    <none>           <none>
prometheus-px-prometheus-0                              2/2     Running   0                2m41s   10.244.1.105      username-k8s1-node0    <none>           <none>
px-cluster-1c3edc42-4541-48fc-b173-3e9bf3cd834d-9gs79   2/2     Running   0                2m55s   192.168.121.196   username-k8s1-node0    <none>           <none>
px-cluster-1c3edc42-4541-48fc-b173-3e9bf3cd834d-vpptx   1/2     Running   0                2m55s   192.168.121.99    username-k8s1-node1    <none>           <none>
px-csi-ext-868fcb9fc6-54bmc                             4/4     Running   0                3m5s    10.244.1.103      username-k8s1-node0    <none>           <none>
px-csi-ext-868fcb9fc6-8tk79                             4/4     Running   0                3m5s    10.244.1.102      username-k8s1-node0    <none>           <none>
px-csi-ext-868fcb9fc6-vbqzk                             4/4     Running   0                3m5s    10.244.3.107      username-k8s1-node1    <none>           <none>
px-prometheus-operator-59b98b5897-9nwfv                 1/1     Running   0                3m3s    10.244.1.104      username-k8s1-node0    <none>           <none>

Note the name of one of your px-cluster pods. You’ll run pxctl commands from these pods in the following steps.
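
If you prefer to capture a pod name programmatically instead of copying it from the output, something like the following should work (a sketch assuming the Portworx pods carry the name=portworx label, which a default Operator install applies):

PX_POD=$(oc -n kube-system get pods -l name=portworx -o jsonpath='{.items[0].metadata.name}')
echo $PX_POD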

Verify Portworx cluster status

You can find the status of the Portworx cluster by running pxctl status commands from a pod. Enter the following oc exec command, specifying the pod name you retrieved in the previous section:

oc exec px-cluster-1c3edc42-4541-48fc-b173-3e9bf3cd834d-vpptx -n kube-system -- /opt/pwx/bin/pxctl status
Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
Status: PX is operational
Telemetry: Disabled or Unhealthy
Metering: Disabled or Unhealthy
License: Trial (expires in 31 days)
Node ID: 788bf810-57c4-4df1-9a5a-70c31d0f478e
        IP: 192.168.121.99 
        Local Storage Pool: 1 pool
        POOL    IO_PRIORITY     RAID_LEVEL      USABLE  USED    STATUS  ZONE    REGION
        0       HIGH            raid0           3.0 TiB 10 GiB  Online  default default
        Local Storage Devices: 3 devices
        Device  Path            Media Type              Size            Last-Scan
        0:1     /dev/vdb        STORAGE_MEDIUM_MAGNETIC 1.0 TiB         14 Jul 22 22:03 UTC
        0:2     /dev/vdc        STORAGE_MEDIUM_MAGNETIC 1.0 TiB         14 Jul 22 22:03 UTC
        0:3     /dev/vdd        STORAGE_MEDIUM_MAGNETIC 1.0 TiB         14 Jul 22 22:03 UTC
        * Internal kvdb on this node is sharing this storage device /dev/vdc  to store its data.
        total           -       3.0 TiB
        Cache Devices:
         * No cache devices
Cluster Summary
        Cluster ID: px-cluster-1c3edc42-4541-48fc-b173-3e9bf3cd834d
        Cluster UUID: 33a82fe9-d93b-435b-943e-6f3fd5522eae
        Scheduler: kubernetes
        Nodes: 2 node(s) with storage (2 online)
        IP              ID                                      SchedulerNodeName       Auth            StorageNode     Used    Capacity        Status  StorageStatus       Version         Kernel                  OS
        192.168.121.196 f6d87392-81f4-459a-b3d4-fad8c65b8edc    username-k8s1-node0      Disabled        Yes             10 GiB  3.0 TiB         Online  Up 2.11.0-81faacc   3.10.0-1127.el7.x86_64  CentOS Linux 7 (Core)
        192.168.121.99  788bf810-57c4-4df1-9a5a-70c31d0f478e    username-k8s1-node1      Disabled        Yes             10 GiB  3.0 TiB         Online  Up (This node)      2.11.0-81faacc  3.10.0-1127.el7.x86_64  CentOS Linux 7 (Core)
Global Storage Pool
        Total Used      :  20 GiB
        Total Capacity  :  6.0 TiB

The Portworx status will display PX is operational if your cluster is running as intended.

Verify pxctl cluster provision status

  • Find the storage cluster; its status should show as Online:

    oc -n kube-system get storagecluster
    NAME                                              CLUSTER UUID                           STATUS   VERSION   AGE
    px-cluster-1c3edc42-4541-48fc-b173-3e9bf3cd834d   33a82fe9-d93b-435b-943e-6f3fd5522eae   Online   2.11.0    10m
  • Find the storage nodes; their statuses should show as Online:

    oc -n kube-system get storagenodes
    NAME                  ID                                     STATUS   VERSION          AGE
    username-k8s1-node0   f6d87392-81f4-459a-b3d4-fad8c65b8edc   Online   2.11.0-81faacc   11m
    username-k8s1-node1   788bf810-57c4-4df1-9a5a-70c31d0f478e   Online   2.11.0-81faacc   11m
  • Verify the Portworx cluster provision status. Enter the following oc exec command, specifying the pod name you retrieved in the previous section:

    oc exec px-cluster-1c3edc42-4541-48fc-b173-3e9bf3cd834d-vpptx -n kube-system -- /opt/pwx/bin/pxctl cluster provision-status
    Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
    NODE                                    NODE STATUS     POOL                                            POOL STATUS     IO_PRIORITY     SIZE    AVAILABLE  USED     PROVISIONED     ZONE    REGION  RACK
    788bf810-57c4-4df1-9a5a-70c31d0f478e    Up              0 ( 96e7ff01-fcff-4715-b61b-4d74ecc7e159 )      Online          HIGH            3.0 TiB 3.0 TiB    10 GiB   0 B             default default default
    f6d87392-81f4-459a-b3d4-fad8c65b8edc    Up              0 ( e06386e7-b769-4ce0-b674-97e4359e57c0 )      Online          HIGH            3.0 TiB 3.0 TiB    10 GiB   0 B             default default default

Create your first PVC

For your apps to use persistent volumes powered by Portworx, you must use a StorageClass that references Portworx as the provisioner. Portworx includes a number of default StorageClasses, which you can reference with PersistentVolumeClaims (PVCs) you create. For a more general overview of how storage works within Kubernetes, refer to the Persistent Volumes section of the Kubernetes documentation.
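
To see which Portworx StorageClasses are available on your cluster before creating a claim, you can filter the full list:

oc get storageclass | grep -e portworx -e px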

Perform the following steps to create a PVC:

  1. Create a PVC spec that references the px-csi-db default StorageClass, and save it to a file:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
        name: px-check-pvc
    spec:
        storageClassName: px-csi-db
        accessModes:
            - ReadWriteOnce
        resources:
            requests:
                storage: 2Gi
  2. Run the oc apply command to create the PVC:

    oc apply -f <your-pvc-name>.yaml
    persistentvolumeclaim/px-check-pvc created
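
To confirm that the volume actually provisions and mounts, you can attach the claim to a throwaway pod. The sketch below is illustrative (the pod name and busybox image are assumptions; the claim name matches the px-check-pvc example from step 1):

apiVersion: v1
kind: Pod
metadata:
    name: px-check-pod
spec:
    containers:
        - name: app
          image: busybox
          command: ["sleep", "3600"]
          volumeMounts:
              # Mount the Portworx-backed volume at /data.
              - name: data
                mountPath: /data
    volumes:
        - name: data
          persistentVolumeClaim:
              claimName: px-check-pvc

Once the pod reports Running, the claim has bound and mounted; clean up with oc delete pod px-check-pod when you are done.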

Verify your StorageClass and PVC

  1. Enter the following oc get storageclass command, specifying the name of the StorageClass you used in the steps above:

    oc get storageclass <your-storageclass-name>
    NAME                   PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    example-storageclass   pxd.portworx.com   Delete          Immediate           false                  24m

    oc will return details about your StorageClass if it was created correctly. Verify that the configuration details appear as you intended.

  2. Enter the oc get pvc command. If this is the only PVC you’ve created, you should see only one entry in the output:

    oc get pvc <your-pvc-name>
    NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS           AGE
    example-pvc   Bound    pvc-dce346e8-ff02-4dfb-935c-2377767c8ce0   2Gi        RWO            example-storageclass   3m7s

    oc will return details about your PVC if it was created correctly. Verify the configuration details appear as you intended.

