Install Portworx on vSphere with Amazon EKS Anywhere

Portworx can be installed on a Kubernetes cluster running on vSphere and managed by Amazon EKS Anywhere.


Prerequisites

Before you install Portworx on vSphere, ensure that you meet the following prerequisites:

Environment Resources

  • Node OS (note: the default Bottlerocket base image is not supported):
    • Ubuntu 20.04.4 LTS
    • Ubuntu 18.04.6 LTS
  • Deployment host (the same vSphere host where you deploy EKS Anywhere):
    • VM OS: Linux
    • vCPU: 4
    • Memory: 16 GB
    • Disk storage: 200 GB
  • Control-plane VMs:
    • Minimum: 1 (recommended: 3)
    • vCPUs: 2
    • OS volume: 25 GB
  • Worker node VMs:
    • Minimum: 3 (for a storage cluster)
    • vCPUs: 8
    • RAM: 16 GB
    • OS volume: 25 GB

Step 1: vCenter user for Portworx

Provide Portworx with a vCenter server user that has either the full admin role, or for increased security, a custom-created role with the following minimum vSphere privileges:

  • Datastore
    • Allocate space
    • Browse datastore
    • Low level file operations
    • Remove file
  • Host
    • Local operations
    • Reconfigure virtual machine
  • Virtual machine
    • Change Configuration
    • Add existing disk
    • Add new disk
    • Add or remove device
    • Advanced configuration
    • Change Settings
    • Extend virtual disk
    • Modify device settings
    • Remove disk

If you created a custom role with the permissions above, select Propagate to children when assigning the user to the role.
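If you manage vSphere from the command line, the role creation above can be sketched with the govc CLI. This is a hedged example: the role name px-role and the principal are placeholders, the privilege IDs listed are the usual vSphere identifiers for the privileges named above, and you should verify them against your vCenter version before use.

```shell
# Sketch: create a minimal Portworx role with govc (assumes GOVC_URL,
# GOVC_USERNAME, and GOVC_PASSWORD are exported for an admin session).
govc role.create px-role \
  Datastore.AllocateSpace Datastore.Browse \
  Datastore.FileManagement Datastore.DeleteFile \
  Host.Local.ReconfigVM \
  VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk \
  VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig \
  VirtualMachine.Config.Settings VirtualMachine.Config.DiskExtend \
  VirtualMachine.Config.EditDevice VirtualMachine.Config.RemoveDisk

# Assign the role to the Portworx user at the root, propagating to children
# (the equivalent of selecting "Propagate to children" in the UI).
govc permissions.set -principal 'px-user@vsphere.local' \
  -role px-role -propagate=true /
```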

NOTE: All commands in the subsequent steps need to be run on a machine with kubectl access.

Step 2: Create a Kubernetes secret with your vCenter user and password

Update the following items in the Secret template below to match your environment:

  • VSPHERE_USER: Use the output of echo -n '<vcenter-server-user>' | base64
  • VSPHERE_PASSWORD: Use the output of echo -n '<vcenter-server-password>' | base64

The -n flag prevents echo from appending a trailing newline, which would otherwise be included in the encoded credential and cause authentication failures.

    apiVersion: v1
    kind: Secret
    metadata:
      name: px-vsphere-secret
      namespace: kube-system
    type: Opaque
    data:
      VSPHERE_USER: <base64-encoded-user>
      VSPHERE_PASSWORD: <base64-encoded-password>

After updating the template with your user and password, apply the spec with kubectl apply.
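Alternatively, you can let kubectl handle the base64 encoding for you. The sketch below uses placeholder credentials; if you encode by hand, use printf (or echo -n) so a trailing newline is not included in the encoded value:

```shell
# Encode a credential without a trailing newline (example value is a placeholder).
VSPHERE_USER_B64=$(printf '%s' 'administrator@vsphere.local' | base64)
echo "$VSPHERE_USER_B64"   # YWRtaW5pc3RyYXRvckB2c3BoZXJlLmxvY2Fs

# Or create the Secret directly, letting kubectl encode the values
# (requires cluster access):
# kubectl create secret generic px-vsphere-secret -n kube-system \
#   --from-literal=VSPHERE_USER='<vcenter-server-user>' \
#   --from-literal=VSPHERE_PASSWORD='<vcenter-server-password>'
```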

Step 3: Generate Portworx spec

  1. Navigate to PX-Central and log in, or create an account.
  2. Select Portworx Enterprise and click Continue.
  3. On the Basic page, ensure that Use the Portworx Operator is selected. Select the latest version of Portworx from the Portworx Version drop-down, the Built-in option for etcd, and then click Next.

  4. On the Storage page, select Cloud as your environment, and vSphere as your cloud platform. Select the Create Using a Spec option to configure your storage devices, specify the following, and click Next:

    • In the vCenter Endpoint field, specify the hostname or IP address of your vCenter server.
    • In the vCenter datastore prefix field, specify the prefix name of the vCenter datastore you want to use.
    • In the Kubernetes Secret Name field, specify the name of the secret that you specified in Step 2, which is px-vsphere-secret.
    • In the vCenter Port field, enter port 443 if it is not auto-filled.
  5. Choose your network and click Next.

  6. On the Customize page, enable Stork, CSI, Monitoring, and Telemetry in Advanced Settings and click Finish to generate the spec.
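For reference, the generated StorageCluster spec typically carries the vSphere connection details as environment variables that reference the Secret from Step 2. The fragment below is an illustrative sketch only; the endpoint and prefix values are examples, and the spec you download from PX-Central is authoritative:

```yaml
apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster
  namespace: kube-system
spec:
  env:
  - name: VSPHERE_VCENTER
    value: "vcenter.example.com"    # your vCenter endpoint
  - name: VSPHERE_VCENTER_PORT
    value: "443"
  - name: VSPHERE_DATASTORE_PREFIX
    value: "px-datastore"           # datastore prefix from the Storage page
  - name: VSPHERE_USER
    valueFrom:
      secretKeyRef:
        name: px-vsphere-secret
        key: VSPHERE_USER
  - name: VSPHERE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: px-vsphere-secret
        key: VSPHERE_PASSWORD
```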

Apply specs

Apply the Operator and StorageCluster specs you generated in the section above using the kubectl apply command:

  1. Deploy the Operator:

    kubectl apply -f '<version-number>?comp=pxoperator'
    serviceaccount/portworx-operator created
    podsecuritypolicy.policy/px-operator created
    deployment.apps/portworx-operator created
  2. Deploy the StorageCluster:

    kubectl apply -f '<version-number>?operator=true&mc=false&kbver=&b=true&kd=type%3Dgp2%2Csize%3D150&s=%22type%3Dgp2%2Csize%3D150%22&c=px-cluster-XXXX-XXXX&eks=true&stork=true&csi=true&mon=true&tel=false&st=k8s&e==AWS_ACCESS_KEY_ID%3XXXX%2CAWS_SECRET_ACCESS_KEY%3XXXX&promop=true'
    storagecluster.core.libopenstorage.org/px-cluster-XXXX-XXXX created
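After applying the spec, you can confirm that the operator has picked up the StorageCluster object (a sketch; requires kubectl access to the cluster):

```shell
# List the StorageCluster object and its reported status.
kubectl -n kube-system get storagecluster
```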
Monitor the Portworx pods
  1. Enter the following kubectl get command, waiting until all Portworx pods show as ready in the output:

    kubectl get pods -o wide -n kube-system -l name=portworx
  2. Enter the following kubectl describe command with the ID of one of your Portworx pods to show the current installation status for individual nodes:

     kubectl -n kube-system describe pods <portworx-pod-id>
       Type     Reason                             Age                     From                  Message
       ----     ------                             ----                    ----                  -------
       Normal   Scheduled                          7m57s                   default-scheduler     Successfully assigned kube-system/portworx-qxtw4 to k8s-node-2
       Normal   Pulling                            7m55s                   kubelet, k8s-node-2   Pulling image "portworx/oci-monitor:2.5.0"
       Normal   Pulled                             7m54s                   kubelet, k8s-node-2   Successfully pulled image "portworx/oci-monitor:2.5.0"
       Normal   Created                            7m53s                   kubelet, k8s-node-2   Created container portworx
       Normal   Started                            7m51s                   kubelet, k8s-node-2   Started container portworx
       Normal   PortworxMonitorImagePullInPrgress  7m48s                   portworx, k8s-node-2  Portworx image portworx/px-enterprise:2.5.0 pull and extraction in progress
       Warning  NodeStateChange                    5m26s                   portworx, k8s-node-2  Node is not in quorum. Waiting to connect to peer nodes on port 9002.
       Warning  Unhealthy                          5m15s (x15 over 7m35s)  kubelet, k8s-node-2   Readiness probe failed: HTTP probe failed with statuscode: 503
       Normal   NodeStartSuccess                   5m7s                    portworx, k8s-node-2  PX is ready on this node
    NOTE: In your output, the image pulled will differ based on your chosen Portworx license type and version.
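Rather than polling the pod list by hand, the readiness check above can be scripted with kubectl wait (a sketch assuming kubectl access; the ten-minute timeout is an arbitrary example):

```shell
# Block until every Portworx pod reports Ready, or fail after 10 minutes.
kubectl wait --for=condition=ready pod \
  -l name=portworx -n kube-system --timeout=600s
```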
Monitor the cluster status

Use the pxctl status command to display the status of your Portworx cluster:

PX_POD=$(kubectl get pods -l name=portworx -n kube-system -o jsonpath='{.items[0].metadata.name}')
kubectl exec $PX_POD -n kube-system -- /opt/pwx/bin/pxctl status

Last edited: Tuesday, May 9, 2023