Install
Prerequisites
- You must have a Kubernetes cluster with a minimum of three worker nodes.
- You must have Portworx installed on your Kubernetes cluster. For details about how you can install Portworx on Kubernetes, see the Portworx on Kubernetes page.
- You must have Stork installed on your Kubernetes cluster. For details about how you can install Stork, see the Stork page.
Install Cassandra
Enter the following `kubectl apply` command to create a headless service:

```
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  labels:
    app: cassandra
  name: cassandra
spec:
  clusterIP: None
  ports:
  - port: 9042
  selector:
    app: cassandra
EOF
```
```
service/cassandra created
```
Note the following about this service:

- The `spec.clusterIP` field is set to `None`.
- The `spec.selector.app` field is set to `cassandra`. The Kubernetes endpoints controller will configure the DNS to return addresses that point directly to your Cassandra Pods, as the check after this list shows.
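Once the Cassandra Pods are running (after the StatefulSet step below), you can confirm that the headless service resolves directly to Pod IPs. The following is a minimal sketch using standard kubectl commands; the `dns-test` Pod name and `busybox:1.36` image are only illustrative:

```
# List the endpoints behind the headless service.
# Each address is a Cassandra Pod IP, not a cluster IP.
kubectl get endpoints cassandra

# Optionally, resolve the service name from inside the cluster.
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup cassandra
```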
Use the following `kubectl apply` command to create a storage class:

```
kubectl apply -f - <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: portworx-sc
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "2"
  priority_io: "high"
  group: "cassandra_vg"
  fg: "true"
EOF
```
```
storageclass.storage.k8s.io/portworx-sc created
```
Note the following about this storage class:

- The provisioner field is set to `kubernetes.io/portworx-volume`. For details about the Portworx-specific parameters, refer to the Portworx Volume section of the Kubernetes documentation. You can also confirm the parameters on the created object as shown after this list.
- The name of the `StorageClass` object is `portworx-sc`.
- Portworx will create two replicas of each volume.
- Portworx will use a high priority storage pool.
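If you want to double-check the parameters that Kubernetes recorded for the storage class (replication factor, IO priority, volume group, and so on), a quick way is to describe the object. This is just a convenience check; the exact output format depends on your Kubernetes version:

```
# Show the provisioner and the Portworx-specific parameters of the storage class.
kubectl describe storageclass portworx-sc
```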
The following command creates a StatefulSet with three replicas and uses the Stork scheduler to place your Pods closer to where their data is located:

```
kubectl apply -f - <<EOF
apiVersion: "apps/v1"
kind: StatefulSet
metadata:
  name: cassandra
spec:
  selector:
    matchLabels:
      app: cassandra
  serviceName: cassandra
  replicas: 3
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      schedulerName: stork
      containers:
      - name: cassandra
        image: gcr.io/google-samples/cassandra:v12
        imagePullPolicy: Always
        ports:
        - containerPort: 7000
          name: intra-node
        - containerPort: 7001
          name: tls-intra-node
        - containerPort: 7199
          name: jmx
        - containerPort: 9042
          name: cql
        resources:
          limits:
            cpu: "500m"
            memory: 1Gi
          requests:
            cpu: "500m"
            memory: 1Gi
        securityContext:
          capabilities:
            add:
            - IPC_LOCK
        lifecycle:
          preStop:
            exec:
              command: ["/bin/sh", "-c", "PID=\$(pidof java) && kill \$PID && while ps -p \$PID > /dev/null; do sleep 1; done"]
        env:
        - name: MAX_HEAP_SIZE
          value: 512M
        - name: HEAP_NEWSIZE
          value: 100M
        - name: CASSANDRA_SEEDS
          value: "cassandra-0.cassandra.default.svc.cluster.local"
        - name: CASSANDRA_CLUSTER_NAME
          value: "K8Demo"
        - name: CASSANDRA_DC
          value: "DC1-K8Demo"
        - name: CASSANDRA_RACK
          value: "Rack1-K8Demo"
        - name: CASSANDRA_AUTO_BOOTSTRAP
          value: "false"
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        readinessProbe:
          exec:
            command:
            - /bin/bash
            - -c
            - /ready-probe.sh
          initialDelaySeconds: 15
          timeoutSeconds: 5
        # These volume mounts are persistent. They are like inline claims,
        # but not exactly because the names need to match exactly one of
        # the stateful pod volumes.
        volumeMounts:
        - name: cassandra-data
          mountPath: /var/lib/cassandra
  # These are converted to volume claims by the controller
  # and mounted at the paths mentioned above.
  volumeClaimTemplates:
  - metadata:
      name: cassandra-data
      annotations:
        volume.beta.kubernetes.io/storage-class: portworx-sc
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
EOF
```
```
statefulset.apps/cassandra created
```
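The StatefulSet controller creates the Pods one ordinal at a time and waits for each one to become ready before starting the next. If you want to watch this happen, a simple sketch using only standard kubectl is:

```
# Watch the Cassandra Pods come up one ordinal at a time.
# Press Ctrl+C once all three Pods report 1/1 Running.
kubectl get pods -l app=cassandra -w
```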
Validate the cluster functionality
Use the `kubectl get pvc` command to verify that the PVCs are bound to your persistent volumes:

```
kubectl get pvc
```

```
NAME                         STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
cassandra-data-cassandra-0   Bound     pvc-e6924b73-72f9-11e7-9d23-42010a8e0002   1Gi        RWO           portworx-sc    2m
cassandra-data-cassandra-1   Bound     pvc-49e8caf6-735d-11e7-9d23-42010a8e0002   1Gi        RWO           portworx-sc    2m
cassandra-data-cassandra-2   Bound     pvc-603d4f95-735d-11e7-9d23-42010a8e0002   1Gi        RWO           portworx-sc    1m
```
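If you prefer a script-friendly view of the same binding information, you can pull it out with jsonpath. This is only a convenience sketch; the table above is the authoritative output:

```
# Print each PVC name together with the persistent volume it is bound to.
kubectl get pvc -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.volumeName}{"\n"}{end}'
```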
Verify that Kubernetes created the `portworx-sc` storage class:

```
kubectl get storageclass
```

```
NAME          TYPE
portworx-sc   kubernetes.io/portworx-volume
```
Use the `pxctl volume list` command to display the list of volumes in your cluster:

```
pxctl volume list
```

```
ID                  NAME                                       SIZE    HA  SHARED  ENCRYPTED  IO_PRIORITY  SCALE  STATUS
651254593135168442  pvc-49e8caf6-735d-11e7-9d23-42010a8e0002   1 GiB   2   no      no         LOW          0      up - attached on 10.142.0.3
136016794033281980  pvc-603d4f95-735d-11e7-9d23-42010a8e0002   1 GiB   2   no      no         LOW          0      up - attached on 10.142.0.4
752567898197695962  pvc-e6924b73-72f9-11e7-9d23-42010a8e0002   1 GiB   2   no      no         LOW          0      up - attached on 10.142.0.5
```
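`pxctl` runs on the Portworx nodes themselves. If you only have kubectl access, a common pattern is to run it through one of the Portworx Pods. The sketch below assumes the usual defaults (Portworx Pods in the `kube-system` namespace with the `name=portworx` label and `pxctl` at `/opt/pwx/bin/pxctl`); adjust them if your installation differs:

```
# Find a Portworx Pod.
PX_POD=$(kubectl get pods -n kube-system -l name=portworx -o jsonpath='{.items[0].metadata.name}')

# Run pxctl inside that Pod.
kubectl exec -n kube-system $PX_POD -- /opt/pwx/bin/pxctl volume list
```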
Make a note of the ID of one of your volumes. You’ll need it in the next step.
To verify that your Portworx volumes have two replicas, enter the `pxctl volume inspect` command, specifying the ID from the previous step. The following example command uses `651254593135168442`:

```
pxctl volume inspect 651254593135168442
```

```
Volume          :  651254593135168442
Name            :  pvc-49e8caf6-735d-11e7-9d23-42010a8e0002
Size            :  1.0 GiB
Format          :  ext4
HA              :  2
IO Priority     :  LOW
Creation time   :  Jul 28 06:23:36 UTC 2017
Shared          :  no
Status          :  up
State           :  Attached: k8s-0
Device Path     :  /dev/pxd/pxd651254593135168442
Labels          :  pvc=cassandra-data-cassandra-1
Reads           :  37
Reads MS        :  72
Bytes Read      :  372736
Writes          :  1816
Writes MS       :  17648
Bytes Written   :  38424576
IOs in progress :  0
Bytes used      :  33 MiB
Replica sets on nodes:
        Set  0
                Node : 10.142.0.4
                Node : 10.142.0.3
```
Note that this volume is up and attached to the `k8s-0` host.

List your Pods:

```
kubectl get pods
```

```
NAME          READY     STATUS    RESTARTS   AGE
cassandra-0   1/1       Running   0          1m
cassandra-1   1/1       Running   0          1m
cassandra-2   0/1       Running   0          47s
```
Show the list of your Pods and the hosts on which Kubernetes scheduled them:
```
kubectl get pods -l app=cassandra -o json | jq '.items[] | {"name": .metadata.name,"hostname": .spec.nodeName, "hostIP": .status.hostIP, "PodIP": .status.podIP}'
```

```
{
  "name": "cassandra-0",
  "hostname": "k8s-2",
  "hostIP": "10.142.0.5",
  "PodIP": "10.0.160.2"
}
{
  "name": "cassandra-1",
  "hostname": "k8s-0",
  "hostIP": "10.142.0.3",
  "PodIP": "10.0.64.2"
}
{
  "name": "cassandra-2",
  "hostname": "k8s-1",
  "hostIP": "10.142.0.4",
  "PodIP": "10.0.192.3"
}
```
To open a shell session into one of your Pods, enter the following `kubectl exec` command, specifying your Pod name. This example opens the `cassandra-0` Pod:

```
kubectl exec -it cassandra-0 -- bash
```
Use the `nodetool status` command to retrieve information about your Cassandra cluster:

```
nodetool status
```

```
Datacenter: DC1-K8Demo
======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address     Load        Tokens  Owns (effective)  Host ID                               Rack
UN  10.0.160.2  164.39 KiB  32      62.3%             ce3b48b8-1655-48a2-b167-08d03ca6bc41  Rack1-K8Demo
UN  10.0.64.2   190.76 KiB  32      64.1%             ba31128d-49fa-4696-865e-656d4d45238e  Rack1-K8Demo
UN  10.0.192.3  104.55 KiB  32      73.6%             c778d78d-c6bc-4768-a3ec-0d51ba066dcb  Rack1-K8Demo
```
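If you also want to confirm that the cluster accepts reads and writes, you can run a quick test with `cqlsh` while you are still inside the Pod. This is an optional sketch: it assumes the container image ships `cqlsh`, and the `demo` keyspace and `emp` table are throwaway names invented for this test:

```
# Start cqlsh against the local node (assumes the image includes cqlsh).
cqlsh

-- Inside cqlsh: create a throwaway keyspace and table, write a row, read it back.
CREATE KEYSPACE demo WITH REPLICATION = {'class': 'SimpleStrategy', 'replication_factor': 3};
USE demo;
CREATE TABLE emp (emp_id int PRIMARY KEY, emp_name text);
INSERT INTO emp (emp_id, emp_name) VALUES (1, 'hello-portworx');
SELECT * FROM emp;
DROP KEYSPACE demo;
```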
Terminate the shell session:
```
exit
```
Related topics
- Cassandra on Kubernetes: Step-by-step guide for the most popular Kubernetes platforms
- Run multiple Cassandra rings on the same hosts