
StatefulSet with EBS Volume

Now that we understand StatefulSets and Dynamic Volume Provisioning, let's change the MySQL database used by the catalog microservice to provision a new EBS volume and store its database files persistently.

MySQL with EBS

Utilizing Kustomize, we'll do two things:

  • Create a new StatefulSet for the MySQL database used by the catalog component which uses an EBS volume
  • Update the catalog component to use this new version of the database

Why are we not updating the existing StatefulSet? The fields we need to update are immutable and cannot be changed, so we must create a new StatefulSet instead.

Here is the new catalog database StatefulSet:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: catalog-mysql-ebs
  namespace: catalog
  labels:
    app.kubernetes.io/created-by: eks-workshop
    app.kubernetes.io/team: database
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: catalog
      app.kubernetes.io/instance: catalog
      app.kubernetes.io/component: mysql-ebs
  serviceName: mysql
  template:
    metadata:
      labels:
        app.kubernetes.io/name: catalog
        app.kubernetes.io/instance: catalog
        app.kubernetes.io/component: mysql-ebs
        app.kubernetes.io/created-by: eks-workshop
        app.kubernetes.io/team: database
    spec:
      containers:
        - name: mysql
          image: ""
          args:
            - "--ignore-db-dir=lost+found"
          imagePullPolicy: IfNotPresent
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: my-secret-pw
            - name: MYSQL_USER
              valueFrom:
                secretKeyRef:
                  name: catalog-db
                  key: username
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: catalog-db
                  key: password
            - name: MYSQL_DATABASE
              value: catalog
          ports:
            - name: mysql
              containerPort: 3306
              protocol: TCP
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: gp2
        resources:
          requests:
            storage: 30Gi

Notice the volumeClaimTemplates field, which instructs Kubernetes to use Dynamic Volume Provisioning to automatically create a new EBS volume, a PersistentVolume (PV) and a PersistentVolumeClaim (PVC).
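Dynamic provisioning works because the cluster has a StorageClass named gp2 backed by an EBS provisioner. As a rough sketch, such a StorageClass might look like the following (the exact definition in your cluster may differ, and the provisioner could also be the in-tree kubernetes.io/aws-ebs driver):

```yaml
# Illustrative sketch of a gp2 StorageClass; inspect your cluster's actual
# definition with: kubectl get storageclass gp2 -o yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
provisioner: ebs.csi.aws.com # assumes the EBS CSI driver is installed
parameters:
  type: gp2 # EBS volume type
  fsType: ext4 # filesystem created on first mount
volumeBindingMode: WaitForFirstConsumer
```

With WaitForFirstConsumer, the EBS volume is only created once a Pod referencing the claim is scheduled, which ensures the volume lands in the same availability zone as the node.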

This is how we'll re-configure the catalog component itself to use the new StatefulSet:

- op: add
  path: /spec/template/spec/containers/0/env/-
  value: catalog-mysql-ebs:3306
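Kustomize applies this patch on top of the existing catalog manifests. A rough sketch of how a kustomization.yaml could wire the new StatefulSet and the patch together (file names here are illustrative assumptions, not necessarily the workshop's actual layout):

```yaml
# Illustrative kustomization.yaml; resource and patch file names are assumptions
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - statefulset-mysql-ebs.yaml # the new catalog-mysql-ebs StatefulSet
patches:
  - path: catalog-env-patch.yaml # the patch shown above
    target:
      kind: Deployment
      name: catalog
```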

Apply the changes and wait for the new Pods to be rolled out:

~$kubectl apply -k ~/environment/eks-workshop/modules/fundamentals/storage/ebs/
~$kubectl rollout status --timeout=100s statefulset/catalog-mysql-ebs -n catalog

Let's now confirm that our newly deployed StatefulSet is running:

~$kubectl get statefulset -n catalog catalog-mysql-ebs
NAME                READY   AGE
catalog-mysql-ebs   1/1     79s

Inspecting our catalog-mysql-ebs StatefulSet, we can see that it now has a PersistentVolumeClaim template attached to it requesting 30Gi with a storageClassName of gp2.

~$kubectl get statefulset -n catalog catalog-mysql-ebs \
-o jsonpath='{.spec.volumeClaimTemplates}' | jq .
[
  {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {
      "creationTimestamp": null,
      "name": "data"
    },
    "spec": {
      "accessModes": [
        "ReadWriteOnce"
      ],
      "resources": {
        "requests": {
          "storage": "30Gi"
        }
      },
      "storageClassName": "gp2",
      "volumeMode": "Filesystem"
    },
    "status": {
      "phase": "Pending"
    }
  }
]

We can confirm that Dynamic Volume Provisioning created a PersistentVolume (PV) automatically for us:

~$kubectl get pv | grep -i catalog
pvc-1df77afa-10c8-4296-aa3e-cf2aabd93365   30Gi       RWO            Delete           Bound         catalog/data-catalog-mysql-ebs-0          gp2                            10m
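The Bound PV is itself just a Kubernetes object. Based on the output above, its spec roughly takes the following shape (the CSI driver name and the EBS volume ID are illustrative assumptions):

```yaml
# Approximate shape of the dynamically provisioned PV; volumeHandle is illustrative
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-1df77afa-10c8-4296-aa3e-cf2aabd93365
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce # the RWO shown in the output above
  persistentVolumeReclaimPolicy: Delete
  storageClassName: gp2
  claimRef:
    namespace: catalog
    name: data-catalog-mysql-ebs-0
  csi:
    driver: ebs.csi.aws.com # assumes the EBS CSI driver provisioned it
    volumeHandle: vol-0123456789abcdef0 # illustrative EBS volume ID
```

The claimRef is what binds this PV to the data-catalog-mysql-ebs-0 PVC, and the Delete reclaim policy means the EBS volume is removed when the claim is deleted.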

Utilizing the AWS CLI, we can check the Amazon EBS volume that got created automatically for us:

~$aws ec2 describe-volumes \
--filters Name=tag:kubernetes.io/created-for/pvc/name,Values=data-catalog-mysql-ebs-0 \
--query "Volumes[*].{ID:VolumeId,Tag:Tags}"

If you prefer, you can also check it via the AWS console: look for the EBS volumes with the tag key kubernetes.io/created-for/pvc/name and value data-catalog-mysql-ebs-0:

EBS Volume AWS Console Screenshot

If you'd like to inspect the newly attached EBS volume from inside the Linux OS, run the following command to execute a shell command in the catalog-mysql-ebs container and inspect the mounted filesystems:

~$kubectl exec --stdin catalog-mysql-ebs-0 -n catalog -- bash -c "df -h"
Filesystem      Size  Used Avail Use% Mounted on
overlay         100G  7.6G   93G   8% /
tmpfs            64M     0   64M   0% /dev
tmpfs           3.8G     0  3.8G   0% /sys/fs/cgroup
/dev/nvme0n1p1  100G  7.6G   93G   8% /etc/hosts
shm              64M     0   64M   0% /dev/shm
/dev/nvme1n1     30G  211M   30G   1% /var/lib/mysql
tmpfs           7.0G   12K  7.0G   1% /run/secrets/
tmpfs           3.8G     0  3.8G   0% /proc/acpi
tmpfs           3.8G     0  3.8G   0% /sys/firmware

Note the disk currently mounted at /var/lib/mysql. This is the EBS volume where the stateful MySQL database files are being stored persistently.

Let's now test whether our data is in fact persistent. We'll create a test.txt file exactly the same way as we did in the first section of this module:

~$kubectl exec catalog-mysql-ebs-0 -n catalog -- bash -c "echo 123 > /var/lib/mysql/test.txt"

Now, let's verify that our test.txt file was created in the /var/lib/mysql directory:

~$kubectl exec catalog-mysql-ebs-0 -n catalog -- ls -larth /var/lib/mysql/ | grep -i test
-rw-r--r-- 1 root  root     4 Oct 18 13:57 test.txt

Now, let's remove the current catalog-mysql-ebs Pod, which will force the StatefulSet controller to automatically re-create it:

~$kubectl delete pods -n catalog catalog-mysql-ebs-0
pod "catalog-mysql-ebs-0" deleted

Wait for a few seconds, and run the command below to check if the catalog-mysql-ebs Pod has been re-created:

~$kubectl wait --for=condition=Ready pod -n catalog \
-l app.kubernetes.io/component=mysql-ebs --timeout=60s
pod/catalog-mysql-ebs-0 condition met
~$kubectl get pods -n catalog -l app.kubernetes.io/component=mysql-ebs
NAME                  READY   STATUS    RESTARTS   AGE
catalog-mysql-ebs-0   1/1     Running   0          29s

Finally, let's exec back into the MySQL container and list the /var/lib/mysql path to see whether the test.txt file we created has persisted:

~$kubectl exec catalog-mysql-ebs-0 -n catalog -- ls -larth /var/lib/mysql/ | grep -i test
-rw-r--r-- 1 mysql root     4 Oct 18 13:57 test.txt
~$kubectl exec catalog-mysql-ebs-0 -n catalog -- cat /var/lib/mysql/test.txt
123

As you can see, the test.txt file is still available after the Pod was deleted and re-created, and it still contains the correct text, 123. This is the core functionality of Persistent Volumes (PVs): Amazon EBS stores the data and keeps it safe and available within an AWS Availability Zone.