Dynamic provisioning using FSx for NetApp ONTAP
Now that we understand the FSxN StorageClass for Kubernetes, let's create a PersistentVolumeClaim and update the assets container in the assets deployment to mount the volume it provides.

First, inspect the `fsxnpvclaim.yaml` file and note that it claims 5Gi of storage from the `fsxn-sc-nfs` StorageClass we created in the earlier step:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fsxn-nfs-claim
  namespace: assets
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: fsxn-sc-nfs
  resources:
    requests:
      storage: 5Gi
```
We'll also modify the assets service in two ways:
- Mount the PVC to the location where the assets images are stored
- Add an init container to copy the initial images to the FSxN volume
**Kustomize Patch**

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: assets
spec:
  replicas: 2
  template:
    spec:
      initContainers:
        - name: copy
          image: "public.ecr.aws/aws-containers/retail-store-sample-assets:0.4.0"
          command:
            [
              "/bin/sh",
              "-c",
              "cp -R /usr/share/nginx/html/assets/* /fsxnvolume",
            ]
          volumeMounts:
            - name: fsxnvolume
              mountPath: /fsxnvolume
      containers:
        - name: assets
          volumeMounts:
            - name: fsxnvolume
              mountPath: /usr/share/nginx/html/assets
      volumes:
        - name: fsxnvolume
          persistentVolumeClaim:
            claimName: fsxn-nfs-claim
```
**Deployment/assets**

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/created-by: eks-workshop
    app.kubernetes.io/type: app
  name: assets
  namespace: assets
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/component: service
      app.kubernetes.io/instance: assets
      app.kubernetes.io/name: assets
  template:
    metadata:
      annotations:
        prometheus.io/path: /metrics
        prometheus.io/port: "8080"
        prometheus.io/scrape: "true"
      labels:
        app.kubernetes.io/component: service
        app.kubernetes.io/created-by: eks-workshop
        app.kubernetes.io/instance: assets
        app.kubernetes.io/name: assets
    spec:
      containers:
        - envFrom:
            - configMapRef:
                name: assets
          image: public.ecr.aws/aws-containers/retail-store-sample-assets:0.4.0
          imagePullPolicy: IfNotPresent
          livenessProbe:
            httpGet:
              path: /health.html
              port: 8080
            periodSeconds: 3
          name: assets
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
          resources:
            limits:
              memory: 128Mi
            requests:
              cpu: 128m
              memory: 128Mi
          securityContext:
            capabilities:
              drop:
                - ALL
            readOnlyRootFilesystem: false
          volumeMounts:
            - mountPath: /usr/share/nginx/html/assets
              name: fsxnvolume
            - mountPath: /tmp
              name: tmp-volume
      initContainers:
        - command:
            - /bin/sh
            - -c
            - cp -R /usr/share/nginx/html/assets/* /fsxnvolume
          image: public.ecr.aws/aws-containers/retail-store-sample-assets:0.4.0
          name: copy
          volumeMounts:
            - mountPath: /fsxnvolume
              name: fsxnvolume
      securityContext: {}
      serviceAccountName: assets
      volumes:
        - name: fsxnvolume
          persistentVolumeClaim:
            claimName: fsxn-nfs-claim
        - emptyDir:
            medium: Memory
          name: tmp-volume
```
**Diff**

```diff
     app.kubernetes.io/type: app
   name: assets
   namespace: assets
 spec:
-  replicas: 1
+  replicas: 2
   selector:
     matchLabels:
       app.kubernetes.io/component: service
       app.kubernetes.io/instance: assets
[...]
               drop:
                 - ALL
             readOnlyRootFilesystem: false
           volumeMounts:
+            - mountPath: /usr/share/nginx/html/assets
+              name: fsxnvolume
             - mountPath: /tmp
               name: tmp-volume
+      initContainers:
+        - command:
+            - /bin/sh
+            - -c
+            - cp -R /usr/share/nginx/html/assets/* /fsxnvolume
+          image: public.ecr.aws/aws-containers/retail-store-sample-assets:0.4.0
+          name: copy
+          volumeMounts:
+            - mountPath: /fsxnvolume
+              name: fsxnvolume
       securityContext: {}
       serviceAccountName: assets
       volumes:
+        - name: fsxnvolume
+          persistentVolumeClaim:
+            claimName: fsxn-nfs-claim
         - emptyDir:
             medium: Memory
           name: tmp-volume
```
We can apply the changes by running the following command:
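A sketch of the apply step; the kustomize module path below is an assumption for illustration, so substitute the path used in your workshop environment:

```shell
# Path is illustrative -- point -k at this module's kustomization directory
kubectl apply -k ~/environment/eks-workshop/modules/fundamentals/storage/fsxn/deployment
```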
```text
namespace/assets unchanged
serviceaccount/assets unchanged
configmap/assets unchanged
service/assets unchanged
persistentvolumeclaim/fsxn-nfs-claim created
deployment.apps/assets configured
```
Now look at the `volumeMounts` in the deployment and notice that our new volume named `fsxnvolume` is mounted at `/usr/share/nginx/html/assets`:
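One way to view the mounts, assuming `yq` is available as in the workshop environment:

```shell
# Extract just the container volumeMounts from the live deployment spec
kubectl get deployment -n assets assets -o yaml \
  | yq '.spec.template.spec.containers[].volumeMounts'
```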
```yaml
- mountPath: /usr/share/nginx/html/assets
  name: fsxnvolume
- mountPath: /tmp
  name: tmp-volume
```
A PersistentVolume (PV) was automatically provisioned for the PersistentVolumeClaim (PVC) we created in the previous step:
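You can list the dynamically provisioned volumes with:

```shell
# PVs are cluster-scoped, so no namespace flag is needed
kubectl get pv
```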
```text
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   REASON   AGE
pvc-ceec6f39-8034-4b33-a4bc-c1b1370befd1   5Gi        RWX            Delete           Bound    assets/fsxn-nfs-claim    fsxn-sc-nfs             173m
```
Also describe the PersistentVolumeClaim (PVC) created:
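The claim can be inspected with:

```shell
# Describe the PVC in the assets namespace
kubectl describe pvc -n assets fsxn-nfs-claim
```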
```text
Name:          fsxn-nfs-claim
Namespace:     assets
StorageClass:  fsxn-sc-nfs
Status:        Bound
Volume:        pvc-ceec6f39-8034-4b33-a4bc-c1b1370befd1
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: csi.trident.netapp.io
               volume.kubernetes.io/storage-provisioner: csi.trident.netapp.io
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      5Gi
Access Modes:  RWX
VolumeMode:    Filesystem
Used By:       assets-555dc4c9c9-g8hfs
               assets-555dc4c9c9-m6r2l
Events:        <none>
```
Now create a new file `newproduct.png` under the assets directory in the first Pod:
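A sketch of the step; selecting the first Pod by jsonpath index is an assumption for illustration, and any of the assets Pods works:

```shell
# Pick one of the assets Pods and create a file on the shared volume
POD_1=$(kubectl -n assets get pods -o jsonpath='{.items[0].metadata.name}')
kubectl -n assets exec $POD_1 -- touch /usr/share/nginx/html/assets/newproduct.png
```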
And verify that the file now also exists in the second Pod:
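Again a sketch, with the second Pod selected by index for illustration:

```shell
# List the assets directory from the other Pod to confirm the file is visible
POD_2=$(kubectl -n assets get pods -o jsonpath='{.items[1].metadata.name}')
kubectl -n assets exec $POD_2 -- ls /usr/share/nginx/html/assets
```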
```text
chrono_classic.jpg
gentleman.jpg
newproduct.png <-----------
pocket_watch.jpg
smart_1.jpg
smart_2.jpg
test.txt
wood_watch.jpg
```
As you can see, even though we created the file through the first Pod, the second Pod has access to it as well because both Pods mount the same shared FSxN file system.