In this section we will launch a test deployment and see how Ocean handles different node configurations via the “Virtual Node Groups” feature.
The challenge of running multiple workload types (separate applications, dev/test environments, workloads requiring a GPU AMI, etc.) on the same Kubernetes cluster is applying a unique configuration to each workload in a heterogeneous environment. When your worker nodes are managed in a standard EKS cluster, each workload type is usually managed separately in its own Auto Scaling group.
With Ocean, you can define custom “Virtual Node Groups” (VNGs), which allow you to run multiple workload types on the same Ocean Cluster. In each VNG, you can configure a distinct set of labels and taints, along with a custom AMI, User Data script, Instance Profile, Security Group, Root Volume size, and tags, all of which will be used for the nodes that serve the matching pods. This feature makes it possible to run any type of workload on the same Ocean Cluster.
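As a sketch of how pods end up on the right VNG: a pod selects on the VNG’s custom labels and tolerates its taints, and Ocean launches nodes from the VNG whose configuration matches. The label and taint values below are illustrative assumptions, not values from your cluster:

```yaml
# Hypothetical pod spec targeting a VNG configured with the
# node label "env: dev" and the taint "dedicated=dev:NoSchedule".
apiVersion: v1
kind: Pod
metadata:
  name: dev-pod
spec:
  containers:
  - name: app
    image: nginx
  nodeSelector:
    env: dev            # matches the VNG's custom node label
  tolerations:
  - key: "dedicated"    # tolerates the VNG's custom taint (if one is set)
    operator: "Equal"
    value: "dev"
    effect: "NoSchedule"
```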
Let’s see how this works:
Navigate to your Ocean Cluster within the Spot.io Console, then click on the “Virtual Node Groups” tab on the menu bar below your cluster’s name: “Overview | Cost Analysis … | Virtual Node Groups …”
Here you can see the “Default Virtual Node Group” which represents the initial configuration that the Ocean cluster was created with. To add a new configuration, click the “Create VNG” button directly above the Virtual Node Group table.
Select “Configure Manually”
Configure the new Virtual Node Group as follows: add a node label with the Key set to env and the Value set to dev, then click “Add”.
Add another Virtual Node Group by clicking the “Create Virtual Node Group” button again, and configure it as follows: add a node label with the Key set to env and the Value set to test, then click “Add”.
Now we will run a deployment that will show us how Ocean scales up and automatically launches nodes from the right Virtual Node Group.
Below is an example YAML with 3 test deployments.
The first deployment, named od, uses a selector for the env: dev label and requires On-Demand instances via the spotinst.io/node-lifecycle: od label. You can read more about using built-in labels here. The second deployment, named dev, also requires the env: dev label, while the third, named test, should run on instances labeled env: test.
cat <<EoF > test_deployments.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: od
spec:
  selector:
    matchLabels:
      env: dev
  replicas: 2
  template:
    metadata:
      labels:
        env: dev
    spec:
      containers:
      - name: nginx-od
        image: nginx
        resources:
          requests:
            memory: "700Mi"
            cpu: "256m"
      nodeSelector:
        spotinst.io/node-lifecycle: od
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dev
spec:
  selector:
    matchLabels:
      env: dev
  replicas: 3
  template:
    metadata:
      labels:
        env: dev
    spec:
      containers:
      - name: nginx-dev
        image: nginx
        resources:
          requests:
            memory: "800Mi"
            cpu: "800m"
          limits:
            memory: "1700Mi"
            cpu: "1700m"
      nodeSelector:
        env: dev
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  selector:
    matchLabels:
      env: test
  replicas: 3
  template:
    metadata:
      labels:
        env: test
    spec:
      containers:
      - name: nginx-dev
        image: nginx
        resources:
          requests:
            memory: "1700Mi"
            cpu: "500m"
          limits:
            memory: "1700Mi"
            cpu: "1700m"
      nodeSelector:
        env: test
EoF
Let’s apply these Deployments and watch Ocean’s Autoscaler in action:
kubectl apply -f test_deployments.yaml
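To follow the scale-up as it happens, you can watch the pods move from Pending to Running once Ocean registers the new nodes. These are standard kubectl commands (they require an active cluster context) and are not part of the original steps:

```shell
# Watch the test deployments' pods get scheduled as Ocean adds nodes
kubectl get pods -o wide --watch

# In another terminal, follow recent cluster events (scheduling, node registration)
kubectl get events --sort-by=.lastTimestamp
```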
At this point, Ocean will scale up to meet the demands of the deployments. You will notice that autoscaling happens fast, and instance sizes will be optimized for efficient bin packing of resources. We expect to see at least 3 instances: an On-Demand instance and a Spot instance from the Dev Environment VNG, and an instance from the Test Environment VNG.
You can display your nodes with:
kubectl get nodes
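To confirm which VNG each node belongs to and whether it is On-Demand or Spot, you can display the relevant labels as extra columns (the -L flag of kubectl get adds a column per label key; the keys below follow the labels used in this section):

```shell
# Show each node's env label and its lifecycle (od or spot)
kubectl get nodes -L env,spotinst.io/node-lifecycle
```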
The output should list the newly launched nodes serving each of the three deployments.
In addition, the scale up activity should be logged in the Ocean Cluster’s log tab:
Clicking on “view details” will open up a window with additional information about the scaling activity:
In the next slides, we will preview some additional features and benefits of Ocean for EKS.