In this section we will launch a test deployment and see how Ocean handles different node configurations via the “Launch Specifications” feature.
The challenge of running multiple workload types (separate applications, dev/test environments, node groups requiring a GPU AMI, etc.) on the same Kubernetes cluster is applying a unique configuration to each workload in a heterogeneous environment. When your worker nodes are managed in a standard EKS cluster, each workload type is usually managed separately in its own Auto Scaling group.
With Ocean, you can define custom “launch specifications” which allow you to configure multiple workload types on the same Ocean cluster. As part of those launch specs, you can configure different sets of labels and taints to go along with a custom AMI, User Data script, Instance Profile, Security Group, root volume size, and tags, which will be used for the nodes that serve your matching pods. This makes it possible to run any type of workload on the same Ocean cluster.
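To make the label/taint matching concrete, here is a minimal pod spec sketch showing how a pod could be steered onto nodes from a particular launch specification. The `env: dev` label and the `dedicated` taint are illustrative assumptions, not values taken from this workshop:

```yaml
# Hypothetical example: the nodeSelector targets nodes carrying a label
# configured in a custom launch specification, and the toleration allows
# the pod onto nodes tainted by that same spec.
apiVersion: v1
kind: Pod
metadata:
  name: dev-pod
spec:
  nodeSelector:
    env: dev            # label defined on the launch specification (example)
  tolerations:
  - key: dedicated      # taint defined on the launch specification (example)
    operator: Equal
    value: dev
    effect: NoSchedule
  containers:
  - name: app
    image: nginx
```

Pods without the matching nodeSelector (or without the toleration, when a taint is configured) will simply not be scheduled onto nodes from that launch specification.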
Let’s see how this works:
Navigate to your Ocean Cluster within the Spot.io Console, then click on the Actions menu on the top right and select “Launch Specifications”.
Here you can see the “Default Launch Specification” which represents the initial configuration that the Ocean cluster was created with. To add a new configuration, click the “Add Launch Specification” button on the top right.
Configure the new Launch Specification as follows: name it “Dev Environment”, set the node label Key to env and the Value to dev, and click “Add”.
Add another Launch Specification by clicking the “Add Launch Specification” button again, and configure it as follows: name it “Test Environment”, set the node label Key to env and the Value to test, and click “Add”.
Once you’re finished (make sure you now have 3 Launch Specifications, including the default), click “Update” at the bottom right of the page.
Now we will run a deployment that will show us how Ocean scales up and automatically launches nodes from the right Launch Specification.
Below is an example YAML manifest with 3 test deployments.
The first test deployment, named `od`, uses a selector for the `env: dev` label and will require On-Demand instances via the `spotinst.io/node-lifecycle: od` label. You can read more about using built-in labels here. The second deployment, named `dev`, will also require the `env: dev` label, while the third one, named `test`, should run on instances labeled `env: test`.

```
cat <<EoF > test_deployments.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: od
spec:
  selector:
    matchLabels:
      env: dev
  replicas: 2
  template:
    metadata:
      labels:
        env: dev
    spec:
      containers:
      - name: nginx-od
        image: nginx
        resources:
          requests:
            memory: "700Mi"
            cpu: "256m"
      nodeSelector:
        spotinst.io/node-lifecycle: od
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dev
spec:
  selector:
    matchLabels:
      env: dev
  replicas: 3
  template:
    metadata:
      labels:
        env: dev
    spec:
      containers:
      - name: nginx-dev
        image: nginx
        resources:
          requests:
            memory: "800Mi"
            cpu: "800m"
          limits:
            memory: "1700Mi"
            cpu: "1700m"
      nodeSelector:
        env: dev
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  selector:
    matchLabels:
      env: test
  replicas: 3
  template:
    metadata:
      labels:
        env: test
    spec:
      containers:
      - name: nginx-test
        image: nginx
        resources:
          requests:
            memory: "1700Mi"
            cpu: "500m"
          limits:
            memory: "1700Mi"
            cpu: "1700m"
      nodeSelector:
        env: test
EoF
```
Let’s apply these Deployments and watch Ocean’s Autoscaler in action:
kubectl apply -f test_deployments.yaml
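While Ocean scales up, you can watch the pods transition from Pending to Running with a standard kubectl watch (press Ctrl-C to stop):

```
kubectl get pods -o wide --watch
```

The `od` and some `dev`/`test` pods will stay Pending until Ocean launches nodes that satisfy their node selectors.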
At this point Ocean will scale up to meet the demands of the deployments. You will notice that autoscaling happens fast, and instance sizes will be optimized for efficient bin packing of resources. We expect to see at least 3 instances:

- Instances from the “Dev Environment” launch specification, both On-Demand (for the od deployment) and Spot (for the dev deployment).
- An instance from the “Test Environment” launch specification (for the test deployment).
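You can also confirm which launch specification each node came from by filtering on the labels used above. The `spotinst.io/node-lifecycle=spot` value is assumed here as the Spot counterpart of the `od` value used in the manifest:

```
# Nodes serving the dev environment, split by lifecycle:
kubectl get nodes -l env=dev,spotinst.io/node-lifecycle=od
kubectl get nodes -l env=dev,spotinst.io/node-lifecycle=spot

# Nodes serving the test environment:
kubectl get nodes -l env=test
```

An empty result for one of these selectors usually just means Ocean has not finished registering that node with the cluster yet.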
You can display your nodes with:
kubectl get nodes
The output should show the newly launched nodes alongside your original worker nodes.
In addition, the scale up activity should be logged in the Ocean Cluster’s log tab:
Clicking on “view details” will open up a window with additional information about the scaling activity:
In the next slides, we will preview some additional features and benefits of Ocean for EKS.