Set up the Node Pool

Karpenter configuration comes in the form of a NodePool CRD (Custom Resource Definition). A single Karpenter NodePool is capable of handling many different Pod shapes. Karpenter makes scheduling and provisioning decisions based on Pod attributes such as labels and affinity. A cluster may have more than one NodePool, but for the moment we'll declare a default one.

One of the main objectives of Karpenter is to simplify the management of capacity. If you're familiar with other auto scaling solutions, you may have noticed that Karpenter takes a different approach, referred to as group-less auto scaling. Other solutions have traditionally used the concept of a node group as the element of control that defines the characteristics of the capacity provided (e.g. On-Demand, EC2 Spot, GPU nodes) and that controls the desired scale of the group in the cluster. In AWS, node groups are implemented with EC2 Auto Scaling groups. Karpenter's group-less model avoids the complexity that arises from managing multiple node groups for applications with different compute needs.

We'll start by applying the following two custom resources, a NodePool and an EC2NodeClass. Together these give Karpenter everything it needs to start handling basic scaling requirements.

~/environment/eks-workshop/modules/autoscaling/compute/karpenter/nodepool/nodepool.yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  template:
    metadata:
      labels:
        type: karpenter
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
        - key: "node.kubernetes.io/instance-type"
          operator: In
          values: ["c5.large", "m5.large", "r5.large", "m5.xlarge"]
      nodeClassRef:
        name: default
  limits:
    cpu: "1000"
    memory: 1000Gi
  disruption:
    consolidationPolicy: WhenUnderutilized
    expireAfter: 720h # 30 * 24h = 720h
---
apiVersion: karpenter.k8s.aws/v1beta1
kind: EC2NodeClass
metadata:
  name: default
spec:
  amiFamily: AL2 # Amazon Linux 2
  role: "${KARP_ROLE}"
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: ${EKS_CLUSTER_NAME}
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: ${EKS_CLUSTER_NAME}
  tags:
    app.kubernetes.io/created-by: eks-workshop

info

We're asking the NodePool to start all new nodes with the label type: karpenter, which will allow us to specifically target Karpenter nodes with Pods for demonstration purposes.
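
For example, a workload can opt in to Karpenter-managed capacity with a nodeSelector on this label. The Deployment below is a minimal sketch for illustration only; the name pause-demo is hypothetical and not part of the workshop manifests:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pause-demo # hypothetical name, for illustration only
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pause-demo
  template:
    metadata:
      labels:
        app: pause-demo
    spec:
      nodeSelector:
        type: karpenter # schedule only on nodes provisioned by our NodePool
      containers:
        - name: pause
          image: public.ecr.aws/eks-distro/kubernetes/pause:3.2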

The configuration for Karpenter is split into two parts. The first defines the general NodePool specification. The second is defined by the provider implementation for AWS, in our case the EC2NodeClass, and provides the configuration specific to AWS. This particular NodePool configuration is quite simple, but we'll customize it further during the workshop. For the moment, let's focus on a few of the settings used.

  • Requirements Section: The NodePool CRD supports defining node properties like instance type and zone. In this example, we're setting the karpenter.sh/capacity-type requirement to initially limit Karpenter to provisioning On-Demand instances, and node.kubernetes.io/instance-type to limit provisioning to a subset of appropriate instance types. Other available properties are described in the Karpenter documentation. We'll work with a few more during the workshop; a sketch of an extended requirements block follows this list.
  • Limits section: A NodePool can define a limit on the total amount of CPU and memory it manages. Once this limit is reached, Karpenter will not provision additional capacity associated with that particular NodePool, providing a cap on the total compute.
  • Tags: EC2NodeClass can also define a set of tags that the EC2 instances will have upon creation. This helps to enable accounting and governance at the EC2 level.
  • Selectors: The EC2NodeClass resource uses securityGroupSelectorTerms and subnetSelectorTerms to discover resources used to launch nodes. These tags were automatically set on the associated AWS infrastructure provided for the workshop.
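
As an illustration of how the requirements section can be extended, the snippet below adds a zone constraint and allows Spot capacity. This is a sketch for later experimentation, not part of the manifest we apply in this step; the zone values are assumptions and should match your cluster's region:

requirements:
  - key: karpenter.sh/capacity-type
    operator: In
    values: ["on-demand", "spot"] # allow Spot in addition to On-Demand
  - key: topology.kubernetes.io/zone
    operator: In
    values: ["us-west-2a", "us-west-2b"] # assumed zones, adjust for your region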

Apply the NodePool and EC2NodeClass with the following command:

~$kubectl kustomize ~/environment/eks-workshop/modules/autoscaling/compute/karpenter/nodepool \
| envsubst | kubectl apply -f-
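
One way to confirm that both resources were created successfully is to list them:

~$kubectl get nodepool,ec2nodeclass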

Throughout the workshop you can inspect the Karpenter logs with the following command to understand its behavior:

~$kubectl logs -l app.kubernetes.io/instance=karpenter -n karpenter | jq '.'
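
If you want to stream the logs continuously while experimenting, the same label selector works with the -f flag, for example:

~$kubectl logs -f --tail=20 -l app.kubernetes.io/instance=karpenter -n karpenter | jq '.'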