Update Karpenter to v1.6.2 #17567
# Karpenter

[Karpenter](https://karpenter.sh) is an open-source node lifecycle management project built for Kubernetes.
Adding Karpenter to a Kubernetes cluster can dramatically improve the efficiency and cost of running workloads on that cluster.

On AWS, kOps supports managing an InstanceGroup with either Karpenter or an AWS Auto Scaling Group (ASG).
## Prerequisites

Managed Karpenter requires kOps 1.34+ and that [IAM Roles for Service Accounts (IRSA)](/cluster_spec#service-account-issuer-discovery-and-aws-iam-roles-for-service-accounts-irsa) be enabled for the cluster.

If an older version of Karpenter was installed, it must be uninstalled before installing the new version.
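As a reference, IRSA is configured through the cluster spec. A minimal sketch of the relevant fields (the discovery store URL is a placeholder; see the linked documentation for the authoritative description):

```yaml
spec:
  # Publish ServiceAccount issuer documents to a public store (placeholder URL)
  serviceAccountIssuerDiscovery:
    discoveryStore: s3://my-discovery-store
    enableAWSOIDCProvider: true
  # Give ServiceAccounts dedicated IAM roles instead of the node's instance role
  iam:
    useServiceAccountExternalPermissions: true
```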
## Installing

### New clusters

```sh
export KOPS_STATE_STORE="s3://my-state-store"
export KOPS_DISCOVERY_STORE="s3://my-discovery-store"
export NAME="my-cluster.example.com"
export ZONES="eu-central-1a"

kops create cluster --name ${NAME} \
    --cloud=aws \
    --instance-manager=karpenter \
    --discovery-store=${KOPS_DISCOVERY_STORE} \
    --zones=${ZONES} \
    --yes

kops validate cluster --name ${NAME} --wait=10m

kops export kubeconfig --name ${NAME} --admin
```
### Existing clusters

The Karpenter addon must be enabled in the cluster spec:

```yaml
spec:
  karpenter:
    enabled: true
```

To create a Karpenter InstanceGroup, set the following in its InstanceGroup spec:

```yaml
spec:
  manager: Karpenter
```
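After editing the cluster and InstanceGroup specs, the changes are applied with the usual kOps flow, sketched below (`${NAME}` as defined earlier; a rolling update may or may not be required depending on what changed):

```sh
kops update cluster --name ${NAME} --yes
kops rolling-update cluster --name ${NAME} --yes
```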
### EC2NodeClass and NodePool

```sh
USER_DATA=$(aws s3 cp ${KOPS_STATE_STORE}/${NAME}/igconfig/node/nodes/nodeupscript.sh -)
USER_DATA=${USER_DATA//$'\n'/$'\n    '}

kubectl apply -f - <<YAML
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: default
spec:
  amiFamily: Custom
  amiSelectorTerms:
    - ssmParameter: /aws/service/canonical/ubuntu/server/24.04/stable/current/amd64/hvm/ebs-gp3/ami-id
    - ssmParameter: /aws/service/canonical/ubuntu/server/24.04/stable/current/arm64/hvm/ebs-gp3/ami-id
  associatePublicIPAddress: true
  tags:
    KubernetesCluster: ${NAME}
    kops.k8s.io/instancegroup: nodes
    k8s.io/role/node: "1"
  subnetSelectorTerms:
    - tags:
        KubernetesCluster: ${NAME}
  securityGroupSelectorTerms:
    - tags:
        KubernetesCluster: ${NAME}
        Name: nodes.${NAME}
  instanceProfile: nodes.${NAME}
  userData: |
    ${USER_DATA}
YAML

kubectl apply -f - <<YAML
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64", "arm64"]
        - key: kubernetes.io/os
          operator: In
          values: ["linux"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand", "spot"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
YAML
```
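The newline substitution applied to `USER_DATA` above replaces every newline in the bootstrap script with a newline plus indentation spaces, so the multi-line script stays correctly nested under the `userData: |` block scalar in the heredoc. A minimal self-contained demonstration of the technique (the two-line sample script is made up; four spaces are used here to match the `userData` indentation):

```shell
#!/usr/bin/env bash
# Stand-in for the real nodeup bootstrap script downloaded from the state store
DATA=$'#!/bin/bash\necho hello'

# Re-indent every continuation line by four spaces, as done for USER_DATA above
INDENTED=${DATA//$'\n'/$'\n    '}

printf '%s\n' "$INDENTED"
# prints:
# #!/bin/bash
#     echo hello
```

Without this step, the second and later lines of the script would start at column zero and break the YAML document.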
## Karpenter-managed InstanceGroups

A Karpenter-managed InstanceGroup controls the bootstrap script. kOps will ensure the correct AWS security groups, subnets, and permissions.
`EC2NodeClass` and `NodePool` objects must be created by the cluster operator.
## Known limitations

* **Upgrade is not supported** from the previous version of managed Karpenter.
* Control plane nodes must be provisioned with an ASG.
* All `EC2NodeClass` objects must have `spec.amiFamily` set to `Custom`.
* `spec.instanceStorePolicy` configuration is not supported in `EC2NodeClass`.
* `spec.kubelet`, `spec.taints` and `spec.labels` configuration are not supported in `EC2NodeClass`, but they can be configured in the `Cluster` or `InstanceGroup` spec.
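For settings that cannot go in `EC2NodeClass`, the equivalent fields in the kOps InstanceGroup spec can be used instead. A sketch with illustrative values (the label, taint, and `maxPods` values are made up):

```yaml
spec:
  manager: Karpenter
  # Kubelet settings that would otherwise live in EC2NodeClass spec.kubelet
  kubelet:
    maxPods: 110
  # Node labels applied by kOps instead of EC2NodeClass spec.labels
  nodeLabels:
    workload: batch
  # Taints applied by kOps instead of EC2NodeClass spec.taints
  taints:
    - dedicated=batch:NoSchedule
```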