KEP-5620: Node resizing via balloons. #5624
base: master
Conversation
bwsalmon commented Oct 6, 2025
- One-line PR description: Node resizing via balloons
- Issue link: Node resizing via balloons #5620
- Other comments:
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: bwsalmon. The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
Hi @bwsalmon. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
#### Multiple Kubelets per node for testing
A user testing a Kubernetes feature would like to run many Kubelets on the same host to reduce the resources needed to test scenarios with large numbers of Kubelets. By enabling balloons and sizing each balloon so that each Kubelet consumes only one Nth of the host, the user can place N Kubelets on the same host without overloading the host itself.
This use case seems to be better served by #5319.
#### Autoscaling-driven VM resizing
A cloud provider would like to offer dynamically resizable Kubernetes nodes. The cloud provider creates a way to manage the resources provided to a particular Kubelet. By enabling balloon pods and linking the management of balloon pod sizes to the underlying resources available to the Kubelet host, the cloud provider can present upsized and downsized nodes to the Kubernetes system without involving Kubernetes in the specifics of the cloud resizing mechanism.
Are there real-world examples of this need? Perhaps there are prospective users willing to support this use case?
Yes, and it is worth being specific: Google will use this as soon as it is released.
Cool, do we have Google representatives who can +1? (@SergeyKanzhelev / @dchen1107)
[EDIT] and help clarify the use case and the interaction with node resize.
Never mind, I've only now seen the ack from Dawn. We're good; just please mention the prospective users and we're all good here.
/ok-to-test
please make sure to fill
@bwsalmon: The following test failed, say `/retest` to rerun all failed tests:

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
## Design Details
- Does this mean that the Balloon Pods are DaemonSets that specify the new PriorityClass introduced in this KEP? In other words, this doesn’t involve adding a new core resource like BalloonPod, right?
- Since they are expected to run in the kube-system namespace, is that achieved by restricting the new PriorityClass so that it can only be used within the kube-system namespace?
- Since they’re treated like regular pods, that means Balloon Pods are actually launched as containers (e.g. containers that just keep sleeping indefinitely), right?
In general, balloon pods will look just like any other pod. They will be scheduled by a DaemonSet controller and will have requests and limits like any other pod. They will run in the kube-system namespace as system components. The two distinctions are as follows (see the sketch after this list):
- Balloon pods will carry a well-known label so that monitoring tools know that the space consumed by the balloon pod "doesn't exist", rather than treating it as space consumed by a workload. This may require updates to statistics collection in the Kubernetes control plane.
- Balloon pods will run at a special priority level (system-balloon), which mostly acts like system-node-critical except on upsize: instead of preempting other pods, the balloon pod's upsize will fail. This distinction is critical. A balloon pod should never be preempted, since the underlying capacity "doesn't exist". But balloon pods are upsized to reclaim unused space, so if another pod starts using that space before it is reclaimed, we don't want to preempt that pod and impact a running workload; we'd rather fail the reclaim, since we were clearly wrong about the space not being needed.
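To make the shape of this concrete, here is a minimal sketch of what the PriorityClass and a balloon DaemonSet could look like. This is illustrative only: the priority value, the `node.kubernetes.io/balloon` label, and the pause image are assumptions, not names fixed by this KEP, and `preemptionPolicy: Never` is merely the closest existing knob; the fail-on-upsize semantics described above would be new behavior.

```yaml
# Illustrative sketch only. Priority value, label name, and image are
# placeholders; the KEP does not fix these names.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: system-balloon
value: 2000000500            # illustrative: near system-node-critical (2000000000)
preemptionPolicy: Never      # closest existing knob; fail-on-upsize would be new behavior
description: "Backs capacity that does not really exist; never preempt for it."
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: balloon
  namespace: kube-system     # runs as a system component
spec:
  selector:
    matchLabels:
      k8s-app: balloon
  template:
    metadata:
      labels:
        k8s-app: balloon
        node.kubernetes.io/balloon: "true"  # hypothetical well-known label for monitoring
    spec:
      priorityClassName: system-balloon
      containers:
      - name: balloon
        image: registry.k8s.io/pause:3.9    # placeholder: a container that just sleeps
        resources:
          requests:          # the "nonexistent" capacity being held back
            cpu: "2"
            memory: 4Gi
          limits:
            cpu: "2"
            memory: 4Gi
```

Under this sketch, resizing the node reduces to resizing these requests in place: shrinking the balloon exposes capacity to the scheduler, and growing it reclaims capacity.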
As I understand it, even if resource requests or limits are increased through In-Place Pod Resize, the scheduler doesn’t preempt other Pods to make room for it. Am I missing something?
Scheduler preemption/eviction to make room for pending resizes is not in scope.
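For context on this exchange: that is also how a balloon upsize would likely be driven in practice. Assuming a cluster recent enough to support in-place pod resize via the `resize` subresource (beta in Kubernetes 1.33), a hypothetical reclaim could be issued as a patch like the one below, e.g. with `kubectl patch pod <balloon-pod> -n kube-system --subresource resize --patch-file upsize.yaml`. If the node has no free capacity, the resize stays pending rather than preempting anything, which is exactly the fail-the-reclaim behavior described in the design above.

```yaml
# upsize.yaml: hypothetical patch body growing a balloon pod in place.
# Values are illustrative. With no free node capacity, the kubelet leaves
# the resize pending (PodResizePending condition) instead of preempting.
spec:
  containers:
  - name: balloon
    resources:
      requests:
        cpu: "4"        # was "2": attempt to reclaim two CPUs of unused space
        memory: 8Gi     # was 4Gi
      limits:
        cpu: "4"
        memory: 8Gi
```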