From 7fe2b7bdc25d7044384adb0e62234b88c5bda676 Mon Sep 17 00:00:00 2001 From: Martyna Grotek Date: Mon, 6 Oct 2025 13:01:38 +0000 Subject: [PATCH 1/3] KEP-5616: Cluster Autoscaler Pod Condition --- keps/prod-readiness/sig-autoscaling/5616.yaml | 3 + .../5616-ca-pod-condition/README.md | 845 ++++++++++++++++++ .../5616-ca-pod-condition/kep.yaml | 45 + 3 files changed, 893 insertions(+) create mode 100644 keps/prod-readiness/sig-autoscaling/5616.yaml create mode 100644 keps/sig-autoscaling/5616-ca-pod-condition/README.md create mode 100644 keps/sig-autoscaling/5616-ca-pod-condition/kep.yaml diff --git a/keps/prod-readiness/sig-autoscaling/5616.yaml b/keps/prod-readiness/sig-autoscaling/5616.yaml new file mode 100644 index 00000000000..347faea8f5b --- /dev/null +++ b/keps/prod-readiness/sig-autoscaling/5616.yaml @@ -0,0 +1,3 @@ +kep-number: 5616 +alpha: + approver: TBD \ No newline at end of file diff --git a/keps/sig-autoscaling/5616-ca-pod-condition/README.md b/keps/sig-autoscaling/5616-ca-pod-condition/README.md new file mode 100644 index 00000000000..c87ac4b4a55 --- /dev/null +++ b/keps/sig-autoscaling/5616-ca-pod-condition/README.md @@ -0,0 +1,845 @@ + +# KEP-5616: Cluster Autoscaler Pod Condition + + + + + + +- [Release Signoff Checklist](#release-signoff-checklist) +- [Summary](#summary) +- [Motivation](#motivation) + - [Goals](#goals) + - [Non-Goals](#non-goals) +- [Proposal](#proposal) + - [User Stories (Optional)](#user-stories-optional) + - [Story 1](#story-1) + - [Story 2](#story-2) + - [Notes/Constraints/Caveats (Optional)](#notesconstraintscaveats-optional) + - [Risks and Mitigations](#risks-and-mitigations) +- [Design Details](#design-details) + - [Test Plan](#test-plan) + - [Prerequisite testing updates](#prerequisite-testing-updates) + - [Unit tests](#unit-tests) + - [Integration tests](#integration-tests) + - [e2e tests](#e2e-tests) + - [Graduation Criteria](#graduation-criteria) + - [Upgrade / Downgrade Strategy](#upgrade--downgrade-strategy) + - [Version Skew Strategy](#version-skew-strategy) +- [Production Readiness Review Questionnaire](#production-readiness-review-questionnaire) + - [Feature Enablement and Rollback](#feature-enablement-and-rollback) + - [Rollout, Upgrade and Rollback Planning](#rollout-upgrade-and-rollback-planning) + - [Monitoring Requirements](#monitoring-requirements) + - [Dependencies](#dependencies) + - [Scalability](#scalability) + - [Troubleshooting](#troubleshooting) +- [Implementation History](#implementation-history) +- [Drawbacks](#drawbacks) +- [Alternatives](#alternatives) +- [Infrastructure Needed (Optional)](#infrastructure-needed-optional) + + +## Release Signoff Checklist + + + +Items marked with (R) are required *prior to targeting to a milestone / release*. 
- [ ] (R) Enhancement issue in release milestone, which links to KEP dir in [kubernetes/enhancements] (not the initial KEP PR)
- [ ] (R) KEP approvers have approved the KEP status as `implementable`
- [ ] (R) Design details are appropriately documented
- [ ] (R) Test plan is in place, giving consideration to SIG Architecture and SIG Testing input (including test refactors)
  - [ ] e2e Tests for all Beta API Operations (endpoints)
  - [ ] (R) Ensure GA e2e tests meet requirements for [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)
  - [ ] (R) Minimum Two Week Window for GA e2e tests to prove flake free
- [ ] (R) Graduation criteria is in place
  - [ ] (R) [all GA Endpoints](https://github.com/kubernetes/community/pull/1806) must be hit by [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md) within one minor version of promotion to GA
- [ ] (R) Production readiness review completed
- [ ] (R) Production readiness review approved
- [ ] "Implementation History" section is up-to-date for milestone
- [ ] User-facing documentation has been created in [kubernetes/website], for publication to [kubernetes.io]
- [ ] Supporting documentation—e.g., additional design documents, links to mailing list discussions/SIG meetings, relevant PRs/issues, release notes

[kubernetes.io]: https://kubernetes.io/
[kubernetes/enhancements]: https://git.k8s.io/enhancements
[kubernetes/kubernetes]: https://git.k8s.io/kubernetes
[kubernetes/website]: https://git.k8s.io/website

## Summary

Cluster Autoscaler does not provide reliable observability for clients.

k8s events are [best effort and it is discouraged to use them for any automations](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/event-style-guide.md). Additionally, the [cache is disabled](https://github.com/kubernetes/kubernetes/issues/131897) for them.

This proposal introduces a new pod condition owned by Cluster Autoscaler, which will provide information about scale-up (whether it is in progress or was not attempted).


## Motivation

### Goals

 * Provide information regarding scale-up for a particular pod, which can be consumed by automations and other components (e.g. the scheduler: https://github.com/kubernetes/enhancements/issues/3990).
 * Improve observability and debuggability for human operators.
 * Introduce a reliable replacement for the TriggeredScaleUp and NotTriggerScaleUp k8s events.

### Non-Goals

 * Changing how Cluster Autoscaler handles unschedulable pods.


## Proposal

Introduce a new pod condition type `NodeProvisioningInProgress`.
- `NodeProvisioningInProgress: True` (with an empty reason) corresponding to today's `TriggeredScaleUp` event, meaning CA decided to scale up the cluster to make room for this pod.
- `NodeProvisioningInProgress: False` corresponding to today's `NotTriggerScaleUp` event, meaning CA couldn't find a node group that can be scaled up to make this pod schedulable.

In case of failure, we would change the reason to `Error`. After CA runs out of scale-up options (all attempts failed), we would change the condition to `NodeProvisioningInProgress: False`.


### User Stories (Optional)

#### Story 1

As a user, I want to have an easy and reliable way to investigate why my pods are stuck in the Pending phase.
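
With the proposed condition in place, this investigation becomes a single lookup, e.g. `kubectl describe pod <name>` or `kubectl get pod <name> -o jsonpath='{.status.conditions[?(@.type=="NodeProvisioningInProgress")]}'`. As a sketch (the reason value comes from the table in Design Details below; the message wording is illustrative and not part of the proposal), a pod for which no scale-up was possible might carry:

```yaml
status:
  phase: Pending
  conditions:
  - type: NodeProvisioningInProgress
    status: "False"
    reason: NoOptionsAvailable
    message: "no node group can be scaled up to make the pod schedulable" # illustrative only
```
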
#### Story 2

### Notes/Constraints/Caveats (Optional)

### Risks and Mitigations

## Design Details

We would set the condition from the same place that emits the corresponding k8s events today, the `EventingScaleUpStatusProcessor`.

ScaleUpResult | Pod condition type & status | Pod condition reason
:---------------------------- | :-------------------------------- | :-------------------
ScaleUpSuccessful | NodeProvisioningInProgress: True |
ScaleUpError | NodeProvisioningInProgress: True | ...Error
ScaleUpNoOptionsAvailable | NodeProvisioningInProgress: False | NoOptionsAvailable
ScaleUpNotTried | NodeProvisioningInProgress: False | NotTried
ScaleUpInCooldown | NodeProvisioningInProgress: False | InCooldown
ScaleUpLimitedByMaxNodesTotal | NodeProvisioningInProgress: False | LimitedByMaxNodesTotal

We distinguish the following ScaleUpErrors (`errors.AutoscalerError`):
- CloudProviderError, which is an error related to the underlying infrastructure.
- ApiCallError, which is an error related to communication with the k8s API server.
- InternalError, which is an error inside Cluster Autoscaler.
- TransientError, which is an error that causes us to skip a single loop, but does not require any additional action.
- ConfigurationError, which is an error related to bad configuration provided by a user.
- NodeGroupDoesNotExistError, which signifies that a NodeGroup does not exist.

We would have corresponding reasons in the new pod condition.

In the future, we could consider using `SkippedReasons` to fill ScaleUpNoOptionsAvailable with more details, but the messages would need to be aggregated somehow because they are reported per node group.

Some examples of `SkippedReasons`:
- BackoffReason - "in backoff after failed scale-up"
- MaxLimitReachedReason - "max node group size reached"
- NotReadyReason - "not ready for scale-up"


### Test Plan

[ ] I/we understand the owners of the involved components may require updates to
existing tests to make this code solid enough prior to committing the changes necessary
to implement this enhancement.

##### Prerequisite testing updates

##### Unit tests

- ``: `` - ``

##### Integration tests

- [test name](https://github.com/kubernetes/kubernetes/blob/2334b8469e1983c525c0c6382125710093a25883/test/integration/...): [integration master](https://testgrid.k8s.io/sig-release-master-blocking#integration-master?include-filter-by-regex=MyCoolFeature), [triage search](https://storage.googleapis.com/k8s-triage/index.html?test=MyCoolFeature)

##### e2e tests

- [test name](https://github.com/kubernetes/kubernetes/blob/2334b8469e1983c525c0c6382125710093a25883/test/e2e/...): [SIG ...](https://testgrid.k8s.io/sig-...?include-filter-by-regex=MyCoolFeature), [triage search](https://storage.googleapis.com/k8s-triage/index.html?test=MyCoolFeature)

### Graduation Criteria

### Upgrade / Downgrade Strategy

### Version Skew Strategy

## Production Readiness Review Questionnaire

### Feature Enablement and Rollback

###### How can this feature be enabled / disabled in a live cluster?

- [x] Feature gate (also fill in values in `kep.yaml`)
  - Feature gate name: `ScaleUpObservability` with values:
    - `EventsOnly` - current state,
    - `EventsAndConditions` - produce both; we could turn it on in scalability tests and assess the impact on overall performance,
    - `ConditionsOnly` - produce only the new pod conditions.
  - Components depending on the feature gate: `cluster-autoscaler`.
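
How the selected value is passed to the autoscaler is an implementation detail; a minimal sketch, assuming a hypothetical `--scale-up-observability` flag (the actual flag name and syntax are not defined by this KEP):

```yaml
# Illustrative cluster-autoscaler container spec fragment (hypothetical flag):
containers:
- name: cluster-autoscaler
  command:
  - ./cluster-autoscaler
  - --scale-up-observability=EventsAndConditions # one of EventsOnly|EventsAndConditions|ConditionsOnly
```
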
###### Does enabling the feature change any default behavior?

No. By default, CA will continue to use the `EventsOnly` mode for scale-up observability.

###### Can the feature be disabled once it has been enabled (i.e. can we roll back the enablement)?

Yes.

###### What happens if we reenable the feature if it was previously rolled back?

There should be no problems.

###### Are there any tests for feature enablement/disablement?

No, but they will be added.

### Rollout, Upgrade and Rollback Planning

###### How can a rollout or rollback fail? Can it impact already running workloads?

Enablement and disablement of the feature should not affect already running workloads.

###### What specific metrics should inform a rollback?

###### Were upgrade and rollback tested? Was the upgrade->downgrade->upgrade path tested?

###### Is the rollout accompanied by any deprecations and/or removals of features, APIs, fields of API types, flags, etc.?

### Monitoring Requirements

###### How can an operator determine if the feature is in use by workloads?

###### How can someone using this feature know that it is working for their instance?

Pods in the `Pending` phase that have the condition:
```yaml
  status: "False"
  type: PodScheduled
  reason: "Unschedulable"
```
should also have the new `NodeProvisioningInProgress` condition.

###### What are the reasonable SLOs (Service Level Objectives) for the enhancement?

###### What are the SLIs (Service Level Indicators) an operator can use to determine the health of the service?

- [ ] Metrics
  - Metric name:
  - [Optional] Aggregation method:
  - Components exposing the metric:
- [ ] Other (treat as last resort)
  - Details:

###### Are there any missing metrics that would be useful to have to improve observability of this feature?

### Dependencies

###### Does this feature depend on any specific services running in the cluster?

No.

### Scalability

###### Will enabling / using this feature result in any new API calls?

Yes:
 - PATCH pods
   - estimated throughput: in a scale-up scenario, 2x per pod
   - originating component: Cluster Autoscaler

###### Will enabling / using this feature result in introducing new API types?

No.

###### Will enabling / using this feature result in any new calls to the cloud provider?

No.

###### Will enabling / using this feature result in increasing size or count of the existing API objects?

###### Will enabling / using this feature result in increasing time taken by any operations covered by existing SLIs/SLOs?

###### Will enabling / using this feature result in non-negligible increase of resource usage (CPU, RAM, disk, IO, ...) in any components?

###### Can enabling / using this feature result in resource exhaustion of some node resources (PIDs, sockets, inodes, etc.)?

No.

### Troubleshooting

###### How does this feature react if the API server and/or etcd is unavailable?

###### What are other known failure modes?

###### What steps should be taken if SLOs are not being met to determine the problem?
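
A reasonable starting point (a sketch, to be refined before beta): list pods stuck in `Pending` (e.g. `kubectl get pods -A --field-selector=status.phase=Pending`), inspect their `NodeProvisioningInProgress` conditions for the reason of inaction or failure, and correlate with cluster-autoscaler logs and events.
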
## Implementation History

## Drawbacks

## Alternatives

## Infrastructure Needed (Optional)

diff --git a/keps/sig-autoscaling/5616-ca-pod-condition/kep.yaml b/keps/sig-autoscaling/5616-ca-pod-condition/kep.yaml
new file mode 100644
index 00000000000..b7a5d39ca2f
--- /dev/null
+++ b/keps/sig-autoscaling/5616-ca-pod-condition/kep.yaml
@@ -0,0 +1,45 @@
+title: Cluster Autoscaler Pod Conditions
+kep-number: 5616
+authors:
+  - "@MartynaGrotek"
+owning-sig: sig-autoscaling
+participating-sigs:
+  - sig-autoscaling
+  - sig-scheduling
+status: provisional # |implementable|implemented|deferred|rejected|withdrawn|replaced
+creation-date: 2025-10-06
+reviewers:
+  - TBD
+approvers:
+  - TBD
+
+see-also:
+  - "/keps/sig-scheduling/3990-pod-topology-spread-fallback-mode"
+
+# The target maturity stage in the current dev cycle for this KEP.
+# If the purpose of this KEP is to deprecate a user-visible feature
+# and Deprecated feature gates are added, they should be deprecated|disabled|removed.
+stage: alpha
+
+# The most recent milestone for which work toward delivery of this KEP has been
+# done. This can be the current (upcoming) milestone, if it is being actively
+# worked on.
+latest-milestone: "v1.35"
+
+# The milestone at which this feature was, or is targeted to be, at each stage.
+milestone:
+  alpha: "v1.35"
+  beta: "v1.36"
+  stable: "v1.37"
+
+# The following PRR answers are required at alpha release
+# List the feature gate name and the components for which it must be enabled
+feature-gates:
+  - name: ScaleUpObservability
+    components:
+      - cluster-autoscaler
+disable-supported: true
+
+# The following PRR answers are required at beta release
+metrics:
+  - TBD

From b808320a0109a59c9c429efcfe605c0ac81f8d0a Mon Sep 17 00:00:00 2001
From: Martyna Grotek
Date: Mon, 13 Oct 2025 19:43:11 +0000
Subject: [PATCH 2/3] Get rid of CA-specific parts

---
 .../5616-ca-pod-condition/README.md | 58 ++++++++-----------
 1 file changed, 24 insertions(+), 34 deletions(-)

diff --git a/keps/sig-autoscaling/5616-ca-pod-condition/README.md b/keps/sig-autoscaling/5616-ca-pod-condition/README.md
index c87ac4b4a55..5e210e9940e 100644
--- a/keps/sig-autoscaling/5616-ca-pod-condition/README.md
+++ b/keps/sig-autoscaling/5616-ca-pod-condition/README.md
@@ -207,10 +207,14 @@ nitty-gritty. -->
 Introduce a new pod condition type `NodeProvisioningInProgress`.
-- `NodeProvisioningInProgress: True` (with an empty reason) corresponding to today's `TriggeredScaleUp` event, meaning CA decided to scale up the cluster to make room for this pod.
-- `NodeProvisioningInProgress: False` corresponding to today's `NotTriggerScaleUp` event, meaning CA couldn't find a node group that can be scaled up to make this pod schedulable.
-In case of failure, we would change the reason to `Error`. After CA runs out of scale-up options (all attempts failed), we would change the condition to `NodeProvisioningInProgress: False`.
+`NodeProvisioningInProgress: True` denotes that the cluster autoscaler is attempting to provision a node for the pod. It would correspond to today's `TriggeredScaleUp` k8s event. In case of a failure while some options are still available, it will stay `True`, but the reason will change to `ProvisioningAttemptFailed`.
+
+`NodeProvisioningInProgress: False` means that no provisioning is being attempted for the pod at this moment. It would correspond to today's `NotTriggerScaleUp` event.
+
+The condition won't be cleared after a pod is successfully scheduled.
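+
+For illustration, the condition on a pending pod could look like the following sketch (field values are examples, not the final API contract):
+
+```yaml
+conditions:
+- type: NodeProvisioningInProgress
+  status: "True"
+  reason: ProvisioningAttemptFailed
+  message: "scale-up of one node group failed, trying remaining options" # illustrative only
+```
+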
+Autoscaler-specific details could be stored in the `message` field of the [pod condition](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions).

### User Stories (Optional)
@@ -224,10 +228,14 @@ bogged down. -->

#### Story 1

-As a user, I want to have an easy and reliable way to investigate why my pods are stuck in the Pending phase.
+As a user, I want to be able to set up automation that detects that no provisioning is currently happening for a pod.

#### Story 2

+*From [KEP-3990: PodTopologySpread DoNotSchedule-to-ScheduleAnyway fallback mode](https://github.com/kubernetes/enhancements/issues/3990):*
+
+The scheduler should know which unschedulable Pod(s) don't trigger creation of nodes, so that it can fall back to `ScheduleAnyway` for Pod Topology Spread.
+
### Notes/Constraints/Caveats (Optional)

+A pod condition can't be used as a [field selector](https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/#list-of-supported-fields) while listing relevant pods, but `status.phase==Pending` can be used for server-side pre-filtering. After that, client-side filtering can be implemented based on needs.
+
+
### Risks and Mitigations

-We would set the condition from the same place that emits the corresponding k8s events today, the `EventingScaleUpStatusProcessor`.
-
-ScaleUpResult | Pod condition type & status | Pod condition reason
-:---------------------------- | :-------------------------------- | :-------------------
-ScaleUpSuccessful | NodeProvisioningInProgress: True |
-ScaleUpError | NodeProvisioningInProgress: True | ...Error
-ScaleUpNoOptionsAvailable | NodeProvisioningInProgress: False | NoOptionsAvailable
-ScaleUpNotTried | NodeProvisioningInProgress: False | NotTried
-ScaleUpInCooldown | NodeProvisioningInProgress: False | InCooldown
-ScaleUpLimitedByMaxNodesTotal | NodeProvisioningInProgress: False | LimitedByMaxNodesTotal
-
-We distinguish the following ScaleUpErrors (`errors.AutoscalerError`):
-- CloudProviderError, which is an error related to the underlying infrastructure.
-- ApiCallError, which is an error related to communication with the k8s API server.
-- InternalError, which is an error inside Cluster Autoscaler.
-- TransientError, which is an error that causes us to skip a single loop, but does not require any additional action.
-- ConfigurationError, which is an error related to bad configuration provided by a user.
-- NodeGroupDoesNotExistError, which signifies that a NodeGroup does not exist.
+When an unschedulable pod is noticed by the node autoscaler, there are two options (at each point in time):
+1. attempt to help it (`NodeProvisioningInProgress: True`)
+2. do nothing (`NodeProvisioningInProgress: False`)

-We would have corresponding reasons in the new pod condition.
+Transitions are possible in both directions:
+- True -> False
+- False -> True

-In the future, we could consider using `SkippedReasons` to fill ScaleUpNoOptionsAvailable with more details, but the messages would need to be aggregated somehow because they are reported per node group.
-
-Some examples of `SkippedReasons`:
-- BackoffReason - "in backoff after failed scale-up"
-- MaxLimitReachedReason - "max node group size reached"
-- NotReadyReason - "not ready for scale-up"
+The pod keeps the condition until its EOL. There is no point in clearing it, and doing so would be yet another kube-apiserver call.

### Test Plan

@@ -528,16 +521,13 @@ well as the [existing list] of feature gates.
-->

- [x] Feature gate (also fill in values in `kep.yaml`)
-  - Feature gate name: `ScaleUpObservability` with values:
-    - `EventsOnly` - current state,
-    - `EventsAndConditions` - produce both; we could turn it on in scalability tests and assess the impact on overall performance,
-    - `ConditionsOnly` - produce only the new pod conditions.
+  - Feature gate name: `NodeProvisioningInProgressCondition`
  - Components depending on the feature gate: `cluster-autoscaler`.

###### Does enabling the feature change any default behavior?

-No. By default, CA will continue to use the `EventsOnly` mode for scale-up observability.
+No.

###### Can the feature be disabled once it has been enabled (i.e. can we roll back the enablement)?

From ac5288e03d723eb486ba8f6c1bbaed0c329c3048 Mon Sep 17 00:00:00 2001
From: Martyna Grotek
Date: Thu, 16 Oct 2025 15:28:23 +0000
Subject: [PATCH 3/3] Update target version and flag name

---
 keps/sig-autoscaling/5616-ca-pod-condition/kep.yaml | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/keps/sig-autoscaling/5616-ca-pod-condition/kep.yaml b/keps/sig-autoscaling/5616-ca-pod-condition/kep.yaml
index b7a5d39ca2f..1bd367c3ef4 100644
--- a/keps/sig-autoscaling/5616-ca-pod-condition/kep.yaml
+++ b/keps/sig-autoscaling/5616-ca-pod-condition/kep.yaml
@@ -24,18 +24,18 @@ stage: alpha
 # The most recent milestone for which work toward delivery of this KEP has been
 # done. This can be the current (upcoming) milestone, if it is being actively
 # worked on.
-latest-milestone: "v1.35"
+latest-milestone: "v1.36"

 # The milestone at which this feature was, or is targeted to be, at each stage.
 milestone:
-  alpha: "v1.35"
-  beta: "v1.36"
-  stable: "v1.37"
+  alpha: "v1.36"
+  beta: "v1.37"
+  stable: "v1.38"

 # The following PRR answers are required at alpha release
 # List the feature gate name and the components for which it must be enabled
 feature-gates:
-  - name: ScaleUpObservability
+  - name: NodeProvisioningInProgressCondition
     components:
       - cluster-autoscaler
 disable-supported: true