
ttetyanka

What type of PR is this?

What this PR does / why we need it:

Which issue(s) this PR fixes:

Fixes #

Special notes for your reviewer:

Does this PR introduce a user-facing change?


Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


@k8s-ci-robot k8s-ci-robot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Aug 28, 2025
@k8s-ci-robot
Contributor

Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected, please follow our release note process to remove it.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels. label Aug 28, 2025

linux-foundation-easycla bot commented Aug 28, 2025

CLA Not Signed

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: ttetyanka
Once this PR has been reviewed and has the lgtm label, please assign towca for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot
Contributor

Welcome @ttetyanka!

It looks like this is your first PR to kubernetes/autoscaler 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes/autoscaler has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot k8s-ci-robot added cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Aug 28, 2025
@k8s-ci-robot
Contributor

Hi @ttetyanka. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Aug 28, 2025
@ttetyanka ttetyanka force-pushed the feature/deletionlatencytracker branch from aff3480 to ef7c537 on August 28, 2025 at 14:34
@elmiko
Contributor

elmiko commented Aug 28, 2025

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Aug 28, 2025
)

type NodeInfo struct {
Name string

Looks like we have duplicated information: Name is already in the key.
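
A minimal sketch of the deduplicated shape, assuming the tracker keeps a map keyed by node name; everything except Threshold is a hypothetical name, not the PR's actual code:

package nodelatency // hypothetical package name

import (
    "sync"
    "time"
)

// NodeInfo no longer repeats the node name; the map key carries it.
type NodeInfo struct {
    UnneededSince time.Time     // hypothetical: when the node was marked unneeded
    Threshold     time.Duration // the threshold referenced elsewhere in this PR
}

type NodeLatencyTracker struct {
    mu    sync.Mutex
    nodes map[string]NodeInfo // node name -> NodeInfo
}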

File name is not aligned with the convention of using underscores between words.

The names of the test file and the implementation file do not match.

Comment on lines +58 to +63
klog.V(2).Infof(
"Observing deletion for node %s, unneeded for %s (threshold was %s).",
nodeName, duration, info.Threshold,
)

metrics.UpdateScaleDownNodeDeletionDuration("true", duration-info.Threshold)

Logging and updating the metric don't need to be done under the lock, and shouldn't be.
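
Continuing the hypothetical tracker sketch above (and assuming imports of "time", k8s.io/klog/v2, and the cluster-autoscaler metrics package), ObserveDeletion could copy what it needs inside the critical section and do the logging and metric update after unlocking:

func (t *NodeLatencyTracker) ObserveDeletion(nodeName string, now time.Time) {
    t.mu.Lock()
    info, found := t.nodes[nodeName]
    if found {
        delete(t.nodes, nodeName)
    }
    t.mu.Unlock()

    if !found {
        return
    }
    // Past this point nothing touches shared state, so the lock is not needed.
    duration := now.Sub(info.UnneededSince)
    klog.V(2).Infof(
        "Observing deletion for node %s, unneeded for %s (threshold was %s).",
        nodeName, duration, info.Threshold,
    )
    metrics.UpdateScaleDownNodeDeletionDuration("true", duration-info.Threshold)
}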

limitsFinder *resource.LimitsFinder
cachedList []*apiv1.Node
byName map[string]*node
unneededTimeCache map[string]time.Duration

I would try to store the latency tracker here. I think it would simplify the code; e.g., we could get rid of this cache and the GetUnneededTimeForNode() method. We could save the GetScaleDownUnneededTime result when it is called during the unremovableReason() method.
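
A rough sketch of that direction; the receiver, getter, and method names below are stand-ins, not the PR's actual identifiers:

latencyTracker *NodeLatencyTracker // would replace unneededTimeCache

// At the GetScaleDownUnneededTime call that already happens inside
// unremovableReason(), push the result into the tracker instead of caching it:
if unneededTime, err := n.scaleDownTimeGetter.GetScaleDownUnneededTime(nodeGroup); err == nil {
    n.latencyTracker.UpdateThreshold(node.Name, unneededTime) // hypothetical method
}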

@@ -324,6 +327,9 @@ func (a *Actuator) deleteNodesAsync(nodes []*apiv1.Node, nodeGroup cloudprovider
}

for _, node := range nodes {
if a.nodeLatencyTracker != nil {
a.nodeLatencyTracker.ObserveDeletion(node.Name, time.Now())

We could consider covering this logic with a test.
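
One possible shape, assuming the actuator's tracker field is (or becomes) an interface so a fake can be injected; every name here is hypothetical:

// fakeLatencyTracker records ObserveDeletion calls for assertions.
type fakeLatencyTracker struct {
    observed []string
}

func (f *fakeLatencyTracker) ObserveDeletion(nodeName string, now time.Time) {
    f.observed = append(f.observed, nodeName)
}

// A deleteNodesAsync test built on the existing actuator fixtures could then
// set nodeLatencyTracker to a fake and assert one observation per node:
//
//     if diff := cmp.Diff([]string{"node-1", "node-2"}, fake.observed); diff != "" {
//         t.Errorf("unexpected ObserveDeletion calls (-want +got):\n%s", diff)
//     }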

@@ -750,3 +760,7 @@ func UpdateInconsistentInstancesMigsCount(migCount int) {
func ObserveBinpackingHeterogeneity(instanceType, cpuCount, namespaceCount string, pegCount int) {
binpackingHeterogeneity.WithLabelValues(instanceType, cpuCount, namespaceCount).Observe(float64(pegCount))
}

func UpdateScaleDownNodeDeletionDuration(deleted string, duration time.Duration) {

Do we ever pass deleted = false?

Namespace: caNamespace,
Name: "node_deletion_duration_seconds",
Help: "Latency from planning (node marked) to final outcome (deleted, aborted, rescued).",
Buckets: k8smetrics.ExponentialBuckets(10, 2, 12),

I would consider buckets with better resolution: with ExponentialBuckets(10, 2, 12), everything under 10 seconds lands indistinguishably in the first bucket.
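
For example (an illustration of the concern, not a prescribed fix), starting the exponential series at 1s adds sub-10-second resolution while keeping a comparable upper bound:

// Current: 10s, 20s, 40s, ..., ~5.7h across 12 buckets.
// Alternative with a finer low end and a ~9.1h top bucket:
Buckets: k8smetrics.ExponentialBuckets(1, 2, 16), // 1s, 2s, 4s, ..., 32768s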

I don't see any test checking the reported metric value.

@@ -324,6 +327,9 @@ func (a *Actuator) deleteNodesAsync(nodes []*apiv1.Node, nodeGroup cloudprovider
}

for _, node := range nodes {
if a.nodeLatencyTracker != nil {

This doesn't seem like the best placement. Deletion still might fail (look at the checks below); I think we'd want to report only successful deletions.
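
A hedged sketch of that change: observe only once the deletion is known to have gone through, either right after a successful delete call or wherever the successful NodeDeleteResult is recorded. The delete call below is a stand-in for the PR's actual path:

for _, node := range nodes {
    // ... the existing per-node checks, which `continue` on failure ...
    if err := scheduleDeletion(node); err != nil { // stand-in for the real delete call
        continue // failed deletions are not reported
    }
    if a.nodeLatencyTracker != nil {
        a.nodeLatencyTracker.ObserveDeletion(node.Name, time.Now())
    }
}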
