Conversation

@ramekris3163 ramekris3163 commented Aug 12, 2025

Summary

Adds label support for the OCM (Open Cluster Management) ManagedCluster resource, following the same pattern as the existing annotation decorator, and allows the labels to be passed through the Helm chart.
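
For illustration, a minimal sketch of what the new decorator might look like, mirroring the existing annotation decorator. The ManagedClusterDecorator signature and package layout are assumptions here; see creating_controller.go in this PR for the real implementation.

// Sketch only: the ManagedClusterDecorator type is assumed to be
// func(*clusterv1.ManagedCluster) *clusterv1.ManagedCluster.
package registration

import clusterv1 "open-cluster-management.io/api/cluster/v1"

type ManagedClusterDecorator func(cluster *clusterv1.ManagedCluster) *clusterv1.ManagedCluster

// LabelDecorator merges the provided labels onto the ManagedCluster when it is
// first created; existing keys with the same name are overwritten.
func LabelDecorator(labels map[string]string) ManagedClusterDecorator {
	return func(cluster *clusterv1.ManagedCluster) *clusterv1.ManagedCluster {
		if len(labels) == 0 {
			return cluster
		}
		if cluster.Labels == nil {
			cluster.Labels = map[string]string{}
		}
		for k, v := range labels {
			cluster.Labels[k] = v
		}
		return cluster
	}
}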

Related issue(s)

#1118

Summary by CodeRabbit

  • New Features

    • Support for cluster labels during registration and agent bootstrap; labels are applied to ManagedClusters and propagated alongside annotations.
    • New --cluster-labels option to supply key=value labels; startup will include the argument only when provided.
    • CRD/schema extended to accept clusterLabels in registration configuration.
  • Documentation

    • Added a commented sample showing how to define cluster labels in configuration.
  • Tests

    • Expanded tests for label/annotation propagation, updates, and equality helpers.

coderabbitai bot commented Aug 12, 2025

Walkthrough

Adds clusterLabels support end-to-end: CRD/schema, Helm sample values, controller plumbing, CLI flag, decorators, manifest args, unit tests, spoke-agent wiring, and a dependency version bump.

Changes

  • Helm chart sample values (deploy/klusterlet/chart/klusterlet/values.yaml): adds a commented sample clusterLabels block (documentation only; all lines are comments).
  • Klusterlet CRD/schema (deploy/klusterlet/chart/klusterlet/crds/0000_00_operator.open-cluster-management.io_klusterlets.crd.yaml, deploy/klusterlet/config/crds/0000_00_operator.open-cluster-management.io_klusterlets.crd.yaml, deploy/klusterlet/olm-catalog/latest/manifests/operator.open-cluster-management.io_klusterlets.yaml): adds a clusterLabels property under spec.registrationConfiguration.properties as an object with additionalProperties: string and an explanatory description.
  • Management manifests, cluster-labels arg (manifests/klusterlet/management/klusterlet-agent-deployment.yaml, manifests/klusterlet/management/klusterlet-registration-deployment.yaml): conditionally render --cluster-labels={{ .ClusterLabelsString }} when .ClusterLabelsString is non-empty.
  • Klusterlet controller plumbing (pkg/operator/operators/klusterlet/controllers/klusterlet_controller.go): adds a ClusterLabelsString string field to klusterletConfig and populates it from RegistrationConfiguration.ClusterLabels as comma-separated key=value pairs for templating.
  • Registration options, flag and field (pkg/registration/spoke/options.go): adds an exported ClusterLabels map[string]string to SpokeAgentOptions and a --cluster-labels CLI flag to populate it (see the sketch after this list).
  • Creating controller, labels/annotations handling (pkg/registration/spoke/registration/creating_controller.go): adds LabelDecorator and AnnotationDecorator; introduces equalClientConfigs, equalLabels, and equalAnnotations; the update/create flow now compares and applies Labels and Annotations along with ClientConfigs, with CABundle byte comparison and error handling around the update.
  • Spoke agent wiring, apply labels on create (pkg/registration/spoke/spokeagent.go): includes registration.LabelDecorator(o.registrationOption.ClusterLabels) in the decorator list passed to NewManagedClusterCreatingController.
  • Unit tests for labels/annotations/equality (pkg/registration/spoke/registration/creating_controller_test.go): adds tests validating label and annotation propagation, update behavior, and equality helper edge cases.
  • Dependency and CSV timestamps (go.mod, deploy/cluster-manager/olm-catalog/.../cluster-manager.clusterserviceversion.yaml, deploy/klusterlet/olm-catalog/.../klusterlet.clusterserviceversion.yaml): bumps the open-cluster-management.io/api module version and updates createdAt timestamps in two CSV manifests.
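
For context, a small standalone sketch of how the --cluster-labels value is parsed on the agent side. It relies on pflag's StringToStringVar as listed above; the flag set name and the example invocation below are assumptions.

package main

import (
	"fmt"

	"github.com/spf13/pflag"
)

func main() {
	// clusterLabels mirrors the new SpokeAgentOptions.ClusterLabels field.
	clusterLabels := map[string]string{}

	fs := pflag.NewFlagSet("registration-agent", pflag.ExitOnError)
	fs.StringToStringVar(&clusterLabels, "cluster-labels", clusterLabels,
		"labels set on the ManagedCluster when it is first created")

	// Example value in the same "key1=value1,key2=value2" format that the
	// klusterlet controller renders into ClusterLabelsString.
	_ = fs.Parse([]string{"--cluster-labels=env=production,region=us-west-2"})

	fmt.Println(clusterLabels) // map[env:production region:us-west-2]
}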

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested labels

lgtm

Suggested reviewers

  • qiujian16
  • zhiweiyin318

Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 golangci-lint (2.2.2)

Error: can't load config: unsupported version of the configuration: "" See https://golangci-lint.run/product/migration-guide for migration instructions
The command is terminated due to an error: can't load config: unsupported version of the configuration: "" See https://golangci-lint.run/product/migration-guide for migration instructions

@openshift-ci openshift-ci bot requested review from haowells and ldpliu August 12, 2025 21:27

openshift-ci bot commented Aug 12, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: ramekris3163
Once this PR has been reviewed and has the lgtm label, please assign qiujian16 for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

🔭 Outside diff range comments (1)
pkg/operator/operators/klusterlet/controllers/klusterletcontroller/klusterlet_controller.go (1)

387-400: Deterministic ordering needed when building comma-separated flags (avoid spurious rollouts)

Iterating over Go maps is random; the resulting string order will vary between reconciles, causing unnecessary Deployment rollouts when the arg value flips order. Sort keys for both annotations and labels before joining.

Apply this refactor:

-	// construct cluster annotations string, the final format is "key1=value1,key2=value2"
-	var annotationsArray []string
-	for k, v := range commonhelpers.FilterClusterAnnotations(klusterlet.Spec.RegistrationConfiguration.ClusterAnnotations) {
-		annotationsArray = append(annotationsArray, fmt.Sprintf("%s=%s", k, v))
-	}
-	config.ClusterAnnotationsString = strings.Join(annotationsArray, ",")
+	// construct cluster annotations string deterministically: "key1=value1,key2=value2"
+	ann := commonhelpers.FilterClusterAnnotations(klusterlet.Spec.RegistrationConfiguration.ClusterAnnotations)
+	annKeys := make([]string, 0, len(ann))
+	for k := range ann {
+		annKeys = append(annKeys, k)
+	}
+	// sort for stability across reconciles
+	// NOTE: requires adding "sort" to imports
+	sort.Strings(annKeys)
+	annotationsArray := make([]string, 0, len(annKeys))
+	for _, k := range annKeys {
+		annotationsArray = append(annotationsArray, fmt.Sprintf("%s=%s", k, ann[k]))
+	}
+	config.ClusterAnnotationsString = strings.Join(annotationsArray, ",")
 
-	// construct cluster labels string, the final format is "key1=value1,key2=value2"
-	var labelsArray []string
-	for k, v := range klusterlet.Spec.RegistrationConfiguration.ClusterLabels {
-		labelsArray = append(labelsArray, fmt.Sprintf("%s=%s", k, v))
-	}
-	config.ClusterLabelsString = strings.Join(labelsArray, ",")
+	// construct cluster labels string deterministically: "key1=value1,key2=value2"
+	labels := klusterlet.Spec.RegistrationConfiguration.ClusterLabels
+	labelKeys := make([]string, 0, len(labels))
+	for k := range labels {
+		labelKeys = append(labelKeys, k)
+	}
+	sort.Strings(labelKeys)
+	labelsArray := make([]string, 0, len(labelKeys))
+	for _, k := range labelKeys {
+		labelsArray = append(labelsArray, fmt.Sprintf("%s=%s", k, labels[k]))
+	}
+	config.ClusterLabelsString = strings.Join(labelsArray, ",")

Additionally, ensure the following import exists in this file:

import (
  // ...
  "sort"
  // ...
)
🧹 Nitpick comments (11)
deploy/klusterlet/chart/klusterlet/values.yaml (1)

130-132: Add brief guidance on valid label syntax and precedence in the sample block

The commented sample is helpful. Consider adding a short note that:

  • keys/values must comply with Kubernetes label syntax (DNS-1123/prefix rules).
  • these labels are applied during bootstrap; other actors can still mutate labels later.

Apply this tweak to make it explicit:

-#    clusterLabels:
+#    # clusterLabels are applied during bootstrap; other actors may change labels later.
+#    # Keys/values must conform to Kubernetes label syntax.
+#    clusterLabels:
pkg/registration/spoke/options.go (1)

80-81: Clarify flag help: mention format and K8s label constraints

Minor doc improvement to reduce operator mistakes.

-fs.StringToStringVar(&o.ClusterLabels, "cluster-labels", o.ClusterLabels, `the labels set on ManagedCluster when creating only, other actors can update it afterwards.`)
+fs.StringToStringVar(&o.ClusterLabels, "cluster-labels", o.ClusterLabels, `labels to set on the ManagedCluster at bootstrap only (key=value pairs). Keys/values must conform to Kubernetes label syntax; other actors can update them afterwards.`)
pkg/operator/operators/klusterlet/controllers/klusterletcontroller/klusterlet_controller.go (1)

394-400: Consider validating or filtering label keys before stringifying

Unlike annotations (which use FilterClusterAnnotations), labels are taken as-is. If there are reserved/managed label keys, consider adding a similar filter or validator to avoid accidental overrides.

If there is a known allowlist/denylist, I can add a helpers.FilterClusterLabels(labels map[string]string) following the annotations pattern. Confirm the desired policy and I’ll provide a concrete patch.

pkg/registration/spoke/registration/creating_controller.go (2)

101-105: Update error message to reflect broader update scope

The error mentions only ManagedClusterClientConfigs, but the update now includes Labels and Annotations too.

-		return fmt.Errorf("unable to update ManagedClusterClientConfigs of managed cluster %q in hub: %w", c.clusterName, err)
+		return fmt.Errorf("unable to update managed cluster %q on hub: %w", c.clusterName, err)

134-146: Label decorator applies labels as-is; consider optional filtering to avoid reserved/managed keys

If certain labels are hub-managed or reserved, mirror the annotations pattern by filtering/validating before setting.

If you confirm a policy, I can add a helper like below and wire it here:

// in pkg/common/helpers/labels.go
func FilterClusterLabels(labels map[string]string) map[string]string {
	out := make(map[string]string)
	if labels == nil {
		return out
	}
	// TODO: apply allow/deny list policy here
	for k, v := range labels {
		out[k] = v
	}
	return out
}

Then:

for k, v := range commonhelpers.FilterClusterLabels(labels) {
    cluster.Labels[k] = v
}
pkg/registration/spoke/registration/creating_controller_test.go (6)

111-113: Avoid brittle len assertions for labels; assert presence instead.

Asserting the exact count of labels is brittle and may break if defaults or future labels are added. The presence/value checks below are sufficient.

Apply this diff:

-				if len(clusterLabels) != 2 {
-					t.Errorf("expected cluster labels %#v but got: %#v", 2, len(clusterLabels))
-				}

150-152: Avoid brittle len assertion on update path as well.

Same concern on update: focus on required key/value rather than total count.

-				if len(clusterLabels) != 1 {
-					t.Errorf("expected cluster labels %#v but got: %#v", 1, len(clusterLabels))
-				}

206-208: Remove hardcoded label count in combined labels+annotations create test.

Future defaults may add labels; checking exact count can cause flaky tests.

-				if len(clusterLabels) != 2 {
-					t.Errorf("expected cluster labels %#v but got: %#v", 2, len(clusterLabels))
-				}

220-222: Remove hardcoded annotation count; rely on presence check below.

This keeps the test resilient if other annotations are introduced.

-				if len(clusterAnnotations) != 1 {
-					t.Errorf("expected cluster annotations %#v but got: %#v", 1, len(clusterAnnotations))
-				}

124-137: Consider relaxing “no labels” assertion to avoid future brittleness.

If the controller ever introduces default labels, this test will fail even though LabelDecorator behaved correctly. Prefer asserting absence of specific keys or that provided labels aren’t added when input is empty.

Do you want me to propose an alternate assertion that checks that cluster does not contain any of a provided set of keys (e.g., environment/agent labels), instead of asserting len==0?


82-121: Minor DRY improvement: factor repeated action unpacking into a helper.

Multiple tests repeat action extraction and casting boilerplate. A small helper improves readability and consistency.

Example helper (placed in this test file):

func getManagedClusterFromAction(t *testing.T, actions []clienttesting.Action, index int) *clusterv1.ManagedCluster {
	t.Helper()
	obj := actions[index].(interface{ GetObject() runtime.Object }).GetObject()
	mc, ok := obj.(*clusterv1.ManagedCluster)
	if !ok {
		t.Fatalf("action at %d is not ManagedCluster: %#v", index, actions[index])
	}
	return mc
}

Usage:

  • Replace casts like actions[1].(clienttesting.CreateActionImpl).Object with getManagedClusterFromAction(t, actions, 1).
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3df894d and d7674aa.

📒 Files selected for processing (8)
  • deploy/klusterlet/chart/klusterlet/values.yaml (1 hunks)
  • manifests/klusterlet/management/klusterlet-agent-deployment.yaml (1 hunks)
  • manifests/klusterlet/management/klusterlet-registration-deployment.yaml (1 hunks)
  • pkg/operator/operators/klusterlet/controllers/klusterletcontroller/klusterlet_controller.go (2 hunks)
  • pkg/registration/spoke/options.go (2 hunks)
  • pkg/registration/spoke/registration/creating_controller.go (3 hunks)
  • pkg/registration/spoke/registration/creating_controller_test.go (1 hunks)
  • pkg/registration/spoke/spokeagent.go (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (3)
pkg/registration/spoke/spokeagent.go (1)
pkg/registration/spoke/registration/creating_controller.go (1)
  • LabelDecorator (135-147)
pkg/registration/spoke/registration/creating_controller_test.go (5)
pkg/common/testing/assertion.go (1)
  • AssertActions (51-61)
pkg/registration/helpers/testing/assertion.go (1)
  • AssertManagedClusterClientConfigs (21-28)
pkg/registration/helpers/testing/testinghelpers.go (1)
  • TestManagedClusterName (38-38)
pkg/registration/spoke/registration/creating_controller.go (4)
  • ManagedClusterDecorator (26-26)
  • LabelDecorator (135-147)
  • ClientConfigDecorator (150-171)
  • AnnotationDecorator (119-132)
pkg/common/testing/fake_sync_context.go (1)
  • NewFakeSyncContext (21-27)
pkg/registration/spoke/registration/creating_controller.go (1)
pkg/common/helpers/annotations.go (1)
  • FilterClusterAnnotations (11-26)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: verify
  • GitHub Check: build
  • GitHub Check: unit
  • GitHub Check: e2e-singleton
  • GitHub Check: e2e-hosted
  • GitHub Check: integration
  • GitHub Check: e2e
🔇 Additional comments (11)
pkg/registration/spoke/options.go (1)

39-39: New option field for labels: LGTM

The addition aligns with the existing annotations option and downstream usage.

pkg/operator/operators/klusterlet/controllers/klusterletcontroller/klusterlet_controller.go (2)

170-171: Config plumbed for ClusterLabelsString: LGTM

This mirrors ClusterAnnotationsString for templating and wiring arguments downstream.


387-393: AI summary note: annotations construction still present

The AI summary mentions “shifts cluster-identity construction from annotations to labels,” but this block still constructs ClusterAnnotationsString. Looks consistent with adding labels in addition, not replacing annotations.

pkg/registration/spoke/registration/creating_controller.go (2)

118-132: Annotation decorator: LGTM

Filtering with FilterClusterAnnotations avoids non-prefixed entries and matches existing behavior.


190-214: Equality helpers for labels/annotations: LGTM

The logic is correct given we start from a DeepCopy of the existing object and decorators only merge/override targeted keys.

pkg/registration/spoke/spokeagent.go (1)

285-289: Wiring LabelDecorator into bootstrap: LGTM

Decorator order is Annotation -> Label -> ClientConfig -> driver. This lets the driver override if necessary; confirm this precedence is intentional.

Do we want the driver to be able to override labels set by the CLI? If not, move LabelDecorator after the driver’s decorator in the slice.
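
To make the precedence concern concrete, here is a tiny standalone example (not the project's actual types) showing that with this application order the decorator applied last wins for a shared key:

package main

import "fmt"

// decorator stands in for a ManagedCluster decorator; only the label map matters here.
type decorator func(labels map[string]string) map[string]string

func withLabels(extra map[string]string) decorator {
	return func(labels map[string]string) map[string]string {
		for k, v := range extra {
			labels[k] = v
		}
		return labels
	}
}

func main() {
	// CLI-provided labels are applied first, a hypothetical driver decorator last.
	chain := []decorator{
		withLabels(map[string]string{"env": "from-cli"}),
		withLabels(map[string]string{"env": "from-driver"}),
	}
	labels := map[string]string{}
	for _, d := range chain {
		labels = d(labels)
	}
	fmt.Println(labels["env"]) // prints "from-driver": the later decorator wins
}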

pkg/registration/spoke/registration/creating_controller_test.go (2)

279-353: Good coverage for label equality edge cases.

Covers nil vs empty, different keys/values/length. Looks solid.


355-396: Add missing equalAnnotations test cases

The current tests exercise empty, nil, matching, and differing-value maps but don’t cover maps with entirely different keys or mismatched lengths. Please extend the table in creating_controller_test.go as follows:

@@ -366,6 +366,14 @@ tests := []struct {
         {
             name: "different values",
             annotations1: map[string]string{"agent.open-cluster-management.io/test": "true"},
             annotations2: map[string]string{"agent.open-cluster-management.io/test": "false"},
             expected: false,
         },
+        {
+            name: "different keys",
+            annotations1: map[string]string{"a": "1"},
+            annotations2: map[string]string{"b": "1"},
+            expected: false,
+        },
+        {
+            name: "different lengths",
+            annotations1: map[string]string{"a": "1"},
+            annotations2: map[string]string{"a": "1", "b": "2"},
+            expected: false,
+        },

Optional follow-up
Currently, equalAnnotations does a raw length check and value compare. If its intent is to mirror what the controller persists (which uses commonhelpers.FilterClusterAnnotations before setting .Annotations), you may want to either:

  • Call commonhelpers.FilterClusterAnnotations on both inputs inside equalAnnotations (sketched below), or
  • Document that equalAnnotations expects already-filtered maps.

Let me know if you’d like a follow-up PR to align on filtering semantics.
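
A minimal sketch of the first option, assuming equalAnnotations keeps its current signature and commonhelpers is the existing alias for pkg/common/helpers:

// equalAnnotations compares only the agent-managed annotations by filtering
// both inputs before the length and key-by-key comparison.
func equalAnnotations(annotations1, annotations2 map[string]string) bool {
	filtered1 := commonhelpers.FilterClusterAnnotations(annotations1)
	filtered2 := commonhelpers.FilterClusterAnnotations(annotations2)
	if len(filtered1) != len(filtered2) {
		return false
	}
	for k, v := range filtered1 {
		if filtered2[k] != v {
			return false
		}
	}
	return true
}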

manifests/klusterlet/management/klusterlet-agent-deployment.yaml (2)

106-108: Flag wiring LGTM and consistent with annotations block.

The new optional --cluster-labels flag mirrors the existing --cluster-annotations pattern and is correctly gated on .ClusterLabelsString.


106-108: ClusterLabelsString End-to-End Wiring Verified
All checks confirm that ClusterLabelsString is correctly wired through the controller, templates, agent CLI, and label decorator:

• Controller (pkg/operator/.../klusterlet_controller.go) defines and populates ClusterLabelsString.
• Manifests include --cluster-labels={{ .ClusterLabelsString }} in both registration and agent deployments.
• Agent CLI (pkg/registration/spoke/options.go) registers the cluster-labels flag.
  • LabelDecorator (pkg/registration/spoke/spokeagent.go) applies the provided labels to the ManagedCluster.

No further action required.

manifests/klusterlet/management/klusterlet-registration-deployment.yaml (1)

91-93: Registration deployment flag addition looks correct.

Matches the agent manifest and is conditionally rendered; placement is consistent with neighboring args.

Comment on lines +91 to +93
{{if .ClusterLabelsString}}
- "--cluster-labels={{ .ClusterLabelsString }}"
{{end}}

💡 Verification agent

🧩 Analysis chain

Confirm registration agent supports --cluster-labels and values flow from Helm.

Validate that:

  • Registration agent’s options include the --cluster-labels flag.
  • The klusterlet controller populates .ClusterLabelsString from Helm values and passes it to both registration and agent deployments.

Run this script:


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "Registration deployment renders the new flag:"
rg -n --no-heading 'registration-agent' -A20 manifests/klusterlet/management/klusterlet-registration-deployment.yaml | sed 's/^/YAML: /'

echo
echo "Search for registration option flag definition and usage:"
rg -n --no-heading 'cluster-labels' -A3 -B3 pkg/registration | sed 's/^/GO: /'

echo
echo "Check Helm values/docs for cluster labels sample:"
fd -a 'values.yaml' deploy | while read -r f; do
  echo "== $f =="
  rg -n --no-heading 'clusterLabels' -A3 -B3 "$f" || true
done

Length of output: 5146


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "1) Search for --cluster-labels in registration deployment manifest:"
rg -n --no-heading -A2 -B2 -- '--cluster-labels' manifests/klusterlet/management/klusterlet-registration-deployment.yaml || echo "Not found"

echo
echo "2) Search for .ClusterLabelsString in registration deployment manifest:"
rg -n --no-heading 'ClusterLabelsString' -A2 -B2 manifests/klusterlet/management/klusterlet-registration-deployment.yaml || echo "Not found"

echo
echo "3) Search for --cluster-labels in agent deployment manifest:"
rg -n --no-heading -A2 -B2 -- '--cluster-labels' manifests/klusterlet/management/klusterlet-agent-deployment.yaml || echo "Not found"

echo
echo "4) Verify Helm chart defines Values.clusterLabels and helper to build ClusterLabelsString:"
echo "4a) Look for clusterLabels in klusterlet chart values:"
rg -n --no-heading 'clusterLabels' -A3 -B3 deploy/klusterlet/chart/klusterlet/values.yaml || echo "No clusterLabels in values.yaml"
echo
echo "4b) Search for clusterLabelsString helper in klusterlet chart templates:"
rg -n --no-heading 'clusterLabelsString' -A3 -B3 deploy/klusterlet/chart/klusterlet/templates || echo "No helper found"

Length of output: 2493


Ensure Helm chart exposes and injects clusterLabels

We’ve confirmed that:

  • The registration agent's Go code defines and handles the --cluster-labels flag.
  • Both the registration and agent manifests conditionally include - "--cluster-labels={{ .ClusterLabelsString }}" when .ClusterLabelsString is set.
  • However, the Klusterlet Helm chart currently comments out the clusterLabels section in values.yaml and provides no helper to build .ClusterLabelsString.

To complete the feature, please:

  • In deploy/klusterlet/chart/klusterlet/values.yaml, uncomment or add a top-level clusterLabels: {} map.

  • In deploy/klusterlet/chart/klusterlet/templates/_helpers.tpl, define a helper, e.g.:

    {{- define "klusterlet.clusterLabelsString" -}}
    {{- $labels := .Values.clusterLabels | default dict }}
    {{- $pairs := list }}
    {{- range $k, $v := $labels }}
      {{- $pairs = append $pairs (printf "%s=%s" $k $v) }}
    {{- end }}
    {{- join "," $pairs }}
    {{- end }}
    
  • In both klusterlet-registration-deployment.yaml and klusterlet-agent-deployment.yaml, set:

    {{- $cl := include "klusterlet.clusterLabelsString" . }}
    {{- if $cl }}
    - "--cluster-labels={{ $cl }}"
    {{- end }}
    

This will ensure Values.clusterLabels flows through to .ClusterLabelsString and into both deployments.

🤖 Prompt for AI Agents
In manifests/klusterlet/management/klusterlet-registration-deployment.yaml
around lines 91-93 the template expects .ClusterLabelsString but the Helm chart
doesn't expose or build Values.clusterLabels; add a top-level clusterLabels: {}
map to deploy/klusterlet/chart/klusterlet/values.yaml, create a helper in
deploy/klusterlet/chart/klusterlet/templates/_helpers.tpl that builds a
comma-separated "key=value" string from .Values.clusterLabels (defaulting to
empty dict), and update both klusterlet-registration-deployment.yaml and
klusterlet-agent-deployment.yaml to call that helper, check for a non-empty
result, and inject the --cluster-labels flag so it is only added when labels
are provided.

Comment on lines +236 to +247
// Verify labels are updated
clusterLabels := actual.(*clusterv1.ManagedCluster).Labels
if len(clusterLabels) != 2 {
t.Errorf("expected cluster labels %#v but got: %#v", 2, len(clusterLabels))
}

// Verify annotations are updated
clusterAnnotations := actual.(*clusterv1.ManagedCluster).Annotations
if len(clusterAnnotations) != 1 {
t.Errorf("expected cluster annotations %#v but got: %#v", 1, len(clusterAnnotations))
}
},

🛠️ Refactor suggestion

Strengthen update test by asserting key presence instead of counts.

Current checks only assert counts, not correctness of values; assert the exact label and annotation keys/values that must be present.

Apply this diff:

-				// Verify labels are updated
-				clusterLabels := actual.(*clusterv1.ManagedCluster).Labels
-				if len(clusterLabels) != 2 {
-					t.Errorf("expected cluster labels %#v but got: %#v", 2, len(clusterLabels))
-				}
-
-				// Verify annotations are updated
-				clusterAnnotations := actual.(*clusterv1.ManagedCluster).Annotations
-				if len(clusterAnnotations) != 1 {
-					t.Errorf("expected cluster annotations %#v but got: %#v", 1, len(clusterAnnotations))
-				}
+				// Verify labels are updated
+				clusterLabels := actual.(*clusterv1.ManagedCluster).Labels
+				if v, ok := clusterLabels["env"]; !ok || v != "production" {
+					t.Errorf("expected cluster label env=production but got: %#v", clusterLabels["env"])
+				}
+				if v, ok := clusterLabels["cluster-role"]; !ok || v != "worker" {
+					t.Errorf("expected cluster label cluster-role=worker but got: %#v", clusterLabels["cluster-role"])
+				}
+
+				// Verify annotations are updated
+				clusterAnnotations := actual.(*clusterv1.ManagedCluster).Annotations
+				if v, ok := clusterAnnotations["agent.open-cluster-management.io/test"]; !ok || v != "true" {
+					t.Errorf("expected cluster annotation agent.open-cluster-management.io/test=true but got: %#v",
+						clusterAnnotations["agent.open-cluster-management.io/test"])
+				}
🤖 Prompt for AI Agents
In pkg/registration/spoke/registration/creating_controller_test.go around lines
236 to 247, replace the weak length-only assertions with explicit existence and
value checks: for labels, verify the expected keys exist in clusterLabels
(guarding against nil) and that their values equal the expected strings; for
annotations, verify the expected annotation key exists and its value equals the
expected string; use t.Errorf/t.Fatalf with clear messages naming the missing
key or unexpected value so the test fails with actionable output.

Comment on lines +409 to +493
t.Run("equalClientConfigs", func(t *testing.T) {
tests := []struct {
name string
configs1 []clusterv1.ClientConfig
configs2 []clusterv1.ClientConfig
expected bool
}{
{
name: "equal empty slices",
configs1: []clusterv1.ClientConfig{},
configs2: []clusterv1.ClientConfig{},
expected: true,
},
{
name: "equal nil and empty slice",
configs1: nil,
configs2: []clusterv1.ClientConfig{},
expected: true,
},
{
name: "equal non-empty slices",
configs1: []clusterv1.ClientConfig{
{
URL: "https://test.example.com",
CABundle: []byte("test-ca-bundle"),
},
},
configs2: []clusterv1.ClientConfig{
{
URL: "https://test.example.com",
CABundle: []byte("test-ca-bundle"),
},
},
expected: true,
},
{
name: "different lengths",
configs1: []clusterv1.ClientConfig{
{URL: "https://test1.example.com"},
},
configs2: []clusterv1.ClientConfig{
{URL: "https://test1.example.com"},
{URL: "https://test2.example.com"},
},
expected: false,
},
{
name: "different URLs",
configs1: []clusterv1.ClientConfig{
{URL: "https://test1.example.com"},
},
configs2: []clusterv1.ClientConfig{
{URL: "https://test2.example.com"},
},
expected: false,
},
{
name: "different CA bundles",
configs1: []clusterv1.ClientConfig{
{
URL: "https://test.example.com",
CABundle: []byte("bundle1"),
},
},
configs2: []clusterv1.ClientConfig{
{
URL: "https://test.example.com",
CABundle: []byte("bundle2"),
},
},
expected: false,
},
}

for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := equalClientConfigs(tt.configs1, tt.configs2)
if result != tt.expected {
t.Errorf("equalClientConfigs(%v, %v) = %v, expected %v",
tt.configs1, tt.configs2, result, tt.expected)
}
})
}
})
}

💡 Verification agent

🧩 Analysis chain

Consider asserting order-insensitive equality for client configs.

If equalClientConfigs is used to determine update necessity, order sensitivity will cause unnecessary updates. Recommend making it order-insensitive (e.g., sort by URL before compare) and adding a test.

Proposed additional test case:

 		tests := []struct {
 			name     string
 			configs1 []clusterv1.ClientConfig
 			configs2 []clusterv1.ClientConfig
 			expected bool
 		}{
@@
 			{
 				name: "equal non-empty slices",
 				configs1: []clusterv1.ClientConfig{
 					{
 						URL:      "https://test.example.com",
 						CABundle: []byte("test-ca-bundle"),
 					},
 				},
 				configs2: []clusterv1.ClientConfig{
 					{
 						URL:      "https://test.example.com",
 						CABundle: []byte("test-ca-bundle"),
 					},
 				},
 				expected: true,
 			},
+			{
+				name: "same elements different order",
+				configs1: []clusterv1.ClientConfig{
+					{URL: "https://a.example.com", CABundle: []byte("a")},
+					{URL: "https://b.example.com", CABundle: []byte("b")},
+				},
+				configs2: []clusterv1.ClientConfig{
+					{URL: "https://b.example.com", CABundle: []byte("b")},
+					{URL: "https://a.example.com", CABundle: []byte("a")},
+				},
+				expected: true,
+			},

If this fails, adjust equalClientConfigs to normalize (sort by URL, then by CABundle) before comparison to avoid needless updates.


🏁 Script executed:

#!/bin/bash
# Search for the implementation of equalClientConfigs
rg -n "func equalClientConfigs" -A5
rg -n "equalClientConfigs(" -A2

Length of output: 730


🏁 Script executed:

#!/bin/bash
# Print the full equalClientConfigs implementation
sed -n '170,250p' pkg/registration/spoke/registration/creating_controller.go

# Search for all usages of equalClientConfigs in this package
rg -n 'equalClientConfigs' -A2 pkg/registration/spoke/registration/creating_controller.go

Length of output: 1632


Make equalClientConfigs order-insensitive

The current implementation in pkg/registration/spoke/registration/creating_controller.go (lines ~174–182) compares []clusterv1.ClientConfig by index, so any change in element order will be treated as a diff and trigger unnecessary updates in creating_controller.go:93. We should normalize (e.g., sort) both slices before comparing and add a test covering out-of-order equality.

• Refactor equalClientConfigs to sort both configs1 and configs2 by URL (and then by CABundle) prior to element-wise comparison.
• Extend the test in creating_controller_test.go to include a "same elements different order" case:

 		tests := []struct {
 			name     string
 			configs1 []clusterv1.ClientConfig
 			configs2 []clusterv1.ClientConfig
 			expected bool
 		}{
@@
 			{
 				name: "different CA bundles",
 				configs1: []clusterv1.ClientConfig{
 					{
 						URL:      "https://test.example.com",
 						CABundle: []byte("bundle1"),
 					},
 				},
 				configs2: []clusterv1.ClientConfig{
 					{
 						URL:      "https://test.example.com",
 						CABundle: []byte("bundle2"),
 					},
 				},
 				expected: false,
 			},
+			{
+				name: "same elements different order",
+				configs1: []clusterv1.ClientConfig{
+					{URL: "https://a.example.com", CABundle: []byte("a")},
+					{URL: "https://b.example.com", CABundle: []byte("b")},
+				},
+				configs2: []clusterv1.ClientConfig{
+					{URL: "https://b.example.com", CABundle: []byte("b")},
+					{URL: "https://a.example.com", CABundle: []byte("a")},
+				},
+				expected: true,
+			},

Comment on lines +92 to 100
// Check if any updates are needed (ClientConfigs, Labels, or Annotations)
if equalClientConfigs(existingCluster.Spec.ManagedClusterClientConfigs, managedCluster.Spec.ManagedClusterClientConfigs) &&
equalLabels(existingCluster.Labels, managedCluster.Labels) &&
equalAnnotations(existingCluster.Annotations, managedCluster.Annotations) {
return nil
}

// update ManagedClusterClientConfigs in ManagedCluster
// update ManagedCluster (ClientConfigs, Labels, and Annotations)
_, err = c.hubClusterClient.ClusterV1().ManagedClusters().Update(ctx, managedCluster, metav1.UpdateOptions{})

🛠️ Refactor suggestion

Equality gate may churn on ordering; make client-config comparison order-insensitive

The current equalClientConfigs is order-sensitive; different ordering will trigger unnecessary updates. Sort by URL (and CABundle as tiebreaker) before comparing.

-// equalClientConfigs compares two ClientConfig slices for equality
-func equalClientConfigs(configs1, configs2 []clusterv1.ClientConfig) bool {
-	if len(configs1) != len(configs2) {
-		return false
-	}
-	for i, config1 := range configs1 {
-		config2 := configs2[i]
-		if config1.URL != config2.URL {
-			return false
-		}
-		if string(config1.CABundle) != string(config2.CABundle) {
-			return false
-		}
-	}
-	return true
-}
+// equalClientConfigs compares two ClientConfig slices for equality (order-insensitive).
+func equalClientConfigs(configs1, configs2 []clusterv1.ClientConfig) bool {
+	if len(configs1) != len(configs2) {
+		return false
+	}
+	s1 := append([]clusterv1.ClientConfig(nil), configs1...)
+	s2 := append([]clusterv1.ClientConfig(nil), configs2...)
+	sort.Slice(s1, func(i, j int) bool {
+		if s1[i].URL == s1[j].URL {
+			return string(s1[i].CABundle) < string(s1[j].CABundle)
+		}
+		return s1[i].URL < s1[j].URL
+	})
+	sort.Slice(s2, func(i, j int) bool {
+		if s2[i].URL == s2[j].URL {
+			return string(s2[i].CABundle) < string(s2[j].CABundle)
+		}
+		return s2[i].URL < s2[j].URL
+	})
+	for i := range s1 {
+		if s1[i].URL != s2[i].URL || string(s1[i].CABundle) != string(s2[i].CABundle) {
+			return false
+		}
+	}
+	return true
+}

Note: add the following import in this file:

import "sort"
🤖 Prompt for AI Agents
In pkg/registration/spoke/registration/creating_controller.go around lines 92 to
100, the equalClientConfigs comparison is order-sensitive and can cause
unnecessary updates; modify equalClientConfigs (or pre-normalize its inputs) to
sort the ManagedClusterClientConfig slices by URL and use CABundle as a
tiebreaker before doing element-wise comparison so ordering differences do not
trigger churn, implement sorting with sort.Slice and stable comparison of URL
then CABundle, and add the import "sort" at the top of the file.

@ramekris3163 ramekris3163 force-pushed the feat/add_label_decorator_for_managedcluster branch from d7674aa to f32f857 on August 18, 2025 14:39

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
pkg/registration/spoke/registration/creating_controller.go (1)

173-189: equalClientConfigs is order-sensitive; sort before comparing

As previously noted, different ordering of otherwise-identical client configs will force unnecessary updates. Make the comparison order-insensitive by sorting a copy of both slices by URL (then CABundle) before comparing.

Apply this diff:

-// equalClientConfigs compares two ClientConfig slices for equality
-func equalClientConfigs(configs1, configs2 []clusterv1.ClientConfig) bool {
-	if len(configs1) != len(configs2) {
-		return false
-	}
-	for i, config1 := range configs1 {
-		config2 := configs2[i]
-		if config1.URL != config2.URL {
-			return false
-		}
-		if string(config1.CABundle) != string(config2.CABundle) {
-			return false
-		}
-	}
-	return true
-}
+// equalClientConfigs compares two ClientConfig slices for equality (order-insensitive).
+func equalClientConfigs(configs1, configs2 []clusterv1.ClientConfig) bool {
+	if len(configs1) != len(configs2) {
+		return false
+	}
+	s1 := append([]clusterv1.ClientConfig(nil), configs1...)
+	s2 := append([]clusterv1.ClientConfig(nil), configs2...)
+	sort.Slice(s1, func(i, j int) bool {
+		if s1[i].URL == s1[j].URL {
+			return string(s1[i].CABundle) < string(s1[j].CABundle)
+		}
+		return s1[i].URL < s1[j].URL
+	})
+	sort.Slice(s2, func(i, j int) bool {
+		if s2[i].URL == s2[j].URL {
+			return string(s2[i].CABundle) < string(s2[j].CABundle)
+		}
+		return s2[i].URL < s2[j].URL
+	})
+	for i := range s1 {
+		if s1[i].URL != s2[i].URL || string(s1[i].CABundle) != string(s2[i].CABundle) {
+			return false
+		}
+	}
+	return true
+}

And add the missing import at the top of this file:

import "sort"
🧹 Nitpick comments (2)
pkg/registration/spoke/registration/creating_controller.go (2)

99-105: Clarify error message and comment now that more than ClientConfigs are updated

The comment and error text still reference only ManagedClusterClientConfigs, but the Update now also covers labels and annotations. Suggest generalizing both.

Apply this diff:

-// update ManagedCluster (ClientConfigs, Labels, and Annotations)
+// Update ManagedCluster (ClientConfigs, Labels, and Annotations)

-// ManagedClusterClientConfigs in ManagedCluster is only allowed updated during bootstrap.
-// After bootstrap secret expired, an unauthorized error will be got, skip it
+// Updates here happen during bootstrap; once the bootstrap secret expires, the hub may return
+// Unauthorized/Forbidden. Skip those errors to avoid noisy logs and requeues.

-	if skipUnauthorizedError(err) != nil {
-		return fmt.Errorf("unable to update ManagedClusterClientConfigs of managed cluster %q in hub: %w", c.clusterName, err)
-	}
+	if skipUnauthorizedError(err) != nil {
+		return fmt.Errorf("unable to update managed cluster %q on hub: %w", c.clusterName, err)
+	}

135-147: Consider not overwriting existing labels (optional)

Current logic overwrites any existing label with the same key. If the hub (or other controllers) set/own some labels, this could clobber them during bootstrap-time updates. If you intend user-supplied labels to be additive-only, guard against overwrites.

Apply this diff to make merges additive:

-		for k, v := range labels {
-			cluster.Labels[k] = v
-		}
+		for k, v := range labels {
+			if _, exists := cluster.Labels[k]; !exists {
+				cluster.Labels[k] = v
+			}
+		}

If overwriting is the intended behavior, consider documenting it in the function comment.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d7674aa and f32f857.

📒 Files selected for processing (8)
  • deploy/klusterlet/chart/klusterlet/values.yaml (1 hunks)
  • manifests/klusterlet/management/klusterlet-agent-deployment.yaml (1 hunks)
  • manifests/klusterlet/management/klusterlet-registration-deployment.yaml (1 hunks)
  • pkg/operator/operators/klusterlet/controllers/klusterletcontroller/klusterlet_controller.go (2 hunks)
  • pkg/registration/spoke/options.go (2 hunks)
  • pkg/registration/spoke/registration/creating_controller.go (3 hunks)
  • pkg/registration/spoke/registration/creating_controller_test.go (1 hunks)
  • pkg/registration/spoke/spokeagent.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (5)
  • manifests/klusterlet/management/klusterlet-registration-deployment.yaml
  • deploy/klusterlet/chart/klusterlet/values.yaml
  • pkg/registration/spoke/options.go
  • pkg/registration/spoke/registration/creating_controller_test.go
  • pkg/operator/operators/klusterlet/controllers/klusterletcontroller/klusterlet_controller.go
🧰 Additional context used
🧬 Code Graph Analysis (2)
pkg/registration/spoke/spokeagent.go (1)
pkg/registration/spoke/registration/creating_controller.go (1)
  • LabelDecorator (135-147)
pkg/registration/spoke/registration/creating_controller.go (1)
pkg/common/helpers/annotations.go (1)
  • FilterClusterAnnotations (11-26)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: e2e-singleton
  • GitHub Check: e2e
  • GitHub Check: e2e-hosted
  • GitHub Check: unit
  • GitHub Check: integration
  • GitHub Check: verify
  • GitHub Check: build
🔇 Additional comments (5)
manifests/klusterlet/management/klusterlet-agent-deployment.yaml (1)

106-108: LGTM: cluster-labels flag wiring mirrors annotations and is conditionally rendered

The new --cluster-labels argument follows the existing annotations pattern and is gated on a non-empty ClusterLabelsString. Quoting is correct for YAML/arg parsing.

pkg/registration/spoke/spokeagent.go (1)

286-289: Decorator chain update looks correct

Adding LabelDecorator between AnnotationDecorator and ClientConfigDecorator is consistent and safe; map merges are idempotent and nil-safe.

pkg/registration/spoke/registration/creating_controller.go (3)

92-96: Early-return equality gate is good; reduces unnecessary updates

Comparing client configs, labels, and annotations before issuing Update is the right call and should reduce churn.


118-132: AnnotationDecorator: correct filtering and merge semantics

Filtering with FilterClusterAnnotations before applying is the right guard; map init and merge are correct.


190-214: LGTM: map equality helpers are correct and nil-safe

Length checks plus key-by-key compare are fine; nil maps behave as expected.

…bility to pass it through helm chart.

Signed-off-by: ramekris3163 <[email protected]>
@ramekris3163 ramekris3163 force-pushed the feat/add_label_decorator_for_managedcluster branch from f32f857 to 38f00fd on August 18, 2025 14:47

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (2)
pkg/registration/spoke/registration/creating_controller_test.go (2)

236-247: Assert specific keys/values on update; avoid length-only checks

Replace length checks with explicit assertions for env, cluster-role, and the expected annotation. This makes failures actionable and resilient to future defaults.

Apply this diff:

-				// Verify labels are updated
-				clusterLabels := actual.(*clusterv1.ManagedCluster).Labels
-				if len(clusterLabels) != 2 {
-					t.Errorf("expected cluster labels %#v but got: %#v", 2, len(clusterLabels))
-				}
-
-				// Verify annotations are updated
-				clusterAnnotations := actual.(*clusterv1.ManagedCluster).Annotations
-				if len(clusterAnnotations) != 1 {
-					t.Errorf("expected cluster annotations %#v but got: %#v", 1, len(clusterAnnotations))
-				}
+				// Verify labels are updated
+				clusterLabels := actual.(*clusterv1.ManagedCluster).Labels
+				if v, ok := clusterLabels["env"]; !ok || v != "production" {
+					t.Errorf("expected cluster label env=production but got: %#v", clusterLabels["env"])
+				}
+				if v, ok := clusterLabels["cluster-role"]; !ok || v != "worker" {
+					t.Errorf("expected cluster label cluster-role=worker but got: %#v", clusterLabels["cluster-role"])
+				}
+
+				// Verify annotations are updated
+				clusterAnnotations := actual.(*clusterv1.ManagedCluster).Annotations
+				if v, ok := clusterAnnotations["agent.open-cluster-management.io/test"]; !ok || v != "true" {
+					t.Errorf("expected cluster annotation agent.open-cluster-management.io/test=true but got: %#v",
+						clusterAnnotations["agent.open-cluster-management.io/test"])
+				}

409-493: Add order-insensitive case for equalClientConfigs; consider normalizing in impl

Current tests don’t cover slices with identical elements in different orders, which often occurs with client config updates. Add a test case, and make the implementation order-insensitive to avoid unnecessary updates.

Apply this diff to extend the test suite:

@@
 		tests := []struct {
 			name     string
 			configs1 []clusterv1.ClientConfig
 			configs2 []clusterv1.ClientConfig
 			expected bool
 		}{
@@
 			{
 				name: "different CA bundles",
 				configs1: []clusterv1.ClientConfig{
 					{
 						URL:      "https://test.example.com",
 						CABundle: []byte("bundle1"),
 					},
 				},
 				configs2: []clusterv1.ClientConfig{
 					{
 						URL:      "https://test.example.com",
 						CABundle: []byte("bundle2"),
 					},
 				},
 				expected: false,
 			},
+			{
+				name: "same elements different order",
+				configs1: []clusterv1.ClientConfig{
+					{URL: "https://a.example.com", CABundle: []byte("a")},
+					{URL: "https://b.example.com", CABundle: []byte("b")},
+				},
+				configs2: []clusterv1.ClientConfig{
+					{URL: "https://b.example.com", CABundle: []byte("b")},
+					{URL: "https://a.example.com", CABundle: []byte("a")},
+				},
+				expected: true,
+			},

If this fails, normalize both slices before comparison. Suggested implementation (in pkg/registration/spoke/registration/creating_controller.go):

// normalize order before compare
func equalClientConfigs(a, b []clusterv1.ClientConfig) bool {
	if len(a) == 0 && len(b) == 0 {
		return true
	}
	if len(a) != len(b) {
		return false
	}
	normalize := func(x []clusterv1.ClientConfig) []clusterv1.ClientConfig {
		out := append([]clusterv1.ClientConfig(nil), x...)
		sort.Slice(out, func(i, j int) bool {
			if out[i].URL == out[j].URL {
				return bytes.Compare(out[i].CABundle, out[j].CABundle) < 0
			}
			return out[i].URL < out[j].URL
		})
		return out
	}
	a2, b2 := normalize(a), normalize(b)
	for i := range a2 {
		if a2[i].URL != b2[i].URL || !bytes.Equal(a2[i].CABundle, b2[i].CABundle) {
			return false
		}
	}
	return true
}
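
Note: this sketch also needs the "bytes" and "sort" imports in creating_controller.go.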

Verification script to find and inspect the implementation:

#!/bin/bash
# Locate equalClientConfigs implementation and usages
rg -n -C3 --type=go '\bequalClientConfigs\s*\(' pkg/registration/spoke/registration/
rg -n -C5 --type=go 'func\s+equalClientConfigs\s*\(' pkg/registration/spoke/registration/
🧹 Nitpick comments (3)
pkg/registration/spoke/registration/creating_controller_test.go (3)

108-121: Minor: avoid brittle len(clusterLabels) checks; assert key/value presence

You already assert the exact expected keys/values. The additional length check can become brittle if defaults are introduced later. Consider dropping it.

Apply this diff:

-				// Verify labels
-				clusterLabels := actual.(*clusterv1.ManagedCluster).Labels
-				if len(clusterLabels) != 2 {
-					t.Errorf("expected cluster labels %#v but got: %#v", 2, len(clusterLabels))
-				}
+				// Verify labels
+				clusterLabels := actual.(*clusterv1.ManagedCluster).Labels

148-157: Strengthen update assertions; remove count-based check and assert key/value

Length-only checks are fragile with defaults. Assert the “region” label value directly.

Apply this diff:

-				// Verify labels
-				clusterLabels := actual.(*clusterv1.ManagedCluster).Labels
-				if len(clusterLabels) != 1 {
-					t.Errorf("expected cluster labels %#v but got: %#v", 1, len(clusterLabels))
-				}
-				if value, ok := clusterLabels["region"]; !ok || value != "us-west-2" {
+				// Verify labels
+				clusterLabels := actual.(*clusterv1.ManagedCluster).Labels
+				if value, ok := clusterLabels["region"]; !ok || value != "us-west-2" {
 					t.Errorf("expected cluster label region=us-west-2 but got: %#v",
 						clusterLabels["region"])
 				}

204-227: Nit: prefer presence checks over strict counts in creation case too

As with labels, consider removing the len(...) checks for both labels and annotations to reduce brittleness; the key/value assertions already guarantee correctness.

Apply this diff:

-				// Verify labels
-				clusterLabels := actual.(*clusterv1.ManagedCluster).Labels
-				if len(clusterLabels) != 2 {
-					t.Errorf("expected cluster labels %#v but got: %#v", 2, len(clusterLabels))
-				}
+				// Verify labels
+				clusterLabels := actual.(*clusterv1.ManagedCluster).Labels
@@
-				// Verify annotations
-				clusterAnnotations := actual.(*clusterv1.ManagedCluster).Annotations
-				if len(clusterAnnotations) != 1 {
-					t.Errorf("expected cluster annotations %#v but got: %#v", 1, len(clusterAnnotations))
-				}
+				// Verify annotations
+				clusterAnnotations := actual.(*clusterv1.ManagedCluster).Annotations
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f32f857 and 38f00fd.

📒 Files selected for processing (8)
  • deploy/klusterlet/chart/klusterlet/values.yaml (1 hunks)
  • manifests/klusterlet/management/klusterlet-agent-deployment.yaml (1 hunks)
  • manifests/klusterlet/management/klusterlet-registration-deployment.yaml (1 hunks)
  • pkg/operator/operators/klusterlet/controllers/klusterletcontroller/klusterlet_controller.go (2 hunks)
  • pkg/registration/spoke/options.go (2 hunks)
  • pkg/registration/spoke/registration/creating_controller.go (3 hunks)
  • pkg/registration/spoke/registration/creating_controller_test.go (1 hunks)
  • pkg/registration/spoke/spokeagent.go (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • deploy/klusterlet/chart/klusterlet/values.yaml
🚧 Files skipped from review as they are similar to previous changes (5)
  • pkg/registration/spoke/options.go
  • pkg/registration/spoke/spokeagent.go
  • pkg/operator/operators/klusterlet/controllers/klusterletcontroller/klusterlet_controller.go
  • manifests/klusterlet/management/klusterlet-registration-deployment.yaml
  • pkg/registration/spoke/registration/creating_controller.go
🧰 Additional context used
🧬 Code Graph Analysis (1)
pkg/registration/spoke/registration/creating_controller_test.go (5)
pkg/common/testing/assertion.go (1)
  • AssertActions (51-61)
pkg/registration/helpers/testing/assertion.go (1)
  • AssertManagedClusterClientConfigs (21-28)
pkg/registration/helpers/testing/testinghelpers.go (1)
  • TestManagedClusterName (38-38)
pkg/registration/spoke/registration/creating_controller.go (4)
  • ManagedClusterDecorator (26-26)
  • LabelDecorator (135-147)
  • ClientConfigDecorator (150-171)
  • AnnotationDecorator (119-132)
pkg/common/testing/fake_sync_context.go (1)
  • NewFakeSyncContext (21-27)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: e2e-singleton
  • GitHub Check: e2e-hosted
  • GitHub Check: e2e
  • GitHub Check: verify
  • GitHub Check: integration
  • GitHub Check: unit
  • GitHub Check: build
🔇 Additional comments (1)
manifests/klusterlet/management/klusterlet-agent-deployment.yaml (1)

106-108: --cluster-labels flag wiring verified and correct

  • manifests/klusterlet/management/klusterlet-agent-deployment.yaml (line 107): includes --cluster-labels={{ .ClusterLabelsString }}
  • manifests/klusterlet/management/klusterlet-registration-deployment.yaml (line 92): includes --cluster-labels={{ .ClusterLabelsString }}
  • pkg/registration/spoke/options.go (line 80): registered via fs.StringToStringVar(&o.ClusterLabels, "cluster-labels", ...)

LGTM—approving these changes.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
go.mod (1)

43-43: Optional: prefer a tagged release when available

If the api repo cuts a tag compatible with these changes, consider switching from a pseudo-version to a tagged release to improve stability and ease of auditing. If OCM conventionally tracks HEAD with pseudo-versions, feel free to keep as-is.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 38f00fd and e10651e.

⛔ Files ignored due to path filters (5)
  • go.sum is excluded by !**/*.sum
  • vendor/modules.txt is excluded by !vendor/**
  • vendor/open-cluster-management.io/api/operator/v1/0000_00_operator.open-cluster-management.io_klusterlets.crd.yaml is excluded by !vendor/**
  • vendor/open-cluster-management.io/api/operator/v1/types_klusterlet.go is excluded by !vendor/**
  • vendor/open-cluster-management.io/api/operator/v1/zz_generated.deepcopy.go is excluded by !vendor/**
📒 Files selected for processing (1)
  • go.mod (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: integration
  • GitHub Check: unit
  • GitHub Check: verify
  • GitHub Check: e2e-hosted
  • GitHub Check: build
  • GitHub Check: e2e-singleton
  • GitHub Check: e2e
🔇 Additional comments (2)
go.mod (2)

43-43: LGTM: api bump aligns with new ManagedCluster label support

The version bump for open-cluster-management.io/api looks consistent with introducing label-related types/fields used across the PR. No immediate concerns here.


43-43: Module graph is clean – run go mod tidy and update vendor (if used)

The checks confirm:

  • go.mod references only the new API pseudo-version (v1.0.1-0.20250818020747-c0d6364fa4a6).
  • go.sum contains only entries for that pseudo-version.
  • No replace directives target open-cluster-management.io/api.
  • addon-framework and sdk-go remain on compatible pseudo-version tracks.

Next steps:

  • Run go mod tidy and commit any changes to go.mod/go.sum.
  • If you vendor dependencies, run go mod vendor (or make update-deps) and commit the updated vendor directory.


codecov bot commented Aug 19, 2025

Codecov Report

❌ Patch coverage is 83.92857% with 9 lines in your changes missing coverage. Please review.
✅ Project coverage is 58.03%. Comparing base (3df894d) to head (fee6150).
⚠️ Report is 19 commits behind head on main.

Files with missing lines Patch % Lines
...stration/spoke/registration/creating_controller.go 91.30% 2 Missing and 2 partials ⚠️
...lers/klusterletcontroller/klusterlet_controller.go 57.14% 2 Missing and 1 partial ⚠️
pkg/registration/spoke/options.go 0.00% 2 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1123      +/-   ##
==========================================
+ Coverage   57.92%   58.03%   +0.11%     
==========================================
  Files         211      211              
  Lines       20673    20809     +136     
==========================================
+ Hits        11975    12077     +102     
- Misses       7639     7669      +30     
- Partials     1059     1063       +4     
Flag Coverage Δ
unit 58.03% <83.92%> (+0.11%) ⬆️

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

🚀 New features to boost your workflow:
  • ❄️ Test Analytics: Detect flaky tests, report on failures, and find test suite problems.

Signed-off-by: ramekris3163 <[email protected]>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (3)
deploy/klusterlet/chart/klusterlet/crds/0000_00_operator.open-cluster-management.io_klusterlets.crd.yaml (1)

275-281: Optionally constrain label values early (avoid runtime rejections).

These strings will be used as Kubernetes label values on ManagedCluster. Adding a simple maxLength helps catch obvious issues at CRD validation time instead of failing later during ManagedCluster creation/update.

                   clusterLabels:
                     additionalProperties:
-                      type: string
+                      type: string
+                      maxLength: 63
                     description: ClusterLabels are labels set on the ManagedCluster at creation time; other actors can update them afterwards.
                     type: object

If you want stricter validation (e.g., regex or CEL to validate both keys and values), we can follow up, but keeping this light is fine for a first pass.

deploy/klusterlet/config/crds/0000_00_operator.open-cluster-management.io_klusterlets.crd.yaml (1)

275-281: Optional: add a simple maxLength for label values.

Early validation helps prevent failing later when these are applied to the ManagedCluster.

                   clusterLabels:
                     additionalProperties:
-                      type: string
+                      type: string
+                      maxLength: 63
                     description: ClusterLabels are labels set on the ManagedCluster at creation time; other actors can update them afterwards.
                     type: object
deploy/klusterlet/olm-catalog/latest/manifests/operator.open-cluster-management.io_klusterlets.yaml (1)

275-281: Optional: add a simple maxLength for label values.

Adds a low-friction guardrail against invalid label values.

                   clusterLabels:
                     additionalProperties:
-                      type: string
+                      type: string
+                      maxLength: 63
                     description: ClusterLabels are labels set on the ManagedCluster at creation time; other actors can update them afterwards.
                     type: object
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between e10651e and 797c297.

⛔ Files ignored due to path filters (1)
  • deploy/.DS_Store is excluded by !**/.DS_Store
📒 Files selected for processing (5)
  • deploy/cluster-manager/olm-catalog/latest/manifests/cluster-manager.clusterserviceversion.yaml (1 hunks)
  • deploy/klusterlet/chart/klusterlet/crds/0000_00_operator.open-cluster-management.io_klusterlets.crd.yaml (1 hunks)
  • deploy/klusterlet/config/crds/0000_00_operator.open-cluster-management.io_klusterlets.crd.yaml (1 hunks)
  • deploy/klusterlet/olm-catalog/latest/manifests/klusterlet.clusterserviceversion.yaml (1 hunks)
  • deploy/klusterlet/olm-catalog/latest/manifests/operator.open-cluster-management.io_klusterlets.yaml (1 hunks)
✅ Files skipped from review due to trivial changes (2)
  • deploy/cluster-manager/olm-catalog/latest/manifests/cluster-manager.clusterserviceversion.yaml
  • deploy/klusterlet/olm-catalog/latest/manifests/klusterlet.clusterserviceversion.yaml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: verify
  • GitHub Check: integration
  • GitHub Check: unit
  • GitHub Check: e2e-singleton
  • GitHub Check: e2e-hosted
  • GitHub Check: e2e
🔇 Additional comments (1)
deploy/klusterlet/chart/klusterlet/crds/0000_00_operator.open-cluster-management.io_klusterlets.crd.yaml (1)

275-281: All CRD copies for clusterLabels are in sync
I’ve compared the clusterLabels schema and description across:

  • deploy/klusterlet/chart/klusterlet/crds/0000_00_operator.open-cluster-management.io_klusterlets.crd.yaml
  • deploy/klusterlet/config/crds/0000_00_operator.open-cluster-management.io_klusterlets.crd.yaml
  • deploy/klusterlet/olm-catalog/latest/manifests/operator.open-cluster-management.io_klusterlets.yaml

Each block is identical in structure, type, and description. No further changes are needed here.

Comment on lines +275 to 281
clusterLabels:
  additionalProperties:
    type: string
  description: ClusterLabels is labels set on ManagedCluster when
    creating only, other actors can update it afterwards.
  type: object
featureGates:

⚠️ Potential issue

Fix grammar in user-facing description (pluralization and phrasing).

Current text is awkward: "ClusterLabels is labels set on ManagedCluster when creating only, other actors can update it afterwards."

Tighten it to be grammatically correct and clearer.

-                    description: ClusterLabels is labels set on ManagedCluster when
-                      creating only, other actors can update it afterwards.
+                    description: ClusterLabels are labels set on the ManagedCluster at creation time; other actors can update them afterwards.
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
clusterLabels:
  additionalProperties:
    type: string
  description: ClusterLabels is labels set on ManagedCluster when
    creating only, other actors can update it afterwards.
  type: object
featureGates:
clusterLabels:
  additionalProperties:
    type: string
  description: ClusterLabels are labels set on the ManagedCluster at creation time; other actors can update them afterwards.
  type: object
featureGates:
🤖 Prompt for AI Agents
In
deploy/klusterlet/chart/klusterlet/crds/0000_00_operator.open-cluster-management.io_klusterlets.crd.yaml
around lines 275 to 281, the user-facing description for clusterLabels is
grammatically awkward and unclear; replace it with a concise, correct sentence
such as: "ClusterLabels are labels set on the ManagedCluster when it is created;
other actors may update them afterwards." Update the description field to use
pluralization and clearer phrasing while preserving meaning.

Comment on lines +275 to 281
clusterLabels:
  additionalProperties:
    type: string
  description: ClusterLabels is labels set on ManagedCluster when
    creating only, other actors can update it afterwards.
  type: object
featureGates:

⚠️ Potential issue

Fix grammar in user-facing description (pluralization and phrasing).

Mirror the same correction here to keep copies consistent.

-                    description: ClusterLabels is labels set on ManagedCluster when
-                      creating only, other actors can update it afterwards.
+                    description: ClusterLabels are labels set on the ManagedCluster at creation time; other actors can update them afterwards.
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
clusterLabels:
  additionalProperties:
    type: string
  description: ClusterLabels is labels set on ManagedCluster when
    creating only, other actors can update it afterwards.
  type: object
featureGates:
clusterLabels:
  additionalProperties:
    type: string
  description: ClusterLabels are labels set on the ManagedCluster at creation time; other actors can update them afterwards.
  type: object
featureGates:
🤖 Prompt for AI Agents
In
deploy/klusterlet/config/crds/0000_00_operator.open-cluster-management.io_klusterlets.crd.yaml
around lines 275 to 281, the user-facing description for clusterLabels has
grammar issues; update the description to use correct pluralization and clearer
phrasing (e.g., "ClusterLabels are labels set on ManagedClusters when creating
them; other actors can update them afterwards."), and mirror the same corrected
sentence in any other copies of this description to keep consistency.

Comment on lines +275 to 281
clusterLabels:
  additionalProperties:
    type: string
  description: ClusterLabels is labels set on ManagedCluster when
    creating only, other actors can update it afterwards.
  type: object
featureGates:

⚠️ Potential issue

Fix grammar in user-facing description (pluralization and phrasing).

Align wording with the other CRD copies.

-                    description: ClusterLabels is labels set on ManagedCluster when
-                      creating only, other actors can update it afterwards.
+                    description: ClusterLabels are labels set on the ManagedCluster at creation time; other actors can update them afterwards.
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
clusterLabels:
  additionalProperties:
    type: string
  description: ClusterLabels is labels set on ManagedCluster when
    creating only, other actors can update it afterwards.
  type: object
featureGates:
clusterLabels:
  additionalProperties:
    type: string
  description: ClusterLabels are labels set on the ManagedCluster at creation time; other actors can update them afterwards.
  type: object
featureGates:
🤖 Prompt for AI Agents
In
deploy/klusterlet/olm-catalog/latest/manifests/operator.open-cluster-management.io_klusterlets.yaml
around lines 275 to 281, the user-facing description for clusterLabels is
grammatically incorrect (singular/plural mismatch and awkward phrasing); update
the description to match the other CRD copies and use plural/pluralized nouns
and clearer phrasing — for example replace with: "ClusterLabels are labels set
on ManagedClusters at creation time; other actors may update them afterwards."
Ensure this exact revised sentence is used in the description field.

Signed-off-by: ramekris3163 <[email protected]>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
pkg/registration/spoke/registration/creating_controller.go (1)

174-190: Make equalClientConfigs order-insensitive to prevent needless updates

As noted earlier, comparing by index causes churn when the same elements appear in different orders. Sort copies by URL (CABundle as tiebreaker) before comparing. This avoids spurious updates without mutating inputs.

-// equalClientConfigs compares two ClientConfig slices for equality
-func equalClientConfigs(configs1, configs2 []clusterv1.ClientConfig) bool {
-	if len(configs1) != len(configs2) {
-		return false
-	}
-	for i, config1 := range configs1 {
-		config2 := configs2[i]
-		if config1.URL != config2.URL {
-			return false
-		}
-		if !bytes.Equal(config1.CABundle, config2.CABundle) {
-			return false
-		}
-	}
-	return true
-}
+// equalClientConfigs compares two ClientConfig slices for equality (order-insensitive).
+func equalClientConfigs(configs1, configs2 []clusterv1.ClientConfig) bool {
+	if len(configs1) != len(configs2) {
+		return false
+	}
+	// Work on copies to avoid mutating callers' slices.
+	s1 := append([]clusterv1.ClientConfig(nil), configs1...)
+	s2 := append([]clusterv1.ClientConfig(nil), configs2...)
+	sort.Slice(s1, func(i, j int) bool {
+		if s1[i].URL == s1[j].URL {
+			return bytes.Compare(s1[i].CABundle, s1[j].CABundle) < 0
+		}
+		return s1[i].URL < s1[j].URL
+	})
+	sort.Slice(s2, func(i, j int) bool {
+		if s2[i].URL == s2[j].URL {
+			return bytes.Compare(s2[i].CABundle, s2[j].CABundle) < 0
+		}
+		return s2[i].URL < s2[j].URL
+	})
+	for i := range s1 {
+		if s1[i].URL != s2[i].URL || !bytes.Equal(s1[i].CABundle, s2[i].CABundle) {
+			return false
+		}
+	}
+	return true
+}
🧹 Nitpick comments (3)
pkg/registration/spoke/registration/creating_controller.go (3)

3-5: Prepare for order-insensitive comparison: add sort import

If you adopt the order-insensitive equalClientConfigs below, you’ll need the sort package.

 import (
 	"bytes"
 	"context"
 	"fmt"
+	"sort"
 	"time"

100-107: Correct the update error message to reflect broader update scope

The error mentions ManagedClusterClientConfigs only, but the update now covers labels and annotations as well. Adjust the message to avoid confusion when diagnosing failures.

-	if skipUnauthorizedError(err) != nil {
-		return fmt.Errorf("unable to update ManagedClusterClientConfigs of managed cluster %q in hub: %w", c.clusterName, err)
-	}
+	if skipUnauthorizedError(err) != nil {
+		return fmt.Errorf("unable to update managed cluster %q on hub: %w", c.clusterName, err)
+	}

135-148: LabelDecorator: tiny micro-optimization and behavior callout

  • If labels is nil/empty, you can short-circuit to avoid allocating an empty map.
  • Please confirm that overriding existing label keys is the intended behavior (it currently will overwrite any existing values for the same keys).
 func LabelDecorator(labels map[string]string) ManagedClusterDecorator {
 	return func(cluster *clusterv1.ManagedCluster) *clusterv1.ManagedCluster {
-		if cluster.Labels == nil {
+		if len(labels) == 0 {
+			return cluster
+		}
+		if cluster.Labels == nil {
 			cluster.Labels = make(map[string]string)
 		}
 
 		for k, v := range labels {
 			cluster.Labels[k] = v
 		}
 
 		return cluster
 	}
 }
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 797c297 and fee6150.

📒 Files selected for processing (2)
  • pkg/registration/spoke/options.go (2 hunks)
  • pkg/registration/spoke/registration/creating_controller.go (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • pkg/registration/spoke/options.go
🧰 Additional context used
🧬 Code graph analysis (1)
pkg/registration/spoke/registration/creating_controller.go (1)
pkg/common/helpers/annotations.go (1)
  • FilterClusterAnnotations (11-26)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: e2e-singleton
  • GitHub Check: integration
  • GitHub Check: unit
  • GitHub Check: verify
  • GitHub Check: build
  • GitHub Check: e2e-hosted
  • GitHub Check: e2e
🔇 Additional comments (4)
pkg/registration/spoke/registration/creating_controller.go (4)

93-99: Early-exit equality gate looks good

Nice consolidation: skipping updates unless ClientConfigs, Labels, or Annotations actually change reduces churn.


119-133: AnnotationDecorator: good defensive nil handling and filtering

Using FilterClusterAnnotations and ensuring the map is non-nil is clean and idempotent. No issues.


191-203: equalLabels: simple and correct

Map equality with length gate is fine and treats nil and empty maps equivalently, which avoids update churn. LGTM.


204-216: equalAnnotations: mirrors label comparison appropriately

Consistent with equalLabels and works well with the filtered AnnotationDecorator. LGTM.

Member

@qiujian16 qiujian16 left a comment


The main outstanding part is that we cannot let the agent update labels/annotations/clientConfigs arbitrarily, since these can also be set by users or other actors on the hub. We need to define a stricter rule for when the agent sets these values on the hub.

  1. It can only be set when creating.
  2. If it is set after creating, there must be a clearer rule to avoid conflict.

}

// equalLabels compares two label maps for equality
func equalLabels(labels1, labels2 map[string]string) bool {
Member


We cannot do such an equality check, since the cluster's labels can also be set by the user on the hub. Such label setting should only happen when the cluster is created.

If a label already exists on the cluster, the controller must not update the label or annotation, since it might have been changed by the user on purpose.
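A minimal sketch of the create-only rule requested here, assuming the controller can distinguish the create path from the update path; the function name and the creating flag are illustrative, not code from this PR.

package sketch

import clusterv1 "open-cluster-management.io/api/cluster/v1"

// applyBootstrapLabels adds agent-provided labels only on the create path; on the
// update path it leaves the object untouched so hub-side edits always win.
func applyBootstrapLabels(cluster *clusterv1.ManagedCluster, labels map[string]string, creating bool) {
	if !creating || len(labels) == 0 {
		return
	}
	if cluster.Labels == nil {
		cluster.Labels = map[string]string{}
	}
	for k, v := range labels {
		cluster.Labels[k] = v
	}
}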

@@ -148,3 +170,46 @@ func ClientConfigDecorator(externalServerURLs []string, caBundle []byte) Managed
return cluster
}
}

// equalClientConfigs compares two ClientConfig slices for equality
func equalClientConfigs(configs1, configs2 []clusterv1.ClientConfig) bool {
Member


Same here: clientConfigs can also be updated by the user, so we cannot update clientConfigs based on an equality check.

cluster.Labels = make(map[string]string)
}

for k, v := range labels {
Member


Note that we should not allow setting ANY label from the spoke. For instance, the open-cluster-management.io/clusterset label is meant to be set on the ManagedCluster from the hub side; it should not be set by the agent.
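One possible way to enforce that, sketched under the assumption of a deny-list of hub-owned keys and prefixes; the exact policy below is illustrative, not something defined by this PR.

package sketch

import "strings"

// reservedLabelKeys lists hub-managed labels the agent must never set (illustrative).
var reservedLabelKeys = map[string]bool{
	"open-cluster-management.io/clusterset": true,
}

// filterAgentLabels drops labels the agent is not allowed to set on the hub.
// Denying the whole open-cluster-management.io/ prefix is an assumption, not a rule from this PR.
func filterAgentLabels(labels map[string]string) map[string]string {
	out := make(map[string]string, len(labels))
	for k, v := range labels {
		if reservedLabelKeys[k] || strings.HasPrefix(k, "open-cluster-management.io/") {
			continue // hub-owned keys are skipped
		}
		out[k] = v
	}
	return out
}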

@@ -89,11 +90,14 @@ func (c *managedClusterCreatingController) sync(ctx context.Context, syncCtx fac
managedCluster = decorator(managedCluster)
}

if len(existingCluster.Spec.ManagedClusterClientConfigs) == len(managedCluster.Spec.ManagedClusterClientConfigs) {
Member

@qiujian16 qiujian16 Aug 26, 2025


The reason we only compare the size is that the configs can be set on the hub with a different CA bundle or endpoint, and the agent should trust the hub's settings, since the hub has more accurate information on how to access the spoke apiserver.

@zhujian7 I wonder whether the decision to update the cluster should be returned by the decorator rather than compared in the controller.

Member

@zhujian7 zhujian7 Aug 26, 2025


Yes, makes sense. I think we can let the decorator interface return an additional modified (bool) value, like the openshift library-go does.
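A rough sketch of what such a decorator signature could look like, loosely modeled on the openshift library-go "modified" convention; the type and function names are hypothetical.

package sketch

import clusterv1 "open-cluster-management.io/api/cluster/v1"

// ManagedClusterMutator is a hypothetical decorator variant that also reports
// whether it changed the object, similar in spirit to openshift/library-go apply helpers.
type ManagedClusterMutator func(cluster *clusterv1.ManagedCluster) (*clusterv1.ManagedCluster, bool)

// labelMutator reports modified=true only when it adds a label that is missing;
// it never overwrites a value that already exists on the hub.
func labelMutator(labels map[string]string) ManagedClusterMutator {
	return func(cluster *clusterv1.ManagedCluster) (*clusterv1.ManagedCluster, bool) {
		modified := false
		for k, v := range labels {
			if _, exists := cluster.Labels[k]; exists {
				continue
			}
			if cluster.Labels == nil {
				cluster.Labels = map[string]string{}
			}
			cluster.Labels[k] = v
			modified = true
		}
		return cluster, modified
	}
}

The creating controller could then issue an update only when at least one mutator reports modified=true.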

@ramekris3163
Contributor Author

The main outstanding part is that we cannot let the agent update labels/annotations/clientConfigs arbitrarily, since these can also be set by users or other actors on the hub. We need to define a stricter rule for when the agent sets these values on the hub.

  1. It can only be set when creating.
  2. If it is set after creating, there must be a clearer rule to avoid conflict.

@qiujian16 can we then only enable setting labels/annotations/clientConfigs during creation, and never allow them to be updated later?

Member


Seems not necessary?

@qiujian16
Member

@qiujian16 can we then only enable setting labels/annotations/clientConfigs during creation, and never allow them to be updated later?

I think there is a case where the user creates the ManagedCluster on the hub first without clientConfigs, and then the agent should update that when it registers, which requires an update; but we do not want it to be updated if the same endpoint already exists in the clientConfigs. This has not been well documented.

On the other hand, I think it is fine to only set label/annotation when creating.
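A sketch of that merge rule, under the assumption that the agent only appends its client config when the endpoint URL is not already present; the helper name and the returned bool are illustrative, not code from this PR.

package sketch

import clusterv1 "open-cluster-management.io/api/cluster/v1"

// mergeClientConfig appends the agent's client config only when the endpoint URL is
// not already present, so entries set on the hub (possibly with a different CA bundle)
// are left alone. The bool reports whether an update is needed.
func mergeClientConfig(existing []clusterv1.ClientConfig, agent clusterv1.ClientConfig) ([]clusterv1.ClientConfig, bool) {
	for _, cfg := range existing {
		if cfg.URL == agent.URL {
			return existing, false // same endpoint already registered; trust the hub's copy
		}
	}
	return append(existing, agent), true
}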

@openshift-merge-robot
Contributor

PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
