
Conversation

Jeffwan
Collaborator

@Jeffwan Jeffwan commented Jul 22, 2025

Pull Request Description

Support parallelism of single worker

In large EP mode or with full-parameter models (e.g., DeepSeek-R1 or Kimi K2), single-instance cross-node xPyD support is required. Although the current StormService technically supports cross-node deployment through role.replicas, it is limited to the 1P1D pattern. This approach has several limitations. Firstly, replicas was not originally designed to represent parallelism, making it a workaround that complicates other processes such as upgrades. Secondly, the semantic clarity is compromised, leading to potential confusion and operational fragility.

type RoleSpec struct {
    Name string `json:"name,omitempty"`

    // Replicas is the number of desired replicas.
    // +optional
    Replicas *int32 `json:"replicas,omitempty"`

+   // PodGroupSize is the number of pods to form a minimum role instance.
+   // +optional
+   PodGroupSize *int32 `json:"podGroupSize,omitempty"`
}
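On the CRD side, a role using the new field might be declared like this. This is only a sketch: everything except podGroupSize is illustrative, and the surrounding spec is abbreviated.

```yaml
roles:
  - name: prefill
    replicas: 2        # two logical replicas of the role
    podGroupSize: 2    # each replica is a group of 2 pods (e.g. one per node)
    template:
      spec:
        containers:
          - name: worker
            image: example/inference-engine:latest   # illustrative image
```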

Previous names

llm-xpyd-roleset-gdkvq-decode-7dcf585456-0    1/1     Running   0          46s
llm-xpyd-roleset-gdkvq-decode-7dcf585456-1    1/1     Running   0          46s
llm-xpyd-roleset-gdkvq-prefill-5c8fd867f6-0   1/1     Running   0          46s
llm-xpyd-roleset-gdkvq-prefill-5c8fd867f6-1   1/1     Running   0          46s
llm-xpyd-roleset-gdkvq-prefill-5c8fd867f6-2   1/1     Running   0          46s
llm-xpyd-roleset-gdkvq-prefill-5c8fd867f6-3   1/1     Running   0          7s

New names, with the new pod-group-index dimension:

NAME                                            READY   STATUS        RESTARTS   AGE
llm-xpyd-roleset-v42l6-decode-7dcf585456-0-0    1/1     Running       0          2m49s
llm-xpyd-roleset-v42l6-decode-7dcf585456-0-1    1/1     Running       0          2m49s
llm-xpyd-roleset-v42l6-decode-7dcf585456-0-2    1/1     Running       0          2m49s
llm-xpyd-roleset-v42l6-decode-7dcf585456-0-3    1/1     Running       0          2m49s
llm-xpyd-roleset-v42l6-prefill-5c8fd867f6-0-0   1/1     Running       0          2m49s
llm-xpyd-roleset-v42l6-prefill-5c8fd867f6-0-1   1/1     Running       0          2m49s
llm-xpyd-roleset-v42l6-prefill-5c8fd867f6-1-0   1/1     Terminating   0          2m49s
llm-xpyd-roleset-v42l6-prefill-5c8fd867f6-1-1   1/1     Terminating   0          2m49s
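The two trailing indices in the new names are the role-replica-index and the pod-group-index. As a sketch of the naming scheme (this helper is hypothetical, not the controller's actual code):

```go
package main

import "fmt"

// buildPodName sketches the new naming scheme:
// <roleset>-<role>-<templateHash>-<replicaIndex>-<podGroupIndex>
func buildPodName(roleSet, role, templateHash string, replicaIndex, podGroupIndex int) string {
	return fmt.Sprintf("%s-%s-%s-%d-%d", roleSet, role, templateHash, replicaIndex, podGroupIndex)
}

func main() {
	// Third pod of the first decode group in the listing above.
	fmt.Println(buildPodName("llm-xpyd-roleset-v42l6", "decode", "7dcf585456", 0, 2))
	// → llm-xpyd-roleset-v42l6-decode-7dcf585456-0-2
}
```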

After the change, workers should use the following addressing scheme for communication:

    python3 ....
        --disaggregation-mode prefill \
        --trust-remote-code \
        --dist-init-addr "${ROLESET_NAME}-${ROLE_NAME}-${ROLE_TEMPLATE_HASH}-${ROLE_REPLICA_INDEX}-0.${STORM_SERVICE_NAME}.default.svc.cluster.local:5000" \
        --nnodes 2 \
        --node-rank $POD_GROUP_INDEX \
        --tp-size 2 \
        --log-level debug
    Environment:
      POD_GROUP_INDEX:     0
      ROLESET_INDEX:
      ROLESET_NAME:        llm-xpyd-roleset-dc8bz
      ROLE_NAME:           prefill
      ROLE_REPLICA_INDEX:  2
      ROLE_TEMPLATE_HASH:  5c8fd867f6
      STORM_SERVICE_NAME:  llm-xpyd
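Given the environment listing above, the rendezvous address in the command (which always targets the group's pod-group-index 0 peer) can be assembled like this. A sketch only; the values mirror the listing above.

```shell
#!/bin/sh
# Values mirror the environment listing above (illustrative).
ROLESET_NAME="llm-xpyd-roleset-dc8bz"
ROLE_NAME="prefill"
ROLE_TEMPLATE_HASH="5c8fd867f6"
ROLE_REPLICA_INDEX="2"
STORM_SERVICE_NAME="llm-xpyd"

# All pods in a group rendezvous at the pod with pod-group-index 0.
DIST_INIT_ADDR="${ROLESET_NAME}-${ROLE_NAME}-${ROLE_TEMPLATE_HASH}-${ROLE_REPLICA_INDEX}-0.${STORM_SERVICE_NAME}.default.svc.cluster.local:5000"
echo "$DIST_INIT_ADDR"
# → llm-xpyd-roleset-dc8bz-prefill-5c8fd867f6-2-0.llm-xpyd.default.svc.cluster.local:5000
```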

Related Issues

Resolves: #[Insert issue number(s)]

Important: Before submitting, please complete the description above and review the checklist below.


Contribution Guidelines (Expand for Details)

We appreciate your contribution to aibrix! To ensure a smooth review process and maintain high code quality, please adhere to the following guidelines:

Pull Request Title Format

Your PR title should start with one of these prefixes to indicate the nature of the change:

  • [Bug]: Corrections to existing functionality
  • [CI]: Changes to build process or CI pipeline
  • [Docs]: Updates or additions to documentation
  • [API]: Modifications to aibrix's API or interface
  • [CLI]: Changes or additions to the Command Line Interface
  • [Misc]: For changes not covered above (use sparingly)

Note: For changes spanning multiple categories, use multiple prefixes in order of importance.

Submission Checklist

  • PR title includes appropriate prefix(es)
  • Changes are clearly explained in the PR description
  • New and existing tests pass successfully
  • Code adheres to project style and best practices
  • Documentation updated to reflect changes (if applicable)
  • Thorough testing completed, no regressions introduced

By submitting this PR, you confirm that you've read these guidelines and your changes align with the project's contribution standards.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @Jeffwan, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces the concept of PodGroupSize to the Role definition, allowing for the orchestration of groups of pods as a single logical unit within a worker role. This change enables more granular control over parallelism and resource allocation, moving beyond simple replica counts to support more complex workload patterns. The modifications span API definitions, CRD updates, and core controller logic for scaling, rollout, and status reporting.

Highlights

  • API Extension: Introduced a new PodGroupSize field to the RoleSpec in api/orchestration/v1alpha1/roleset_types.go. This field allows defining the number of pods that form a minimum role instance, enabling parallelism within a single worker role.
  • CRD Updates: Updated the Custom Resource Definition (CRD) schemas for RoleSet and StormService in config/crd/orchestration/ to include the new podGroupSize property, ensuring it's recognized by Kubernetes.
  • Pod Orchestration Logic: Modified the StatefulRoleSyncer in pkg/controller/roleset/rolesyncer.go to incorporate PodGroupSize into scaling and rollout operations. This includes adjusting pod creation budgets, creating multiple pods to satisfy a group, and modifying deletion/readiness checks to operate on pod groups rather than individual pods.
  • Pod Indexing and Environment Variables: Enhanced pod rendering and environment variable injection in pkg/controller/roleset/utils.go to support a new PodGroupIndex. This ensures pods within a group are uniquely identified and can receive their specific group index via labels and environment variables.
  • Status Calculation Adjustment: Updated the RoleSetReconciler in pkg/controller/roleset/sync.go to correctly calculate readyReplicas, updatedReplicas, and totalReplicas by dividing the raw pod counts by the PodGroupSize, reflecting the new group-based scaling.
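The status calculation in the last bullet amounts to integer division of raw pod counts by the group size. A minimal sketch (names are illustrative, not the PR's actual fields):

```go
package main

import "fmt"

// groupReplicas converts a raw pod count into a group-level replica count
// by dividing by podGroupSize, as described for readyReplicas/updatedReplicas.
func groupReplicas(podCount, podGroupSize int) int {
	if podGroupSize <= 0 {
		podGroupSize = 1 // defaulting mirrors getRolePodGroupSize
	}
	return podCount / podGroupSize
}

func main() {
	// 8 ready pods with podGroupSize 4 → 2 ready group-level replicas.
	fmt.Println(groupReplicas(8, 4))
	// → 2
}
```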

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces support for PodGroupSize at the role level, allowing a single logical replica to consist of multiple pods. The changes span the API, CRD definitions, and controller logic, primarily affecting the StatefulRoleSyncer. My review focuses on the correctness and maintainability of the new logic. I've identified a potential issue in the pod creation logic for partial groups, along with several opportunities for code simplification, clarification, and a significant gap in test coverage for the new functionality.

Comment on lines +74 to 86
missingCount := podGroupSize - len(slots[i])
for j := 0; j < missingCount; j++ {
    pod, err := ctrlutil.GetPodFromTemplate(&role.Template, roleSet, metav1.NewControllerRef(roleSet, orchestrationv1alpha1.SchemeGroupVersion.WithKind(orchestrationv1alpha1.RoleSetKind)))
    if err != nil {
        return false, err
    }
    if podGroupSize > 1 {
        renderStormServicePod(roleSet, role, pod, &i, &j)
    } else {
        renderStormServicePod(roleSet, role, pod, &i, nil)
    }
    podsToCreate = append(podsToCreate, pod)
}
Contributor

high

The logic for creating missing pods within a pod group may be incorrect. The loop for j := 0; j < missingCount; j++ uses the loop counter j as the podGroupIndex. This is problematic when there are already some pods in the slot, as it can lead to duplicate indices and pod creation failures. For example, if podGroupSize is 3 and a pod with pod-group-index=1 already exists, this loop might try to create a new pod with pod-group-index=1, which would fail.

Consider identifying which indices from 0 to podGroupSize-1 are missing and use those for the new pods.

Here is a conceptual fix:

  1. Iterate through the existing pods in slots[i] and build a set of used podGroupIndex values.
  2. Iterate from j = 0 to podGroupSize - 1.
  3. If j is not in the set of used indices, create a new pod with podGroupIndex = j.
  4. Stop after creating missingCount pods.

@@ -113,7 +113,7 @@ func TestRenderStormServicePod_WithRoleIndex(t *testing.T) {
     Spec: *roleSet.Spec.Roles[0].Template.Spec.DeepCopy(),
 }

-renderStormServicePod(roleSet, &roleSet.Spec.Roles[0], pod, &roleIndex)
+renderStormServicePod(roleSet, &roleSet.Spec.Roles[0], pod, &roleIndex, nil)
Contributor

high

While this test and others were updated to pass nil for podGroupIndex, the PR is missing new tests for the core functionality of pod groups. Please add tests for renderStormServicePod where podGroupIndex is not nil to verify that pod names, labels, and environment variables are correctly set for pods within a group. Also, the new logic in StatefulRoleSyncer should be covered by unit tests.

Comment on lines +214 to +216
for j := 0; j < podGroupSize; j++ {
    toDelete = append(toDelete, slots[i][j])
}
Contributor

medium

This loop to delete pods is unnecessarily complex and can be simplified. Since len(slots[i]) is guaranteed to be podGroupSize by the check on line 183, you can directly append the whole slice.

toDelete = append(toDelete, slots[i]...)

Comment on lines +287 to +297
// compare role-replica-index
indexI := getLabelInt(pods[i], constants.RoleReplicaIndexAnnotationKey)
indexJ := getLabelInt(pods[j], constants.RoleReplicaIndexAnnotationKey)
if indexI != indexJ {
    return indexI < indexJ
}

// if role-replica-index is same,compare pod-group-index
groupIndexI := getLabelInt(pods[i], constants.RolePodGroupIndexLabelKey)
groupIndexJ := getLabelInt(pods[j], constants.RolePodGroupIndexLabelKey)
return groupIndexI < groupIndexJ
Contributor

medium

The sorting logic uses constants.RoleReplicaIndexAnnotationKey to get the replica index from pod labels via getLabelInt. While RoleReplicaIndexAnnotationKey and RoleReplicaIndexLabelKey currently have the same value, this is confusing because getLabelInt operates on labels, not annotations. For clarity and to avoid potential bugs if the key values diverge, it's better to use constants.RoleReplicaIndexLabelKey here.

Also, there is a typo in the comment on line 294: same,compare should be same, compare.

Suggested change

-indexI := getLabelInt(pods[i], constants.RoleReplicaIndexAnnotationKey)
-indexJ := getLabelInt(pods[j], constants.RoleReplicaIndexAnnotationKey)
+indexI := getLabelInt(pods[i], constants.RoleReplicaIndexLabelKey)
+indexJ := getLabelInt(pods[j], constants.RoleReplicaIndexLabelKey)

Comment on lines +426 to +434
func getRolePodGroupSize(role *orchestrationv1alpha1.RoleSpec) int {
    var podGroupSize int
    if role.PodGroupSize == nil {
        podGroupSize = 1
    } else {
        podGroupSize = int(*role.PodGroupSize)
    }
    return podGroupSize
}
Contributor

medium

This function can be written more concisely.

func getRolePodGroupSize(role *orchestrationv1alpha1.RoleSpec) int {
	if role.PodGroupSize != nil {
		return int(*role.PodGroupSize)
	}
	return 1
}

@Jeffwan
Collaborator Author

Jeffwan commented Jul 23, 2025

please help take a look @googs1025

@googs1025
Collaborator

please help take a look @googs1025

Thanks for this, I'll take a look tomorrow 😄

@@ -60,21 +60,32 @@ func (s *StatefulRoleSyncer) Scale(ctx context.Context, roleSet *orchestrationv1
// delete pods that cannot find the corresponding slot
slots, toDelete := s.podSlotForRole(role, activePods)
Collaborator

Not sure: if there are multiple Pods in a slot, do they need to be sorted by podGroupIndex (maybe)? 🤔

Collaborator Author

A slot could hold multiple pods in the past too, from different revisions. Now it's more complex: a slot also holds pods with different group indices, an additional dimension.

Collaborator Author

Sorting inside the slots is not necessary at this moment. The sequence matters when the controller tries to delete pods.

@@ -60,21 +60,32 @@ func (s *StatefulRoleSyncer) Scale(ctx context.Context, roleSet *orchestrationv1
// delete pods that cannot find the corresponding slot
slots, toDelete := s.podSlotForRole(role, activePods)
podsToDelete = append(podsToDelete, toDelete...)
createBudget := int32(len(slots)) + MaxSurge(role) - int32(len(activePods)) - int32(len(terminatingPods))
podGroupSize := getRolePodGroupSize(role)
activeReplicas := len(activePods) / podGroupSize // risk that partial pod within group is active
Collaborator

The activeReplicas count seems slightly off. If not all pods in a slot are ready, dividing the raw pod count by podGroupSize is a bit inaccurate. 🤔

maybe this way?

activeReplicas := 0
for _, slot := range slots {
    if len(slot) >= podGroupSize {
        activeReplicas++
    }
}
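The same suggestion as a runnable sketch, with slots reduced to pod counts for illustration (not the controller's actual types):

```go
package main

import "fmt"

// countFullGroups counts slots whose pod count reaches podGroupSize,
// i.e. replicas whose whole pod group is present, instead of dividing
// the total pod count by the group size.
func countFullGroups(slotSizes []int, podGroupSize int) int {
	active := 0
	for _, n := range slotSizes {
		if n >= podGroupSize {
			active++
		}
	}
	return active
}

func main() {
	// Three slots holding 2, 1, and 2 pods; podGroupSize 2 → 2 full groups,
	// whereas len(activePods)/podGroupSize would report 5/2 = 2 as well here,
	// but would over-count when partial pods from two groups add up.
	fmt.Println(countFullGroups([]int{2, 1, 2}, 2))
	// → 2
}
```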

Collaborator Author

It used to compute activeReplicas from activePods directly. I agree this is a little rough. We can group the active pods by replica and then use >= podGroupSize to decide which replicas count as active.

return false, err
// indexing is not guaranteed in this case..
missingCount := podGroupSize - len(slots[i])
for j := 0; j < missingCount; j++ {
Collaborator

Here we just fill missing pods in the pod group in sequence; we don't seem to consider which existing PodGroupIndex values are missing. I'm not sure whether this could lead to missing or non-contiguous PodGroupIndex values?

Collaborator Author

Let me double check this case.

@Jeffwan
Collaborator Author

Jeffwan commented Aug 19, 2025

The current approach is a bit hacky and makes pod group management extremely hard.

Let's close this PR. /cc @googs1025

@Jeffwan Jeffwan closed this Aug 19, 2025