
Conversation

timlee0212 (Contributor)

πŸ“Œ Description

This PR introduces a series of optimizations to the trtllm_mnnvl_allreduce. These optimizations were also introduced upstream in NVIDIA/TensorRT-LLM#5934 and NVIDIA/TensorRT-LLM#6237.

  • Use a GPU-side array to pass the unicast (UC) pointers in the multicast (MC) memory.
  • Use L2 reduction to replace the expensive atomicAdd (see the sketch after this list).
  • Adjust the point of synchronization for the buffer flag read.
  • Optimize the Lamport polling performance.
  • Clean up the code structure.
  • Enhance the unit tests to cover more test cases.
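
A minimal CUDA sketch of the atomicAdd-to-red substitution, for illustration only: the helper name, the architecture guard, and the unscoped `red.global.add.f32` form are assumptions here, not the PR's actual code (a review below mentions a GPU-scoped variant). `red` performs the addition in place without returning the old value, which is exactly the case an allreduce accumulation hits, so it avoids the read-modify-write round trip that `atomicAdd` pays when its result is unused.

```cuda
#include <cuda_runtime.h>

// Fire-and-forget reduction: unlike atomicAdd, red does not return the old
// value, so the hardware can complete the add near the L2 cache without
// shipping data back to the SM. Scope defaults to .gpu when omitted.
__device__ __forceinline__ void redAddF32(float* addr, float val) {
#if defined(__CUDA_ARCH__) && (__CUDA_ARCH__ >= 700)  // guard is illustrative
  asm volatile("red.global.add.f32 [%0], %1;" ::"l"(addr), "f"(val) : "memory");
#else
  atomicAdd(addr, val);  // fallback on older architectures
#endif
}

// Minimal usage: every thread folds one element into a global accumulator.
__global__ void accumulate(float* sum, float const* vals, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) redAddF32(sum, vals[i]);
}
```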

πŸ” Related Issues

πŸš€ Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

βœ… Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

πŸ§ͺ Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

@gemini-code-assist (bot) left a comment

Summary of Changes

Hello @timlee0212, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on significantly improving the performance and robustness of the TRTLLM MNNVL Allreduce implementation. It achieves this through a combination of memory management enhancements, optimized synchronization primitives, and a refactored kernel architecture, all while expanding test coverage to ensure reliability.

Highlights

  • Performance Optimization: Implemented several key optimizations for the TRTLLM MNNVL Allreduce operation, including using GPU arrays for passing unicast (UC) pointers, replacing expensive atomicAdd operations with more efficient L2 reduction, and refining the synchronization points for buffer flag reads.
  • Code Structure and Maintainability: Introduced a new LamportFlags CUDA struct to encapsulate and manage buffer state and synchronization logic within the kernels, leading to cleaner and more organized code. This also includes adding support for __nv_half (FP16) data types in utility functions.
  • Improved Lamport Polling: Enhanced the Lamport polling mechanism by optimizing the synchronization logic within the CUDA kernels, leveraging vectorized loads (loadfloat2) and the new LamportFlags struct for more efficient buffer access and state updates (a simplified polling sketch follows this list).
  • Enhanced Testing: Expanded the unit tests for trtllm_mnnvl_allreduce to cover a wider range of sequence lengths and to ensure proper workspace management and cleanup across multiple test iterations.
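
To make the polling pattern concrete, here is a simplified sketch, not the PR's kernel: the sentinel choice (negative zero), the helper names, and the single-slot layout are all assumptions. The idea is that Lamport buffers are pre-filled with a sentinel so a reader can detect a peer's write without a separate flag, and a vectorized float2 load halves the number of polling transactions relative to scalar reads.

```cuda
#include <cuda_runtime.h>

// Sentinel marking "not yet written" (assumed convention: -0.0f, so that
// +0.0f remains a representable payload value).
__device__ __forceinline__ bool isNegZero(float v) {
  return __float_as_uint(v) == 0x80000000u;
}

// Volatile vectorized load: one 64-bit transaction fetches two floats and
// cannot be cached in registers across loop iterations.
__device__ __forceinline__ float2 loadFloat2Volatile(float2 const* addr) {
  float2 v;
  asm volatile("ld.volatile.global.v2.f32 {%0, %1}, [%2];"
               : "=f"(v.x), "=f"(v.y)
               : "l"(addr));
  return v;
}

// Spin until the peer has overwritten both lanes, then return the payload.
__device__ float2 pollLamport(float2 const* slot) {
  float2 v = loadFloat2Volatile(slot);
  while (isNegZero(v.x) || isNegZero(v.y)) {
    v = loadFloat2Volatile(slot);
  }
  return v;
}
```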

@timlee0212 (Contributor, Author)

/gemini review

@gemini-code-assist (bot) left a comment

Code Review

This pull request introduces a series of significant optimizations and cleanups to the trtllm_mnnvl_allreduce functionality. The changes, including the use of GPU-side pointer arrays, L2 atomics for reduction, and improved Lamport polling, align well with the stated goals of enhancing performance. The refactoring of the CUDA kernel logic into a LamportFlags struct greatly improves readability and maintainability. Furthermore, the enhancements to the unit tests, particularly the parameterization with lists of sequence lengths, make the testing more robust and representative of real-world usage.

My review has identified one critical issue concerning memory safety in the CUDA kernel that needs to be addressed, and one medium-severity issue regarding a potentially confusing function name that could impact future maintenance. Overall, this is a high-quality contribution with substantial improvements.

@gemini-code-assist (bot) left a comment

Code Review

This pull request introduces a series of optimizations for the TRT-LLM MNNVL Allreduce functionality, enhancing performance and maintainability. Key improvements include using GPU-side pointer arrays, replacing atomicAdd with L2 reductions, and optimizing Lamport polling. I have identified a potential race condition in the synchronization logic, along with opportunities for improvement in memory management and pointer usage.
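
As an illustration of the GPU-side pointer-array idea (all names here are hypothetical, and this generic version ignores the multicast-memory placement the PR actually uses): the per-rank unicast pointers are published once in device memory, so kernels receive a single float** instead of a per-rank argument list that grows with world size.

```cuda
#include <cuda_runtime.h>
#include <vector>

// Publish each rank's unicast pointer in one device-side array (hypothetical
// setup helper; the PR keeps these in multicast memory instead).
float** makeDeviceUcPtrArray(std::vector<float*> const& ucPtrsHost) {
  float** ucPtrsDev = nullptr;
  cudaMalloc(&ucPtrsDev, ucPtrsHost.size() * sizeof(float*));
  cudaMemcpy(ucPtrsDev, ucPtrsHost.data(), ucPtrsHost.size() * sizeof(float*),
             cudaMemcpyHostToDevice);
  return ucPtrsDev;
}

// Kernels index the array by rank rather than taking N pointer arguments.
__global__ void gatherSum(float** ucPtrs, int worldSize, int n, float* out) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i >= n) return;
  float acc = 0.f;
  for (int r = 0; r < worldSize; ++r) acc += ucPtrs[r][i];
  out[i] = acc;
}
```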

@timlee0212 (Contributor, Author)

/gemini review

@gemini-code-assist (bot) left a comment

Code Review

This pull request introduces a series of significant optimizations for the TRT-LLM MNNVL Allreduce implementation. The changes are well-structured and improve both performance and code clarity.

Key improvements include:

  • Using a GPU-side array for unicast pointers, which is more efficient than passing them from the host.
  • Replacing atomicAdd with more performant red.global.gpu.add instructions on newer CUDA architectures.
  • Optimizing the Lamport polling loop with vectorized loads.
  • Encapsulating buffer flag management into a LamportFlags struct, which greatly improves readability and maintainability of the CUDA kernels (a hypothetical sketch follows this list).
  • Enhancing the test suite to cover a wider range of sequence lengths and improve its structure.
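
To make the encapsulation concrete, here is a hypothetical shape for such a struct; the field names, constructor, and triple-buffer rotation are assumptions, not the PR's actual definition. The value is that kernels construct one of these from the raw flag array and stop juggling buffer indices and offsets by hand.

```cuda
#include <cstdint>

// Hypothetical LamportFlags-style helper: owns the indices of the Lamport
// buffers and the flag words the kernels poll on.
struct LamportFlags {
  uint32_t* buffer_flags;  // flag words shared across ranks
  uint32_t cur_idx;        // Lamport buffer used by this invocation
  uint32_t clear_idx;      // buffer to scrub back to the sentinel value

  __device__ explicit LamportFlags(uint32_t* flags)
      : buffer_flags(flags), cur_idx(flags[0]), clear_idx(flags[1]) {}

  // Rotate the buffer ring once per launch; call from a single elected
  // thread after all blocks have finished writing.
  __device__ void advance() {
    buffer_flags[0] = (cur_idx + 1) % 3;  // triple buffering assumed
    buffer_flags[1] = (clear_idx + 1) % 3;
  }
};
```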

The code is of high quality. I've found one critical issue in the test code where a method is called that was removed as part of this refactoring. Please see the detailed comment.

@yzh119 (Collaborator) left a comment

LGTM. There is some duplicated code between mnnvl_allreduce and the single-node allreduce, but let's merge this first and refactor it in later PRs.

@yzh119 merged commit 43e08e9 into flashinfer-ai:main on Jul 25, 2025
2 checks passed
@timlee0212 deleted the mnnvl_opt branch on Jul 25, 2025 at 19:58
Edenzzzz pushed a commit to Edenzzzz/flashinfer that referenced this pull request Jul 27, 2025