[Fix][nvbug 5401163][nvbug 5404726][Qwen3] Fix bug of MoE on tp > 1 with trtllm moe backend #6235


Merged: 3 commits into NVIDIA:main on Jul 24, 2025

Conversation

@byshiue (Collaborator) commented Jul 21, 2025

Unwaive the following tests (a reconstructed sketch of the corresponding waives.txt entries follows the list):

  • accuracy/test_llm_api_pytorch.py::TestQwen3_235B_A22B::test_nvfp4[latency_moe_trtllm]
  • accuracy/test_llm_api_pytorch.py::TestQwen3_30B_A3B::test_nvfp4[dep4_latency_moe_trtllm]
  • accuracy/test_llm_api_pytorch.py::TestQwen3_30B_A3B::test_nvfp4[tep4_latency_moe_trtllm]
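
For reference, the removed entries in waives.txt would have looked roughly like the sketch below. The "<test id> SKIP (<tracking link>)" layout follows the file's usual convention, and the bug-to-test mapping is an assumption based on the nvbugs in the PR title, not a copy of the actual diff:

```
accuracy/test_llm_api_pytorch.py::TestQwen3_235B_A22B::test_nvfp4[latency_moe_trtllm] SKIP (https://nvbugs/5401163)
accuracy/test_llm_api_pytorch.py::TestQwen3_30B_A3B::test_nvfp4[dep4_latency_moe_trtllm] SKIP (https://nvbugs/5404726)
accuracy/test_llm_api_pytorch.py::TestQwen3_30B_A3B::test_nvfp4[tep4_latency_moe_trtllm] SKIP (https://nvbugs/5404726)
```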

Summary by CodeRabbit

  • New Features

    • Added a new attribute to improve module preloading behavior for certain model configurations.
  • Documentation

    • Updated comments to clarify supported models for a specific parameter.
  • Tests

    • Re-enabled several previously skipped tests for Qwen3 models, ensuring broader test coverage.
    • Added a new test for the Qwen3-30B-A3B model using the Eagle decoding algorithm.
    • Included a new accuracy reference entry for the Eagle decoding algorithm on Qwen3-30B-A3B.

@byshiue requested a review from a team as a code owner July 21, 2025 22:04
@byshiue requested reviews from brb-nv and nv-yilinf July 21, 2025 22:04
@byshiue (Collaborator, Author) commented Jul 21, 2025

/bot run

@coderabbitai bot (Contributor) commented Jul 21, 2025

Walkthrough

The changes introduce a new attribute, preload_weight_modules, to the Qwen3MoEModel and propagate it to Qwen3MoeForCausalLM. A comment is updated to include "Qwen3" alongside "llama4" for a workaround in _load_weights_impl_v2. Three test skip entries related to Qwen3 nvfp4 tests are removed. Additionally, a new Eagle decoding algorithm entry and a corresponding test are added for Qwen3-30B-A3B, and a conditional skip is removed from a Qwen3-235B-A22B test.
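
To make the loading behavior concrete, here is a minimal standalone sketch of the preload_weight_modules pattern. The class names mirror the PR, but the value of the list, the property-based propagation, and the omission of the real base classes are assumptions for illustration, not the actual TensorRT-LLM code:

```python
class Qwen3MoEModel:
    # Modules the generic weight loader should materialize before its regular
    # per-module pass; the real list is not shown on this page, so "experts"
    # is purely illustrative.
    preload_weight_modules = ["experts"]


class Qwen3MoeForCausalLM:
    def __init__(self):
        self.model = Qwen3MoEModel()

    @property
    def preload_weight_modules(self):
        # Propagated so a shared loading path such as _load_weights_impl_v2
        # can apply the same preload workaround it already uses for llama4.
        return self.model.preload_weight_modules


if __name__ == "__main__":
    print(Qwen3MoeForCausalLM().preload_weight_modules)  # ['experts']
```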

Changes

  • tensorrt_llm/_torch/models/modeling_qwen3_moe.py: Added the preload_weight_modules attribute to Qwen3MoEModel and propagated it to Qwen3MoeForCausalLM.
  • tensorrt_llm/_torch/models/modeling_utils.py: Updated the comment in _load_weights_impl_v2 to include "Qwen3" alongside "llama4" for the workaround.
  • tests/integration/test_lists/waives.txt: Removed three skip entries for Qwen3 nvfp4-related tests.
  • tests/integration/defs/accuracy/references/gsm8k.yaml: Added an Eagle decoding algorithm reference entry for Qwen3-30B-A3B with accuracy 83.43 (a sketch of such an entry follows the list).
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py: Added a test_eagle3 method for Qwen3-30B-A3B; removed a conditional skip in test_nvfp4 for Qwen3-235B-A22B.
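
For context, an accuracy reference entry in that file pairs a model key with an algorithm and an expected score. A hypothetical sketch of the added entry follows; the key names and layout are recalled from the file's usual pattern, and only the model, algorithm, and 83.43 value come from this PR:

```yaml
# Hypothetical sketch; not copied from the diff.
Qwen3/Qwen3-30B-A3B:
  - spec_dec_algo: Eagle
    accuracy: 83.43
```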

Estimated code review effort

2 (~15 minutes)

Possibly related PRs

Suggested labels

  • Community want to contribute

Suggested reviewers

  • nv-guomingz
  • litaotju

Poem

A tweak for Qwen3, so neat and precise,
With modules to preload, and comments concise.
Skipped tests now run, the waivers are gone—
The code hops ahead, like a rabbit at dawn!
🐇✨


📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 60714eb and 1966e99.

📒 Files selected for processing (5)
  • tensorrt_llm/_torch/models/modeling_qwen3_moe.py (2 hunks)
  • tensorrt_llm/_torch/models/modeling_utils.py (1 hunks)
  • tests/integration/defs/accuracy/references/gsm8k.yaml (1 hunks)
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py (1 hunks)
  • tests/integration/test_lists/waives.txt (0 hunks)
💤 Files with no reviewable changes (1)
  • tests/integration/test_lists/waives.txt
✅ Files skipped from review due to trivial changes (2)
  • tests/integration/defs/accuracy/references/gsm8k.yaml
  • tensorrt_llm/_torch/models/modeling_utils.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • tensorrt_llm/_torch/models/modeling_qwen3_moe.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (1)
tests/integration/defs/accuracy/test_llm_api_pytorch.py (1)

1759-1782: LGTM! Well-structured Eagle3 decoding test.

The test method is properly configured with appropriate settings for Eagle3 speculative decoding (a sketch of this configuration follows the list):

  • Correctly disables overlap scheduler and block reuse for speculative decoding
  • Uses multiple batch sizes in CUDA graph configuration
  • The eagle3_one_model=True parameter and draft length of 1 are appropriate for Eagle3
  • Follows established patterns from other Eagle tests in the codebase
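
A rough sketch of the configuration described above, assuming the public LLM API names (LLM, EagleDecodingConfig, KvCacheConfig, CudaGraphConfig) and parameter spellings; treat the actual test in test_llm_api_pytorch.py as the source of truth:

```python
from tensorrt_llm import LLM
from tensorrt_llm.llmapi import (CudaGraphConfig, EagleDecodingConfig,
                                 KvCacheConfig)


def run_eagle3_sketch(target_model_dir: str, eagle_model_dir: str) -> None:
    # Overlap scheduling and KV-cache block reuse are disabled because they
    # are not compatible with this speculative-decoding setup.
    kv_cache_config = KvCacheConfig(enable_block_reuse=False)

    # One-model Eagle3 with a draft length of 1, as described in the review note.
    spec_config = EagleDecodingConfig(
        max_draft_len=1,
        speculative_model_dir=eagle_model_dir,
        eagle3_one_model=True,
    )

    with LLM(
        model=target_model_dir,
        disable_overlap_scheduler=True,
        kv_cache_config=kv_cache_config,
        speculative_config=spec_config,
        # Multiple batch sizes captured as CUDA graphs (values illustrative).
        cuda_graph_config=CudaGraphConfig(batch_sizes=[1, 2, 4]),
    ) as llm:
        # The real test evaluates a GSM8K accuracy task against the LLM; a
        # single generation call stands in for that here.
        print(llm.generate(["Question: 2 + 2 = ?\nAnswer:"]))
```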

@tensorrt-cicd (Collaborator) commented:

PR_Github #12480 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator) commented:

PR_Github #12480 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #9285 completed with status: 'FAILURE'

@byshiue (Collaborator, Author) commented Jul 22, 2025

/bot run

@tensorrt-cicd (Collaborator) commented:

PR_Github #12592 [ run ] triggered by Bot

@byshiue (Collaborator, Author) commented Jul 22, 2025

/bot run --disable-fail-fast

@byshiue changed the title from "[Fix][Nvbug 5401163] Fix bug of MoE on tp > 1 with trtllm moe backend" to "[Fix][nvbug 5401163][nvbug 5404726][Qwen3] Fix bug of MoE on tp > 1 with trtllm moe backend" on Jul 22, 2025
@tensorrt-cicd (Collaborator) commented:

PR_Github #12604 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator) commented:

PR_Github #12592 [ run ] completed with state ABORTED

@byshiue mentioned this pull request Jul 22, 2025
@tensorrt-cicd (Collaborator) commented:

PR_Github #12604 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #9379 completed with status: 'FAILURE'

@byshiue (Collaborator, Author) commented Jul 23, 2025

/bot run

@tensorrt-cicd (Collaborator) commented:

PR_Github #12722 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator) commented:

PR_Github #12722 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #9469 completed with status: 'FAILURE'

@byshiue (Collaborator, Author) commented Jul 23, 2025

/bot run

@tensorrt-cicd (Collaborator) commented:

PR_Github #12744 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator) commented:

PR_Github #12744 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #9488 completed with status: 'SUCCESS'

@shaharmor98 (Collaborator) left a comment:

LGTM

@byshiue merged commit 7b6aadc into NVIDIA:main on Jul 24, 2025
3 checks passed
NVShreyas pushed a commit to NVShreyas/TensorRT-LLM that referenced this pull request Jul 28, 2025
…ith trtllm moe backend (NVIDIA#6235)

Signed-off-by: bhsueh <[email protected]>
Signed-off-by: Shreyas Misra <[email protected]>
Ransiki pushed a commit to Ransiki/TensorRT-LLM that referenced this pull request Jul 29, 2025
…ith trtllm moe backend (NVIDIA#6235)

Signed-off-by: bhsueh <[email protected]>
Signed-off-by: Ransiki Zhang <[email protected]>
lancelly pushed a commit to lancelly/TensorRT-LLM that referenced this pull request Aug 6, 2025
…ith trtllm moe backend (NVIDIA#6235)

Signed-off-by: bhsueh <[email protected]>
Signed-off-by: Lanyu Liao <[email protected]>