
Remove input_sf swizzle for module WideEPMoE #6231


Open
StudyingShao wants to merge 7 commits into main from jiangs/1.0.0rc4/WideEPMoE_rm_swizzle

Conversation

@StudyingShao (Collaborator) commented Jul 21, 2025

Summary by CodeRabbit

  • New Features

    • Added support for specifying whether input scaling factors are swizzled in fused Mixture of Experts (MoE) operations.
    • Introduced a new boolean parameter swizzled_input_sf in both Python and C++ MoE APIs and operator calls.
  • Refactor

    • Updated internal interfaces and operator calls to propagate the new swizzled_input_sf parameter throughout the MoE codebase.
    • Removed redundant scaling factor swizzling in WideEPMoE module and explicitly marked input scaling factors as not swizzled where applicable.
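To illustrate the new knob, here is a minimal sketch of a call into the Python custom op (argument names follow the fused_moe fake-registration signature quoted in the review below; the op path torch.ops.trtllm.fused_moe and all tensor values are placeholders, not code from this PR):

output = torch.ops.trtllm.fused_moe(
    input,                     # activations
    token_selected_experts,
    token_final_scales,
    fc1_expert_weights,
    None,                      # fc1_expert_biases
    fc2_expert_weights,
    None,                      # fc2_expert_biases
    output_dtype=torch.bfloat16,
    quant_scales=quant_scales,
    input_sf=input_sf,         # NVFP4 block scaling factors
    swizzled_input_sf=False,   # WideEPMoE path: sf left in linear layout
)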

Description

Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail-fast on build/test/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
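For example, a few illustrative invocations composed from the flags documented above (the stage names, GPU types, and pipeline id are placeholders):

/bot run
/bot run --disable-fail-fast --gpu-type "A30, H100_PCIe"
/bot run --reuse-test 12345 --stage-list "A10-PyTorch-1"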

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

@StudyingShao StudyingShao requested a review from a team as a code owner July 21, 2025 18:55

coderabbitai bot commented Jul 21, 2025

Walkthrough

A new boolean parameter swizzled_input_sf was introduced and threaded through the Mixture-of-Experts (MoE) codebase, affecting C++ kernels, plugin interfaces, PyTorch custom ops, and Python module calls. This parameter determines whether input scaling factors are swizzled, requiring interface and call-site updates across CUDA kernels, C++ runners, plugin logic, Python bindings, and test/benchmark code.

Changes

Files/Paths and change summaries:

  • cpp/tensorrt_llm/kernels/cutlass_kernels/include/moe_kernels.h, cpp/tensorrt_llm/kernels/cutlass_kernels/include/moe_util_kernels.h: Added the swizzled_input_sf boolean parameter to MoE runner and kernel interfaces.
  • cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu: Propagated swizzled_input_sf through kernel/device functions and MoE runner logic.
  • cpp/tensorrt_llm/thop/moeOp.cpp: Updated FusedMoeRunner methods to accept and forward swizzled_input_sf.
  • cpp/tensorrt_llm/thop/moeUtilOp.cpp: Passed swizzled_input_sf to the kernel launcher in runPermute.
  • cpp/micro_benchmarks/mixtureOfExpertsBackendBenchmarkFixture.h, cpp/tests/unit_tests/kernels/mixtureOfExpertsTest.cu: Updated MoE runner calls to include the swizzled_input_sf argument in benchmarks and tests.
  • cpp/tensorrt_llm/plugins/mixtureOfExperts/mixtureOfExpertsPlugin.cpp: Added the swizzled_input_sf argument to plugin MoE runner calls.
  • tensorrt_llm/_torch/custom_ops/torch_custom_ops.py: Added the swizzled_input_sf argument to the fused_moe Python custom op and passed it to the backend.
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py: Set swizzled_input_sf=True in the fused MoE operator call.
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py: Removed swizzling logic after DeepEP dispatch; set swizzled_input_sf=False in the fused MoE call.

Sequence Diagram(s)

sequenceDiagram
    participant PyTorchModule
    participant PythonCustomOp
    participant FusedMoeRunner
    participant CppMoERunner
    participant CUDAKernel

    PyTorchModule->>PythonCustomOp: fused_moe(..., input_sf, swizzled_input_sf)
    PythonCustomOp->>FusedMoeRunner: runMoe(..., input_sf, swizzled_input_sf)
    FusedMoeRunner->>CppMoERunner: runMoe(..., input_sf, swizzled_input_sf)
    CppMoERunner->>CUDAKernel: expandInputRowsKernelLauncher(..., input_sf, swizzled_input_sf)
    CUDAKernel->>CUDAKernel: Use swizzled_input_sf to select scaling layout

Estimated code review effort

3 (~45 minutes)

Suggested labels

Community want to contribute

Suggested reviewers

  • hlu1
  • Tracin
  • hyukn

Poem

A rabbit hopped through swizzled fields,
Where scaling factors spun on shields.
With kernels tuned and runners neat,
The experts’ mixture can’t be beat!
Now every tensor—swizzled or not—
Will find its place, its perfect spot.
🐇✨


📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d2608f4 and 92792a5.

📒 Files selected for processing (11)
  • cpp/micro_benchmarks/mixtureOfExpertsBackendBenchmarkFixture.h (2 hunks)
  • cpp/tensorrt_llm/kernels/cutlass_kernels/include/moe_kernels.h (2 hunks)
  • cpp/tensorrt_llm/kernels/cutlass_kernels/include/moe_util_kernels.h (1 hunks)
  • cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu (9 hunks)
  • cpp/tensorrt_llm/plugins/mixtureOfExperts/mixtureOfExpertsPlugin.cpp (2 hunks)
  • cpp/tensorrt_llm/thop/moeOp.cpp (6 hunks)
  • cpp/tensorrt_llm/thop/moeUtilOp.cpp (1 hunks)
  • cpp/tests/unit_tests/kernels/mixtureOfExpertsTest.cu (1 hunks)
  • tensorrt_llm/_torch/custom_ops/torch_custom_ops.py (2 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py (1 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py (2 hunks)
✅ Files skipped from review due to trivial changes (1)
  • cpp/tensorrt_llm/plugins/mixtureOfExperts/mixtureOfExpertsPlugin.cpp
🚧 Files skipped from review as they are similar to previous changes (10)
  • cpp/tensorrt_llm/thop/moeUtilOp.cpp
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py
  • cpp/micro_benchmarks/mixtureOfExpertsBackendBenchmarkFixture.h
  • cpp/tests/unit_tests/kernels/mixtureOfExpertsTest.cu
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py
  • cpp/tensorrt_llm/kernels/cutlass_kernels/include/moe_util_kernels.h
  • tensorrt_llm/_torch/custom_ops/torch_custom_ops.py
  • cpp/tensorrt_llm/kernels/cutlass_kernels/include/moe_kernels.h
  • cpp/tensorrt_llm/thop/moeOp.cpp
  • cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check

@StudyingShao StudyingShao self-assigned this Jul 21, 2025
@StudyingShao StudyingShao requested a review from Kefeng-Duan July 21, 2025 18:56
@coderabbitai bot left a comment

Actionable comments posted: 0

🔭 Outside diff range comments (1)
tensorrt_llm/_torch/custom_ops/torch_custom_ops.py (1)

217-241: Missing parameter in fake registration function.

The fake registration function for fused_moe (starting at line 217) is missing the new swizzled_input_sf parameter in its signature, while the actual function has it. This inconsistency could cause issues with torch.compile compatibility.

Add the missing parameter to maintain signature consistency:

@torch.library.register_fake("trtllm::fused_moe")
def _(
    input: torch.Tensor,
    token_selected_experts: torch.Tensor,
    token_final_scales: torch.Tensor,
    fc1_expert_weights: torch.Tensor,
    fc1_expert_biases: Optional[torch.Tensor],
    fc2_expert_weights: torch.Tensor,
    fc2_expert_biases: Optional[torch.Tensor],
    output_dtype: torch.dtype,
    quant_scales: List[torch.Tensor],
    input_sf: Optional[torch.Tensor] = None,
+   swizzled_input_sf: bool = True,
    tp_size: int = 1,
    tp_rank: int = 0,
    ep_size: int = 1,
    ep_rank: int = 0,
    cluster_size: int = 1,
    cluster_rank: int = 0,
    enable_alltoall: bool = False,
    use_deepseek_fp8_block_scale: bool = False,
    use_w4a8_group_scaling: bool = False,
    use_mxfp8_act_scaling: bool = False,
    min_latency_mode: bool = False,
    tune_max_num_tokens: int = 8192,
):
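For context, torch.library.register_fake supplies the "fake" (meta) implementation that torch.compile and FakeTensor tracing use for shape and dtype inference, so its signature has to mirror the real operator's schema; a missing keyword there can break compiled call sites that pass swizzled_input_sf explicitly.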
🧹 Nitpick comments (1)
cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu (1)

1066-1084: Consider refactoring to reduce code duplication.

The logic correctly implements the conditional behavior based on swizzled_input_sf, but there's significant code duplication between the two branches. Consider refactoring:

 if (input_sf)
 {
-    if (swizzled_input_sf)
-    {
-        auto const sf_in
-            = cvt_quant_to_fp4_get_sf_out_offset<TmaWarpSpecializedGroupedGemmInput::ElementSF, NumThreadsPerSF,
-                VecSize>(std::nullopt /* batchIdx */, source_token_id, elem_idx, std::nullopt /* numRows */,
-                num_cols, const_cast<TmaWarpSpecializedGroupedGemmInput::ElementSF*>(input_sf),
-                FP4QuantizationSFLayout::SWIZZLED);
-        *sf_out = *sf_in;
-    }
-    else
-    {
-        auto const sf_in
-            = cvt_quant_to_fp4_get_sf_out_offset<TmaWarpSpecializedGroupedGemmInput::ElementSF, NumThreadsPerSF,
-                VecSize>(std::nullopt /* batchIdx */, source_token_id, elem_idx, std::nullopt /* numRows */,
-                num_cols, const_cast<TmaWarpSpecializedGroupedGemmInput::ElementSF*>(input_sf),
-                FP4QuantizationSFLayout::LINEAR);
-        *sf_out = *sf_in;
-    }
+    auto const layout = swizzled_input_sf ? FP4QuantizationSFLayout::SWIZZLED : FP4QuantizationSFLayout::LINEAR;
+    auto const sf_in
+        = cvt_quant_to_fp4_get_sf_out_offset<TmaWarpSpecializedGroupedGemmInput::ElementSF, NumThreadsPerSF,
+            VecSize>(std::nullopt /* batchIdx */, source_token_id, elem_idx, std::nullopt /* numRows */,
+            num_cols, const_cast<TmaWarpSpecializedGroupedGemmInput::ElementSF*>(input_sf), layout);
+    *sf_out = *sf_in;
 }
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d7f0b0a and 9844fbb.

📒 Files selected for processing (11)
  • cpp/micro_benchmarks/mixtureOfExpertsBackendBenchmarkFixture.h (2 hunks)
  • cpp/tensorrt_llm/kernels/cutlass_kernels/include/moe_kernels.h (2 hunks)
  • cpp/tensorrt_llm/kernels/cutlass_kernels/include/moe_util_kernels.h (1 hunks)
  • cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu (9 hunks)
  • cpp/tensorrt_llm/plugins/mixtureOfExperts/mixtureOfExpertsPlugin.cpp (2 hunks)
  • cpp/tensorrt_llm/thop/moeOp.cpp (6 hunks)
  • cpp/tensorrt_llm/thop/moeUtilOp.cpp (1 hunks)
  • cpp/tests/unit_tests/kernels/mixtureOfExpertsTest.cu (1 hunks)
  • tensorrt_llm/_torch/custom_ops/torch_custom_ops.py (2 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py (1 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py (1 hunks)
🔇 Additional comments (23)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py (1)

293-293: LGTM! Addition of swizzled_input_sf parameter is consistent with the broader MoE interface update.

The addition of swizzled_input_sf=True to the fused_moe operator call is correctly aligned with the codebase-wide introduction of this parameter. The True value indicates that input scaling factors are swizzled in the CutlassFusedMoE path, which is consistent with the existing behavior and the PR's objective to differentiate swizzling behavior between CutlassFusedMoE and WideEPMoE implementations.

cpp/tensorrt_llm/thop/moeUtilOp.cpp (1)

86-86: LGTM! Parameter addition aligns with the coordinated MoE interface update.

The addition of the true argument for the swizzled_input_sf parameter to expandInputRowsKernelLauncher is consistent with the broader codebase changes. The hardcoded true value indicates that input scaling factors are always swizzled in this utility operation path, which appears appropriate given that this is a low-level utility function that likely expects pre-processed (swizzled) data.

cpp/tensorrt_llm/plugins/mixtureOfExperts/mixtureOfExpertsPlugin.cpp (1)

960-960: LGTM! Consistent parameter addition across both compilation paths.

The addition of the true argument for swizzled_input_sf to both mMOERunner->runMoe calls maintains consistency between the USING_OSS_CUTLASS_MOE_GEMM and non-OSS compilation paths. The true value correctly indicates that input scaling factors are swizzled in the plugin execution path, which aligns with the expected behavior and the broader interface update across the MoE codebase.

Also applies to: 971-971

cpp/micro_benchmarks/mixtureOfExpertsBackendBenchmarkFixture.h (1)

983-983: LGTM: Consistent with the new swizzled_input_sf parameter interface.

The addition of true as the third parameter aligns with the updated runMoe signature that now includes the swizzled_input_sf boolean parameter. This indicates that the benchmark assumes input scaling factors are swizzled, which is consistent with the pattern seen in other parts of the codebase.

Both conditional compilation paths (#ifdef USING_OSS_CUTLASS_MOE_GEMM and #else) are updated consistently.

Also applies to: 995-995

cpp/tests/unit_tests/kernels/mixtureOfExpertsTest.cu (1)

1176-1186: LGTM! Test updated to match new runMoe interface.

The addition of the swizzled_input_sf parameter (set to true) correctly updates the test to match the new method signature. Setting this to true maintains the existing swizzling behavior for standard MoE operations, which is appropriate for this test code.

tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py (1)

681-681: Parameter addition looks correct.

The addition of the swizzled_input_sf=False parameter is consistent with the PR objective of removing the input_sf swizzle for the WideEPMoE module.

However, I notice the AI summary mentions that conditional swizzling code was removed (specifically lines checking if self.has_nvfp4: and applying swizzle_sf), but I don't see any removed lines in the provided code. This suggests a potential inconsistency between the summary and the actual changes shown.

Likely an incorrect or invalid review comment.
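Based on the summary's description of the removed lines, the WideEPMoE change can be pictured roughly as follows (a reconstruction for illustration only, not the verbatim diff; the swizzle_sf argument list is a placeholder):

# Before this PR: sf re-swizzled after DeepEP dispatch, then reported as swizzled.
# if self.has_nvfp4:
#     input_sf = swizzle_sf(input_sf, num_rows, num_cols)
# ... fused_moe(..., input_sf=input_sf, swizzled_input_sf=True)

# After this PR: the linear-layout sf from dispatch is passed through unchanged,
# and the kernel is told to read it as linear.
# ... fused_moe(..., input_sf=input_sf, swizzled_input_sf=False)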

cpp/tensorrt_llm/kernels/cutlass_kernels/include/moe_util_kernels.h (1)

61-62: Clean kernel interface update.

The addition of the swizzled_input_sf parameter to the expandInputRowsKernelLauncher function template is well-positioned and properly typed. The parameter placement after input_sf is logical and maintains interface consistency.
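As a hedged sketch, the updated declaration might read as follows (template parameters and the surrounding argument list are elided or assumed; only the input_sf/swizzled_input_sf pair and the ElementSF type, which appears in the kernel diff above, are grounded in this PR):

template <class InputType, class ExpandedType>
void expandInputRowsKernelLauncher(/* ..., */
    TmaWarpSpecializedGroupedGemmInput::ElementSF const* input_sf,
    bool const swizzled_input_sf,
    /* ..., */ cudaStream_t stream);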

tensorrt_llm/_torch/custom_ops/torch_custom_ops.py (2)

132-132: Good parameter addition with sensible default.

The addition of swizzled_input_sf: bool = True parameter is well-placed and maintains backward compatibility with the default value.


202-202: Parameter correctly forwarded to underlying implementation.

The parameter is properly passed through to the underlying runner call.

cpp/tensorrt_llm/thop/moeOp.cpp (4)

221-224: LGTM: Method signature updated correctly.

The new swizzled_input_sf parameter is properly positioned after the related input_sf parameter and follows correct C++ const conventions.


322-322: LGTM: Parameter correctly passed to kernel calls.

The swizzled_input_sf parameter is consistently passed to both OSS and non-OSS kernel runner calls in the correct position.

Also applies to: 336-336


358-361: LGTM: Consistent method signature update.

The runMoeMinLantency method signature is updated consistently with runMoe, maintaining proper parameter positioning and const conventions.


453-453: LGTM: Consistent parameter passing in runMoeMinLantency.

The kernel calls in runMoeMinLantency consistently pass the new parameter to both OSS and non-OSS code paths, matching the pattern established in runMoe.

Also applies to: 467-467

cpp/tensorrt_llm/kernels/cutlass_kernels/include/moe_kernels.h (2)

395-402: LGTM: Interface signature updated correctly.

The runMoe method signature in CutlassMoeFCRunnerInterface properly adds the swizzled_input_sf parameter with correct positioning, typing, and const conventions.


542-549: LGTM: Implementation signature matches interface.

The CutlassMoeFCRunner::runMoe method signature correctly implements the interface with matching parameter positioning, typing, and the proper override specifier.

cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu (8)

1046-1046: LGTM: Well-designed backward compatibility.

The addition of the swizzled_input_sf parameter with a default value of true maintains backward compatibility while enabling the new functionality. The parameter name clearly communicates its purpose.


1467-1468: LGTM: Consistent kernel interface extension.

The addition of input_sf and swizzled_input_sf parameters to the kernel template correctly propagates the new functionality. The parameter ordering is logical and maintains consistency with the overall design pattern.


1569-1571: LGTM: Correct parameter propagation.

The call to writeSF correctly passes the new swizzled_input_sf parameter, maintaining the intended behavior flow from kernel to device function.


1671-1672: LGTM: Launcher interface extension.

The launcher function signature correctly extends to include the new parameters, maintaining consistency with the kernel interface changes.


1748-1749: LGTM: Complete parameter propagation.

The kernel launch correctly passes the new input_sf and swizzled_input_sf parameters, completing the propagation chain from the public interface through to the kernel execution.


1759-1760: LGTM: Template instantiation consistency.

The macro correctly includes the new parameters to ensure all template instantiations have consistent signatures with the updated function template.


1641-1642: LGTM: Correct call site usage.

The call to expandInputRowsKernelLauncher correctly passes the new input_sf and swizzled_input_sf parameters in the proper order, maintaining the intended functionality.


3422-3423: LGTM: Public API extension.

The runMoe method signature correctly extends the public API with the new input_sf_void and swizzled_input_sf parameters. The parameter naming is clear and the ordering is logical, providing the entry point for controlling input scaling factor layout interpretation.
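And a similarly hedged sketch of the extended public entry point (everything except the two parameter names called out above is elided or assumed):

void runMoe(/* ..., */ void const* input_sf_void, bool const swizzled_input_sf,
    /* ..., */ cudaStream_t stream) override;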

Signed-off-by: Jiang Shao <[email protected]> (×5)
@StudyingShao StudyingShao force-pushed the jiangs/1.0.0rc4/WideEPMoE_rm_swizzle branch from d2608f4 to 92792a5 Compare July 21, 2025 19:33
@StudyingShao (Collaborator, Author) commented:

/bot run

@tensorrt-cicd (Collaborator) commented:

PR_Github #12470 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator) commented:

PR_Github #12470 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #9276 completed with status: 'FAILURE'

Labels: None yet
Projects: None yet
2 participants