[None][feat] Multi-block mode for Hopper spec dec XQA kernel #4416


Merged: 4 commits merged into NVIDIA:main from the xqa_hopper_cta_perf branch on Aug 3, 2025

Conversation

@jhaotingc (Collaborator) commented May 18, 2025

[feat] Multi-block mode for Hopper spec dec XQA kernel

Description

Following PR 3269, it was observed that at low batch size and low draft length, the Hopper spec-dec XQA kernel launches a low CTA count.

The root cause is that multi-block mode is not enabled when spec-dec is turned on.

The XQA Hopper Spec-dec kernel is launched with gridDim = dim3{specDecBlocks, multi_block, nbKVHeads * xqaParams.batch_size}, where

  • specDecBlocks = divUp(specDecParams.qSeqLen, 64 / num_q_heads_over_kv); num_q_heads_over_kv is 8 for Llama.
  • nbKVHeads is the number of KV heads per TP rank; for Llama 3 it is 8 at TP=1, 2 at TP=4, and 1 at TP=8.

Before the fix, multi_block = 1.

In a very common use case (eagle draft length < 8, TP=8, Llama 3 70B or Llama 3.1 8B), the number of blocks launched can be as low as batch_size. At BS=1, only 1 block is launched:

gridDim = dim3{divUp(7, 8), 1, 1 * xqaParams.batch_size} // = batch_size

Therefore, multi-block mode is crucial for the low-batch-size, low-draft-length case.
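
For illustration, here is a minimal, self-contained sketch of the launch-dimension arithmetic above; divUp and the dim3 layout follow the PR description, while the function scaffolding and variable names are ours, not the kernel code:

```cpp
#include <cstdio>

// Ceiling division, as used in the launch logic described above.
constexpr int divUp(int a, int b)
{
    return (a + b - 1) / b;
}

int main()
{
    int const numQHeadsOverKV = 8; // Llama: 8 query heads per KV head
    int const nbKVHeads = 1;       // Llama 3 70B at TP=8: 1 KV head per rank
    int const qSeqLen = 7;         // eagle draft length 7
    int const batchSize = 1;
    int const multiBlock = 1;      // before this PR

    // gridDim = dim3{specDecBlocks, multiBlock, nbKVHeads * batchSize}
    int const specDecBlocks = divUp(qSeqLen, 64 / numQHeadsOverKV); // divUp(7, 8) = 1
    int const totalBlocks = specDecBlocks * multiBlock * nbKVHeads * batchSize;

    std::printf("blocks launched: %d\n", totalBlocks); // prints 1: far below one wave of SMs
    return 0;
}
```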

Heuristic design:

A series of sweeps was run with the xqa kernel. The experiments showed that multi-block mode is beneficial when the original grid is smaller than one wave of SMs. However, when the original block count is <= 8 and 1k <= ISL < 64k, populating all SMs is not always optimal. The experiments are shown in the Appendix; a sketch of the resulting heuristic follows.
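
Below is a hedged reconstruction of the heuristic. The thresholds (8, 16, 2048, 8192, 65536, and the [1, 64] clamp) are quoted in the review comments later in this thread; the function and variable names are ours, and the one cap marked as an assumption is not spelled out anywhere in the thread:

```cpp
#include <algorithm>

// Highest power of 2 <= x (returns 1 for x <= 1); a simple loop formulation.
int highestPowerOf2(int x)
{
    int p = 1;
    while (p * 2 <= x)
    {
        p *= 2;
    }
    return p;
}

// Illustrative reconstruction of the multi-block tuning heuristic.
int tuneMultiBlockCount(int singleBlockCount, int historyLength, int multiprocessorCount)
{
    // Populate at most one wave of SMs, rounded down to a power of 2,
    // clamped to the [1, 64] range.
    int multiBlockCount = highestPowerOf2(multiprocessorCount / singleBlockCount);
    multiBlockCount = std::clamp(multiBlockCount, 1, 64);

    // With few original blocks and ISL below 64k, filling all SMs is not always a win.
    if (singleBlockCount <= 8 && multiBlockCount >= 16 && historyLength < 65536)
    {
        if (historyLength < 2048)
        {
            multiBlockCount = std::min(multiBlockCount, 4);
        }
        else if (singleBlockCount == 8 && historyLength <= 8192)
        {
            multiBlockCount = std::min(multiBlockCount, 8); // assumption: exact cap not shown in the thread
        }
    }
    return multiBlockCount;
}
```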

Speedup:

Kernel Speedup for ISL=32k, draft length 7, batch size 2.

Increasing gridDim from (1,1,2) to (1,32,2) yields a 7.8x speedup.

before: 
kernel_mha
Begins: 39.1859s
Ends: 39.1863s (+418.620 μs)
grid:  <<<1, 1, 2>>>
block: <<<128, 1, 3>>>

after: 
kernel_mha
Begins: 33.2649s
Ends: 33.265s (+52.991 μs)
grid:  <<<1, 32, 2>>>
block: <<<128, 1, 3>>>

Kernel Speedup for ISL=32k, draft length 7, batch size 8.

Increasing gridDim from (1,1,8) to (1,8,8) yields a 5.4x speedup.

before: 
kernel_mha
Begins: 62.8263s
Ends: 62.8268s (+420.063 μs)
grid:  <<<1, 1, 8>>>
block: <<<128, 1, 3>>>

after: 
kernel_mha
Begins: 42.189s
Ends: 42.1891s (+77.663 μs)
grid:  <<<1, 8, 8>>>
block: <<<128, 1, 3>>>

Kernel Speedup for ISL=10k, draft length 7, batch size 8.

Increasing gridDim from (1,1,8) to (1,4,8) yields a 2.5x speedup.

before: 
kernel_mha
Begins: 39.3397s
Ends: 39.3399s (+137.759 μs)
grid:  <<<1, 1, 8>>>
block: <<<128, 1, 3>>>

after: 
kernel_mha
Begins: 38.6683s
Ends: 38.6683s (+55.776 μs)
grid:  <<<1, 4, 8>>>
block: <<<128, 1, 3>>>

Generation-step speedup for ISL=32k, ISL=10k, and ISL=1k, running TP8/PP1 Llama 3 70B Eagle with a linear tree (depth 6, max_draft_len = 7).

[Figures: generation-step speedup charts for ISL=32k, ISL=10k, and ISL=1k]

Accuracy verification:

Llama 3 70b Eagle TP8 H200

gsm8k (unaffected by this change: multi-block mode is not enabled when ISL < 2k)

# add speculative_config in lm_eval_tensorrt_llm.py
python lm_eval_tensorrt_llm.py --model trt-llm \
    --model_args tokenizer=$HF_DIR,model=$ENGINE_DIR \
    --tasks gsm8k

# (Before:)
trt-llm (tokenizer=/scratch_1/tmp/hf_models/Meta-Llama-3-70B-Instruct,model=/scratch_1/tmp/trt_engines/Meta-Llama-3-70B-Instruct_eagle_fp8/tp8_pp1), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 1
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.9075|±  | 0.008|
|     |       |strict-match    |     5|exact_match|↑  |0.9067|±  | 0.008|


# (After:)
trt-llm (tokenizer=/scratch_1/tmp/hf_models/Meta-Llama-3-70B-Instruct,model=/scratch_1/tmp/trt_engines/Meta-Llama-3-70B-Instruct_eagle_fp8/tp8_pp1), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 1
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.9121|±  |0.0078|
|     |       |strict-match    |     5|exact_match|↑  |0.9121|±  |0.0078|

ruler

Llama 3 70b Eagle TP4 H200

trt-llm (backend=trt,tokenizer=/scratch_1/tmp/hf_models/Meta-Llama-3-70B-Instruct,model=/scratch_1/tmp/trt_engines/Meta-Llama-3-70B-Instruct_eagle_fp8/tp4_pp1,max_context_length=10240,max_gen_toks=1024,eagle_decoding_config=/scratch/TensorRT-LLM-dev/AGI_BUG/eagle_decoding_config.json), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 8

|  Tasks  |Version|Filter|n-shot|Metric|   |Value |   |Stderr|
|---------|------:|------|-----:|-----:|---|-----:|---|------|
|ruler_cwe|      1|none  |     0|  4096|↑  |1.0000|±  |   N/A|
|         |       |none  |     0|  8192|↑  |0.9802|±  |   N/A|

Note: for ruler with ISL >= 16k, there seem to be errors both before and after this tuning.

Longbench

Llama 3 70b Eagle TP4 H200

trt-llm (backend=trt,tokenizer=/scratch_1/tmp/hf_models/Meta-Llama-3-70B-Instruct,model=/scratch_1/tmp/trt_engines/Meta-Llama-3-70B-Instruct_eagle_fp8/tp4_pp1,max_context_length=18432,max_gen_toks=1024,eagle_decoding_config=/scratch/TensorRT-LLM-dev/AGI_BUG/eagle_decoding_config.json), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 8

(before)
|      Tasks       |Version|Filter|n-shot|  Metric   |   |Value |   |Stderr|                                                                                                    
|------------------|------:|------|-----:|-----------|---|-----:|---|-----:|
|longbench_2wikimqa|      2|none  |     0|qa_f1_score|↑  |0.5346|±  |0.0292|

(after)
|      Tasks       |Version|Filter|n-shot|  Metric   |   |Value |   |Stderr|
|------------------|------:|------|-----:|-----------|---|-----:|---|-----:|
|longbench_2wikimqa|      2|none  |     0|qa_f1_score|↑  |0.5358|±  |0.0291|

Test Coverage

Appendix

xqa sweep results

[Figures: xqa sweep results]

BS=16, ISL=1024 shows a slight regression:

before:
kernel_mha
Begins: 36.3784s
Ends: 36.3785s (+21.600 μs)
grid:  <<<1, 1, 16>>>
block: <<<128, 1, 3>>>

after:
kernel_mha
Begins: 39.7469s
Ends: 39.7469s (+27.584 μs)
grid:  <<<1, 8, 16>>>
block: <<<128, 1, 3>>>

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--disable-fail-fast --skip-test --stage-list "A10-1, xxx" --gpu-type "A30, H100_PCIe" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-[Post-Merge]-1, xxx"]

Launch build/test pipelines. All previously running jobs will be killed.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests. Will also run L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-[Post-Merge]-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-[Post-Merge]-1, xxx".

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

Summary by CodeRabbit

  • New Features

    • Improved tuning for multi-block count in speculative decoding with GMMA kernels, potentially enhancing performance in those scenarios.
  • Bug Fixes

    • Refined logic for enabling or disabling multi-block mode, ensuring more predictable behavior across different decoding modes and environment variable settings.
    • Disabled multi-block mode support for the precompiled XQA Spec-decoding kernel to prevent unsupported configurations.
  • Refactor

    • Simplified and clarified the handling of multi-block mode configuration for attention operations.
    • Removed conditional resets of multi-block mode in speculative decoding paths for more consistent behavior.
    • Forced multi-block mode to always be enabled in the GPT attention plugin common constructor for consistency.

@jhaotingc (Collaborator, Author):

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator):

PR_Github #5603 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator):

PR_Github #5603 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #4089 completed with status: 'SUCCESS'

@jhaotingc force-pushed the xqa_hopper_cta_perf branch 4 times, most recently from e635efe to d20ab7d, May 18, 2025 20:30
@jhaotingc (Collaborator, Author):

/bot run --disable-fail-fast

@jhaotingc jhaotingc changed the title initial heuristic for xqa hopper spec dec Multi-block mode for Hopper spec dec XQA kernel May 18, 2025
@jhaotingc jhaotingc changed the title Multi-block mode for Hopper spec dec XQA kernel [feat] Multi-block mode for Hopper spec dec XQA kernel May 18, 2025
@jhaotingc (Collaborator, Author):

/bot run --disable-fail-fast --post-merge

@tensorrt-cicd (Collaborator):

PR_Github #5631 [ run ] triggered by Bot

@jhaotingc (Collaborator, Author):

/bot kill

@jhaotingc (Collaborator, Author):

/bot kill --post-merge

@tensorrt-cicd (Collaborator):

PR_Github #5636 [ kill ] triggered by Bot

@tensorrt-cicd (Collaborator):

PR_Github #5631 [ run ] completed with state ABORTED

@tensorrt-cicd (Collaborator):

PR_Github #5636 [ kill ] completed with state SUCCESS
Successfully killed previous jobs for commit 043e0dc

@tensorrt-cicd (Collaborator):

PR_Github #5637 Bot args parsing error: usage: /bot [-h]
{run,kill,skip,submit,reviewers,reuse-pipeline,reuse-review} ...
/bot: error: unrecognized arguments: --post-merge

@jhaotingc (Collaborator, Author):

/bot run --disable-fail-fast --post-merge

@tensorrt-cicd (Collaborator):

PR_Github #5654 [ run ] triggered by Bot

@jhaotingc (Collaborator, Author):

/bot kill

@jhaotingc jhaotingc requested review from lowsfer and symphonylyh May 19, 2025 03:37
@jhaotingc (Collaborator, Author):

/bot kill --post-merge

@jhaotingc (Collaborator, Author):

/bot run --disable-fail-fast --post-merge

@tensorrt-cicd (Collaborator):

PR_Github #5667 Bot args parsing error: usage: /bot [-h]
{run,kill,skip,submit,reviewers,reuse-pipeline,reuse-review} ...
/bot: error: unrecognized arguments: --post-merge

@tensorrt-cicd (Collaborator):

PR_Github #5668 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator):

PR_Github #5654 [ run ] completed with state ABORTED

@lowsfer (Member) commented May 19, 2025:

Good work! The gitlab/ftp/xqa content is already open-sourced in the trtllm repo, so please update the link.

@jhaotingc jhaotingc changed the title [feat] Multi-block mode for Hopper spec dec XQA kernel [None][feat] Multi-block mode for Hopper spec dec XQA kernel Aug 2, 2025
@jhaotingc (Collaborator, Author):

/bot run

@tensorrt-cicd (Collaborator):

PR_Github #13847 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator):

PR_Github #13847 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #10415 completed with status: 'SUCCESS'

@jhaotingc force-pushed the xqa_hopper_cta_perf branch from 1e9f703 to a973a7e, August 2, 2025 23:08
@jhaotingc (Collaborator, Author):

/bot run --disable-fail-fast

@jhaotingc (Collaborator, Author):

Run again to make sure it passes the accuracy test that was just merged:
#6264

@coderabbitai (Contributor, bot) left a comment:


Actionable comments posted: 1

♻️ Duplicate comments (2)
cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention/decoderXQAImplCommon.h (2)

397-401: Magic numbers still present despite past review comments.

The hardcoded values 32 and 2048 were flagged in previous reviews but are still present. These should be replaced with named constants as suggested in the past review comments to improve maintainability.


428-462: Magic numbers still need to be addressed.

Despite previous review comments, the hardcoded values (8, 16, 65536, 2048, 4, 8192) are still present in the tuning logic. These were specifically flagged in past reviews and should be replaced with named constants for better maintainability.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1e9f703 and a973a7e.

📒 Files selected for processing (7)
  • cpp/tensorrt_llm/common/attentionOp.cpp (1 hunks)
  • cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention/decoderXQAImplCommon.h (1 hunks)
  • cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention/decoderXQAImplJIT/decoderXQAImplJIT.cpp (1 hunks)
  • cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention/decoderXQAImplPrecompiled.cpp (1 hunks)
  • cpp/tensorrt_llm/plugins/gptAttentionCommon/gptAttentionCommon.cpp (1 hunks)
  • cpp/tensorrt_llm/plugins/gptAttentionPlugin/gptAttentionPlugin.cpp (0 hunks)
  • cpp/tensorrt_llm/thop/attentionOp.cpp (0 hunks)
💤 Files with no reviewable changes (2)
  • cpp/tensorrt_llm/thop/attentionOp.cpp
  • cpp/tensorrt_llm/plugins/gptAttentionPlugin/gptAttentionPlugin.cpp
🚧 Files skipped from review as they are similar to previous changes (4)
  • cpp/tensorrt_llm/plugins/gptAttentionCommon/gptAttentionCommon.cpp
  • cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention/decoderXQAImplPrecompiled.cpp
  • cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention/decoderXQAImplJIT/decoderXQAImplJIT.cpp
  • cpp/tensorrt_llm/common/attentionOp.cpp
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{cpp,h,hpp,cc,cxx}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.{cpp,h,hpp,cc,cxx}: Closing braces of namespaces should have a comment saying the namespace it closes (e.g., } // namespace foo)
Prefer const or constexpr variables over #defines whenever possible, as the latter are not visible to the compiler.
A variable that is not modified after its initialization should be declared as const.
Except 0 (only used in comparison for checking signness/existence/emptiness) and nullptr, true, false, all other literals should only be used for variable initialization.
Use the Allman indentation style for braces.
Put the semicolon for an empty for or while loop in a new line.
The statement forming the body of a switch, while, do .. while or for statement shall be a compound statement (use brace-delimited statements).
If and else should always be followed by brace-delimited statements, even if empty or a single statement.
C++ filenames should use camel case with first letter lowercase (e.g., thisIsAFilename.cpp), and all files involved in the compilation of a target must have filenames that are case-insensitive unique.
All types (including class names) are camel case with uppercase first letter (e.g., FooBarClass).
Local variables, methods, and namespaces use camel case with first letter lowercase (e.g., localFooBar).
Non-magic-number global variables that are non-static and not defined in anonymous namespace use camel case prefixed by a lower case 'g' (e.g., gDontUseGlobalFoos).
Non-magic-number global variables that are static or defined in an anonymous namespace use camel case prefixed by a lower case 's' (e.g., sMutableStaticGlobal).
Locally visible static variable uses camel case with lowercase prefix 's' as the first letter of the name (e.g., static std::once_flag sFlag;).
Class member variables use camel case prefixed with an 'm' (e.g., mNbFooValues). Public member variables do not require the 'm' prefix but it is encouraged for clarity.
Enumerations, global constants, static constants at class-scope, and function-scope magic...

Files:

  • cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention/decoderXQAImplCommon.h
**/*.{h,hpp}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

Use a preprocessor guard in header files. The guard name must have prefix TRTLLM_ followed by the filename, all in caps, and no trailing underscore.

Files:

  • cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention/decoderXQAImplCommon.h
**/*.{cpp,h,hpp,cc,cxx,cu,py}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

All TensorRT-LLM Open Source Software code should contain an NVIDIA copyright header that includes the current year. This includes .cpp, .h, .cu, .py, and any other source files which are compiled or interpreted.

Files:

  • cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention/decoderXQAImplCommon.h
🧠 Learnings (4)
📚 Learning: applies to **/*.{cpp,h,hpp,cc,cxx} : enumerations, global constants, static constants at class-scope...
Learnt from: CR
PR: NVIDIA/TensorRT-LLM#0
File: CODING_GUIDELINES.md:0-0
Timestamp: 2025-08-01T07:34:42.734Z
Learning: Applies to **/*.{cpp,h,hpp,cc,cxx} : Enumerations, global constants, static constants at class-scope, and function-scope magic-number/literal constants are uppercase snakecase with prefix 'k' (e.g., kDIGIT_NUM).

Applied to files:

  • cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention/decoderXQAImplCommon.h
📚 Learning: applies to **/*.{cpp,h,hpp,cc,cxx} : prefer const or constexpr variables over #defines whenever poss...
Learnt from: CR
PR: NVIDIA/TensorRT-LLM#0
File: CODING_GUIDELINES.md:0-0
Timestamp: 2025-08-01T07:34:42.734Z
Learning: Applies to **/*.{cpp,h,hpp,cc,cxx} : Prefer const or constexpr variables over #defines whenever possible, as the latter are not visible to the compiler.

Applied to files:

  • cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention/decoderXQAImplCommon.h
📚 Learning: applies to **/*.{cpp,h,hpp,cc,cxx} : non-magic-number global variables that are static or defined in...
Learnt from: CR
PR: NVIDIA/TensorRT-LLM#0
File: CODING_GUIDELINES.md:0-0
Timestamp: 2025-08-01T07:34:42.734Z
Learning: Applies to **/*.{cpp,h,hpp,cc,cxx} : Non-magic-number global variables that are static or defined in an anonymous namespace use camel case prefixed by a lower case 's' (e.g., sMutableStaticGlobal).

Applied to files:

  • cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention/decoderXQAImplCommon.h
📚 Learning: applies to **/*.{cpp,h,hpp,cc,cxx} : non-magic-number global variables that are non-static and not d...
Learnt from: CR
PR: NVIDIA/TensorRT-LLM#0
File: CODING_GUIDELINES.md:0-0
Timestamp: 2025-08-01T07:34:42.734Z
Learning: Applies to **/*.{cpp,h,hpp,cc,cxx} : Non-magic-number global variables that are non-static and not defined in anonymous namespace use camel case prefixed by a lower case 'g' (e.g., gDontUseGlobalFoos).

Applied to files:

  • cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention/decoderXQAImplCommon.h
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (5)
cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention/decoderXQAImplCommon.h (5)

384-401: Function signature and early return logic look good.

The function signature is well-designed with clear parameter names and appropriate types. The early return for user-specified overrides via environment variable is a good practice for configurability. The early exit for large batch sizes and short input sequence lengths is also reasonable for performance optimization.


410-418: Well-implemented bit manipulation utility.

The highestPowerof2 lambda function is correctly implemented using standard bit manipulation techniques. The logic efficiently finds the highest power of 2 less than or equal to the input value.
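
For reference, a standard bit-manipulation formulation of such a helper looks like the sketch below; the actual lambda in decoderXQAImplCommon.h may be written differently:

```cpp
#include <cstdint>

// Highest power of 2 <= x via bit smearing (returns 0 for x == 0).
uint32_t highestPowerOf2(uint32_t x)
{
    // Smear the highest set bit into every lower position...
    x |= x >> 1;
    x |= x >> 2;
    x |= x >> 4;
    x |= x >> 8;
    x |= x >> 16;
    // ...then clear all but the highest set bit.
    return x - (x >> 1);
}
```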


420-426: Multi-block count calculation is sound.

The calculation logic for determining the maximum number of blocks to populate at most one wave of SMs is mathematically correct. The power-of-2 adjustment and bounds checking (1-64 range) are appropriate for GPU kernel tuning.


453-459: Excellent documentation improvement.

The detailed comments explaining the bit shift calculation logic are well-written and address the concerns raised in past reviews. The formula explanation and range mappings make the complex logic much clearer for future maintainers.


463-465: Good safety check with informative error message.

The runtime validation to ensure the adjusted multi-block count doesn't exceed the number of multiprocessors is a valuable safety measure. The warning message clearly explains the potential performance impact, which helps with debugging and tuning.
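
As a rough sketch of the kind of check being described (only the bound itself comes from the review; the function shape and warning text are assumptions):

```cpp
#include <cstdio>

// Warn when the tuned grid would exceed one full wave of SMs.
void checkMultiBlockBound(int multiBlockCount, int singleBlockCount, int multiprocessorCount)
{
    if (multiBlockCount * singleBlockCount > multiprocessorCount)
    {
        std::fprintf(stderr,
            "XQA multi-block tuning: %d x %d blocks exceed %d SMs; performance may degrade.\n",
            multiBlockCount, singleBlockCount, multiprocessorCount);
    }
}
```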

@tensorrt-cicd (Collaborator):

PR_Github #13858 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator):

PR_Github #13858 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #10426 completed with status: 'FAILURE'

@jhaotingc force-pushed the xqa_hopper_cta_perf branch from a973a7e to d373543, August 3, 2025 06:27
@jhaotingc (Collaborator, Author):

/bot run

@jhaotingc (Collaborator, Author):

random failure

@jhaotingc jhaotingc enabled auto-merge (squash) August 3, 2025 06:28
@coderabbitai (Contributor, bot) left a comment:


Actionable comments posted: 2

♻️ Duplicate comments (3)
cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention/decoderXQAImplCommon.h (3)

397-401: Replace magic numbers with named constants.

The hard-coded values 32 and 2048 should be defined as named constants to improve maintainability and clarity, following the coding guidelines for this codebase.

+namespace {
+constexpr int kMaxBatchSizeForTuning = 32;
+constexpr int kMinHistoryLengthForTuning = 2048;
+} // namespace

-    if (batch_size > 32 || history_length < 2048)
+    if (batch_size > kMaxBatchSizeForTuning || history_length < kMinHistoryLengthForTuning)

395-395: Consider using representative KV sequence length for more robust tuning.

Using xqaParams.max_past_kv_length as history_length forces tuning to the worst-case sequence in the batch. Because per-sequence KV cache lengths can vary, this may lead to suboptimal block counts and occupancy for typical workloads.

Consider replacing the hard maximum with a more robust metric, such as:

  • Computing the average of xqaParams.sequence_lengths
  • Applying a scaling factor to the max (e.g., 0.8 × max)
  • Using a high percentile of sequence lengths instead

433-461: Use named constants for magic numbers.

The function contains several hard-coded values that should be defined as named constants for better maintainability and clarity.

+namespace
+{
+// Tuning thresholds for multi-block mode
+constexpr int kMaxSingleBlockCountForTuning = 8;
+constexpr int kMinMultiBlockCountForTuning = 16;
+constexpr int kMaxHistoryLengthForTuning = 65536;
+constexpr int kHistoryLengthThreshold1 = 2048;
+constexpr int kHistoryLengthThreshold2 = 8192;
+constexpr int kMaxMultiBlockForShortHistory = 4;
+} // namespace

-        if (single_block_count <= 8 && multi_block_count >= 16 && history_length < 65536)
+        if (single_block_count <= kMaxSingleBlockCountForTuning 
+            && multi_block_count >= kMinMultiBlockCountForTuning 
+            && history_length < kMaxHistoryLengthForTuning)
         {
-            if (history_length < 2048)
+            if (history_length < kHistoryLengthThreshold1)
             {
-                multi_block_count = std::min(multi_block_count, 4);
+                multi_block_count = std::min(multi_block_count, kMaxMultiBlockForShortHistory);
             }
-            else if (history_length < 65536)
+            else if (history_length < kMaxHistoryLengthForTuning)
             {
-                if (single_block_count == 8 && history_length <= 8192)
+                if (single_block_count == kMaxSingleBlockCountForTuning 
+                    && history_length <= kHistoryLengthThreshold2)
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a973a7e and d373543.

📒 Files selected for processing (7)
  • cpp/tensorrt_llm/common/attentionOp.cpp (1 hunks)
  • cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention/decoderXQAImplCommon.h (1 hunks)
  • cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention/decoderXQAImplJIT/decoderXQAImplJIT.cpp (1 hunks)
  • cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention/decoderXQAImplPrecompiled.cpp (1 hunks)
  • cpp/tensorrt_llm/plugins/gptAttentionCommon/gptAttentionCommon.cpp (1 hunks)
  • cpp/tensorrt_llm/plugins/gptAttentionPlugin/gptAttentionPlugin.cpp (0 hunks)
  • cpp/tensorrt_llm/thop/attentionOp.cpp (0 hunks)
💤 Files with no reviewable changes (2)
  • cpp/tensorrt_llm/plugins/gptAttentionPlugin/gptAttentionPlugin.cpp
  • cpp/tensorrt_llm/thop/attentionOp.cpp
🚧 Files skipped from review as they are similar to previous changes (4)
  • cpp/tensorrt_llm/plugins/gptAttentionCommon/gptAttentionCommon.cpp
  • cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention/decoderXQAImplPrecompiled.cpp
  • cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention/decoderXQAImplJIT/decoderXQAImplJIT.cpp
  • cpp/tensorrt_llm/common/attentionOp.cpp
🔇 Additional comments (2)
cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention/decoderXQAImplCommon.h (2)

451-460: Good documentation of complex bit-shift logic.

The detailed comments explaining the bit-shift calculation formula and the mapping of history length ranges to shift values effectively address the complexity of this logic. The documentation makes the intent clear for future maintainers.


463-467: Excellent runtime safety check.

The validation that multi_block_count * single_block_count <= multiprocessor_count with a descriptive warning message demonstrates good defensive programming practices. This prevents configurations that could degrade performance.

@tensorrt-cicd (Collaborator):

PR_Github #13863 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator):

PR_Github #13863 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #10430 completed with status: 'SUCCESS'

@jhaotingc jhaotingc merged commit 6edaa23 into NVIDIA:main Aug 3, 2025
4 checks passed
@jhaotingc jhaotingc deleted the xqa_hopper_cta_perf branch August 3, 2025 21:32
symphonylyh pushed a commit to symphonylyh/TensorRT-LLM that referenced this pull request Aug 5, 2025
fix: Fix poor generation with FP8 Gemma3 1B checkpoint (NVIDIA#6499)

Signed-off-by: Balaram Buddharaju <[email protected]>

[None][fix] Serialize the window_size in the kv event (NVIDIA#6526)

Signed-off-by: richardhuo-nv <[email protected]>

[None][feat] Multi-block mode for Hopper spec dec XQA kernel (NVIDIA#4416)

Signed-off-by: Jhao-Ting Chen <[email protected]>
symphonylyh pushed a commit to symphonylyh/TensorRT-LLM that referenced this pull request Aug 5, 2025
fix: Fix poor generation with FP8 Gemma3 1B checkpoint (NVIDIA#6499)

Signed-off-by: Balaram Buddharaju <[email protected]>

[None][fix] Serialize the window_size in the kv event (NVIDIA#6526)

Signed-off-by: richardhuo-nv <[email protected]>

[None][feat] Multi-block mode for Hopper spec dec XQA kernel (NVIDIA#4416)

Signed-off-by: Jhao-Ting Chen <[email protected]>

[None][feat] Add support for fused gate_up_proj scales for FP8 blockwise (NVIDIA#6496)

Signed-off-by: Aurelien Chartier <[email protected]>
lancelly pushed a commit to lancelly/TensorRT-LLM that referenced this pull request Aug 6, 2025
jain-ria pushed a commit to jain-ria/TensorRT-LLM that referenced this pull request Aug 7, 2025