
Conversation

chenopis
Collaborator

@chenopis chenopis commented Jul 31, 2025

Fix some broken links reported in the bug, based on the dead-link checker report.

Dead Link Checker report - https___nvidia.github.io_TensorRT-LLM_index.html.pdf

Summary by CodeRabbit

  • Documentation
    • Updated and corrected hyperlinks for models, images, and code references across multiple documentation pages.
    • Clarified and simplified descriptions in advanced and release notes sections.
    • Corrected the naming and URL for a benchmark model in the performance documentation.
    • Removed an outdated external link from the PyTorch developer guide.
    • Fixed a typo in a filename mentioned in Expert Parallelism instructions.

Signed-off-by: Andrew Chen <[email protected]>
@chenopis chenopis requested a review from a team as a code owner July 31, 2025 20:50
@chenopis chenopis requested a review from FrankD412 July 31, 2025 20:50
Contributor

coderabbitai bot commented Jul 31, 2025

📝 Walkthrough


This update makes minor corrections and adjustments to documentation. Changes include updating or correcting hyperlinks to models, repositories, and images, simplifying references and removing outdated or redundant links, and making small textual edits in release notes and developer guides. No code or logic changes are involved.

Changes

Cohort / File(s) → Change Summary

  • Model Link Corrections (docs/source/advanced/expert-parallelism.md, docs/source/performance/perf-benchmarking.md)
    Updated Hugging Face model links for Mistral Mixtral 8x7B to correct or more general URLs.
  • Example and Resource Link Updates (docs/source/advanced/speculative-decoding.md, docs/source/blogs/tech_blog/blog1_Pushing_Latency_Boundaries_Optimizing_DeepSeek-R1_Performance_on_NVIDIA_B200_GPUs.md)
    Changed links to point to more general or accurate example resources and source files.
  • Image Source Update (docs/source/blogs/Falcon180B-H200.md)
    Changed image URL to reference a specific commit hash instead of a branch.
  • Release Notes Edits (docs/source/release-notes.md)
    Simplified and clarified two textual notes in the release notes.
  • Developer Guide Cleanup (docs/source/torch.md)
    Removed a single external "Examples" link from the Developer Guide section.

Sequence Diagram(s)

No sequence diagram generated as changes are limited to documentation and do not affect control flow or feature logic.

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~2 minutes

Suggested labels

Documentation



📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between aab55d2 and 5fe32db.

📒 Files selected for processing (1)
  • docs/source/release-notes.md (2 hunks)
✅ Files skipped from review due to trivial changes (1)
  • docs/source/release-notes.md
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check

@chenopis chenopis self-assigned this Jul 31, 2025
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (3)
docs/source/release-notes.md (1)

1048-1048: Clarify the “OPT” reference and tighten phrasing

The trailing “and OPT” is ambiguous (is it the model, the optimizer, or something else?) and the sentence now scans awkwardly. Consider an explicit, parallel construction:

- - New Python builder API and `trtllm-build` command and OPT
+ - Added Python builder API, `trtllm-build` command, and OPT support
docs/source/advanced/speculative-decoding.md (2)

63-63: Link is now too generic; point users directly to the example

Dropping the deep link makes the reference harder to follow. Suggest linking to a stable location such as the dedicated README or example script:

- An example of this orchestration process can be found in the [TensorRT-LLM Triton backend](https://github.com/triton-inference-server/tensorrtllm_backend).
+ An example orchestration script is available in the Triton backend repository’s
+ [draft-target-model client example](https://github.com/triton-inference-server/tensorrtllm_backend/blob/main/client/python/draft_target_model_client.py).

Ensure the chosen path is future-proof (e.g., link to a README anchor) to avoid another broken link cycle.


175-175: Fix casing of “PyTorch” and tighten wording

Minor style/grammar tweak:

-[Disaggregated Serving](https://github.com/NVIDIA/TensorRT-LLM/blob/main/docs/source/advanced/disaggregated-service.md) with EAGLE3 using the two model approach is supported in the Pytorch backend.
+[Disaggregated Serving](https://github.com/NVIDIA/TensorRT-LLM/blob/main/docs/source/advanced/disaggregated-service.md) with EAGLE-3 using the two-model approach is supported in the PyTorch backend.
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8062e0f and d1f3186.

📒 Files selected for processing (7)
  • docs/source/advanced/expert-parallelism.md (1 hunks)
  • docs/source/advanced/speculative-decoding.md (2 hunks)
  • docs/source/blogs/Falcon180B-H200.md (1 hunks)
  • docs/source/blogs/tech_blog/blog1_Pushing_Latency_Boundaries_Optimizing_DeepSeek-R1_Performance_on_NVIDIA_B200_GPUs.md (1 hunks)
  • docs/source/performance/perf-benchmarking.md (1 hunks)
  • docs/source/release-notes.md (2 hunks)
  • docs/source/torch.md (0 hunks)
💤 Files with no reviewable changes (1)
  • docs/source/torch.md
🧰 Additional context used
🧠 Learnings (4)
📚 Learning: in tensorrt-llm testing, it's common to have both cli flow tests (test_cli_flow.py) and pytorch api ...
Learnt from: moraxu
PR: NVIDIA/TensorRT-LLM#6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

Applied to files:

  • docs/source/blogs/Falcon180B-H200.md
  • docs/source/performance/perf-benchmarking.md
  • docs/source/advanced/speculative-decoding.md
  • docs/source/blogs/tech_blog/blog1_Pushing_Latency_Boundaries_Optimizing_DeepSeek-R1_Performance_on_NVIDIA_B200_GPUs.md
  • docs/source/release-notes.md
📚 Learning: in tensorrt-llm's multimodal processing pipeline, shared tensor recovery using `from_shared_tensor()...
Learnt from: yechank-nvidia
PR: NVIDIA/TensorRT-LLM#6254
File: tensorrt_llm/_torch/pyexecutor/model_engine.py:1201-1204
Timestamp: 2025-07-22T09:22:14.726Z
Learning: In TensorRT-LLM's multimodal processing pipeline, shared tensor recovery using `from_shared_tensor()` is only needed during the context phase. Generation requests reuse the already-recovered tensor data and only need to call `strip_for_generation()` to remove unnecessary multimodal data while preserving the recovered tensors. This avoids redundant tensor recovery operations during generation.

Applied to files:

  • docs/source/advanced/speculative-decoding.md
📚 Learning: in the tensorrt-llm waive list merging system, removed lines are always located at the end of the me...
Learnt from: yiqingy0
PR: NVIDIA/TensorRT-LLM#5198
File: jenkins/mergeWaiveList.py:0-0
Timestamp: 2025-07-22T08:33:49.109Z
Learning: In the TensorRT-LLM waive list merging system, removed lines are always located at the end of the merge waive lists, which is why the mergeWaiveList.py script uses reverse traversal - it's an optimization for this specific domain constraint.

Applied to files:

  • docs/source/blogs/tech_blog/blog1_Pushing_Latency_Boundaries_Optimizing_DeepSeek-R1_Performance_on_NVIDIA_B200_GPUs.md
📚 Learning: in tensorrt_llm/executor/worker.py, the lora adapter cache optimization logic that checks `is_adapte...
Learnt from: amitz-nv
PR: NVIDIA/TensorRT-LLM#5616
File: tensorrt_llm/executor/worker.py:375-384
Timestamp: 2025-07-17T09:01:27.402Z
Learning: In tensorrt_llm/executor/worker.py, the LoRA adapter cache optimization logic that checks `is_adapter_in_cpu_cache()` and conditionally passes None for weights/config has a known race condition issue that cannot be solved with simple error handling or verification checks. This is a known limitation that requires a more comprehensive solution.

Applied to files:

  • docs/source/release-notes.md
🪛 LanguageTool
docs/source/advanced/expert-parallelism.md

[grammar] ~7-~7: Ensure spelling is correct
Context: ...ns arrive, the router layer selects the TopK experts for each token. The correspondi...

(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (5)
docs/source/blogs/Falcon180B-H200.md (1)

36-36: Image may not render – /blob/<sha>/... requires ?raw=true

Unlike the earlier image on Line 20, this new <img> tag still uses the /blob/… path but omits ?raw=true.
GitHub serves an HTML page for /blob/* URLs; browsers therefore fail to treat the response as an image unless ?raw=true is appended (or the path is rewritten to /raw/…). That would defeat the purpose of this “broken-link” fix.

-<img src="https://github.com/NVIDIA/TensorRT-LLM/blob/5aec7af45fc0abd876fa68a9ae8c8cae084f3af3/docs/source/blogs/media/Falcon180B-H200_acc.png"
+<img src="https://github.com/NVIDIA/TensorRT-LLM/blob/5aec7af45fc0abd876fa68a9ae8c8cae084f3af3/docs/source/blogs/media/Falcon180B-H200_acc.png?raw=true"

Please verify all newly-pinned image links follow the same pattern.
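One quick way to verify is to request each pinned URL and confirm the response is actually served with an image content type; a minimal sketch (hypothetical helper, standard library only, URL taken from the diff above) could look like this:

```python
# Spot-check pinned image links: a /blob/ URL served as text/html renders as a GitHub
# page rather than an image, so ?raw=true (or a /raw/ path) is needed.
from urllib.request import Request, urlopen

image_urls = [
    "https://github.com/NVIDIA/TensorRT-LLM/blob/5aec7af45fc0abd876fa68a9ae8c8cae084f3af3/"
    "docs/source/blogs/media/Falcon180B-H200_acc.png?raw=true",
]

for url in image_urls:
    req = Request(url, method="HEAD", headers={"User-Agent": "doc-link-check"})
    with urlopen(req) as resp:  # the redirect to raw.githubusercontent.com is followed
        content_type = resp.headers.get("Content-Type", "")
        status = "OK" if content_type.startswith("image/") else f"NOT AN IMAGE ({content_type})"
        print(f"{status}: {url}")
```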

docs/source/advanced/expert-parallelism.md (1)

7-7: LGTM – link now lands on the model card instead of the JSON file

The new Hugging Face URL is clearer for readers and mirrors the pattern used elsewhere in the docs.

docs/source/performance/perf-benchmarking.md (1)

82-82: Consistent naming but double-check availability of the instruct checkpoint

mistralai/Mixtral-8x7B-Instruct-v0.1 looks correct, but both the model card and its FP16 weights were only recently published. Please run an external check (or Hugging Face CLI) before merging to be sure the path is live; otherwise the link checker will flag it again.
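One way to run that external check is with the huggingface_hub Python client; a minimal sketch (assuming huggingface_hub is installed, repo id copied from the doc) could be:

```python
# Check that the Hugging Face repo referenced in perf-benchmarking.md actually resolves,
# so the link checker does not flag it again after merge.
from huggingface_hub import model_info
from huggingface_hub.utils import RepositoryNotFoundError

repo_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
try:
    model_info(repo_id)
    print(f"{repo_id} is live on the Hub")
except RepositoryNotFoundError:
    print(f"{repo_id} not found -- fix the link before merging")
```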

docs/source/blogs/tech_blog/blog1_Pushing_Latency_Boundaries_Optimizing_DeepSeek-R1_Performance_on_NVIDIA_B200_GPUs.md (1)

128-128: Header file reference is clearer – confirm file exists in main

Good catch switching to RoutingKernelTopK.cuh; that’s the public header that actually defines the Top-K routing kernel.
Just ensure the file path (cpp/tensorrt_llm/kernels/trtllmGenKernels/blockScaleMoe/RoutingKernelTopK.cuh) really exists on main; an extra directory level or case-mismatch will re-introduce a broken link.
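A lightweight existence check against the raw content endpoint would catch a renamed or case-mismatched path; a sketch (assuming the usual raw.githubusercontent.com layout) might be:

```python
# Verify the header path cited in the blog post resolves on main; a 404 here means the
# doc link would break again (raw.githubusercontent.com paths are case-sensitive).
from urllib.error import HTTPError
from urllib.request import Request, urlopen

path = "cpp/tensorrt_llm/kernels/trtllmGenKernels/blockScaleMoe/RoutingKernelTopK.cuh"
url = f"https://raw.githubusercontent.com/NVIDIA/TensorRT-LLM/main/{path}"

try:
    with urlopen(Request(url, method="HEAD")) as resp:
        print(f"{resp.status} OK: {path} exists on main")
except HTTPError as err:
    print(f"{err.code}: {path} not found on main")
```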

docs/source/release-notes.md (1)

642-642: LGTM – trimmed note reads cleanly

The simplified Windows-specific known-issue line is accurate and concise.

@chenopis chenopis force-pushed the andrewch-broken-links branch from d1f3186 to 0c2a183 on August 1, 2025 17:11
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (3)
docs/source/advanced/expert-parallelism.md (3)

7-7: Tighten wording & avoid duplicate brand name in reference text

Minor polish will read more cleanly and match the official model name:

-Mixture of Experts (MoE) architectures have been used widely recently, such as [Mistral Mixtral 8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1).
+Mixture of Experts (MoE) architectures have become widely used recently, such as [Mixtral 8×7B](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) from Mistral AI.

26-26: Correct script name typo (convert_coneckpoint.py → convert_checkpoint.py)

-when calling `convert_coneckpoint.py`.
+when calling `convert_checkpoint.py`.

30-31: Remove stray closing parenthesis that breaks Markdown rendering

-.../config.json).
-)
+.../config.json).
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d1f3186 and 0c2a183.

📒 Files selected for processing (7)
  • docs/source/advanced/expert-parallelism.md (1 hunks)
  • docs/source/advanced/speculative-decoding.md (2 hunks)
  • docs/source/blogs/Falcon180B-H200.md (1 hunks)
  • docs/source/blogs/tech_blog/blog1_Pushing_Latency_Boundaries_Optimizing_DeepSeek-R1_Performance_on_NVIDIA_B200_GPUs.md (1 hunks)
  • docs/source/performance/perf-benchmarking.md (1 hunks)
  • docs/source/release-notes.md (2 hunks)
  • docs/source/torch.md (0 hunks)
💤 Files with no reviewable changes (1)
  • docs/source/torch.md
✅ Files skipped from review due to trivial changes (5)
  • docs/source/release-notes.md
  • docs/source/blogs/tech_blog/blog1_Pushing_Latency_Boundaries_Optimizing_DeepSeek-R1_Performance_on_NVIDIA_B200_GPUs.md
  • docs/source/advanced/speculative-decoding.md
  • docs/source/blogs/Falcon180B-H200.md
  • docs/source/performance/perf-benchmarking.md
🧰 Additional context used
🪛 LanguageTool
docs/source/advanced/expert-parallelism.md

[grammar] ~7-~7: Ensure spelling is correct
Context: ...ns arrive, the router layer selects the TopK experts for each token. The correspondi...

(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check

@chenopis chenopis force-pushed the andrewch-broken-links branch from 0c2a183 to cfa2d42 on August 1, 2025 17:20
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
docs/source/advanced/expert-parallelism.md (1)

7-7: Polish wording & acronym consistency in updated paragraph

The new Hugging Face link is correct, but the sentence can be tightened and made stylistically consistent:

  • Prefer the acronym “MoE” over “MOE”.
  • Use the conventional “top-k” (or “Top-k”) instead of “TopK”.
  • “have been used widely recently” reads awkwardly; “have become widespread recently” flows better.

Optional patch:

-Mixture of Experts (MoE) architectures have been used widely recently, such as [Mistral Mixtral 8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1). Specifically, MOE’s structure supports multiple parallel Feedforward Neural Network (FFN) layers (called experts) to replace the single FFN layer in the dense model. When tokens arrive, the router layer selects the TopK experts for each token. The corresponding hidden state of the token is then dispatched to the selected TopK experts, respectively.
+Mixture-of-Experts (MoE) architectures have become widespread recently, with models such as [Mistral Mixtral 8×7B](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1). Specifically, MoE’s structure supports multiple parallel feed-forward neural-network (FFN) layers (called experts) in place of the single FFN layer in a dense model. When tokens arrive, the router layer selects the top-k experts for each token, and the corresponding hidden state of each token is dispatched to those experts.

Purely editorial—take it or leave it.
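For readers following the routing description in that paragraph, a toy sketch of top-k expert selection (hypothetical shapes and names, not TensorRT-LLM code) is:

```python
# Toy top-k MoE routing: the router picks k experts per token and the token's hidden
# state is dispatched to those experts (dense toy version, no capacity or load balancing).
import torch

num_tokens, hidden_size, num_experts, top_k = 8, 16, 4, 2
hidden_states = torch.randn(num_tokens, hidden_size)
router = torch.nn.Linear(hidden_size, num_experts)
experts = torch.nn.ModuleList(torch.nn.Linear(hidden_size, hidden_size) for _ in range(num_experts))

logits = router(hidden_states)                              # [num_tokens, num_experts]
weights, expert_ids = torch.topk(logits.softmax(-1), top_k, dim=-1)
weights = weights / weights.sum(-1, keepdim=True)           # renormalize over selected experts

output = torch.zeros_like(hidden_states)
for slot in range(top_k):
    for e in range(num_experts):
        mask = expert_ids[:, slot] == e                     # tokens routed to expert e in this slot
        if mask.any():
            w = weights[mask, slot].unsqueeze(-1)           # [n_routed, 1]
            output[mask] += w * experts[e](hidden_states[mask])
```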

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0c2a183 and cfa2d42.

📒 Files selected for processing (7)
  • docs/source/advanced/expert-parallelism.md (1 hunks)
  • docs/source/advanced/speculative-decoding.md (2 hunks)
  • docs/source/blogs/Falcon180B-H200.md (1 hunks)
  • docs/source/blogs/tech_blog/blog1_Pushing_Latency_Boundaries_Optimizing_DeepSeek-R1_Performance_on_NVIDIA_B200_GPUs.md (1 hunks)
  • docs/source/performance/perf-benchmarking.md (1 hunks)
  • docs/source/release-notes.md (2 hunks)
  • docs/source/torch.md (0 hunks)
💤 Files with no reviewable changes (1)
  • docs/source/torch.md
✅ Files skipped from review due to trivial changes (4)
  • docs/source/blogs/Falcon180B-H200.md
  • docs/source/blogs/tech_blog/blog1_Pushing_Latency_Boundaries_Optimizing_DeepSeek-R1_Performance_on_NVIDIA_B200_GPUs.md
  • docs/source/performance/perf-benchmarking.md
  • docs/source/release-notes.md
🚧 Files skipped from review as they are similar to previous changes (1)
  • docs/source/advanced/speculative-decoding.md
🧰 Additional context used
🪛 LanguageTool
docs/source/advanced/expert-parallelism.md

[grammar] ~7-~7: Ensure spelling is correct
Context: ...ns arrive, the router layer selects the TopK experts for each token. The correspondi...

(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check

Accept feedback from coderabbitai

Signed-off-by: Andrew Chen <[email protected]>
@chenopis chenopis force-pushed the andrewch-broken-links branch from cfa2d42 to 8bfdb31 on August 1, 2025 17:27
@chenopis
Collaborator Author

chenopis commented Aug 4, 2025

@coderabbitai review

Contributor

coderabbitai bot commented Aug 4, 2025

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

Collaborator

@nv-guomingz nv-guomingz left a comment


LGTM

@chenopis
Collaborator Author

chenopis commented Aug 5, 2025

@nv-guomingz does someone in @NVIDIA/trtllm-bench-reviewers need to approve this?

@chenopis chenopis changed the title from "[https://nvbugs/5423962][fix] Fix broken links" to "[https://nvbugs/5423962][fix] Address broken links" on Aug 5, 2025
@chenopis
Collaborator Author

chenopis commented Aug 5, 2025

@NVIDIA/trtllm-bench-reviewers How do I get blossom-ci to run?

@chenopis
Collaborator Author

chenopis commented Aug 5, 2025

/bot run

@chenopis
Collaborator Author

chenopis commented Aug 5, 2025

@FrankD412 @venkywonka How do I get this PR merged? It always seems to get stuck on the blossom-ci check.

@chenopis
Collaborator Author

chenopis commented Aug 5, 2025

/bot run --stage-list "A10-Build_Docs"

@chenopis
Collaborator Author

chenopis commented Aug 5, 2025

/bot skip --comment "Updates to documentation, skipping full CI"

@chenopis
Collaborator Author

chenopis commented Aug 5, 2025

@FrankD412 @venkywonka How do I get this PR merged? It always seems to get stuck on the blossom-ci check.

The last time I updated documentation we ran /bot run --stage-list "A10-Build_Docs" just to make sure the docs compiled. Once that passed, you can run something like /bot skip --comment "Updates to documentation, skipping full CI" to get past the check.

@FrankD412 I don't think I'm on the authorized person list. Is this something you can run for me?

@FrankD412
Collaborator

/bot skip --comment "Updates to documentation, skipping full CI"

@FrankD412
Collaborator

@FrankD412 @venkywonka How do I get this PR merged? It always seems to get stuck on the blossom-ci check.

The last time I updated documentation we ran /bot run --stage-list "A10-Build_Docs" just to make sure the docs compiled. Once that pasted you can run something like /bot skip --comment "Updates to documentation, skipping full CI" to get past the check.

@FrankD412 I don't think I'm on the authorized person list. Is this something you can run for me?

Let me give it a shot.

@FrankD412 FrankD412 enabled auto-merge (squash) August 5, 2025 18:17
@FrankD412
Collaborator

@chenopis -- I've set it to automerge pending the bot.

@chenopis
Collaborator Author

chenopis commented Aug 5, 2025

@chenopis -- I've set it to automerge pending the bot.

Thank you!

@tensorrt-cicd
Collaborator

PR_Github #14175 [ skip ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #14175 [ skip ] completed with state SUCCESS
Skipping testing for commit dc977ab

@FrankD412 FrankD412 disabled auto-merge August 5, 2025 19:42
@chenopis chenopis requested a review from a team as a code owner August 7, 2025 18:56
@chenopis chenopis requested review from kaiyux and Shixiaowei02 August 7, 2025 18:56
@chenopis
Collaborator Author

chenopis commented Aug 7, 2025

@FrankD412 can you try merging this again?

@FrankD412 FrankD412 enabled auto-merge (squash) August 7, 2025 19:08
@FrankD412
Collaborator

Done -- it's set to auto merge

@FrankD412 FrankD412 disabled auto-merge August 7, 2025 19:08
@FrankD412 FrankD412 enabled auto-merge (squash) August 7, 2025 19:09
@FrankD412
Collaborator

/bot skip --comment "Updates to documentation, skipping full CI"

@tensorrt-cicd
Collaborator

PR_Github #14510 [ skip ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #14510 [ skip ] completed with state SUCCESS
Skipping testing for commit c8d61dd

@FrankD412 FrankD412 merged commit 4ecda91 into NVIDIA:main Aug 7, 2025
4 checks passed
@chenopis chenopis deleted the andrewch-broken-links branch August 8, 2025 01:17
Shunkangz pushed a commit to hcyezhang/TensorRT-LLM that referenced this pull request Aug 8, 2025