[None][docs] update stale link for AutoDeploy #7135
Conversation
📝 Walkthrough

Updated the AutoDeploy link in README.md’s Useful Links section from a local repository path to an external NVIDIA documentation URL; the anchor text and description remain unchanged. No other content or exported/public declarations were modified.
Sequence Diagram(s): omitted — changes are documentation-only and do not affect runtime control flow.

Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~2 minutes
Actionable comments posted: 0
🧹 Nitpick comments (2)
README.md (2)
Line 256: Fix markdownlint MD004: use asterisks for list bullets. The linter expects `*` list markers; this line uses `-`. Update the bullet to keep CI clean and stay consistent with the repo’s markdown style. Apply within this PR (minimal change):

```diff
- - [AutoDeploy](https://nvidia.github.io/TensorRT-LLM/torch/auto_deploy/auto-deploy.html): A prototype backend for TensorRT-LLM to simplify and accelerate the deployment of PyTorch models.
+ * [AutoDeploy](https://nvidia.github.io/TensorRT-LLM/torch/auto_deploy/auto-deploy.html): A prototype backend for TensorRT-LLM to simplify and accelerate the deployment of PyTorch models.
```

Optional follow-up for consistency: convert the entire “Useful Links” block (lines 254–257) to `*` bullets as well.
Line 256: Optional: add automated link checking to CI to prevent future rot. Consider a lightweight GitHub Action using lychee to catch broken/stale links in Markdown on every PR.

Example workflow snippet (save as .github/workflows/link-check.yml):

```yaml
name: Link Check
on:
  pull_request:
    paths:
      - '**/*.md'
jobs:
  lychee:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: lycheeverse/lychee-action@v1
        with:
          args: --no-progress --max-concurrency 4 --accept 200,204 --exclude-mail ./README.md
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```
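For a quick local pass before CI, the link-extraction half of such a checker can be sketched in a few lines of Python. This is a simplified illustration, not part of this PR and not how lychee works internally; the regex handles only plain inline links, and the real tool also performs the HTTP requests:

```python
import re

# Markdown inline links: [text](url). A simplified pattern for illustration;
# it does not handle nested brackets or reference-style links.
LINK_RE = re.compile(r"\[([^\]]+)\]\((https?://[^)\s]+)\)")

def extract_links(markdown: str) -> list[tuple[str, str]]:
    """Return (anchor text, URL) pairs found in a Markdown string."""
    return LINK_RE.findall(markdown)

line = ("- [AutoDeploy](https://nvidia.github.io/TensorRT-LLM/torch/"
        "auto_deploy/auto-deploy.html): A prototype backend for TensorRT-LLM.")
print(extract_links(line))
```

Each extracted URL would then be fetched and its status code compared against the accepted list (200, 204 in the workflow above).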
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
README.md (1 hunks)
🧰 Additional context used
🪛 markdownlint-cli2 (0.17.2)
README.md
256-256: Unordered list style
Expected: asterisk; Actual: dash
(MD004, ul-style)
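The intent of the MD004 rule flagged above can be approximated in a short Python sketch. This is a rough illustration of the rule, not markdownlint itself (the real rule also understands nesting, code blocks, and a "consistent" mode):

```python
def md004_violations(markdown: str, expected: str = "*") -> list[int]:
    """Return 1-based line numbers whose unordered-list marker differs
    from the expected style ('*', '-', or '+')."""
    markers = {"*", "-", "+"}
    bad = []
    for lineno, line in enumerate(markdown.splitlines(), start=1):
        stripped = line.lstrip()
        # A list item starts with a marker character followed by a space.
        if len(stripped) >= 2 and stripped[0] in markers and stripped[1] == " ":
            if stripped[0] != expected:
                bad.append(lineno)
    return bad

doc = "* ok\n- flagged\n* ok\n"
print(md004_violations(doc))  # line 2 uses '-' where '*' is expected
```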
🔇 Additional comments (2)
README.md (2)
Line 256: LGTM: Switched AutoDeploy link to canonical docs. Good call moving from a repo-local path to the published documentation URL to reduce future staleness.

Line 256: AutoDeploy link verification passed. The URL https://nvidia.github.io/TensorRT-LLM/torch/auto_deploy/auto-deploy.html returns HTTP 200 with no redirects. No further updates are required for this link.
Actionable comments posted: 0
🧹 Nitpick comments (1)
README.md (1)

Lines 254-258: Fix markdownlint MD004 (unordered list style) in Useful Links. markdownlint flags the list marker style here (expects asterisks, found dashes). Aligning with the repo’s lint rule will keep CI/lint quiet and make the section consistent with other lists above.

Apply this diff to switch the markers:

```diff
- - [Quantized models on Hugging Face](https://huggingface.co/collections/nvidia/model-optimizer-66aa84f7966b3150262481a4): A growing collection of quantized (e.g., FP8, FP4) and optimized LLMs, including [DeepSeek FP4](https://huggingface.co/nvidia/DeepSeek-R1-FP4), ready for fast inference with TensorRT-LLM.
- - [NVIDIA Dynamo](https://github.com/ai-dynamo/dynamo): A datacenter scale distributed inference serving framework that works seamlessly with TensorRT-LLM.
- - [AutoDeploy](https://nvidia.github.io/TensorRT-LLM/torch/auto_deploy/auto-deploy.html): A prototype backend for TensorRT-LLM to simplify and accelerate the deployment of PyTorch models.
- - [WeChat Discussion Group](https://github.com/NVIDIA/TensorRT-LLM/issues/5359): A real-time channel for TensorRT-LLM Q&A and news.
+ * [Quantized models on Hugging Face](https://huggingface.co/collections/nvidia/model-optimizer-66aa84f7966b3150262481a4): A growing collection of quantized (e.g., FP8, FP4) and optimized LLMs, including [DeepSeek FP4](https://huggingface.co/nvidia/DeepSeek-R1-FP4), ready for fast inference with TensorRT-LLM.
+ * [NVIDIA Dynamo](https://github.com/ai-dynamo/dynamo): A datacenter scale distributed inference serving framework that works seamlessly with TensorRT-LLM.
+ * [AutoDeploy](https://nvidia.github.io/TensorRT-LLM/torch/auto_deploy/auto-deploy.html): A prototype backend for TensorRT-LLM to simplify and accelerate the deployment of PyTorch models.
+ * [WeChat Discussion Group](https://github.com/NVIDIA/TensorRT-LLM/issues/5359): A real-time channel for TensorRT-LLM Q&A and news.
```
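A mechanical conversion like the diff above can also be scripted. This is a hedged sketch using a sample line with a placeholder URL; verify the result before committing, since a bare sed pass cannot tell list markers apart from dashes that start other content (e.g. YAML front matter or signature lines):

```shell
# Convert a top-level '- ' list marker to '* '. On a real file the
# equivalent in-place command would be: sed -i 's/^- /* /' README.md
printf -- '- [AutoDeploy](https://example.invalid/docs): sample entry\n' \
  | sed 's/^- /* /'
```

Restricting the pattern to `^- ` (marker at column 0 followed by a space) keeps the substitution away from dashes appearing mid-line or inside link descriptions.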
🔇 Additional comments (2)
README.md (2)
Line 256: LGTM: AutoDeploy link now points to published docs. The update from a repo-local path to the docs site improves discoverability and avoids 404s when browsing on GitHub.

Line 256: URL verification completed. The URL https://nvidia.github.io/TensorRT-LLM/torch/auto_deploy/auto-deploy.html returns HTTP 200 and does not redirect, confirming it’s canonical and stable.
/bot skip --comment "README change only"
PR_Github #16090 [ skip ] triggered by Bot
PR_Github #16090 [ skip ] completed with state
as titled