12 changes: 6 additions & 6 deletions docs/source/architecture/add-model.md
@@ -2,19 +2,19 @@

# Adding a Model

This document describes how to add a typical decoder-only model in TensorRT-LLM.
This document describes how to add a typical decoder-only model in TensorRT LLM.

## Step 1. Write Modeling Part

TensorRT-LLM provides different levels of APIs:
TensorRT LLM provides different levels of APIs:

- Low-level functions, for example, `concat`, `add`, and `sum`.
- Basic layers, such as `Linear` and `LayerNorm`.
- High-level layers, such as `MLP` and `Attention`.
- Base class for typical decoder-only models, such as `DecoderModelForCausalLM`.
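As a rough orientation (a sketch based on the public `tensorrt_llm` module layout, which may differ between releases; not part of the original document), these levels correspond to imports such as:

```python
# Illustrative only: exact module paths can vary between TensorRT LLM releases.
from tensorrt_llm.functional import concat, add                         # low-level functions
from tensorrt_llm.layers import Linear, LayerNorm                       # basic layers
from tensorrt_llm.layers import MLP, Attention                          # high-level layers
from tensorrt_llm.models.modeling_utils import DecoderModelForCausalLM  # decoder-only base class
```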

1. Create a model directory in `tensorrt_llm/models`, for example `my_model`.
2. Write a `model.py` with TensorRT-LLM's APIs
2. Write a `model.py` with TensorRT LLM's APIs

```python
class MyDecoderLayer(Module):
@@ -52,7 +52,7 @@ class MyModelForCausalLM(DecoderModelForCausalLM):

## Step 2. Implement Weight Conversion

The weights from source framework need to be converted and bound to the new added TensorRT-LLM model. Here is an example of converting HuggingFace weights:
The weights from the source framework need to be converted and bound to the newly added TensorRT LLM model. Here is an example of converting HuggingFace weights:

```python
class MyModelForCausalLM(DecoderModelForCausalLM):
@@ -62,8 +62,8 @@ class MyModelForCausalLM(DecoderModelForCausalLM):
        hf_model_dir,
        dtype='float16',
        mapping: Optional[Mapping] = None) -> MyModelForCausalLM:
# create a TensorRT-LLM MyModelForCausalLM model object
# convert HuggingFace checkpoint to TensorRT-LLM expected weights dict
# create a TensorRT LLM MyModelForCausalLM model object
# convert HuggingFace checkpoint to TensorRT LLM expected weights dict
# load the weights to MyModelForCausalLM object
```
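As a rough sketch of what the conversion step does (the helper below and its name mapping are hypothetical, not the actual TensorRT LLM implementation), converting typically amounts to renaming HuggingFace tensors into the checkpoint naming scheme and casting them to the target dtype:

```python
import torch

# Hypothetical name mapping; the real mapping depends on the model architecture.
_HF_TO_TRTLLM = {
    "model.embed_tokens.weight": "transformer.vocab_embedding.weight",
    "lm_head.weight": "lm_head.weight",
}

def convert_hf_weights(hf_state_dict: dict, dtype: torch.dtype = torch.float16) -> dict:
    """Sketch: build a TensorRT LLM-style weights dict from a HuggingFace state dict."""
    weights = {}
    for hf_name, tensor in hf_state_dict.items():
        trtllm_name = _HF_TO_TRTLLM.get(hf_name, hf_name)  # fall back to the original name
        weights[trtllm_name] = tensor.to(dtype).contiguous()
    return weights
```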

24 changes: 12 additions & 12 deletions docs/source/architecture/checkpoint.md
@@ -1,36 +1,36 @@
# TensorRT-LLM Checkpoint
# TensorRT LLM Checkpoint

## Overview

The earlier versions (pre-0.8 version) of TensorRT-LLM were developed with a very aggressive timeline. For those versions, emphasis was not put on defining a unified workflow. Now that TensorRT-LLM has reached some level of feature richness, the development team has decided to put more effort into unifying the APIs and workflow of TensorRT-LLM. This file documents the workflow around TensorRT-LLM checkpoint and the set of CLI tools to generate checkpoint, build engines, and evaluate engines.
The earlier versions (pre-0.8) of TensorRT LLM were developed with a very aggressive timeline. For those versions, emphasis was not put on defining a unified workflow. Now that TensorRT LLM has reached some level of feature richness, the development team has decided to put more effort into unifying the APIs and workflow of TensorRT LLM. This file documents the workflow around the TensorRT LLM checkpoint and the set of CLI tools to generate checkpoints, build engines, and evaluate engines.

There are three steps in the workflow:

1. Convert weights from different source frameworks into TensorRT-LLM checkpoint.
2. Build the TensorRT-LLM checkpoint into TensorRT engines with a unified build command.
3. Load the engines to TensorRT-LLM model runner and evaluate with different evaluation tasks.
1. Convert weights from different source frameworks into a TensorRT LLM checkpoint.
2. Build the TensorRT LLM checkpoint into TensorRT engines with a unified build command.
3. Load the engines into the TensorRT LLM model runner and evaluate them with different evaluation tasks.

```
NeMo -------------
|
HuggingFace ------
| convert build load
Modelopt --------- ----------> TensorRT-LLM Checkpoint --------> TensorRT Engine ------> TensorRT-LLM ModelRunner
Modelopt --------- ----------> TensorRT LLM Checkpoint --------> TensorRT Engine ------> TensorRT LLM ModelRunner
|
JAX --------------
|
DeepSpeed --------
```

## Prepare the TensorRT-LLM Checkpoint
## Prepare the TensorRT LLM Checkpoint

TensorRT-LLM aims at supporting different sources:
TensorRT LLM aims to support different sources:

1. Trained models from NVIDIA NeMo, Microsoft DeepSpeed, and JAX
2. Quantized models from NVIDIA Modelopt
3. Popular models from HuggingFace

TensorRT-LLM defines its own checkpoint format. A checkpoint directory includes:
TensorRT LLM defines its own checkpoint format. A checkpoint directory includes:

1. One config `json` file, which contains several model hyper-parameters.
2. One or several rank weights files, each of which contains a dictionary of tensors (weights).
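As an illustrative sketch (assuming safetensors-format rank files such as `rank0.safetensors`; not part of the original document), the two pieces can be inspected like this:

```python
import json
from pathlib import Path

from safetensors import safe_open

ckpt_dir = Path("tllm_checkpoint")  # hypothetical checkpoint directory

# 1. The config JSON holds the model hyper-parameters.
config = json.loads((ckpt_dir / "config.json").read_text())

# 2. Each rank file is a dictionary of tensors keyed by hierarchical names.
with safe_open(ckpt_dir / "rank0.safetensors", framework="pt") as f:
    for name in f.keys():
        print(name, f.get_slice(name).get_shape())
```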
@@ -107,7 +107,7 @@ Here is the model specific config list:
### Rank Weights

Like PyTorch, the tensor (weight) name is a string containing hierarchical information,
which is uniquely mapped to a certain parameter of a TensorRT-LLM model.
which is uniquely mapped to a certain parameter of a TensorRT LLM model.

For example, each transformer layer of the OPT model contains an `Attention` layer, an `MLP` layer, and two `LayerNorm` layers.

@@ -169,7 +169,7 @@ Here is the AWQ scaling factors of `mlp.fc` linear layer:
- `transformer.layers.0.mlp.fc.prequant_scaling_factor`

```{note}
The linear weights in TensorRT-LLM checkpoint always follows (`out_feature`, `in_feature`) shape, whereas some quantized linear in TensorRT-LLM implemented by plugin may use (`in_feature`, `out_fature`) shape. The `trtllm-build` command adds a transpose operation to post-process it.
The linear weights in the TensorRT LLM checkpoint always follow the (`out_feature`, `in_feature`) shape, whereas some quantized linear layers in TensorRT LLM implemented by plugins may use the (`in_feature`, `out_feature`) shape. The `trtllm-build` command adds a transpose operation to post-process it.

### Example

@@ -218,7 +218,7 @@ Here is the `config.json`:

## Build Checkpoint into TensorRT Engine

TensorRT-LLM provides a unified build command: `trtllm-build`. Before using it,
TensorRT LLM provides a unified build command: `trtllm-build`. Before using it,
you may need to add it to the `PATH`.

```bash
10 changes: 5 additions & 5 deletions docs/source/architecture/overview.md
@@ -1,6 +1,6 @@
# Architecture Overview

The `LLM` class is a core entry point for the TensorRT-LLM, providing a simplified `generate()` API for efficient large language model inference. This abstraction aims to streamline the user experience, as demonstrated with TinyLlama:
The `LLM` class is a core entry point for TensorRT LLM, providing a simplified `generate()` API for efficient large language model inference. This abstraction aims to streamline the user experience, as demonstrated with TinyLlama:

```python
from tensorrt_llm import LLM
@@ -16,7 +16,7 @@ The `LLM` class automatically manages essential pre and post-processing steps, i

Internally, the `LLM` class orchestrates the creation of a dedicated `PyExecutor(Worker)` process on each rank.

![TRT-LLM Architecture Overview](../media/TRTLLM_Architecture_Overview.png)
![TensorRT LLM Architecture Overview](../media/TRTLLM_Architecture_Overview.png)

This `PyExecutor` operates in a continuous background loop, designed for the efficient, asynchronous processing of inference requests.

@@ -45,13 +45,13 @@ During each iteration of its background loop, the `PyExecutor` performs the foll

## Runtime Optimizations

TensorRT-LLM enhances inference throughput and reduces latency by integrating a suite of runtime optimizations, including CUDA Graph, [Overlap Scheduler](../features/overlap-scheduler.md), [Speculative decoding](../features/speculative-decoding.md), etc.
TensorRT LLM enhances inference throughput and reduces latency by integrating a suite of runtime optimizations, including CUDA Graph, [Overlap Scheduler](../features/overlap-scheduler.md), [Speculative decoding](../features/speculative-decoding.md), etc.

### CUDA Graph

CUDA Graphs drastically reduce the CPU-side overhead associated with launching GPU kernels, which is particularly impactful in PyTorch-based inference where Python's host-side code can be a bottleneck. By capturing a sequence of CUDA operations as a single graph, the entire sequence can be launched with one API call, minimizing CPU-GPU synchronization and driver overhead.

To maximize the "hit rate" of these cached graphs, TensorRT-LLM employs CUDA Graph padding. If an incoming batch's size doesn't match a captured graph, it's padded to the nearest larger, supported size for which a graph exists. While this incurs minor overhead from computing "wasted" tokens, it's often a better trade-off than falling back to slower eager mode execution. This optimization has a significant impact, demonstrating up to a 22% end-to-end throughput increase on certain models and hardware.
To maximize the "hit rate" of these cached graphs, TensorRT LLM employs CUDA Graph padding. If an incoming batch's size doesn't match a captured graph, it's padded to the nearest larger, supported size for which a graph exists. While this incurs minor overhead from computing "wasted" tokens, it's often a better trade-off than falling back to slower eager mode execution. This optimization has a significant impact, demonstrating up to a 22% end-to-end throughput increase on certain models and hardware.

### Overlap Scheduler

@@ -72,4 +72,4 @@ if self.previous_batch is not None:
self._process_previous_batch()
```
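As a simplified illustration of the overlap pattern (hypothetical callables, not the actual scheduler code), the current batch's GPU work is launched before the previous batch's responses are processed on the CPU:

```python
# Sketch: the GPU executes batch N while the CPU finalizes batch N-1.
def overlapped_loop(schedule_batch, launch_forward_async, process_responses):
    previous_batch = None
    while (batch := schedule_batch()) is not None:
        handle = launch_forward_async(batch)   # enqueue GPU work without waiting
        if previous_batch is not None:
            process_responses(previous_batch)  # overlaps with the in-flight GPU step
        previous_batch = (batch, handle)
    if previous_batch is not None:
        process_responses(previous_batch)      # drain the final batch
```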

This approach effectively reduces GPU idle time and improves overall hardware occupancy. While it introduces one extra decoding step into the pipeline, the resulting throughput gain is a significant trade-off. For this reason, the Overlap Scheduler is enabled by default in TensorRT-LLM.
This approach effectively reduces GPU idle time and improves overall hardware occupancy. While it introduces one extra decoding step into the pipeline, the resulting throughput gain makes this a worthwhile trade-off. For this reason, the Overlap Scheduler is enabled by default in TensorRT LLM.
@@ -1,18 +1,18 @@
# How to get best performance on DeepSeek-R1 in TensorRT-LLM
# How to get best performance on DeepSeek-R1 in TensorRT LLM

NVIDIA has announced world-record DeepSeek-R1 inference performance at NVIDIA GTC 2025. A single NVIDIA DGX system with eight NVIDIA Blackwell GPUs can achieve over 250 tokens per second per user or a maximum throughput of over 30,000 tokens per second on the massive, state-of-the-art 671 billion parameter DeepSeek-R1 model. For details, see [NVIDIA Blackwell Delivers World-Record DeepSeek-R1 Inference Performance](https://developer.nvidia.com/blog/nvidia-blackwell-delivers-world-record-deepseek-r1-inference-performance/).

In this blog, we share the configurations and procedures for reproducing these numbers on both B200 and H200 with the PyTorch workflow.

## Table of Contents

- [How to get best performance on DeepSeek-R1 in TensorRT-LLM](#how-to-get-best-performance-on-deepseek-r1-in-tensorrt-llm)
- [How to get best performance on DeepSeek-R1 in TensorRT LLM](#how-to-get-best-performance-on-deepseek-r1-in-tensorrt-llm)
- [Table of Contents](#table-of-contents)
- [Prerequisites: Install TensorRT-LLM and download models](#prerequisites-install-tensorrt-llm-and-download-models)
- [1. Download TensorRT-LLM](#1-download-tensorrt-llm)
- [Prerequisites: Install TensorRT LLM and download models](#prerequisites-install-tensorrt-llm-and-download-models)
- [1. Download TensorRT LLM](#1-download-tensorrt-llm)
- [2. Download the DeepSeek R1 models](#2-download-the-deepseek-r1-models)
- [3. Build and run TensorRT-LLM container](#3-build-and-run-tensorrt-llm-container)
- [4. Compile and Install TensorRT-LLM](#4-compile-and-install-tensorrt-llm)
- [3. Build and run TensorRT LLM container](#3-build-and-run-tensorrt-llm-container)
- [4. Compile and Install TensorRT LLM](#4-compile-and-install-tensorrt-llm)
- [5. Optional: Tune GPU clocks](#5-optional-tune-gpu-clocks)
- [6. Dataset preparation](#6-dataset-preparation)
- [Reproducing steps](#reproducing-steps)
@@ -34,13 +34,13 @@ In this blog, we share the configurations and procedures about how to reproduce
- [Out of memory issues](#out-of-memory-issues)


## Prerequisites: Install TensorRT-LLM and download models
## Prerequisites: Install TensorRT LLM and download models

This section can be skipped if you already have TensorRT-LLM installed and have already downloaded the DeepSeek R1 model checkpoint.
This section can be skipped if you already have TensorRT LLM installed and have already downloaded the DeepSeek R1 model checkpoint.

#### 1. Download TensorRT-LLM
#### 1. Download TensorRT LLM

**You can also find more comprehensive instructions to install TensorRT-LLM in this [TensorRT-LLM installation guide](https://nvidia.github.io/TensorRT-LLM/installation/build-from-source-linux.html), refer to that guide for common issues if you encounter any here.**
**You can also find more comprehensive instructions for installing TensorRT LLM in the [TensorRT LLM installation guide](https://nvidia.github.io/TensorRT-LLM/installation/build-from-source-linux.html); refer to that guide for common issues if you encounter any here.**

``` bash
# Prerequisites
@@ -50,7 +50,7 @@ git lfs install
# Replace with your actual path
YOUR_WORK_PATH=<YOUR_WORK_PATH>

# Clone the TensorRT-LLM repository
# Clone the TensorRT LLM repository
cd $YOUR_WORK_PATH
git clone https://github.com/NVIDIA/TensorRT-LLM.git
cd TensorRT-LLM
@@ -77,15 +77,15 @@ git clone https://huggingface.co/nvidia/DeepSeek-R1-FP4
git clone https://huggingface.co/deepseek-ai/DeepSeek-R1
```

#### 3. Build and run TensorRT-LLM container
#### 3. Build and run TensorRT LLM container

``` bash
cd TensorRT-LLM
make -C docker run LOCAL_USER=1 DOCKER_RUN_ARGS="-v $YOUR_MODEL_PATH:$YOUR_MODEL_PATH:ro -v $YOUR_WORK_PATH:$YOUR_WORK_PATH"
```
Here we set the `LOCAL_USER=1` argument to set up the local user instead of the root account inside the container; you can remove it if running as root inside the container is fine.

#### 4. Compile and Install TensorRT-LLM
#### 4. Compile and Install TensorRT LLM
Here we compile the source inside the container:

``` bash
@@ -122,11 +122,11 @@ The command to generate synthetic dataset will be attached to the max throughput

This section provides the steps to reproduce the results on NVIDIA Blackwell B200 and H200 GPUs, for both min-latency and max-throughput scenarios.

All the benchmarking is done by the trtllm-bench command line tool provided in the TensorRT-LLM installation, see [TensorRT-LLM Benchmarking](https://nvidia.github.io/TensorRT-LLM/performance/perf-benchmarking.html) for details of this tool.
All the benchmarking is done with the `trtllm-bench` command line tool provided in the TensorRT LLM installation; see [TensorRT LLM Benchmarking](https://nvidia.github.io/TensorRT-LLM/performance/perf-benchmarking.html) for details of this tool.

For brevity, we only provide the commands to reproduce the perf numbers in this doc, without a detailed explanation of the tools and options.

All these commands here are assumed to be running inside the container started by `make -C docker run ...` command mentioned in the [Build and run TensorRT-LLM container section](#3-build-and-run-tensorrt-llm-container)
All these commands are assumed to be run inside the container started by the `make -C docker run ...` command mentioned in the [Build and run TensorRT LLM container section](#3-build-and-run-tensorrt-llm-container).

### B200 min-latency
Our benchmark results are based on **Batch = 1, ISL = 1K, OSL = 2K, num_requests = 10 from a real dataset**.
@@ -158,7 +158,7 @@ trtllm-bench --model nvidia/DeepSeek-R1-FP4 \
```

Explanation:
- `trtllm-bench`: A CLI benchmarking utility that aims to make it easier for users to reproduce our officially published. See [TensorRT-LLM Benchmarking](https://nvidia.github.io/TensorRT-LLM/performance/perf-benchmarking.html) for details.
- `trtllm-bench`: A CLI benchmarking utility that aims to make it easier for users to reproduce our officially published results. See [TensorRT LLM Benchmarking](https://nvidia.github.io/TensorRT-LLM/performance/perf-benchmarking.html) for details.
- `--dataset`: Prompt dataset used to benchmark. Our official benchmark dataset has ISL = 1K, OSL = 2K
- `--num_requests`: Num requests used for the benchmark.
- `--concurrency`: Total concurrency for the system.
@@ -186,7 +186,7 @@ Average request latency (ms): 7456.1219

Our evaluation found that FP8 KV cache does not introduce an obvious accuracy drop compared to BF16 KV cache; see [Precision strategy](./tech_blog/blog3_Optimizing_DeepSeek_R1_Throughput_on_NVIDIA_Blackwell_GPUs.md#precision-strategy). The latest [DeepSeek-R1-0528-FP4](https://huggingface.co/nvidia/DeepSeek-R1-0528-FP4) checkpoint therefore has FP8 KV cache enabled by default.

We are seeing meaningful speedup using FP8 KV cache, thus refreshing the numbers here. The results are reproduced with TensorRT-LLM commit b6261862419c33d6ce2313aff1e7116067d6037d.
We are seeing a meaningful speedup using FP8 KV cache, so we have refreshed the numbers here. The results are reproduced with TensorRT LLM commit b6261862419c33d6ce2313aff1e7116067d6037d.

!! Note that the exact command to reproduce the numbers can change as the API/options are refactored; the options and numbers here are a reference at the given commit.

@@ -239,7 +239,7 @@ Per GPU Output Throughput (tps/gpu): 5393.2755
### B200 max-throughput for R1 with FP16 KV cache
Our benchmark results are based on **Batch = 3072, ISL = 1K, OSL = 2K, num_requests = 49152 from a synthetic dataset**.

The results are reproduced with TensorRT-LLM commit b6261862419c33d6ce2313aff1e7116067d6037d.
The results are reproduced with TensorRT LLM commit b6261862419c33d6ce2313aff1e7116067d6037d.

!! Note that the exact command to reproduce the numbers can change as the API/options are refactored; the options and numbers here are a reference at the given commit.

@@ -401,7 +401,7 @@ Average request latency (ms): 181540.5739

## Exploring more ISL/OSL combinations

To benchmark TensorRT-LLM on DeepSeek models with more ISL/OSL combinations, you can use `prepare_dataset.py` to generate the dataset and use similar commands mentioned in the previous section. TensorRT-LLM is working on enhancements that can make the benchmark process smoother.
To benchmark TensorRT LLM on DeepSeek models with more ISL/OSL combinations, you can use `prepare_dataset.py` to generate the dataset and use commands similar to those in the previous section. TensorRT LLM is working on enhancements to make the benchmark process smoother.
### WIP: Enable more features by default

Currently, there are some features that need to be enabled through a user-defined file `extra-llm-api-config.yml`, such as CUDA Graph, the overlap scheduler, and attention DP. We're working on enabling those features by default so that users can get good out-of-the-box performance on DeepSeek models.