Commit 3a71ddf
authored
[TRTLLM-6859][doc] Add DeepSeek R1 deployment guide. (#6579)
Signed-off-by: Yuxian Qiu <[email protected]>
1 parent 5eae318 commit 3a71ddf

2 files changed: +387 −1 lines changed

.pre-commit-config.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -27,7 +27,7 @@ repos:
         args: [--allow-multiple-documents]
         exclude: ".*/gitlab/.*.yml"
       - id: trailing-whitespace
-        exclude: '\.patch$'
+        exclude: '\.(patch|md)$'
       - id: check-toml
       - id: mixed-line-ending
         args: [--fix=lf]
```
Lines changed: 386 additions & 0 deletions (new file)
# Quick Start Recipe for DeepSeek R1 on TensorRT-LLM - Blackwell & Hopper Hardware

## Introduction

This deployment guide provides step-by-step instructions for running the DeepSeek R1 model using TensorRT-LLM with FP8 and NVFP4 quantization, optimized for NVIDIA GPUs. It covers the complete setup: accessing the model weights, preparing the software environment, configuring TensorRT-LLM parameters, launching the server, and validating inference output.

The guide is intended for developers and practitioners seeking high-throughput or low-latency inference on NVIDIA's accelerated stack: it starts with the PyTorch container from NGC, then installs TensorRT-LLM for model serving, FlashInfer for optimized CUDA kernels, and ModelOpt to enable FP8 and NVFP4 quantized execution.
## Prerequisites

* GPU: NVIDIA Blackwell or Hopper architecture
* OS: Linux
* Drivers: CUDA driver 575 or later
* Docker with the NVIDIA Container Toolkit installed
* Python3 and python3-pip (optional, for accuracy evaluation only)
## Models

* FP8 model: [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528)
* NVFP4 model: [DeepSeek-R1-0528-FP4](https://huggingface.co/nvidia/DeepSeek-R1-0528-FP4)

Note that NVFP4 is only supported on the NVIDIA Blackwell platform.
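If you prefer to pre-download a checkpoint rather than letting the server fetch it on first launch, the Hugging Face CLI can do so. A minimal sketch (assumes the `huggingface_hub` package; by default the weights land in `~/.cache/huggingface/hub/`, which the Docker command below mounts):

```shell
# Optional sketch: pre-fetch the FP8 checkpoint into the default Hugging Face cache.
pip install -U "huggingface_hub[cli]"
huggingface-cli download deepseek-ai/DeepSeek-R1-0528
```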
## Deployment Steps

### Run Docker Container

Run the docker container using the TensorRT-LLM NVIDIA NGC image.

```shell
docker run --rm -it \
  --ipc=host \
  --gpus all \
  -p 8000:8000 \
  -v ~/.cache:/root/.cache:rw \
  --name tensorrt_llm \
  nvcr.io/nvidia/tensorrt-llm/release:1.0.0rc5 \
  /bin/bash
```

Note:

* You can mount additional directories and paths using the `-v <local_path>:<path>` flag if needed, such as mounting the downloaded weight paths; see the example after this list.
* The command mounts your user `.cache` directory to save the downloaded model checkpoints, which are saved to `~/.cache/huggingface/hub/` by default. This prevents having to redownload the weights each time you rerun the container. If the `~/.cache` directory doesn't exist, create it with `mkdir ~/.cache`.
* The command also maps port **8000** from the container to your host so you can access the LLM API endpoint from your host.
* See [https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tensorrt-llm/containers/release/tags](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tensorrt-llm/containers/release/tags) for all the available containers. The containers published weekly from the main branch have an "rcN" suffix, while the monthly releases with QA tests have no "rcN" suffix. Use an rc release to get the latest model and feature support.
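For example, a run that additionally mounts a pre-downloaded checkpoint directory could look like this (the local path `/data/models/DeepSeek-R1-0528` is a hypothetical placeholder):

```shell
# Hypothetical variant: also mount a local checkpoint directory (read-only).
docker run --rm -it \
  --ipc=host \
  --gpus all \
  -p 8000:8000 \
  -v ~/.cache:/root/.cache:rw \
  -v /data/models/DeepSeek-R1-0528:/models/DeepSeek-R1-0528:ro \
  --name tensorrt_llm \
  nvcr.io/nvidia/tensorrt-llm/release:1.0.0rc5 \
  /bin/bash
```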
If you want to use the latest main branch, you can build TensorRT-LLM from source instead; the steps are described at [https://nvidia.github.io/TensorRT-LLM/latest/installation/build-from-source-linux.html](https://nvidia.github.io/TensorRT-LLM/latest/installation/build-from-source-linux.html).
### Creating the TRT-LLM Server config

We create a YAML configuration file /tmp/config.yml for the TensorRT-LLM Server and populate it with the following recommended performance settings.

```shell
EXTRA_LLM_API_FILE=/tmp/config.yml

cat << EOF > ${EXTRA_LLM_API_FILE}
enable_attention_dp: true
cuda_graph_config:
  enable_padding: true
  max_batch_size: 128
kv_cache_config:
  dtype: fp8
stream_interval: 10
speculative_config:
  decoding_type: MTP
  num_nextn_predict_layers: 1
EOF
```
For the FP8 model, we additionally need a `moe_config` section:

```shell
EXTRA_LLM_API_FILE=/tmp/config.yml

cat << EOF > ${EXTRA_LLM_API_FILE}
enable_attention_dp: true
cuda_graph_config:
  enable_padding: true
  max_batch_size: 128
kv_cache_config:
  dtype: fp8
stream_interval: 10
speculative_config:
  decoding_type: MTP
  num_nextn_predict_layers: 1
moe_config:
  backend: DEEPGEMM
  max_num_tokens: 3200
EOF
```
### Launch the TRT-LLM Server

Below is an example command to launch the TRT-LLM server with the DeepSeek-R1 model from within the container. The command is specifically configured for the 1024/1024 Input/Output Sequence Length test; note that `--max_seq_len 2048` covers the 1024 input tokens plus the 1024 output tokens. The explanation of each flag is shown in the "Configs and Parameters" section.

```shell
trtllm-serve deepseek-ai/DeepSeek-R1-0528 \
  --host 0.0.0.0 \
  --port 8000 \
  --backend pytorch \
  --max_batch_size 1024 \
  --max_num_tokens 3200 \
  --max_seq_len 2048 \
  --kv_cache_free_gpu_memory_fraction 0.8 \
  --tp_size 8 \
  --ep_size 8 \
  --trust_remote_code \
  --extra_llm_api_options ${EXTRA_LLM_API_FILE}
```

After the server is set up, the client can now send prompt requests to the server and receive results.
### Configs and Parameters

These options are used directly on the command line when you start the `trtllm-serve` process.

#### `--tp_size`

&emsp;**Description:** Sets the **tensor-parallel size**. This should typically match the number of GPUs you intend to use for a single model instance.

#### `--ep_size`

&emsp;**Description:** Sets the **expert-parallel size** for Mixture-of-Experts (MoE) models. Like `tp_size`, this should generally match the number of GPUs you're using. This setting has no effect on non-MoE models.

#### `--kv_cache_free_gpu_memory_fraction`

&emsp;**Description:** A value between 0.0 and 1.0 that specifies the fraction of free GPU memory to reserve for the KV cache after the model is loaded. Since memory usage can fluctuate, this buffer helps prevent out-of-memory (OOM) errors.

&emsp;**Recommendation:** If you experience OOM errors, try reducing this value to **0.7** or lower.

#### `--backend pytorch`

&emsp;**Description:** Tells TensorRT-LLM to use the **pytorch** backend.

#### `--max_batch_size`

&emsp;**Description:** The maximum number of user requests that can be grouped into a single batch for processing.

#### `--max_num_tokens`

&emsp;**Description:** The maximum total number of tokens (across all requests) allowed inside a single scheduled batch.

#### `--max_seq_len`

&emsp;**Description:** The maximum possible sequence length for a single request, including both input and generated output tokens.

#### `--trust_remote_code`

&emsp;**Description:** Allows TensorRT-LLM to download models and tokenizers from Hugging Face. This flag is passed directly to the Hugging Face API.
#### Extra LLM API Options (YAML Configuration)

These options provide finer control over performance and are set within a YAML file passed to the trtllm-serve command via the `--extra_llm_api_options` argument.

#### `kv_cache_config`

&emsp;**Description**: A section for configuring the Key-Value (KV) cache.

&emsp;**Options**:

&emsp;&emsp;`dtype`: Sets the data type for the KV cache.

&emsp;&emsp;**Default**: auto (uses the data type specified in the model checkpoint).

#### `cuda_graph_config`

&emsp;**Description**: A section for configuring CUDA graphs to optimize performance.

&emsp;**Options**:

&emsp;&emsp;`enable_padding`: If true, input batches are padded to the nearest `cuda_graph_batch_size`. This can significantly improve performance.

&emsp;&emsp;**Default**: false

&emsp;&emsp;`max_batch_size`: Sets the maximum batch size for which a CUDA graph will be created.

&emsp;&emsp;**Default**: 0

&emsp;&emsp;**Recommendation**: Set this to the same value as the `--max_batch_size` command-line option.

&emsp;&emsp;`batch_sizes`: A specific list of batch sizes to create CUDA graphs for.

&emsp;&emsp;**Default**: None

#### `moe_config`

&emsp;**Description**: Configuration for Mixture-of-Experts (MoE) models.

&emsp;**Options**:

&emsp;&emsp;`backend`: The backend to use for MoE operations.

&emsp;&emsp;**Default**: CUTLASS

#### `attention_backend`

&emsp;**Description**: The backend to use for attention calculations.

&emsp;**Default**: TRTLLM

See the [TorchLlmArgs class](https://nvidia.github.io/TensorRT-LLM/llm-api/reference.html#tensorrt_llm.llmapi.TorchLlmArgs) for the full list of options which can be used in `extra_llm_api_options`.
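To illustrate how these options fit together, here is a hypothetical extra-options file exercising the fields described above; the values are illustrative examples, not tuned recommendations:

```yaml
# Hypothetical illustration of extra_llm_api_options fields; values are examples only.
kv_cache_config:
  dtype: fp8
cuda_graph_config:
  enable_padding: true
  max_batch_size: 128
  # Alternatively, list explicit sizes instead of max_batch_size:
  # batch_sizes: [1, 2, 4, 8, 16, 32, 64, 128]
moe_config:
  backend: CUTLASS
attention_backend: TRTLLM
```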
## Testing API Endpoint

### Basic Test

Start a new terminal on the host to test the TensorRT-LLM server you just launched.

You can query the health/readiness of the server using:

```shell
curl -s -o /dev/null -w "Status: %{http_code}\n" "http://localhost:8000/health"
```

When the `Status: 200` code is returned, the server is ready for queries. Note that the very first query may take longer due to initialization and compilation.

After the TRT-LLM server is set up and shows Application startup complete, you can send requests to the server.

```shell
curl http://localhost:8000/v1/completions -H "Content-Type: application/json" -d '{
  "model": "deepseek-ai/DeepSeek-R1-0528",
  "prompt": "Where is New York?",
  "max_tokens": 16,
  "temperature": 0
}'
```

Here is an example response, in which the server completes the input sequence with 16 generated tokens:

```json
{"id":"cmpl-e728f08114c042309efeae4df86a50ca","object":"text_completion","created":1754294810,"model":"deepseek-ai/DeepSeek-R1-0528","choices":[{"index":0,"text":" / by Megan Stine ; illustrated by John Hinderliter.\n\nBook | Gross","token_ids":null,"logprobs":null,"context_logits":null,"finish_reason":"length","stop_reason":null,"disaggregated_params":null}],"usage":{"prompt_tokens":6,"total_tokens":22,"completion_tokens":16},"prompt_token_ids":null}
```
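Because trtllm-serve exposes an OpenAI-compatible API, you can also query it programmatically. A minimal sketch using the `openai` Python client (assumes `pip install openai`; the API key is required by the client but not checked by the server):

```python
# Minimal sketch: send the same completion request through the openai client.
from openai import OpenAI

# The key is a placeholder; trtllm-serve does not validate it.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy")

response = client.completions.create(
    model="deepseek-ai/DeepSeek-R1-0528",
    prompt="Where is New York?",
    max_tokens=16,
    temperature=0,
)
print(response.choices[0].text)
```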
### Troubleshooting Tips

* If you encounter CUDA out-of-memory errors, try reducing `max_batch_size` or `max_seq_len`
* Ensure your model checkpoints are compatible with the expected format
* For performance issues, check GPU utilization with `nvidia-smi` while the server is running
* If the container fails to start, verify that the NVIDIA Container Toolkit is properly installed
* For connection issues, make sure port 8000 is not being used by another application
### Running Evaluations to Verify Accuracy (Optional)

We use the lm-eval tool to test the model's accuracy. For more information see [https://github.com/EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).

To run the evaluation harness, exec into the running TensorRT-LLM container and install it with these commands:

```shell
docker exec -it tensorrt_llm /bin/bash

pip install lm_eval
```
FP8 command for GSM8K:

* Note: The tokenizer adds a BOS (beginning-of-sequence) token before the input prompt by default, which leads to an accuracy regression on the GSM8K task for the DeepSeek R1 model. So, set `add_special_tokens=False` to avoid it.

```shell
MODEL_PATH=deepseek-ai/DeepSeek-R1-0528

lm_eval --model local-completions --tasks gsm8k --batch_size 256 --gen_kwargs temperature=0.0,add_special_tokens=False --num_fewshot 5 --model_args model=${MODEL_PATH},base_url=http://localhost:8000/v1/completions,num_concurrent=32,max_retries=20,tokenized_requests=False --log_samples --output_path trtllm.fp8.gsm8k
```

Sample result on Blackwell:

```shell
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|   |0.9538|±  |0.0058|
|     |       |strict-match    |     5|exact_match|   |0.9500|±  |0.0060|
```
FP4 command for GSM8K:

* Note: The tokenizer adds a BOS token before the input prompt by default, which leads to an accuracy regression on the GSM8K task for the DeepSeek R1 model. So, set `add_special_tokens=False` to avoid it.

```shell
MODEL_PATH=nvidia/DeepSeek-R1-0528-FP4

lm_eval --model local-completions --tasks gsm8k --batch_size 256 --gen_kwargs temperature=0.0,add_special_tokens=False --num_fewshot 5 --model_args model=${MODEL_PATH},base_url=http://localhost:8000/v1/completions,num_concurrent=32,max_retries=20,tokenized_requests=False --log_samples --output_path trtllm.fp4.gsm8k
```

Sample result on Blackwell:

```shell
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|   |0.9462|±  |0.0062|
|     |       |strict-match    |     5|exact_match|   |0.9447|±  |0.0063|
```
## Benchmarking Performance

To benchmark the performance of your TensorRT-LLM server you can leverage the built-in `benchmark_serving.py` script. To do this, first create a wrapper script, `bench.sh`. Note the quoted 'EOF' in the heredoc, which keeps the shell from expanding the variables when the file is written:

```shell
cat <<'EOF' > bench.sh
concurrency_list="32 64 128 256 512 1024 2048 4096"
multi_round=5
isl=1024
osl=1024
result_dir=/tmp/deepseek_r1_output

for concurrency in ${concurrency_list}; do
    num_prompts=$((concurrency * multi_round))
    python -m tensorrt_llm.serve.scripts.benchmark_serving \
        --model deepseek-ai/DeepSeek-R1-0528 \
        --backend openai \
        --dataset-name "random" \
        --random-input-len ${isl} \
        --random-output-len ${osl} \
        --random-prefix-len 0 \
        --random-ids \
        --num-prompts ${num_prompts} \
        --max-concurrency ${concurrency} \
        --ignore-eos \
        --tokenize-on-client \
        --percentile-metrics "ttft,tpot,itl,e2el"
done
EOF
chmod +x bench.sh
```
To benchmark the FP4 model, replace `--model deepseek-ai/DeepSeek-R1-0528` with `--model nvidia/DeepSeek-R1-0528-FP4`.

If you want to save the results to a file, add the following options to the `benchmark_serving` command in `bench.sh`:

```shell
--save-result \
--result-dir "${result_dir}" \
--result-filename "concurrency_${concurrency}.json"
```

For more benchmarking options, see [https://github.com/NVIDIA/TensorRT-LLM/blob/main/tensorrt_llm/serve/scripts/benchmark_serving.py](https://github.com/NVIDIA/TensorRT-LLM/blob/main/tensorrt_llm/serve/scripts/benchmark_serving.py).

Run bench.sh to begin a serving benchmark. This will take a long time if you run all the concurrencies listed in the bench.sh script above.

```shell
./bench.sh
```
Sample TensorRT-LLM serving benchmark output. Your results may vary due to ongoing software optimizations.

```
============ Serving Benchmark Result ============
Successful requests:              16
Benchmark duration (s):           17.66
Total input tokens:               16384
Total generated tokens:           16384
Request throughput (req/s):       [result]
Output token throughput (tok/s):  [result]
Total Token throughput (tok/s):   [result]
User throughput (tok/s):          [result]
---------------Time to First Token----------------
Mean TTFT (ms):                   [result]
Median TTFT (ms):                 [result]
P99 TTFT (ms):                    [result]
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                   [result]
Median TPOT (ms):                 [result]
P99 TPOT (ms):                    [result]
---------------Inter-token Latency----------------
Mean ITL (ms):                    [result]
Median ITL (ms):                  [result]
P99 ITL (ms):                     [result]
----------------End-to-end Latency----------------
Mean E2EL (ms):                   [result]
Median E2EL (ms):                 [result]
P99 E2EL (ms):                    [result]
==================================================
```
### Key Metrics

* Median Time to First Token (TTFT)
  * The typical time elapsed from when a request is sent until the first output token is generated.
* Median Time Per Output Token (TPOT)
  * The typical time required to generate each token *after* the first one.
* Median Inter-Token Latency (ITL)
  * The typical time delay between the completion of one token and the completion of the next.
* Median End-to-End Latency (E2EL)
  * The typical total time from when a request is submitted until the final token of the response is received.
* Total Token Throughput
  * The combined rate at which the system processes both input (prompt) tokens and output (generated) tokens.
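To make the relationships between these metrics concrete, here is an illustrative Python sketch (not part of the benchmark script) that derives them from the per-token arrival timestamps of a single request; in particular, E2EL decomposes as TTFT plus TPOT times the number of tokens after the first:

```python
# Illustrative sketch (not from benchmark_serving.py): deriving the latency
# metrics for one request from its per-token arrival timestamps.

def request_metrics(t_send: float, token_times: list[float]) -> dict[str, float]:
    """t_send: when the request was sent; token_times[i]: when token i arrived."""
    ttft = token_times[0] - t_send            # Time to First Token
    e2el = token_times[-1] - t_send           # End-to-End Latency
    # Inter-token latency: gap between consecutive token completions.
    itls = [b - a for a, b in zip(token_times, token_times[1:])]
    # Time per Output Token: average time per token after the first,
    # so e2el == ttft + tpot * (len(token_times) - 1).
    tpot = (e2el - ttft) / (len(token_times) - 1) if len(token_times) > 1 else 0.0
    return {"ttft_s": ttft, "tpot_s": tpot, "e2el_s": e2el,
            "mean_itl_s": sum(itls) / len(itls) if itls else 0.0}

# Example: request sent at t=0.0; four tokens arrive at 0.5, 0.6, 0.7, 0.8 seconds.
print(request_metrics(0.0, [0.5, 0.6, 0.7, 0.8]))
# -> ttft ~0.5 s, tpot ~0.1 s, mean ITL ~0.1 s, e2el ~0.8 s
```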
