<!--
SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# Running KVBM in TensorRT-LLM

This guide explains how to leverage KVBM (KV Block Manager) to manage the KV cache and perform KV offloading in TensorRT-LLM (trtllm).

To learn what KVBM is, see the [KVBM introduction](https://docs.nvidia.com/dynamo/latest/architecture/kvbm_intro.html).

> [!Note]
> - Ensure that `etcd` is running before starting.
> - KVBM does not currently support CUDA graphs in TensorRT-LLM.
> - KVBM only supports TensorRT-LLM’s PyTorch backend.

## Quick Start

To use KVBM in TensorRT-LLM, follow the steps below:

```bash
# start up etcd for KVBM leader/worker registration and discovery
docker compose -f deploy/docker-compose.yml up -d

# build a container containing trtllm and kvbm; note that KVBM integration is only available on TensorRT-LLM commit: TBD
./container/build.sh --framework trtllm --tensorrtllm-commit TBD --enable-kvbm

# launch the container
./container/run.sh --framework trtllm -it --mount-workspace --use-nixl-gds

# enable kv offloading to CPU memory
# 60 means 60GB of pinned CPU memory would be used
export DYN_KVBM_CPU_CACHE_GB=60

# enable kv offloading to disk
# 20 means 20GB of disk would be used
export DYN_KVBM_DISK_CACHE_GB=20
```

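Before serving a model, it can help to confirm that the `etcd` instance started by `docker compose` is actually reachable. A minimal sketch, assuming etcd is listening on its default client port 2379 on localhost:

```bash
# check etcd health on the default client port (assumed to be localhost:2379)
curl -s http://localhost:2379/health
# a healthy instance returns something like: {"health":"true","reason":""}
```
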
```bash
# write an example LLM API config
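# (informational) in the config below, cuda_graph_config: null disables CUDA graphs (not supported with KVBM),
# enable_partial_reuse: false turns off partial KV block reuse,
# free_gpu_memory_fraction caps how much of the free GPU memory the KV cache may use,
# and kv_connector_config wires TensorRT-LLM's KV connector hooks to the KVBM leader/worker classes.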
cat > "/tmp/kvbm_llm_api_config.yaml" <<EOF
backend: pytorch
cuda_graph_config: null
kv_cache_config:
  enable_partial_reuse: false
  free_gpu_memory_fraction: 0.80
kv_connector_config:
  connector_module: dynamo.llm.trtllm_integration.connector
  connector_scheduler_class: DynamoKVBMConnectorLeader
  connector_worker_class: DynamoKVBMConnectorWorker
EOF

# serve an example LLM model
trtllm-serve deepseek-ai/DeepSeek-R1-Distill-Llama-8B --host localhost --port 8000 --backend pytorch --extra_llm_api_options /tmp/kvbm_llm_api_config.yaml

# make a call to LLM
curl localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
    "model": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
    "messages": [
    {
      "role": "user",
      "content": "In the heart of Eldoria, an ancient land of boundless magic and mysterious creatures, lies the long-forgotten city of Aeloria. Once a beacon of knowledge and power, Aeloria was buried beneath the shifting sands of time, lost to the world for centuries. You are an intrepid explorer, known for your unparalleled curiosity and courage, who has stumbled upon an ancient map hinting at ests that Aeloria holds a secret so profound that it has the potential to reshape the very fabric of reality. Your journey will take you through treacherous deserts, enchanted forests, and across perilous mountain ranges. Your Task: Character Background: Develop a detailed background for your character. Describe their motivations for seeking out Aeloria, their skills and weaknesses, and any personal connections to the ancient city or its legends. Are they driven by a quest for knowledge, a search for lost familt clue is hidden."
    }
    ],
    "stream":false,
    "max_tokens": 30
  }'
```
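To see reuse of offloaded KV blocks in action, one simple check is to send the same long prompt twice: the second request can be served from blocks KVBM has already cached or offloaded. A minimal sketch (the prompt and timing comparison are illustrative only, not a benchmark):

```bash
# build a reasonably long prompt so it spans at least one full KV block
LONG_PROMPT=$(printf 'Describe the ancient city of Aeloria in great detail. %.0s' {1..40})
PAYLOAD='{"model":"deepseek-ai/DeepSeek-R1-Distill-Llama-8B","messages":[{"role":"user","content":"'"$LONG_PROMPT"'"}],"stream":false,"max_tokens":30}'

# send an identical request twice and compare wall-clock time; the second call
# may benefit from KV blocks already held by KVBM in the CPU or disk tiers
time curl -s localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d "$PAYLOAD" > /dev/null
time curl -s localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d "$PAYLOAD" > /dev/null
```
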
## Enable and View KVBM Metrics

Follow the steps below to enable metrics collection and view them in a Grafana dashboard:
```bash
# Start the basic services (etcd & natsd), along with Prometheus and Grafana
docker compose -f deploy/docker-compose.yml --profile metrics up -d

# write an example LLM API config
cat > "/tmp/kvbm_llm_api_config.yaml" <<EOF
backend: pytorch
cuda_graph_config: null
kv_cache_config:
  enable_partial_reuse: false
  free_gpu_memory_fraction: 0.80
kv_connector_config:
  connector_module: dynamo.llm.trtllm_integration.connector
  connector_scheduler_class: DynamoKVBMConnectorLeader
  connector_worker_class: DynamoKVBMConnectorWorker
EOF

# serve the example LLM model with KVBM metrics enabled by setting DYN_SYSTEM_ENABLED=true and DYN_SYSTEM_PORT=6880.
# NOTE: Make sure port 6880 (for KVBM worker metrics) and port 6881 (for KVBM leader metrics) are available.
DYN_SYSTEM_ENABLED=true DYN_SYSTEM_PORT=6880 trtllm-serve deepseek-ai/DeepSeek-R1-Distill-Llama-8B --host localhost --port 8000 --backend pytorch --extra_llm_api_options /tmp/kvbm_llm_api_config.yaml

# optional: if a firewall blocks the KVBM metrics ports, open them so Prometheus can scrape the metrics
sudo ufw allow 6880/tcp
sudo ufw allow 6881/tcp
```
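Before opening Grafana, you can also check the raw Prometheus endpoints directly. A minimal sketch, assuming the KVBM worker and leader expose standard Prometheus `/metrics` endpoints on the ports configured above (exact metric names may vary):

```bash
# scrape the KVBM worker (6880) and leader (6881) metrics endpoints
curl -s localhost:6880/metrics | grep -i kvbm | head
curl -s localhost:6881/metrics | grep -i kvbm | head
```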

View Grafana metrics at http://localhost:3001 (default login: dynamo/dynamo) and look for the KVBM Dashboard.