Metrics Endpoint
----------------

.. note::

    The metrics endpoint for the default PyTorch backend is in beta and is not as comprehensive as the one for the TensorRT backend.

    Some fields, such as CPU memory usage, are not yet available for the PyTorch backend.

    Enabling ``enable_iter_perf_stats`` in the PyTorch backend can slightly impact performance, depending on the serving configuration.

The ``/metrics`` endpoint provides runtime iteration statistics such as GPU memory usage and KV cache details.
For the default PyTorch backend, iteration statistics logging is enabled by setting the ``enable_iter_perf_stats`` field in a YAML file:

.. code-block:: yaml

    # extra_llm_config.yaml
    enable_iter_perf_stats: true

Start the server and specify the ``--extra_llm_api_options`` argument with the path to the YAML file.
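
A minimal sketch of such a command, assuming the model is given as a Hugging Face model ID (the model name below is only a placeholder):

.. code-block:: bash

    # The model ID is illustrative; substitute your own checkpoint or model ID.
    trtllm-serve "TinyLlama/TinyLlama-1.1B-Chat-v1.0" \
        --extra_llm_api_options extra_llm_config.yaml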

After sending at least one inference request to the server, you can fetch runtime iteration statistics by polling the ``/metrics`` endpoint.
Since the statistics are stored in an internal queue and removed once retrieved, it's recommended to poll the endpoint shortly after each request and store the results if needed.
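
For example, assuming the server listens on the default ``localhost:8000`` (the host and port here are assumptions, not fixed values), you can poll the endpoint with ``curl`` right after a request:

.. code-block:: bash

    # Send one OpenAI-compatible completion request (the model name is a placeholder) ...
    curl -s http://localhost:8000/v1/completions \
        -H "Content-Type: application/json" \
        -d '{"model": "<model>", "prompt": "Hello", "max_tokens": 8}'

    # ... then poll the metrics endpoint promptly; each poll drains the internal queue.
    curl -s http://localhost:8000/metrics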