Checklist
- 1. I have searched related issues but cannot get the expected help.
- 2. The bug has not been fixed in the latest version.
- 3. Please note that if the submitted issue lacks corresponding environment info and a minimal reproducible demo, it will be difficult for us to reproduce and resolve it, reducing the likelihood of feedback.
Describe the bug
model: unsloth/gpt-oss-20b
[TM][INFO] TM_FUSE_SILU_ACT=1
2025-09-04 18:37:49,126 - lmdeploy - WARNING - turbomind.py:280 - get 14165 model params
not implemented: dtype=e2m1, is_fused_moe=1, sm=70
not implemented: dtype=e2m1
Aborted
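The two "not implemented" lines indicate that turbomind has no fused-MoE kernel for the e2m1 (MXFP4) expert weights on sm_70, which is the compute capability of the Tesla V100. A minimal sketch to confirm the compute capability of the visible GPUs, assuming only that PyTorch is installed:

import torch

# Tesla V100 reports compute capability (7, 0), i.e. sm_70, which is
# what the "sm=70" in the error message refers to.
for i in range(torch.cuda.device_count()):
    major, minor = torch.cuda.get_device_capability(i)
    print(f"GPU {i}: {torch.cuda.get_device_name(i)} -> sm_{major}{minor}")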
Reproduction
lmdeploy serve api_server --log-level INFO --tp 2 --model-name gpt-oss-20b /gpt-oss-20b
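The same crash should be reproducible through the Python API as well; a minimal sketch under the same settings (tp=2, model at /gpt-oss-20b), using lmdeploy's pipeline entry point:

from lmdeploy import pipeline, TurbomindEngineConfig

# Mirrors the CLI invocation above: turbomind backend, tensor parallel
# across the two V100s. Loading aborts while converting the mxfp4
# expert weights, before the server would come up.
pipe = pipeline('/gpt-oss-20b', backend_config=TurbomindEngineConfig(tp=2))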
Environment
sys.platform: linux
Python: 3.10.12 (main, Aug 15 2025, 14:32:43) [GCC 11.4.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0,1: Tesla V100-SXM2-16GB
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.4, V12.4.131
GCC: x86_64-linux-gnu-gcc (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0
PyTorch: 2.7.1+cu126
PyTorch compiling details: PyTorch built with:
- GCC 11.2
- C++ Version: 201703
- Intel(R) oneAPI Math Kernel Library Version 2024.2-Product Build 20240605 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v3.7.1 (Git Hash 8d263e693366ef8db40acc569cc7d8edf644556d)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- LAPACK is enabled (usually provided by MKL)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 12.6
- NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90
- CuDNN 90.5.1
- Magma 2.6.1
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, COMMIT_SHA=e2d141dbde55c2a4370fac5165b0561b6af4798b, CUDA_VERSION=12.6, CUDNN_VERSION=9.5.1, CXX_COMPILER=/opt/rh/gcc-toolset-11/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=1 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_FBGEMM -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=range-loop-construct -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-unknown-pragmas -Wno-unused-parameter -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, TORCH_VERSION=2.7.1, USE_CUDA=ON, USE_CUDNN=ON, USE_CUSPARSELT=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF,
TorchVision: 0.22.1+cu126
LMDeploy: 0.9.2+
transformers: 4.56.0
fastapi: 0.116.1
pydantic: 2.11.7
triton: 3.3.1
NVIDIA Topology:
      GPU0   GPU1   CPU Affinity   NUMA Affinity   GPU NUMA ID
GPU0  X      PHB    0-11           0               N/A
GPU1  PHB    X      0-11           0               N/A
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
Error traceback
2025-09-04 18:37:46,821 - lmdeploy - INFO - async_engine.py:254 - input backend=turbomind, backend_config=TurbomindEngineConfig(dtype='auto', model_format=None, tp=2, dp=1, device_num=None, attn_tp_size=None, attn_dp_size=None, mlp_tp_size=None, mlp_dp_size=None, outer_dp_size=None, session_len=None, max_batch_size=128, cache_max_entry_count=0.8, cache_chunk_size=-1, cache_block_seq_len=64, enable_prefix_caching=False, quant_policy=0, rope_scaling_factor=0.0, use_logn_attn=False, download_dir=None, revision=None, max_prefill_token_num=8192, num_tokens_per_iter=0, max_prefill_iters=1, devices=None, empty_init=False, communicator='nccl', hf_overrides=None, enable_metrics=False)
2025-09-04 18:37:46,821 - lmdeploy - INFO - async_engine.py:255 - input chat_template_config=None
2025-09-04 18:37:46,825 - lmdeploy - INFO - async_engine.py:267 - updated chat_template_config=ChatTemplateConfig(model_name='gpt-oss', system=None, meta_instruction=None, eosys=None, user=None, eoh=None, assistant=None, eoa=None, tool=None, eotool=None, separator=None, capability=None, stop_words=None)
2025-09-04 18:37:48,355 - lmdeploy - WARNING - converter.py:91 - data type fallback to float16 since torch.cuda.is_bf16_supported is False
2025-09-04 18:37:48,924 - lmdeploy - INFO - turbomind.py:255 - turbomind model config:
{
"model_config": {
"model_name": "",
"chat_template": "",
"model_arch": "GptOssForCausalLM",
"head_num": 64,
"kv_head_num": 8,
"hidden_units": 2880,
"vocab_size": 201088,
"embedding_size": 201088,
"tokenizer_size": 200019,
"num_layer": 24,
"inter_size": [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
],
"norm_eps": 1e-05,
"attn_bias": 1,
"mlp_bias": true,
"window_size": [
128,
0,
128,
0,
128,
0,
128,
0,
128,
0,
128,
0,
128,
0,
128,
0,
128,
0,
128,
0,
128,
0,
128,
0
],
"attn_sink": true,
"qk_norm": false,
"size_per_head": 64,
"group_size": 32,
"weight_type": "float16",
"expert_weight_type": "e2m1",
"session_len": 131072,
"attn_tp_size": 2,
"mlp_tp_size": 2,
"model_format": "mxfp4",
"expert_num": [
32,
32,
32,
32,
32,
32,
32,
32,
32,
32,
32,
32,
32,
32,
32,
32,
32,
32,
32,
32,
32,
32,
32,
32
],
"expert_router_bias": true,
"expert_inter_size": 2880,
"experts_per_token": 4,
"activation_type": "gpt-oss",
"moe_shared_gate": false,
"norm_topk_prob": true,
"routed_scale": 1.0,
"topk_group": 1,
"topk_method": "greedy",
"moe_group_num": 1,
"q_lora_rank": 0,
"kv_lora_rank": 0,
"qk_rope_dim": 0,
"v_head_dim": 0,
"tune_layer_num": 1
},
"attention_config": {
"softmax_scale": 0.0,
"cache_block_seq_len": 64,
"use_logn_attn": 0,
"max_position_embeddings": 131072,
"rope_param": {
"type": "yarn",
"base": 150000.0,
"dim": 64,
"factor": 32.0,
"max_position_embeddings": 4096,
"attention_factor": 1.3465735902799727,
"beta_fast": 32.0,
"beta_slow": 1.0,
"low_freq_factor": null,
"high_freq_factor": null,
"original_max_position_embeddings": null,
"mrope_section": null
}
},
"lora_config": {
"lora_policy": "",
"lora_r": 0,
"lora_scale": 0.0,
"lora_max_wo_r": 0,
"lora_rank_pattern": "",
"lora_scale_pattern": ""
},
"engine_config": {
"dtype": "auto",
"model_format": "mxfp4",
"tp": 2,
"dp": 1,
"device_num": 2,
"attn_tp_size": 2,
"attn_dp_size": 1,
"mlp_tp_size": 2,
"mlp_dp_size": 1,
"outer_dp_size": 1,
"session_len": 131072,
"max_batch_size": 128,
"cache_max_entry_count": 0.8,
"cache_chunk_size": -1,
"cache_block_seq_len": 64,
"enable_prefix_caching": false,
"quant_policy": 0,
"rope_scaling_factor": 0.0,
"use_logn_attn": false,
"download_dir": null,
"revision": null,
"max_prefill_token_num": 8192,
"num_tokens_per_iter": 8192,
"max_prefill_iters": 16,
"devices": [
0,
1
],
"empty_init": false,
"communicator": "nccl",
"hf_overrides": null,
"enable_metrics": false
}
}
[TM][WARNING] [LlamaTritonModel] `max_context_token_num` is not set, default to 131072.
[TM][INFO] Model:
head_num: 64
kv_head_num: 8
size_per_head: 64
num_layer: 24
vocab_size: 201088
attn_bias: 1
qk_norm: 0
max_batch_size: 128
max_context_token_num: 131072
num_tokens_per_iter: 8192
max_prefill_iters: 16
session_len: 131072
cache_max_entry_count: 0.8
cache_block_seq_len: 64
cache_chunk_size: -1
enable_prefix_caching: 0
model_name:
model_dir:
quant_policy: 0
group_size: 32
expert_per_token: 4
moe_method: 1
[TM][INFO] TM_FUSE_SILU_ACT=1
2025-09-04 18:37:49,126 - lmdeploy - WARNING - turbomind.py:280 - get 14165 model params
not implemented: dtype=e2m1, is_fused_moe=1, sm=70
not implemented: dtype=e2m1
Aborted
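Since the abort happens while converting weights rather than at startup, a pre-flight check would fail faster and more clearly. A hedged sketch of such a check, assuming the checkpoint's config.json exposes a quantization_config.quant_method field set to "mxfp4" (the key names and the sm_80 cutoff are assumptions, not confirmed from turbomind's sources):

import json
import torch

# Hypothetical pre-flight check: refuse to serve an MXFP4 (e2m1)
# checkpoint on a GPU generation the fused-MoE kernel does not cover,
# instead of aborting mid-load. The config keys below and the (8, 0)
# threshold are assumptions; the log above only proves sm_70 is
# unsupported.
with open('/gpt-oss-20b/config.json') as f:
    cfg = json.load(f)

quant = cfg.get('quantization_config', {}).get('quant_method')
major, minor = torch.cuda.get_device_capability(0)
if quant == 'mxfp4' and (major, minor) < (8, 0):
    raise SystemExit(f'mxfp4/e2m1 expert weights are not supported on sm_{major}{minor}')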