Commit 74fa84e

revert mtp unrelated tokenizer-side workaround
Signed-off-by: Xuanyu Chen <[email protected]>
1 parent 726c7ee commit 74fa84e

File tree

1 file changed (+1, −6 lines)

tensorrt_llm/llmapi/tokenizer.py

Lines changed: 1 addition & 6 deletions
@@ -94,13 +94,8 @@ def convert_ids_to_tokens(
         self,
         ids: Union[int, List[int]],
         skip_special_tokens: bool = False) -> Union[str, List[str]]:
-        # DeepSeek vocabulary has token ids not mapped to any tokens, these will get converted to None
-        # by the tokenizer. We need to filter them out.
-        tokens = self.tokenizer.convert_ids_to_tokens(
+        return self.tokenizer.convert_ids_to_tokens(
             ids, skip_special_tokens=skip_special_tokens)
-        if isinstance(ids, int):
-            return tokens  # Single token, return as-is (could be None)
-        return [token for token in tokens if token is not None]
 
     def convert_tokens_to_string(
         self,
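For context on what the revert changes: the deleted workaround filtered out the `None` entries that `convert_ids_to_tokens` can return for ids with no vocabulary mapping (per the deleted comment, this occurs with DeepSeek vocabularies); after this commit the tokenizer's result is passed through unchanged. A minimal sketch of both behaviors, using a hypothetical stub in place of a real Hugging Face tokenizer (the stub class and its vocabulary are illustrative assumptions, not TensorRT-LLM code):

```python
from typing import List, Optional, Union


class StubTokenizer:
    """Illustrative stand-in for a tokenizer whose vocabulary has gaps:
    ids with no mapped token come back as None."""
    vocab = {0: "<s>", 1: "hello", 3: "world"}  # id 2 is unmapped

    def convert_ids_to_tokens(
            self,
            ids: Union[int, List[int]],
            skip_special_tokens: bool = False
    ) -> Union[Optional[str], List[Optional[str]]]:
        if isinstance(ids, int):
            return self.vocab.get(ids)
        return [self.vocab.get(i) for i in ids]


tok = StubTokenizer()

# Behavior after this commit: results are passed through unchanged,
# so callers see None for unmapped ids.
assert tok.convert_ids_to_tokens([1, 2, 3]) == ["hello", None, "world"]
assert tok.convert_ids_to_tokens(2) is None


# The reverted workaround: drop None entries from list results,
# but return a single-id lookup as-is (possibly None).
def convert_ids_to_tokens_with_workaround(ids):
    tokens = tok.convert_ids_to_tokens(ids)
    if isinstance(ids, int):
        return tokens
    return [t for t in tokens if t is not None]


assert convert_ids_to_tokens_with_workaround([1, 2, 3]) == ["hello", "world"]
```

One consequence of the revert, under these assumptions, is that list results regain positional alignment with the input ids (a `None` occupies the unmapped slot instead of being silently dropped), at the cost of callers having to handle `None` themselves.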

0 commit comments