
Commit 4c7629c

[V1][Structured Output] calculate vocab_size eagerly (vllm-project#14851)
Signed-off-by: Aaron Pham <[email protected]>
1 parent e0fdfa1 commit 4c7629c

File tree

1 file changed: +1 -1 lines changed

vllm/v1/structured_output/__init__.py

+1 -1
@@ -40,7 +40,7 @@ def _delayed_init(self):
         tokenizer_group.ping()
 
         tokenizer = tokenizer_group.get_lora_tokenizer(None)
-        self.vocab_size = tokenizer.max_token_id + 1
+        self.vocab_size = len(tokenizer.get_vocab())
         if isinstance(tokenizer, MistralTokenizer):
             # NOTE: ideally, xgrammar should handle this accordingly.
             # refer to https://github.com/mlc-ai/xgrammar/blob/d77c0a0173ef14779c918e3be7966ba852f7910f/python/xgrammar/tokenizer_info.py#L98
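For context, tokenizer.max_token_id + 1 and len(tokenizer.get_vocab()) agree only when the token id space is contiguous; if the vocabulary has gaps, the former can be larger. The sketch below is a hypothetical illustration (the ToyTokenizer class is not vLLM code), assuming a Hugging Face-style get_vocab() that returns a token-to-id mapping.

# Hypothetical sketch, not vLLM code: contrasts the two vocab_size expressions.

class ToyTokenizer:
    """Minimal stand-in exposing get_vocab() and max_token_id."""

    def __init__(self, vocab: dict[str, int]) -> None:
        self._vocab = vocab

    def get_vocab(self) -> dict[str, int]:
        # Token-to-id mapping, in the style of Hugging Face tokenizers.
        return dict(self._vocab)

    @property
    def max_token_id(self) -> int:
        return max(self._vocab.values())


# Contiguous id space: both expressions give 4.
dense = ToyTokenizer({"<pad>": 0, "a": 1, "b": 2, "c": 3})
assert dense.max_token_id + 1 == len(dense.get_vocab()) == 4

# Id space with a gap (id 3 unused): the expressions diverge.
sparse = ToyTokenizer({"<pad>": 0, "a": 1, "b": 2, "c": 4})
print(sparse.max_token_id + 1)   # 5
print(len(sparse.get_vocab()))   # 4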
