
Cannot tokenize byte sequences that are not valid UTF-8 due to design flaw #51

Open · sharpobject opened this issue Mar 8, 2025 · 1 comment

Comments

@sharpobject commented Mar 8, 2025

Hello,

The BPE algorithm can tokenize any byte sequence, and LLMs generally accept any sequence of tokens and use token dictionaries that can represent any byte sequence. However, the encode method in bpe-openai accepts a type that must be valid UTF-8. As a result, there are many byte sequences, some only one byte long, that this library cannot tokenize.
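
For example (a rough sketch; I'm assuming the crate exposes a `cl100k_base()` constructor and an `encode(&str)` method, so adjust the names to the actual API): the single byte `0xFF` is not valid UTF-8, so no `&str` can represent it, and there is no way to pass it to `encode`:

```rust
fn main() {
    let tok = bpe_openai::cl100k_base();

    // Valid UTF-8 input works fine:
    let tokens = tok.encode("hello world");
    println!("{:?}", tokens);

    // But a lone 0xFF byte is not valid UTF-8, so it can never be
    // turned into the &str that encode() requires:
    let bytes: &[u8] = &[0xFF];
    assert!(std::str::from_utf8(bytes).is_err());
    // tok.encode(???) -- there is no way to tokenize these bytes here.
}
```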

@aneubeck (Collaborator) commented

Hi,
the question is how tokenization should behave when the input is not valid UTF-8. One would have to disable Unicode mode on the regex automaton, which makes the regex patterns invalid. In other words, the pretokenization step, which splits the input into smaller chunks that are then byte-pair encoded, is not well defined when the input is not UTF-8.
One can of course skip the pretokenization/splitting step entirely and just run the BPE algorithm, but in that case you can also just use the inner BPE type and its functions directly.
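
Something along these lines should work (a sketch, assuming the `Tokenizer` exposes its inner `BytePairEncoding` as a public `bpe` field and that the inner type provides the `bpe` crate's `encode_via_backtracking`; names may differ):

```rust
fn main() {
    let tok = bpe_openai::cl100k_base();
    let bytes: &[u8] = &[0xFF, 0xFE, b'h', b'i'];

    // Runs plain BPE over the raw bytes, without the regex-based
    // splitting, so it works for any byte sequence:
    let tokens = tok.bpe.encode_via_backtracking(bytes);
    println!("{:?}", tokens);
}
```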
Does this make sense?

If something else is expected in case of non-utf8 input, please let us know.
