The BPE algorithm is capable of tokenizing any byte sequence, and LLMs generally accept any sequence of tokens and use token dictionaries that can represent any byte sequence. However, the encode method in bpe-openai accepts a type that must be valid UTF-8. As a result, there are many byte sequences, including many that are only one byte long, which cannot be tokenized using this library.
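To make the constraint concrete, here is a small self-contained Rust sketch (independent of the crate's actual API) showing that many single bytes are rejected by UTF-8 validation, even though a byte-level BPE vocabulary has a base token for every byte:

```rust
// Demonstrates why an encode API taking &str cannot accept every byte
// sequence: many single bytes are not valid UTF-8 on their own.
fn main() {
    // 0xFF never appears in well-formed UTF-8, so conversion to &str fails.
    let lone_byte = [0xFFu8];
    assert!(std::str::from_utf8(&lone_byte).is_err());

    // A continuation byte without a lead byte is also invalid on its own,
    // yet byte-level BPE can represent it as a base token.
    let continuation = [0x80u8];
    assert!(std::str::from_utf8(&continuation).is_err());
}
```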
Hi,
the question is how tokenization should behave when the input is not valid UTF-8.
To run the regex automaton over arbitrary bytes, one would have to disable Unicode mode, which makes the regex patterns invalid.
In other words, the pretokenization step, which splits the input into smaller chunks that are then byte-pair encoded, is not well defined when the input is not valid UTF-8.
One could obviously skip the pretokenization/splitting step entirely and just apply the BPE algorithm to the raw bytes.
But in that case you can also just use the inner BPE algorithm and its functions directly.
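As a rough illustration of what skipping pretokenization means, here is a minimal, self-contained Rust sketch of byte-level BPE applied directly to raw bytes. This is a toy, not the crate's actual API: the merge table and the priority rule (lower resulting token id merges first) are assumptions for the example.

```rust
use std::collections::HashMap;

// Byte-level BPE over raw bytes, with no regex pretokenization.
// Token ids 0..=255 are the base byte tokens; `merges` maps an adjacent
// token pair to the id it merges into. We assume lower resulting ids
// have higher merge priority (as in rank-ordered BPE merge lists).
fn bpe_encode(input: &[u8], merges: &HashMap<(u32, u32), u32>) -> Vec<u32> {
    let mut tokens: Vec<u32> = input.iter().map(|&b| b as u32).collect();
    loop {
        // Find the mergeable adjacent pair whose resulting id is lowest.
        let mut best: Option<(usize, u32)> = None;
        for i in 0..tokens.len().saturating_sub(1) {
            if let Some(&id) = merges.get(&(tokens[i], tokens[i + 1])) {
                if best.map_or(true, |(_, b)| id < b) {
                    best = Some((i, id));
                }
            }
        }
        match best {
            Some((i, id)) => {
                tokens[i] = id;     // replace the pair with the merged token
                tokens.remove(i + 1);
            }
            None => break,          // no more applicable merges
        }
    }
    tokens
}

fn main() {
    // Hypothetical merge table: the bytes b'a', b'b' merge into token 256.
    let mut merges = HashMap::new();
    merges.insert((b'a' as u32, b'b' as u32), 256);

    // Works on arbitrary, non-UTF-8 input such as the lone byte 0xFF.
    let tokens = bpe_encode(&[b'a', b'b', 0xFF], &merges);
    assert_eq!(tokens, vec![256, 0xFF]);
}
```

Because every byte is a base token, this encoding never fails, regardless of whether the input is valid UTF-8.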
Does this make sense?
If something else is expected in case of non-utf8 input, please let us know.