Cannot tokenize byte sequences that are not valid UTF-8 due to design flaw #388

Closed
sharpobject opened this issue Mar 8, 2025 · 7 comments

@sharpobject

Hello,

The BPE algorithm is capable of tokenizing any byte sequence, and LLMs generally accept any sequence of tokens, using token dictionaries that can represent any byte sequence. But the encode method in tiktoken accepts a type that must be valid UTF-8, so there are many byte sequences, some only one byte long, that you cannot tokenize with this library.
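
For illustration, here is a minimal sketch of the limitation (the encoding choice is mine; any encoding behaves the same way):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

data = b"\x80"  # a lone continuation byte; not valid UTF-8 on its own

# encode() takes a str, so the bytes must be decoded first, and that is
# exactly where arbitrary byte sequences fail:
text = data.decode("utf-8")  # raises UnicodeDecodeError
enc.encode(text)             # never reached
```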

@hauntsaninja
Collaborator

BPE can operate on arbitrary byte sequences, but the regex splitting is in Unicode space.
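
To illustrate what "in Unicode space" means: patterns like tiktoken's are written over characters and Unicode properties, so there is no branch that an invalid byte could fall into. A simplified stand-in (not tiktoken's actual pattern):

```python
import regex  # third-party module; supports \p{...} Unicode properties

# A simplified stand-in for a pre-tokenization pattern, not the real one:
pat = regex.compile(r"\p{L}+|\p{N}+|\s+|[^\s\p{L}\p{N}]+")
print(pat.findall("hello world 123!"))  # ['hello', ' ', 'world', ' ', '123', '!']
```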

There is a private _encode_bytes method (which is not guaranteed to return correct results; see the code for details) if you feel like taking your life into your own hands.

(Also, as a general note: while there are many possible tokenisations, models only see a subset of them at training time, so if you tokenise something in a way the model has never seen before, you will typically get degraded performance that is quite hard to debug.)

@sharpobject
Author

I greatly appreciate the pointer! It seems like the solution ought to be to take one of the more obvious generalizations of the pre-segmentation regex and shove all the invalid bytes into a character class of your choice (alphabetical?). But it would certainly be quite the lift.

@sharpobject
Author

sharpobject commented Mar 8, 2025

Ok, well, _encode_bytes returns incorrect results for literally the first byte sequence I put into it, so I can file a separate issue or something.

In particular, if you put in b"\x80" * 6, you get back []. And obviously, if you decode [], you get 0 bytes back, not 6.
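
A sketch of that repro (the encoding is my assumption; the issue doesn't name one):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

tokens = enc._encode_bytes(b"\x80" * 6)  # reportedly [] before the fix
print(enc.decode_bytes(tokens))          # b'' -- 0 bytes back, not 6
```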

@hauntsaninja
Collaborator

Hm, yeah, the code looks a little wrong (its only current use is as an internal helper in one of tiktoken's internal-only tests). Let me fix it.

@hauntsaninja
Collaborator

#389 should fix this and includes a basic property test.

(Even if things now round-trip correctly, the word of caution about pushing models out of distribution still applies!)
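
For readers who want to verify the round-trip themselves, a minimal property check might look like this (the actual test in #389 may differ):

```python
import random
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

# Property: decode_bytes(_encode_bytes(data)) == data for arbitrary bytes.
for _ in range(1000):
    data = bytes(random.randrange(256) for _ in range(random.randrange(1, 64)))
    tokens = enc._encode_bytes(data)
    assert enc.decode_bytes(tokens) == data, (data, tokens)
```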

@sharpobject
Author

amazing! thank you! get some sleep :)

@hauntsaninja
Collaborator

hauntsaninja commented Mar 9, 2025

Thanks for the issue!

The other thing I'll mention (mostly in case someone later stumbles upon this issue) is that if your use case only involves single tokens, encode_single_token is a public API that can take bytes and is harder to misuse.

```python
def encode_single_token(self, text_or_bytes: str | bytes) -> int:
```
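
For example (the token string is my assumption; encode_single_token raises KeyError when the input is not exactly one token in the vocabulary):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

# Works for byte strings that map to exactly one token in the vocabulary:
print(enc.encode_single_token(b"hello"))  # a single token id

# Anything that is not a single token raises KeyError rather than
# silently producing a wrong result:
try:
    enc.encode_single_token(b"definitely more than one token")
except KeyError:
    print("not a single token")
```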
