Fix Unigram tokenizer vocabulary lookup (token_to_id and id_to_token) #117
## Summary

Fixes the Unigram tokenizer's `token_to_id` and `id_to_token` functions, which previously contained TODO placeholders, making Unigram models unusable for decoding and vocabulary inspection.

## Problem

- `token_to_id` always returned `Some 0` for every token
- `id_to_token` always returned `None` for every ID

## Solution

- Precompute a token -> ID hashtable (`token_map`) for O(1) lookups, built with `List.iteri`
- Return `None` for unknown tokens and out-of-bounds IDs

## Changes Made
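A minimal sketch of what the fixed lookups look like. The `token_map` name comes from this PR; the exact shape of the `unigram_model` record (field names, vocab representation) is an assumption for illustration, not the real definition in `models.ml`:

```ocaml
(* Sketch only: the real unigram_model in saga/lib/tokenizers/models.ml
   has more fields; names here are assumed for illustration. *)
type unigram_model = {
  vocab : (string * float) array;      (* (token, log-probability) pairs, indexed by ID *)
  token_map : (string, int) Hashtbl.t; (* precomputed token -> ID map *)
}

let token_to_id model token =
  (* O(1) hashtable lookup; None for unknown tokens *)
  Hashtbl.find_opt model.token_map token

let id_to_token model id =
  (* Bounds check before indexing; None for out-of-range IDs *)
  if id >= 0 && id < Array.length model.vocab then
    Some (fst model.vocab.(id))
  else None
```

The key point of the fix is that both functions now report absence with `None` instead of returning a placeholder value.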
### Modified Files

- `saga/lib/tokenizers/models.ml`
  - Extended the `unigram_model` type to include a `token_map` hashtable
  - Updated the `unigram` constructor to precompute the token -> ID hashtable
  - Implemented `token_to_id` using a hashtable lookup (O(1) instead of O(n))
  - Implemented `id_to_token` with proper bounds checking
- `saga/lib/tokenizers/models.mli`
  - Added the `token_map` field to the `unigram_model` type
- `saga/lib/tokenizers/trainers.ml`
  - Updated `train_unigram` to create the `token_map` when building the model
- `saga/test/test_tokenization.ml`
  - Added tests for the new lookups
### Tests Added

- `token_to_id` returns the correct ID for known tokens and `None` for unknown tokens (instead of always `Some 0`)
- `id_to_token` returns the correct token for valid IDs and `None` for out-of-bounds IDs (instead of always `None`)