Hello,
I am trying to use pretrained tokenizers to test encoding with the train_byte_level_bpe.py script (from "Restoring model from learned vocab/merges").
Even though I created my environment from the conda.yml file, I get an error from the .encode() function:
TypeError: encode() got an unexpected keyword argument 'pad_to_max_length'
When I searched, I found that this may be related to the versions of the tokenizers or transformers libraries. Do you know how I can solve it?
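To illustrate what I think is happening (this is a plain-Python sketch, not the real tokenizers API): if a keyword argument is removed or renamed between library versions, calling the newer `encode()` with the old keyword raises exactly this `TypeError`:

```python
# Minimal stdlib illustration of the failure mode, independent of tokenizers:
# the same method name, but with a keyword removed between "versions".

class OldTokenizer:
    # old-style API: accepts the pad_to_max_length keyword
    def encode(self, text, pad_to_max_length=False):
        return text.split()

class NewTokenizer:
    # newer-style API: that keyword no longer exists
    def encode(self, text, padding=None):
        return text.split()

# Works against the old API.
OldTokenizer().encode("hello world", pad_to_max_length=True)

# Fails against the new API with the same error I am seeing.
try:
    NewTokenizer().encode("hello world", pad_to_max_length=True)
except TypeError as e:
    print(e)  # mentions the unexpected keyword 'pad_to_max_length'
```

So my guess is that the script was written against one version of the library and my environment installed another.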
Thanks a lot for your help