As in issue 1830, I ran into the same problem when adding special tokens to the tokenizer.
I believe the culprit is the property self.all_special_tokens: it is re-evaluated many times once special tokens have been added, and that is what slows the code down.
An easy way to solve this is to cache it in a temporary set.
In my implementation this made things roughly 10x faster with 207 special tokens added; I could not measure a precise number because of multiprocessing :)
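A minimal sketch of that idea (the helper name and structure are mine, not the library's): evaluate the expensive property once, keep the result in a set, and do all membership checks against it.

from transformers import BertTokenizer

def filter_new_tokens(tokenizer, candidates):
    # Hypothetical helper: build the special-token set once instead of
    # re-running the all_special_tokens property for every candidate token.
    special = set(tokenizer.all_special_tokens)  # built once, O(1) lookups
    return [t for t in candidates
            if t not in special and t not in tokenizer.vocab]

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
print(filter_new_tokens(tokenizer, ['[NEW1]', 'hello']))  # likely ['[NEW1]']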
Yes, the add_special_tokens method is reserved for a limited number of tokens with special properties and usage, such as CLS or MASK. For other uses, go with add_tokens.
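A small usage sketch of that split (the checkpoint and token strings are just illustrative):

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# Structural tokens that models treat specially go through add_special_tokens.
tokenizer.add_special_tokens({'additional_special_tokens': ['[ENT]']})

# Plain vocabulary extensions go through add_tokens.
tokenizer.add_tokens(['covid', 'lockdown'])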
Here is how we solved the performance issue when adding a custom vocabulary: in the add_tokens method, we simply merge new_tokens directly into self.vocab.
from transformers import BertTokenizer, WordpieceTokenizer
from collections import OrderedDict

class CustomVocabBertTokenizer(BertTokenizer):
    def add_tokens(self, new_tokens):
        # Drop tokens that are already in the vocab or among the special tokens
        new_tokens = [token for token in new_tokens
                      if not (token in self.vocab or token in self.all_special_tokens)]

        # Append the new tokens at the end of the existing vocabulary
        self.vocab = OrderedDict([
            *self.vocab.items(),
            *[(token, i + len(self.vocab)) for i, token in enumerate(new_tokens)],
        ])

        # Rebuild the id -> token map and the wordpiece tokenizer so both
        # see the extended vocabulary
        self.ids_to_tokens = OrderedDict([(ids, tok) for tok, ids in self.vocab.items()])
        self.wordpiece_tokenizer = WordpieceTokenizer(vocab=self.vocab, unk_token=self.unk_token)

        return len(new_tokens)
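A quick usage sketch for the subclass above (the checkpoint and token names are illustrative). Note that tokens added this way land in the WordPiece vocab itself rather than in the tokenizer's added-tokens map, so if you pair this with a model you still need to resize its embeddings, e.g. via model.resize_token_embeddings(len(tokenizer)).

tokenizer = CustomVocabBertTokenizer.from_pretrained('bert-base-uncased')
num_added = tokenizer.add_tokens(['mycustomword', 'anothercustomword'])
print(num_added)  # 2
print(tokenizer.tokenize('a mycustomword example'))  # new token stays unsplit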