Hi,
I am trying to understand the bert_model arg in run_classifier.py. In the file, I can see
tokenizer = BertTokenizer.from_pretrained(args.bert_model)
where bert_model seems to be expected to be the path to the model's vocab text file.
However, I also see
model = BertForSequenceClassification.from_pretrained(args.bert_model, len(label_list))
where bert_model is expected to be an archive file containing the model checkpoint and config.
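For context, this is roughly what I am trying with my locally converted model (the path and label_list below are just placeholders, and I am not sure a single path is supposed to work for both calls):

from pytorch_pretrained_bert import BertTokenizer, BertForSequenceClassification

# placeholder path to the directory holding my converted checkpoint, config, and vocab file
bert_model = "/path/to/my_converted_bert/"
# placeholder label list for an example binary task
label_list = ["0", "1"]

tokenizer = BertTokenizer.from_pretrained(bert_model)
model = BertForSequenceClassification.from_pretrained(bert_model, len(label_list))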
Could you please advise on the correct use of bert_model if I have already converted my pretrained model locally?
Thanks!