Created model card for xlm-roberta-xl #38597
Conversation
Nice, thanks for your contribution! Please apply the same general changes (such as the badges) to your other XLM model cards.
stevhliu left a comment:
Just a few more suggestions!
stevhliu left a comment:
Thanks, just a few more comments to ensure we're using the correct mask token.
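(For context, XLM-RoBERTa tokenizers use `<mask>` rather than BERT-style `[MASK]`, so the card's examples need that format. A quick sketch to confirm the token, assuming the `facebook/xlm-roberta-xl` checkpoint:)

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/xlm-roberta-xl")
print(tokenizer.mask_token)  # '<mask>'
```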
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Commits:
* Created model card for xlm-roberta-xl
* Update XLM-RoBERTa-XL model card with improved descriptions and usage examples
* Minor option labeling fix
* Added MaskedLM version of XLM RoBERTa XL to model card
* Added quantization example for XLM RoBERTa XL model card
* minor fixes to xlm roberta xl model card
* Minor fixes to mask format in xlm roberta xl model card
So this was the third MR that unfortunately removes attribution from the people who actually contributed to adding the model to the library years ago.

Hi, sorry about that, but I'll add that back in a follow-up PR 🙂
What does this PR do?
This pull request enhances the documentation for the XLM-RoBERTa-XL model, providing a more comprehensive overview of its features, usage, and implementation details as per #36979. The updates include new examples, usage tips, and key model details, making it easier for users to understand and apply the model.
Documentation Enhancements:
Added an overview of XLM-RoBERTa-XL, including its multilingual capabilities, its parameter counts (3.5B for the XL variant and 10.7B for the XXL variant), and its performance improvements over XLM-R and RoBERTa-Large. The section highlights its effectiveness on low-resource languages and benchmarks.
Included Python code examples for using XLM-RoBERTa-XL with the `pipeline` and `AutoModel` APIs for masked language modeling, demonstrating its application in English and French (a sketch of both appears after this list).
Added a "Key Features" section detailing the model's architecture, multilingual training, and language detection capabilities, along with suggestions for handling its large size during training and inference.
Introduced a "Usage Tips" section with practical advice on model usage, such as avoiding the need for `lang` tensors, using model parallelism or gradient checkpointing, and optimizing inference with quantization or sharding (see the quantization sketch after this list).
Added a `<Tip>` section linking to the XLM-RoBERTa documentation for additional usage examples and input/output details.
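The exact snippets live in the updated model card itself; as an illustration only, a minimal masked-language-modeling sketch with the `pipeline` and `AutoModelForMaskedLM` APIs could look like the following (the `facebook/xlm-roberta-xl` checkpoint and the example sentences are assumptions, not necessarily what the card uses):

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline

# High-level pipeline API: fill in XLM-RoBERTa's <mask> token.
fill_mask = pipeline("fill-mask", model="facebook/xlm-roberta-xl")
print(fill_mask("The capital of France is <mask>."))      # English
print(fill_mask("La capitale de la France est <mask>."))  # French

# Lower-level Auto* API: run the masked-LM head directly.
tokenizer = AutoTokenizer.from_pretrained("facebook/xlm-roberta-xl")
model = AutoModelForMaskedLM.from_pretrained("facebook/xlm-roberta-xl")

inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Decode the highest-scoring token at the masked position.
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
print(tokenizer.decode(logits[0, mask_pos].argmax(dim=-1)))
```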
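For the quantization tip, here is a sketch of 4-bit loading with `BitsAndBytesConfig` (assuming bitsandbytes, which needs a CUDA GPU and the `bitsandbytes` package; the card may use a different scheme):

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer, BitsAndBytesConfig

# Quantize the 3.5B-parameter model to 4-bit to reduce inference memory.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("facebook/xlm-roberta-xl")
model = AutoModelForMaskedLM.from_pretrained(
    "facebook/xlm-roberta-xl",
    quantization_config=quant_config,
    device_map="auto",  # place (and shard) layers across available devices
)
```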
Before submitting
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@stevhliu