Conversation

@AshAnand34 (Contributor) commented Jun 4, 2025

What does this PR do?

This pull request enhances the documentation for the XLM-RoBERTa-XL model, providing a more comprehensive overview of its features, usage, and implementation details, per #36979. The updates add new examples, usage tips, and key model details, making the model easier to understand and apply.

Documentation Enhancements:

  • Added an overview of XLM-RoBERTa-XL, covering its multilingual capabilities, its parameter sizes (3.5B and 10.7B), and its performance improvements over XLM-R and RoBERTa-Large. The section highlights its effectiveness on low-resource languages and its benchmark results.

  • Included Python code examples for using XLM-RoBERTa-XL with the pipeline and AutoModel APIs for masked language modeling, demonstrating its application in English and French (a rough sketch of these examples appears after this list).

  • Added a "Key Features" section detailing the model's architecture, multilingual training, and language detection capabilities, along with suggestions for handling its large size during training and inference.

  • Introduced a "Usage Tips" section with practical advice, such as noting that lang tensors are not required, recommending model parallelism or gradient checkpointing for training, and optimizing inference with quantization or sharding (see the quantization sketch after this list).

  • Added a <Tip> section linking to the XLM-RoBERTa documentation for additional usage examples and input/output details.
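The merged model card carries the actual examples; the sketch below only shows the general shape of the two approaches the PR describes, assuming the facebook/xlm-roberta-xl checkpoint on the Hub and the `<mask>` token used by the XLM-R tokenizer (the mask token is exactly what the review below asks to double-check):

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline

# Pipeline API: masked language modeling in English and French.
fill_mask = pipeline("fill-mask", model="facebook/xlm-roberta-xl")
print(fill_mask("Plants create <mask> through a process known as photosynthesis."))
print(fill_mask("Bonjour, je suis un modèle <mask>."))

# AutoModel API: the same task with explicit tokenization and a forward pass.
tokenizer = AutoTokenizer.from_pretrained("facebook/xlm-roberta-xl")
model = AutoModelForMaskedLM.from_pretrained("facebook/xlm-roberta-xl")

inputs = tokenizer("Plants create <mask> through photosynthesis.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Decode the highest-scoring token at the mask position.
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
print(tokenizer.decode(logits[0, mask_pos].argmax(dim=-1)))
```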

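For the quantization tip, a minimal sketch using bitsandbytes 4-bit loading follows; the exact configuration in the merged card may differ:

```python
import torch
from transformers import AutoModelForMaskedLM, BitsAndBytesConfig

# Illustrative 4-bit config (requires the bitsandbytes package); the
# settings in the merged card may differ.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForMaskedLM.from_pretrained(
    "facebook/xlm-roberta-xl",
    quantization_config=bnb_config,
    device_map="auto",  # places layers across available devices
)
```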
Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.

@stevhliu (Member) left a comment:

Nice, thanks for your contribution! Please apply the same general changes (such as the badges) to your other XLM model cards.

@stevhliu (Member) left a comment:

Just a few more suggestions!

@stevhliu (Member) left a comment:

Thanks, just a few more comments to ensure we're using the correct mask token.

@HuggingFaceDocBuilderDev commented:

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@stevhliu merged commit b61c47f into huggingface:main on Jun 9, 2025
10 checks passed
bvantuan pushed a commit to bvantuan/transformers that referenced this pull request on Jun 12, 2025:
* Created model card for xlm-roberta-xl

* Update XLM-RoBERTa-XL model card with improved descriptions and usage examples

* Minor option labeling fix

* Added MaskedLM version of XLM RoBERTa XL to model card

* Added quantization example for XLM RoBERTa XL model card

* minor fixes to xlm roberta xl model card

* Minor fixes to mask format in xlm roberta xl model card

@stefan-it (Collaborator) commented:

So this was the third PR that unfortunately removes attribution from the people who actually contributed to adding this model to the library years ago.

@stevhliu (Member) commented:

Hi, sorry about that! I'll add that back in a follow-up PR 🙂
