Conversation

@pramodith
Collaborator

What does this PR do?

The SFTTrainer currently doesn't account for the load-balancing/auxiliary loss commonly used when training MoE models, which ensures that all experts receive approximately the same number of routed tokens.

This PR adds that loss to the final loss when it is enabled by the model's config.
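
For context, below is a minimal sketch of how such a loss can be folded into the trainer loss; it is not the exact diff merged in this PR. It assumes a Mixtral-style MoE model from transformers, where setting `output_router_logits=True` in the config makes the forward pass expose an `aux_loss` on its output and the config carries a `router_aux_loss_coef`; the subclass name and the `compute_loss` signature shown are illustrative.

```python
# Minimal sketch (not the exact change merged in this PR): fold the MoE
# load-balancing (auxiliary) loss into the SFT loss when the model's
# config enables router logits. Assumes a Mixtral-style MoE model whose
# forward pass exposes `aux_loss` when `output_router_logits=True`.
from trl import SFTTrainer


class MoEAwareSFTTrainer(SFTTrainer):
    def compute_loss(self, model, inputs, return_outputs=False, num_items_in_batch=None):
        # Token-level cross-entropy loss plus the raw model outputs.
        loss, outputs = super().compute_loss(
            model, inputs, return_outputs=True, num_items_in_batch=num_items_in_batch
        )

        # If load balancing is enabled in the config, add the scaled
        # auxiliary loss so all experts see a similar share of tokens.
        if getattr(model.config, "output_router_logits", False):
            aux_loss = getattr(outputs, "aux_loss", None)
            if aux_loss is not None:
                loss = loss + model.config.router_aux_loss_coef * aux_loss.to(loss.device)

        return (loss, outputs) if return_outputs else loss
```

Depending on the model and trainer version, `outputs.loss` may already include the auxiliary term when labels are passed to the forward call, so it is worth checking for double counting before adding it again.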

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@qgallouedec
Member

qgallouedec commented Sep 4, 2025

Super cool PR! Only a few remarks, and we're good to merge and test!

Member

@qgallouedec qgallouedec left a comment


lgtm!

@pramodith pramodith merged commit 1eb3801 into huggingface:main Sep 4, 2025
9 of 10 checks passed
SamY724 pushed a commit to SamY724/trl that referenced this pull request Sep 6, 2025
@pramodith pramodith deleted the pramodith/sft_for_moe branch September 8, 2025 20:27
