Conversation

@K11OntheBoat (Collaborator)

Limit the length of a model's reasoning.
For example, if a request is sent with "enable_thinking": true and "reasoning_max_tokens": 20, the model's reasoning is limited to 20 tokens, after which it immediately begins its response.
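The feature can be illustrated with a minimal request sketch. The fields "enable_thinking" and "reasoning_max_tokens" are taken from the PR description; the model name, message content, and surrounding payload shape are hypothetical assumptions, not part of this PR.

```python
# Sketch of a request exercising the feature described above.
# "enable_thinking" and "reasoning_max_tokens" come from the PR description;
# the model name and message are hypothetical placeholders.
import json

payload = {
    "model": "my-reasoning-model",   # hypothetical model name
    "messages": [
        {"role": "user", "content": "What is 17 * 24?"},
    ],
    "enable_thinking": True,         # turn on the reasoning phase
    "reasoning_max_tokens": 20,      # cap reasoning at 20 tokens
}

# Serialized JSON body a server would receive; once 20 reasoning tokens
# have been generated, the model stops thinking and starts its response.
body = json.dumps(payload)
```

With this cap, the reasoning phase is truncated at 20 tokens regardless of whether the model would otherwise have continued thinking.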

paddle-bot bot commented Aug 22, 2025

Thanks for your contribution!


CLAassistant commented Aug 22, 2025

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
1 out of 2 committers have signed the CLA.

✅ luukunn
❌ K11OntheBoat


K11OntheBoat does not appear to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
Have you already signed the CLA, but the status is still pending? Let us recheck it.

paddle-bot added the contributor (External developers) label Aug 22, 2025
K11OntheBoat force-pushed the limit_think_len_2.1 branch 2 times, most recently from 968e1be to 5ca872e on August 22, 2025 05:12
Jiang-Jia-Jun changed the title from "Support limit thinking len for text models" to "[Feature] Support limit thinking len for text models" Aug 22, 2025
@xiaoxiaohehe001 (Collaborator) left a comment

LGTM

Jiang-Jia-Jun merged commit 93d999b into PaddlePaddle:release/2.1 Aug 22, 2025
8 of 12 checks passed
