Conversation

liyonghua0910 (Collaborator) commented Oct 15, 2025

Due to a merge-conflict resolution, the current main branch has a bug where both cache_transfer_manager and cache_messager create the KV cache GPU tensors, which leads to abnormally high GPU memory usage under PD (prefill/decode) disaggregation. This PR fixes that problem.
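The duplicate-allocation bug described above can be illustrated with a minimal sketch. The class and attribute names below (KVCachePool, CacheTransferManager, CacheMessager) are hypothetical stand-ins, not the actual FastDeploy/Paddle APIs; the point is only that the fix is to allocate the KV cache tensors once and hand the same handle to both components, rather than letting each allocate its own copy.

```python
# Hypothetical sketch of the fix: allocate KV cache once, share the handle.
# Names here are illustrative only, not the real repository classes.

class KVCachePool:
    """Owns the KV cache allocation; created exactly once."""

    def __init__(self, num_blocks: int, block_bytes: int):
        # Placeholder for a real GPU allocation (e.g. paddle.zeros on device).
        self.tensors = [bytearray(block_bytes) for _ in range(num_blocks)]


class CacheTransferManager:
    def __init__(self, pool: KVCachePool):
        # Reuse the shared allocation instead of creating new tensors.
        self.kv_cache = pool.tensors


class CacheMessager:
    def __init__(self, pool: KVCachePool):
        # Same shared allocation; no second copy of the KV cache.
        self.kv_cache = pool.tensors


pool = KVCachePool(num_blocks=4, block_bytes=1024)
manager = CacheTransferManager(pool)
messager = CacheMessager(pool)
# Both components reference one allocation, so GPU memory is not doubled.
assert manager.kv_cache is messager.kv_cache
```

Before the fix, each component effectively ran its own allocation path, so the KV cache existed twice on the GPU, which is what caused the abnormal memory footprint under PD disaggregation.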

paddle-bot bot commented Oct 15, 2025

Thanks for your contribution!

CLAassistant commented Oct 15, 2025

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
1 out of 2 committers have signed the CLA.

✅ liyonghua0910
❌ ltd0924


ltd0924 does not appear to be a GitHub user. You need a GitHub account to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
Have you already signed the CLA, but the status is still pending? Let us recheck it.

@Jiang-Jia-Jun Jiang-Jia-Jun merged commit b8d2354 into PaddlePaddle:develop Oct 20, 2025
13 of 16 checks passed
