Reduce memory usage in PoolBuilder #12516
Conversation
Force-pushed from db0933e to 4267c63
Seldaek left a comment:
Ok, I see - this is trading off memory use at the cost of reading the file twice and JSON-decoding it twice. I'm not super happy with that trade-off, tbh. I'll investigate this further, as I wanted to try something on the cache front anyway.
Right. It's already sort of not very efficient now, though - it reads the entire cache file just for the `last-modified` timestamp.
So I tried storing cache files as …, but I don't fully grasp why this is. Anyway, this isn't helping, so I'll investigate your patch more closely.
Interestingly, I don't see a memory improvement, just an almost 2x slowdown with your patch. I also tried reading just the `last-modified` flag by doing an fopen+fseek to the end and reading the last 80 chars to extract it, but that doesn't improve anything either. Not sure what is going on today :D
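For reference, such a tail read could look roughly like this - a sketch only, assuming the `last-modified` key is serialized near the end of the cached JSON; `readLastModifiedFromTail()` is a hypothetical helper, not Composer code:

```php
<?php

// Sketch of the fopen+fseek idea: read only the tail of a cached JSON
// response and pull out the last-modified value with a regex, avoiding
// a full file read and json_decode(). Hypothetical helper, not Composer code.
function readLastModifiedFromTail(string $cacheFile): ?string
{
    $size = filesize($cacheFile);
    $fp = fopen($cacheFile, 'rb');
    if ($fp === false || $size === false || $size === 0) {
        return null;
    }

    // Seek to at most 80 bytes before EOF and read the remainder.
    fseek($fp, -min(80, $size), SEEK_END);
    $tail = stream_get_contents($fp);
    fclose($fp);

    // Assumes the key sits near the end of the JSON document.
    if (is_string($tail) && preg_match('/"last-modified"\s*:\s*"([^"]+)"/', $tail, $m)) {
        return $m[1];
    }

    return null;
}
```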
Different approach: simply chunk the loading of packages in the `PoolBuilder` - it's basically just one more loop. (Diff best viewed with whitespace changes hidden.)
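The batching is easy to sketch - the outer loop below is the "one more loop"; `asyncFetchPackage()` and the `$loop->wait()` call are illustrative stand-ins, not the actual `PoolBuilder` API:

```php
<?php

// Sketch of chunked package loading: only one batch of promises (and the
// response bodies they hold onto) is alive at any time. Names here are
// hypothetical stand-ins for the real API.
$batchSize = 100; // hypothetical batch size

foreach (array_chunk($packageNames, $batchSize) as $batch) {
    $promises = [];
    foreach ($batch as $name) {
        $promises[] = $repository->asyncFetchPackage($name);
    }

    // Settle this batch before queuing the next one; once resolved,
    // everything referenced only by these promises becomes collectable.
    $loop->wait($promises);
    unset($promises);
}
```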
Force-pushed from e1543c9 to 86d92a5
Force-pushed from 19623b8 to 6ae0656
Thanks
Not something you would notice when running a regular `composer update` or so, because there are other - much more important - factors that influence memory usage. However, if you use e.g. the `ComposerRepository` pretty much in isolation, like I do in https://github.com/terminal42/composer-lock-validator, the fact that we keep all provider data in memory for all the packages provided is quite noticeable - the more packages, the more memory.

This is because the promises cause `$contents` to never be cleaned up by GC, as it is used until the promise is finally resolved. Loading packages in batches instead of all at once prevents this build-up quite nicely: `json_decode()` still allocates 83 MB in total in my case, but because it does so in chunks now, we reduce peak memory usage.
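To make the GC point concrete, here is a minimal sketch (hypothetical names, not the actual `ComposerRepository` code) of how a pending promise keeps the raw body alive:

```php
<?php

// The closure passed to then() captures $contents, and the pending promise
// holds the closure, so GC cannot free the raw JSON until the promise
// resolves. Queue this for every package at once and every raw body stays
// in memory simultaneously; batch it and only one batch's worth does.
$contents = file_get_contents($cacheFile); // large raw JSON body

$promise = $freshnessCheck->then(function () use ($contents) {
    return json_decode($contents, true); // decoded only on resolution
});

// $contents remains referenced via $promise until the promise settles.
```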