client: stream containers serially to conserve memory #12846
ningmingxiao wants to merge 1 commit into containerd:main
Conversation
Thoughts: swapping speed for memory, and vice versa, is an age-old issue best left to a config option IMO. These types of performance changes are fought over with each performance change to the codebase, with some contributors optimizing for speed, some for memory, and some for a combination with resource restrictions. Thus, this preference should not be a this-or-that decision; it should be a choice, possibly even dynamic based on SLA definitions driven by the user/client.
Here is the newest benchmark, run 10 times. If b.Run("use client.Containers", benchmarkGetContainers(ctx, false, client)) runs first, the first test uses more resources on my computer (which looks strange). With -benchtime=10s the difference is not as obvious as when tested with "time -v".
done @mxpv
Co-authored-by: Sebastiaan van Stijn <[email protected]>
Signed-off-by: ningmingxiao <[email protected]>
ping @mxpv can this PR be merged?
@mikebrow or @thaJeztah PTAL?
ping @mikebrow @thaJeztah
@thaJeztah can you take a look?
stream containers serially to conserve memory

ctr/nerdctl use
https://github.com/containerd/containerd/blob/v2.2.1/client/containerstore.go#L107 to get the full list of containers.
If every container spec is large, a lot of memory is needed to hold the whole list at once.

What I did: process one container at a time, without adding it to any list.

fix #12858

Measured with /usr/bin/time -v ctr -n k8s.io c ls: after this PR, maximum resident set size drops from 338328 to 45556 (kbytes, as reported by time -v).

@fuweid @mikebrow @mxpv @AkihiroSuda