Update shard memory usage after increase of default values#1810
Conversation
prometheus/prometheus#5267 increased the defaults by 5 times. Signed-off-by: Christian Simon <[email protected]>
Thanks!
In case you are curious about the math: it takes 40 bytes to store a sample, 8 bytes for each of the timestamp and the value, and 24 bytes for the slice header (the underlying data is reused across many samples). So 40 bytes per sample * 2500 samples per shard =~ 100KB per shard. I think we then put some room for growth in case other things (say another 8 bytes for a ref) show up.
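A quick sketch of that arithmetic (the constant names here are illustrative, not the actual Prometheus identifiers):

```go
package main

import "fmt"

// Rough per-shard memory estimate for the remote-write queue,
// following the math above: 8 bytes for the timestamp, 8 for the
// value, and 24 for the slice header, amortized to ~40 bytes per
// buffered sample. These are assumptions for illustration, not
// values read out of the Prometheus source.
const (
	bytesPerSample  = 8 + 8 + 24 // timestamp + value + slice header
	samplesPerShard = 2500       // default capacity after prometheus/prometheus#5267
)

func main() {
	perShard := bytesPerSample * samplesPerShard
	fmt.Printf("~%d KB per shard\n", perShard/1000) // prints "~100 KB per shard"
}
```

The documented figure then adds headroom on top of this for future per-sample fields and other per-shard allocations.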
If you think that math is not including something I am happy to correct it.
Thanks @csmarchbanks for sharing the math, it makes a lot of sense. I thought it would use a hash instead of the slice header (my mistake: I did not look at the code before this PR). On the basis of your math, though, either you shouldn't have merged my PR, or where is the factor of five coming from if I purely updated the value?
I figured <500kB is still correct even if the real number is lower :). I forgot to include in my own math above the additional per-shard overhead for things like the compression buffers, the pending sample slice (which is in protobuf form and copies all the label values), etc., which would bring it above 100kB right now. I should probably pull up an actual heap profile sometime, but in the meantime 500kB seems reasonable.
prometheus/prometheus#5267 increased the defaults by 5 times for both
`max_samples_per_send` and `capacity`. I think this value needs to be increased, too. I'm not too sure how realistic 5 times is, or how we got to `< 100kb` in the first place. @csmarchbanks @bboreham