exclude replication backlog when computing used memory for key eviction #4668
0xtonyxia wants to merge 1 commit into redis:unstable from
Conversation
@0xtonyxia to the best of my understanding (although it might be just a guess), the reason we don't consider slave buffers for eviction is not the fact that they are not part of the actual dataset (i.e. there are many other things that are not part of the dataset and are not excluded). I think that as a user, if you suddenly increase the size of the backlog during the life of the process, you should also increase the maxmemory (to prevent eviction).
Thanks @oranagra. I think your explanation makes sense. Currently only the AOF and slave buffers are excluded, and both grow larger precisely while keys are being deleted to free memory. The comments of
Hello @0xtonyxia, yep this is exactly like @oranagra said, the problem is that normal Redis users do not know all these details, so when they set "maxmemory 4GB" they expect the server to use maximum 4GB. We already fail users, because fragmentation plus these things that we cannot include in the memory usage will already inflate the memory usage, and users will have some problem understanding why this is the case. The more we don't count into maxmemory, the worse it is from the POV of what users expect. So better to take things as they are :-) Cheers.
I see. Thanks for your reply. @antirez |
Sometimes, in order to complete a full resync during a write traffic burst, we need to increase the size of the replication backlog.
I have encountered a situation where the replication backlog was set to 4GB, which caused most of the keys in the database to be evicted. This was beyond my expectation, and probably beyond other Redis users' as well.
So I think excluding the replication backlog when computing used memory is a better idea; after all, the replication backlog is a kind of cache and shouldn't be considered part of the data set.