
INFO report peak memory before eviction#7894

Merged
oranagra merged 1 commit into redis:unstable from oranagra:peak_momentary_mem
Oct 18, 2020

Conversation

@oranagra
Member

@oranagra oranagra commented Oct 8, 2020

In some cases a single command adds a very large bulk of memory, and this
is "resolved" by eviction before the next command.

The current peak memory is only tracked in serverCron, which may be useful
for seeing the highest steady state the server was facing.

But when investigating an unexplained mass eviction, we would wish to
know the highest momentary usage too.
Tracking it in call() and beforeSleep() adds some hooks in AOF and RDB
loading.

The fix in clientsCronTrackExpansiveClients is related to #7874

@oranagra
Member Author

oranagra commented Oct 8, 2020

@redis/core-team please let me know how you feel about this.
Also, I now realize that I merged #7874 without raising lazy consensus, so please take a look and let me know if you have any objections to the two new CLIENT LIST fields it adds.

@madolson
Contributor

madolson commented Oct 8, 2020

I think it's confusing. We know that stat_peak_memory is sampled, but all the documentation indicates it's the "max memory consumed in bytes by Redis." So there is a peak, but also a higher peak? I would almost be okay with just updating stat_peak_memory so it included these momentary peaks.

@oranagra
Member Author

oranagra commented Oct 8, 2020

@madolson that's what I was about to do (make sure to update the peak in other places), but then I realized that having one that doesn't include momentary spikes is beneficial too.

That said, I'm still hesitant; I made the PR to get feedback more than as an attempt to push it.

Another alternative is to have the old used_memory_peak show the true peaks, and have another field that shows the steady-state peaks.

I'd like to get more feedback.

@madolson
Contributor

madolson commented Oct 8, 2020

@oranagra I can't think of a case where the steady state gives you that much more information. I would prefer to just update used_memory_peak, but adding the steady-state metric seems fine; it's basically free.

@oranagra oranagra requested review from madolson and yossigo and removed request for yossigo October 14, 2020 10:07
@oranagra
Member Author

@yossigo @madolson
Judging by the feedback, I figured you both agree that the steady peak isn't needed, so I "squashed" the momentary peak into the other peak.

I wasn't sure if you meant I should keep the original measurement and just put it into a new INFO field, and have the new measurement in the old INFO field.
Although INFO fields are free, I'd rather not add them if we're not sure they're useful. Let me know what you think.

@oranagra oranagra changed the title INFO report peak momentary memory INFO report peak memory before eviction Oct 14, 2020
@yossigo
Collaborator

yossigo commented Oct 18, 2020

@oranagra What I don't like about the "steady" peak is that it's not really steady; it's just a sampled value taken at an arbitrary (cron) interval.

@oranagra
Member Author

@yossigo so if we do want it, maybe we can try to come up with other places to sample it.
I would have said we can sample it after freeMemoryIfNeeded, but since it's now non-blocking (i.e. performEvictions can return without reclaiming everything), that's no good.

If we can't come up with such a thing, then let's take what's currently in this PR and have just one true peak.

@yossigo
Collaborator

yossigo commented Oct 18, 2020

@oranagra I think we should take it as is; the way I see it, that's a bug fix.

@oranagra oranagra merged commit 457b707 into redis:unstable Oct 18, 2020
@oranagra oranagra deleted the peak_momentary_mem branch October 18, 2020 13:56
@oranagra oranagra mentioned this pull request Oct 26, 2020
oranagra added a commit that referenced this pull request Oct 27, 2020
(cherry picked from commit 457b707)
JackieXie168 pushed a commit to JackieXie168/redis that referenced this pull request Nov 4, 2020
jschmieg pushed a commit to memKeyDB/memKeyDB that referenced this pull request Nov 6, 2020