Save config file and broadcast the PONG when configEpoch changed #1813
Merged
enjoy-binbin merged 2 commits into valkey-io:unstable on Mar 11, 2025
Conversation
This is somewhat related to valkey-io#974 and valkey-io#1777. When the epoch changes, we should save the configuration file and broadcast a PONG wherever possible. For example, if a primary goes down after bumping the epoch, its replicas may initiate a failover, but the other primaries may refuse to vote because the replica's epoch has not been updated. Or, if we bump the epoch for some reason and the new epoch does not propagate through the cluster in time, it may affect the judgment of message staleness. Signed-off-by: Binbin <[email protected]>
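The idea above can be sketched as a tiny state machine: bumping the epoch marks two "to do before sleep" actions, saving the config file and broadcasting a PONG. This is a minimal illustrative sketch only; the flag names, helper, and globals below are hypothetical stand-ins, not the actual Valkey cluster API.

```c
#include <stdint.h>

/* Hypothetical stand-ins for the real cluster "to do" flags. */
#define TODO_SAVE_CONFIG    (1 << 0)
#define TODO_BROADCAST_PONG (1 << 1)

static uint64_t config_epoch = 0;
static int todo_flags = 0;

/* Bump this node's config epoch. On any real change, schedule both a
 * config-file save and a PONG broadcast so the rest of the cluster
 * learns the new epoch as soon as possible. Stale epochs are ignored:
 * config epochs only move forward. */
static void bump_config_epoch(uint64_t new_epoch) {
    if (new_epoch <= config_epoch) return;
    config_epoch = new_epoch;
    todo_flags |= TODO_SAVE_CONFIG | TODO_BROADCAST_PONG;
}
```

The point of coupling the two actions is durability plus visibility: the save protects the epoch across a crash, while the broadcast closes the window in which other nodes still judge messages against the old epoch.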
Codecov Report: ✅ All modified and coverable lines are covered by tests.

@@ Coverage Diff @@
##           unstable    #1813   +/-   ##
============================================
+ Coverage     70.97%   71.03%   +0.05%
============================================
  Files           123      123
  Lines        65651    65665      +14
============================================
+ Hits         46593    46642      +49
+ Misses       19058    19023      -35
madolson (Member) approved these changes on Mar 6, 2025 and left a comment:
These broadcasts are expensive in large clusters, but none of these seem high frequency. So this seems OK to me. Minor suggestion.
zuiderkwast pushed a commit that referenced this pull request on Mar 18, 2025.
xbasel pushed two commits to xbasel/valkey that referenced this pull request on Mar 27, 2025.
zarkash-aws pushed a commit to zarkash-aws/valkey that referenced this pull request on Apr 6, 2025.
murphyjacob4 pushed a commit to enjoy-binbin/valkey that referenced this pull request on Apr 13, 2025.
enjoy-binbin added a commit to enjoy-binbin/valkey that referenced this pull request on Jun 5, 2025:
When the primary changes the config epoch and then goes down immediately, the replica may not update the config epoch in time. Although we broadcast the change in the cluster (see valkey-io#1813), there may be a race in the network or in the code. In this case, the replica will never finish the failover, since the other primaries will refuse to vote because the replica's slot config epoch is old. We need a way to allow the replica to finish the failover in this case. When a primary refuses to vote because the replica's config epoch is less than the dead primary's config epoch, it can send an UPDATE packet to the replica to inform it about the dead primary. The UPDATE message contains the dead primary's config epoch and owned slots. The failover will time out, but the replica can later try again with the updated config epoch and succeed. Fixes valkey-io#2169. Signed-off-by: Binbin <[email protected]> Co-authored-by: Viktor Söderqvist <[email protected]>
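The voting decision described above can be sketched as a small pure function: compare the requesting replica's claimed epoch with the epoch recorded for the dead primary, and when the replica is behind, refuse the vote but flag that an UPDATE packet (carrying the dead primary's epoch and slots) should be sent back. This is a hypothetical sketch under assumed names; it does not mirror the real Valkey function signatures.

```c
#include <stdint.h>
#include <stdbool.h>

/* Result of handling a failover auth request (hypothetical names). */
typedef struct {
    bool     vote_granted;      /* did this primary grant its vote? */
    bool     update_sent;       /* did we send an UPDATE packet instead? */
    uint64_t dead_primary_epoch; /* epoch carried in the UPDATE packet */
} vote_result;

/* A primary receiving a failover auth request refuses to vote when the
 * replica's claimed slot config epoch is older than the epoch recorded
 * for the dead primary. Instead of staying silent, it sends an UPDATE
 * packet so the replica can learn the current epoch, let this failover
 * attempt time out, and retry with a fresh epoch. */
static vote_result handle_auth_request(uint64_t replica_epoch,
                                       uint64_t dead_primary_epoch) {
    vote_result r = { .vote_granted = false,
                      .update_sent = false,
                      .dead_primary_epoch = dead_primary_epoch };
    if (replica_epoch < dead_primary_epoch) {
        r.update_sent = true;   /* refuse, but inform the replica */
    } else {
        r.vote_granted = true;  /* epoch is current: vote as usual */
    }
    return r;
}
```

The design choice worth noting is that the UPDATE path does not rescue the in-flight election; it only repairs the replica's state so the next election can succeed.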
hpatro added a commit that referenced this pull request on Jun 10, 2025.
hpatro added a commit to hpatro/valkey that referenced this pull request on Jun 10, 2025.
hpatro added a commit that referenced this pull request on Jun 11, 2025.
chzhoo pushed a commit to chzhoo/valkey that referenced this pull request on Jun 12, 2025.
vitarb pushed a commit to vitarb/valkey that referenced this pull request on Jun 12, 2025 (cherry picked from commit 476671b).
vitarb pushed a commit to vitarb/valkey that referenced this pull request on Jun 13, 2025 (cherry picked from commit 476671b).
shanwan1 pushed a commit to shanwan1/valkey that referenced this pull request on Jun 13, 2025.
ranshid added a commit to ranshid/valkey that referenced this pull request on Jun 18, 2025.
ranshid added a commit that referenced this pull request on Jun 18, 2025 (backport of #2178 to 7.2, #2232).
zuiderkwast added two commits to vitarb/valkey that referenced this pull request on Aug 15, 2025 (cherry picked from commit 476671b).
zuiderkwast added a commit to vitarb/valkey that referenced this pull request on Aug 21, 2025 (cherry picked from commit 476671b).
zuiderkwast added a commit that referenced this pull request on Aug 22, 2025.
sarthakaggarwal97 pushed a commit to sarthakaggarwal97/valkey that referenced this pull request on Sep 16, 2025 (cherry picked from commit 476671b).