
Check target node is a primary during cluster setslot.#10277

Merged
madolson merged 1 commit into redis:unstable from hpatro:set-slot-replica
Feb 11, 2022

Conversation

@hpatro
Contributor

@hpatro hpatro commented Feb 10, 2022

A replica can't own a slot; only primaries can own slots. Hence, we need to disallow transferring a slot from a primary to a replica.

Verify that the target node is a primary during `CLUSTER SETSLOT <slot> NODE <id>`.
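The check itself is small. The following is a hypothetical, self-contained sketch of the validation (not the actual Redis source): `clusterNode`, `nodeIsSlave`, and `setslot_node_check` here are simplified stand-ins for Redis's cluster structures, where a node is a replica iff its `slaveof` field is set.

```c
#include <stddef.h>

/* Simplified model of a cluster node: a node is a replica iff it has a
 * primary set (in Redis this is the slaveof field). */
typedef struct clusterNode {
    const char *name;
    struct clusterNode *slaveof; /* NULL for primaries */
} clusterNode;

#define nodeIsSlave(n) ((n)->slaveof != NULL)

/* Sketch of the new validation for CLUSTER SETSLOT <slot> NODE <id>:
 * refuse to assign a slot to a replica. Returns 0 if the target is a
 * valid owner, -1 (with *err set) otherwise. */
int setslot_node_check(clusterNode *target, const char **err) {
    if (nodeIsSlave(target)) {
        *err = "Please use SETSLOT only with masters.";
        return -1;
    }
    *err = NULL;
    return 0;
}
```

With this in place, the command handler can reply with the error and return early before mutating any slot ownership state.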

Before the change

127.0.0.1:7379> CLUSTER SETSLOT 1 NODE e2acf1a97c055fd09dcc2c0dcc62b19a6905dbc8
OK
127.0.0.1:7379> cluster nodes
b7e9acc0def782aabe6b596f67f06c73c2ffff93 127.0.0.1:7379@17379 myself,master - 0 1644506261000 18 connected 0 5-5460 11111-11112 16112
1461967c62eab0e821ed54f2c98e594fccfd8736 127.0.0.1:7381@17381 slave,fail? 71f058078c142a73b94767a4e78e9033d195dfb4 1644506176155 1644506173615 3 disconnected
9215e30cd4a71070088778080565de6ef75fd459 127.0.0.1:6380@16380 master,fail? - 1644506176155 1644506173615 16 disconnected 5461-10922
e2acf1a97c055fd09dcc2c0dcc62b19a6905dbc8 127.0.0.1:6379@16379 slave b7e9acc0def782aabe6b596f67f06c73c2ffff93 0 1644506261963 18 connected 1-4
71f058078c142a73b94767a4e78e9033d195dfb4 127.0.0.1:6381@16381 master,fail? - 1644506175545 1644506173615 3 disconnected 10923-11110 11113-16111 16113-16383
877fa59da72cb902d0563d3d8def3437fc3a0196 127.0.0.1:7380@17380 slave,fail? 9215e30cd4a71070088778080565de6ef75fd459 1644506174532 1644506173615 16 disconnected

After the change

127.0.0.1:6379> CLUSTER SETSLOT 1 NODE e2acf1a97c055fd09dcc2c0dcc62b19a6905dbc8
(error) ERR Please use SETSLOT only with masters.
127.0.0.1:6379> CLUSTER NODES
b7e9acc0def782aabe6b596f67f06c73c2ffff93 127.0.0.1:7379@17379 master - 0 1644518177561 18 connected 0-5460 11111-11112 16112
e2acf1a97c055fd09dcc2c0dcc62b19a6905dbc8 127.0.0.1:6379@16379 myself,slave b7e9acc0def782aabe6b596f67f06c73c2ffff93 0 1644518177000 18 connected
1461967c62eab0e821ed54f2c98e594fccfd8736 127.0.0.1:7381@17381 slave,fail? 71f058078c142a73b94767a4e78e9033d195dfb4 1644505039475 1644505037542 3 disconnected
9215e30cd4a71070088778080565de6ef75fd459 127.0.0.1:6380@16380 master,fail? - 1644505040086 1644505037542 16 disconnected 5461-10922
71f058078c142a73b94767a4e78e9033d195dfb4 127.0.0.1:6381@16381 master,fail? - 1644505040086 1644505037542 3 disconnected 10923-11110 11113-16111 16113-16383
877fa59da72cb902d0563d3d8def3437fc3a0196 127.0.0.1:7380@17380 slave,fail? 9215e30cd4a71070088778080565de6ef75fd459 1644505038461 1644505037542 16 disconnected

@madolson
Contributor

I realized that this came up in the past, #9360, and I thought then that we shouldn't add any check here, because during slot migration a single node might not "know" the state of another node, so it should be okay to force its state.

I was probably wrong; if we get an error here, it just means that the node will learn about the state more slowly.


@madolson madolson left a comment


@madolson
Contributor

Also apparently this is a duplicate of #7424.

@madolson madolson changed the title from "Assert target node is a primary during cluster setslot." to "Check target node is a primary during cluster setslot." Feb 11, 2022
@madolson madolson merged commit a5d17f0 into redis:unstable Feb 11, 2022
@madolson madolson added the release-notes label (indication that this issue needs to be mentioned in the release notes) Feb 11, 2022
@madolson
Contributor

Since this is an administration command, I don't think we need to backport, but we can call out that it's safer now in 7.

@zuiderkwast
Contributor

For reference, this was also done already in #3450 (though it was mixed with other changes).


Labels

release-notes indication that this issue needs to be mentioned in the release notes

3 participants