
[NEW] Allow keeping only the slots relevant to a node when doing DEBUG RELOAD NOSAVE of a standalone-generated RDB in a cluster environment. Currently we retain orphaned keys #9506

@filipecosta90

Description


Related issue: #7153

The problem/use-case that the feature addresses

Given an RDB produced on a standalone instance, reloading it on OSS cluster primaries leaves each node with orphaned keys (keys that hash to slots the node does not serve). Should we provide an easy way of doing DEBUG RELOAD NOSAVE while keeping only the slots relevant to that node? Even for testing, it would ease the process.
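For context, which keys are "relevant" to a node is determined by the standard Redis Cluster mapping: HASH_SLOT = CRC16(key) mod 16384, with `{...}` hash tags taken into account. A minimal sketch of that mapping in Python (the CRC16-CCITT/XMODEM variant used by the cluster spec):

```python
# Redis Cluster key -> slot mapping: CRC16(key) mod 16384, honoring "{...}" hash tags.
# Only keys whose slot is served by a node belong on that node; everything else
# loaded from a standalone RDB is an orphan.

def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XMODEM) as used by Redis Cluster (poly 0x1021, init 0x0000)."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: bytes) -> int:
    """Return the cluster slot (0..16383) for a key, honoring hash tags."""
    start = key.find(b"{")
    if start != -1:
        end = key.find(b"}", start + 1)
        if end != -1 and end != start + 1:  # only a non-empty tag is used
            key = key[start + 1:end]
    return crc16_xmodem(key) % 16384

assert crc16_xmodem(b"123456789") == 0x31C3  # standard XMODEM check value
assert key_slot(b"{user1000}.following") == key_slot(b"{user1000}.followers")
```

A node such as 127.0.0.1:30001 below, serving slots 0-5460, should therefore only keep keys whose `key_slot` falls in that range.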

To reproduce
Using a standalone instance, fill it with 1M keys. Assuming you then have a standard dump.rdb in the project root folder:

memtier_benchmark --key-pattern=P:P --key-maximum 999999 -n allkeys --ratio=1:0
cd utils/create-cluster
./create-cluster start
./create-cluster create
for x in `seq 1 6`; do cp ../../dump.rdb dump-3000$x.rdb ; done

# force to reload the rdb on each shard
./create-cluster call debug reload nosave

Each shard now holds the full 1M keys:

./create-cluster call info keyspace
# Keyspace
db0:keys=1000000,expires=0,avg_ttl=0
# Keyspace
db0:keys=1000000,expires=0,avg_ttl=0
# Keyspace
db0:keys=1000000,expires=0,avg_ttl=0
# Keyspace
db0:keys=1000000,expires=0,avg_ttl=0
# Keyspace
db0:keys=1000000,expires=0,avg_ttl=0
# Keyspace
db0:keys=1000000,expires=0,avg_ttl=0

Given the above keyspace info, multiple nodes now act as "owners" of the same slots:

redis-cli --cluster check 127.0.0.1:30001 --cluster-search-multiple-owners
127.0.0.1:30001 (fc4cbf8b...) -> 1000001 keys | 5461 slots | 1 slaves.
127.0.0.1:30003 (7b3823ad...) -> 1000001 keys | 5461 slots | 1 slaves.
127.0.0.1:30002 (52155b20...) -> 1000001 keys | 5462 slots | 1 slaves.
[OK] 3000003 keys in 3 masters.
183.11 keys per slot on average.
>>> Performing Cluster Check (using node 127.0.0.1:30001)
M: fc4cbf8b8e93b10817ece6c8edca1b939c3ae358 127.0.0.1:30001
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 7b3823ad330242bf64bb009d0a8268e60c41a9ce 127.0.0.1:30003
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 1ec7b684f54c9ab030e5a76da574c883f0b04d68 127.0.0.1:30006
   slots: (0 slots) slave
   replicates 7b3823ad330242bf64bb009d0a8268e60c41a9ce
S: 83b9d3b734b2ee8552f9657e9e932dd2096d91ce 127.0.0.1:30004
   slots: (0 slots) slave
   replicates fc4cbf8b8e93b10817ece6c8edca1b939c3ae358
S: 41d710abe889a20dc8355d2b5a0e5f12fef12569 127.0.0.1:30005
   slots: (0 slots) slave
   replicates 52155b20230dd94346367436c9d6ed1a515686fc
M: 52155b20230dd94346367436c9d6ed1a515686fc 127.0.0.1:30002
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Check for multiple slot owners...
[WARNING] Slot 0 has 3 owners:
    127.0.0.1:30001
    127.0.0.1:30003
    127.0.0.1:30002
[WARNING] Slot 1 has 3 owners:
    127.0.0.1:30001
    127.0.0.1:30003
    127.0.0.1:30002
[WARNING] Slot 2 has 3 owners:
    127.0.0.1:30001
    127.0.0.1:30003
    127.0.0.1:30002
[WARNING] Slot 3 has 3 owners:
    127.0.0.1:30001
    127.0.0.1:30003
    127.0.0.1:30002
(...)
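For reference, if reload filtered by slot ownership, each primary would retain only roughly its share of the 1M keys rather than a full copy. A rough estimate, assuming the keys hash uniformly across the 16384 slots (slot counts taken from the check output above):

```python
# Slot counts per primary, as reported by `redis-cli --cluster check` above.
slots_per_primary = {"30001": 5461, "30002": 5462, "30003": 5461}
TOTAL_SLOTS = 16384
KEYS = 1_000_000

# The three primaries together cover the full slot space.
assert sum(slots_per_primary.values()) == TOTAL_SLOTS

# Approximate keys each primary would retain after slot filtering,
# assuming a uniform key distribution across slots.
expected = {port: KEYS * n // TOTAL_SLOTS for port, n in slots_per_primary.items()}
print(expected)  # ~333312 / ~333374 / ~333312 instead of 1000000 each
```

So the cluster total would come back to ~1M keys overall, instead of the 3000003 reported by the check.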

Expected behavior

Allow an easy way to specify that, on reload, we only want to retain the keys belonging to the slots served by the given node; all other keys would be deleted.
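A minimal sketch of that behavior (illustrative Python, not the actual C load path in the server): while loading, compute each key's slot and keep it only if the slot is served by this node. Here `slot_of` is a stand-in for the real CRC16(key) % 16384 mapping, and `owned_slots` for the node's slot set:

```python
from typing import Callable, Dict, Iterable, Set, Tuple

def filter_keys_on_load(
    entries: Iterable[Tuple[str, object]],
    owned_slots: Set[int],
    slot_of: Callable[[str], int],
) -> Dict[str, object]:
    """Keep only entries whose slot is served by this node; drop the rest.

    Illustrative sketch of the requested behavior: `entries` come from the
    RDB, `owned_slots` is this node's slot set, and `slot_of` stands in for
    the CRC16-based key -> slot mapping.
    """
    kept = {}
    for key, value in entries:
        if slot_of(key) in owned_slots:
            kept[key] = value
        # otherwise the key would be orphaned on this node, so skip loading it
    return kept

# Toy demonstration with a fake 4-slot "cluster" split across two nodes:
slot_of = lambda k: sum(k.encode()) % 4   # stand-in for CRC16(key) % 16384
entries = [("a", 1), ("b", 2), ("c", 3), ("d", 4)]
node_a = filter_keys_on_load(entries, {0, 1}, slot_of)
node_b = filter_keys_on_load(entries, {2, 3}, slot_of)
assert set(node_a) | set(node_b) == {"a", "b", "c", "d"}  # every key kept somewhere
assert not set(node_a) & set(node_b)                      # exactly one owner per key
```

With such filtering, the reproduction above would end with each primary holding a disjoint subset of the 1M keys and `--cluster-search-multiple-owners` reporting no warnings.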
