Hfe serialization listpack#13243
Conversation
@ronen-kalish, if it is not too much effort at this point in time, please consider to We will be able to optimize later on the allocation of
Following this change, it won't convert to HT anymore when db is NULL.
Instead, I think we should modify hashTypeConvert() to accept a pointer to redisDb instead of ebuckets hexpires, and let it handle NULL. It will simplify the conditions in rdb.c as well.
Not sure I follow - what does this change have to do with conversion to dict? And why should we care about conversion to dict when db is NULL? In that case, the requirement (AFAIK) is to check RDB validity; the resulting object should be discarded (at least in this repo). What am I missing?
You're not missing anything. I just saw a change in the logic, which I tend to avoid when it's not necessary in large changes. That, together with the fact that passing redisDb to hashTypeConvert() would relax the conditions in rdb.c, made me write down my thoughts.
Serialize metadata hashes to RDB using a new type, saving for each field the key, one byte indicating what else is saved, and then (optionally) the value and the TTL (if available).
LGTM! If there are flaky tests in the daily, we may fix them later along the way (or you may comment them out for now, or merge later with another PR after resolving the issue, etc.). It is up to you.
@tezc, I'm not sure the flaky tests in the daily are related to this PR, unless we're sure they weren't flaky before. Is that indeed the case?
Hmm, I'm not sure. @sundb, I see there is one hfe defrag failure and a corrupt/dump fuzzer failure. Maybe you can comment on that? Could you take a quick look? #13243 (comment)
@tezc, @sundb, if anyone can approve this PR once done reviewing, it'll make @YaacovHazan very happy...
sundb
left a comment
We can ignore the failure in the full CI; it's a known optimization.
We need a documentation PR for this in redis/redis-doc.
Hi @sundb, can you please point to where the RDB format is documented? Or is there another aspect that needs to be documented? Thanks!
@ronen-kalish the RDB format doesn't need to be documented, just add documentation for
@sundb, now that @moticless has removed hash field expiration on load, this specific field is gone and there is no need for documentation.
@ronen-kalish yes, I can't remove the needs-doc-pr tag.
…#13277) The FIELDS keyword was added as part of [redis#13270](redis#13270). It was missing in [redis#13243](redis#13243).
Add RDB de/serialization for HFE
This PR adds two new RDB types:
`RDB_TYPE_HASH_METADATA` and `RDB_TYPE_HASH_LISTPACK_TTL` to save HFE data. When the hash RAM encoding is dict, it will be saved in the former, and when it is listpack it will be saved in the latter.
Both formats just add the TTL value for each field after the data that was previously saved, i.e. HASH_METADATA will save the number of entries and, for each entry, key, value and TTL, whereas the listpack is saved as a blob.
On read, the usual dict <--> listpack conversion takes place if required.
In addition, when reading a hash that was saved as a dict, fields are actively expired if their expiry is due. Currently this also holds for the listpack encoding, but that is supposed to be removed.
Until Ozan's listpack implementation is merged into the integ branch, review should be done commit by commit.
TODO:
Remove active expiry on load when loading from the listpack format (unless we decide to keep it)