
Conversation

@jonasschnelli
Contributor

Currently, we keep only keypool indexes (an int64_t) in memory.
If we need the CKeyID of a keypool key (or need to know whether it's an internal key), we have to load the whole CKeyPool entry from the database.

This PR will change the in-memory set entries from int64_t to KeypoolCache.
This essentially allows us to efficiently scan the CKeyIDs of our keypool, which is nearly a prerequisite for an efficient HD rescan (otherwise we would have to load the complete keypool each time we find an IsMine or IsFromMe tx).

There are no direct performance optimisations implemented in this PR.

Contributor

I think it would be more straightforward to replace setKeyPool with a std::map<int64_t, CKeyID> instead of taking this approach, which basically emulates a map using set entries with a custom < operator.

Contributor Author

Yes. But IMO holding a flexible data structure (KeypoolCache) allows us to be more flexible in the future. No strong opinion, though.

Contributor

"holding a flexible data-structure (KeypoolCache) allows us to be more flexible in future"

I don't have any problem with keeping KeypoolCache if it will be useful later.

What seems bad to me is using std::set with a custom comparison function to emulate a map. I would suggest removing nIndex and operator< from KeypoolCache and using std::map<int64_t, KeypoolCache>.

Contributor Author

Fair point. The question is whether we should do it in this PR or later. But since we significantly change the set's contents, it should be changed now (which I will do soon).

Contributor

Note: the keyId member is not actually used anywhere in this PR. (Fine, just pointing it out to help review.)

Contributor Author

Yes, this is true. The keyId is relatively important for future performance improvements, though. We could add it later, but I guess it doesn't hurt to already have it there in this PR.

@paveljanik
Contributor

Rebase needed.

@jonasschnelli force-pushed the 2017/04/keypool_fix_a branch from a4ed0d7 to 2e61435 on May 4, 2017 at 12:42
@jonasschnelli
Contributor Author

Rebased.

Contributor

Why is it added? Where is it used?

Contributor Author

Oh, that's a rebase issue. fFileBacked was removed in the meantime; will remove it here.

Contributor Author

Removed.

Contributor

@NicolasDorier May 4, 2017

I am wondering whether it would be better (minimal diff) to just change the ReadPool signature to take a KeypoolCache instead of an index.

Contributor

Why not just take a KeypoolCache?

Contributor Author

Hmm... then CReserveKey::ReturnKey would have to create a KeypoolCache()?

Contributor

@NicolasDorier May 4, 2017

Why? I mean, KeypoolCache is (int, KeyID) anyway. It is just strange to pass them as two distinct parameters here.

@ryanofsky
Contributor

ryanofsky commented Jul 25, 2017

In #10882 it seems like looking up keypool index using key id is more useful than looking up key id using key index, so I'm not sure if there is a direct use for this PR and maybe it should be reconsidered.

If the goal is to avoid reading all the unused keys from disk each time an IsMine transaction is encountered, I think there are three ways it could be done:

  1. The easiest way is probably to just cache the result of GetAllReserveKeys during the rescan so it doesn't get called multiple times in the same scan.
  2. Another way would be to update this PR so that GetAllReserveKeys() can work from memory without touching the database. This optimization is not exclusive with (1) above; both could be done together.
  3. Another way would be to update this PR to store a bidirectional mapping from key id <-> key index instead of just the forward mapping, so GetAllReserveKeys() wouldn't have to do any work and could just return a reference to the reverse map. This could be done with Boost.MultiIndex or just two plain maps.

@jonasschnelli
Contributor Author

No longer really useful, closing for now.

@TheBlueMatt
Contributor

Hmm, I actually think this was a really nice change, though I guess it hits the broader issue, which has persisted for some time, of specifically defining what we cache from the db.

@bitcoin bitcoin locked as resolved and limited conversation to collaborators Sep 8, 2021
