Relay blocks when pruning #6148
|
Fine with me. Reorg is still a problem, but relaying the tip is at least better than relaying nothing. To solve the reorg problem, I would want to also call FindNextBlocksToDownload on pruned nodes, if it is ensured that we cannot ask them for pruned blocks. |
|
ACK. Tested for tip relay for master and reorg relay (up to a certain depth) for 0.9. |
|
I just stopped my pruned node, added this PR on top of the current master, ran again and had the following issue: Stopping: Restarting (after several seconds/minutes): I doubt that it has something to do with this PR, but it could point to another existing issue. |
|
My |
|
Further up I can see in my log that |
|
@jonasschnelli any chance you had been running with #6118 and then switched to running without? This looks exactly like what happens when trying to downgrade from lazy updating of mapBlockIndex. If that is the issue you can work around it by restarting your node with #6118 and then quickly stopping; the block index is refreshed on startup. You could then restart without the lazy updating code and things should work fine. |
|
@sdaftuar pretty sure I did this. I'll now try to run a master full node, sync up, add this PR on top, and continue to confirm the missing-lazy-update issue. |
|
Stopped a full node, copied the datadir, ran a new master full node with the just-copied datadir and pruning target 550, stopped after a while, added this PR on top, and it looks good. Now testing block relaying. |
|
Did some testing: connecting a fresh node to a pruned full node results in no headers response to the headers request. Will also try to connect a "half-synced" full node to a pruned node, where the half-synced node has just some blocks above the prune level of the pruned node. But this needs a little bit of time to set up. |
|
@jonasschnelli The fresh node won't load the whole headers from the pruning node on startup, because we only choose full NODE_NETWORK nodes to sync the headers chain. However, if you were to wait long enough for the pruning node to inv a block, then at that point a getheaders message should be sent to the pruning node, and headers should then sync. |
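The peer-selection asymmetry described above can be sketched as two small predicates. The function names here are illustrative stand-ins for the actual main.cpp logic, and the NODE_NETWORK bit value is an assumption; this is a sketch of the behavior, not the real implementation:

```cpp
#include <cassert>
#include <cstdint>

// Service-flag constant mirroring Bitcoin's NODE_NETWORK bit (assumed value).
static const uint64_t NODE_NETWORK = 1 << 0;

// Initial headers sync is only started against full (NODE_NETWORK) peers,
// so a fresh node connected only to a pruning peer syncs no headers at first.
bool ShouldStartInitialHeadersSync(uint64_t peerServices) {
    return (peerServices & NODE_NETWORK) != 0;
}

// Once any peer invs a new block, we respond with getheaders regardless of
// its service bits, so headers eventually sync even from a pruning peer.
bool ShouldSendGetHeaders(uint64_t peerServices, bool peerAnnouncedNewBlock) {
    return ShouldStartInitialHeadersSync(peerServices) || peerAnnouncedNewBlock;
}
```

This matches the observed behavior: no headers on startup, but sync begins as soon as the pruning peer announces a block.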
|
Nice, I'm running 0.11.0rc1 and bitcoin in %appdata% is only taking up 1.33 GB of space. Would love to be able to relay blocks. I'd actually leave this on with this small of a footprint. |
src/main.cpp
Outdated
Can you use a symbolic constant here? Or some calculation based on consensus params?
@sipa Fixed to use a calculation based on consensus params.
|
I haven't followed up on all proposed changes related to this. Is there a plan to also make peers use the inventories we send out? With a change like this, the immediate fetching logic may react to an fClient peer sending a block inv, but the asynchronous fetching won't, making this (for now) useless for reorgs, for example. |
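The asymmetry @sipa points out can be sketched as follows. The struct and function names are illustrative, not the actual main.cpp symbols; only fClient and FindNextBlocksToDownload come from the discussion above:

```cpp
#include <cassert>

// Minimal stand-in for per-peer state; fClient marks a peer that does not
// serve historical blocks (e.g. a pruned peer, under this PR's behavior).
struct PeerState {
    bool fClient;
};

// The immediate-fetch path reacts to a block inv from any peer, including
// an fClient peer that just announced a new tip.
bool ImmediateFetchAllowed(const PeerState& peer, bool peerSentBlockInv) {
    (void)peer; // the inv itself is what triggers the fetch
    return peerSentBlockInv;
}

// The asynchronous parallel-fetch path (FindNextBlocksToDownload) skips
// fClient peers entirely, so blocks needed for a reorg are never requested
// from them -- which is why the relayed invs don't help with reorgs yet.
bool AsyncFetchAllowed(const PeerState& peer) {
    return !peer.fClient;
}
```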
|
I'm confused by the several related and seemingly-overlapping PRs here. |
|
@sipa Sorry for the PR confusion. #6130 fixes a bug where pruning nodes would respond badly to a getblocks message by inv'ing blocks they don't actually have. This pull adds block relaying for pruning nodes, which would not work without the fix in #6130, so I built it on top. Initially I thought the bug fix in #6130 should stand on its own, but on further thought I think it could only be triggered once we actually implement block relay. I'll go ahead and close that pull in favor of this one.

As for next steps: yes, this pull is insufficient for relaying reorgs to 0.10 and later peers. I was thinking that we'd hopefully be able to put together a sharding implementation, and at that point we'd also implement the more intelligent block requesting. I guess if that work seems like it's taking too long, we could (before the 0.12 release) instead do some other change to the parallel-fetch code to try to request recent blocks from pruning peers. Would you rather see a solution like that as part of this pull? |
|
No, just querying what the current state/plan is :) |
When responding to a getblocks message, only return inv's as long as we HAVE_DATA for blocks in the chain, and only for blocks that we aren't likely to delete in the near future.
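The commit message's rule can be sketched as a small predicate. The constant values, the one-hour safety margin, and the function names here are assumptions modeled on the approach described (MIN_BLOCKS_TO_KEEP and a consensus-params calculation, per the review discussion above), not the exact main.cpp code:

```cpp
#include <cassert>

// Illustrative constants (assumed values, modeled on Bitcoin Core's pruning code):
static const unsigned int BLOCK_HAVE_DATA = 8;  // block-status flag: full block data on disk
static const int MIN_BLOCKS_TO_KEEP = 288;      // pruning floor: blocks a pruned node retains
static const int nPowTargetSpacing = 600;       // consensus param: target seconds per block

// Blocks a pruning node is unlikely to delete in the near future: stay a
// safety margin of about one hour of chain growth below the pruning floor.
int PrunedBlocksLikelyToHave() {
    return MIN_BLOCKS_TO_KEEP - 3600 / nPowTargetSpacing;
}

// Per-block check in the getblocks handler: inv a block only if we actually
// have its data and it is recent enough that we won't prune it soon.
bool ShouldInvBlock(unsigned int nStatus, int nHeight, int nTipHeight) {
    bool haveData = (nStatus & BLOCK_HAVE_DATA) != 0;
    bool unlikelyToBePruned = nTipHeight - nHeight < PrunedBlocksLikelyToHave();
    return haveData && unlikelyToBePruned;
}
```

With these assumed values, a pruning node invs only blocks within 282 blocks of its tip, even though it keeps 288, leaving headroom so an inv'd block isn't deleted before the peer requests it.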
Force-pushed from f3b92da to ae6f957.
|
Been running on bitcoin.sipa.be for a week, no problems. |
|
@jonasschnelli I saw your comment in #6460 about this PR -- I think this could be merged as-is without any new service bits. Adding a service bit would be needed to implement a system for selectively downloading historical blocks from less-than-full nodes, but I don't think it's needed to enable relaying at the tip. |
|
@sdaftuar: Agreed. Will test again soon, and sorry for the delay. |
|
ACK (have done a "it works" test before). @laanwj Acceptable for 0.11? |
|
Running since >18h on a pruned, synced full node (master). Tested ACK |
|
I've been testing this against master and it's happily relaying at the tip among my peers without issue. Appears to work great. Has someone tried the case where the peer is on a fork and needs to fetch further back than the pruning window to complete the reorg? |
|
To be clear: ACK. |
|
concept ACK |
|
@laanwj I think this is ready to be merged; are there any outstanding concerns here? |
Relay blocks when pruning Cherry-picked from bitcoin/bitcoin#6148 Part of #2074.
This is an alternate to #6122, built off of #6130. With #6130, it should be safe to always inv the tip regardless of what height our peer is at, because we'll only inv blocks that we actually have.
As noted in the discussion of #6122, 0.10 and later peers would still not be able to download a reorg from a pruning peer; they would only be able to receive new blocks that build on their tip (0.9 and earlier peers using getblocks would be able to download reorgs from pruning nodes).
ping @cozz