Autoprune: Do relay blocks if possible #6122
Conversation
We should do something like this, but it won't help without also having some means of advertising that we can relay blocks, as right now nobody should even be connecting to pruned nodes (which advertise as SPV nodes).
@sipa Can you point me to the code where we don't connect to pruned nodes, please?
Unsure, addnode may work, but automatic discovery shouldn't.
So you don't know the code? Didn't you write the address-manager? Well, I will go test now whether a node without NODE_NETWORK announces its IP, and whether or not we add it and connect to it via the address-manager. Because exactly that is what I was assuming.
So I don't know how to test this NODE_NETWORK thing. The address-manager does not care about nServices, so from my understanding there is no difference between NODE_NETWORK nodes and non-NODE_NETWORK nodes. In any case, this pull request does not hurt. Block relay is the highest priority, and I believe we really should announce the block whenever possible. Let's say someone runs a bunch of full nodes because he wants to contribute to the network; then, if what you are saying is true, that nobody connects to pruned nodes, this would also be bad, because
ACK concept. I don't think this pull-req should be gated on whether or not we advertise that other nodes should connect to us; relaying is always valuable to ensure the P2P network is well connected. Note how if you start a node with -listen=0 you'll never have anyone connect to you, yet we still relay blocks. The same logic should apply here.

What I don't get is why make this dependent on the peer's advertised starting height? Why should their height (which may be out of date) have any relevance to whether or not we relay a block? Instead I think this should be based only on our height, with an appropriate window to prevent stalls.

As for the size of that window... isn't this code only triggered when we get a new block? You'd have to get a burst of >288 blocks in a row in non-initial-download mode to run into that problem, which seems rather unlikely. I'm not sure it's really worth having extra code for such an unlikely case.
@petertodd The code I added only cares about the height of the other node, for both nStartingHeight and pindexBestKnownBlock. The reason is that this avoids the other node asking us for blocks we don't have. If we know that the other node has at least what we have (minus 288), nothing can go wrong. I am just very unhappy with block relay being completely disabled for pruning. To me, this is a bug.
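A minimal sketch of that condition, assuming the era's 288-block pruning window (MIN_BLOCKS_TO_KEEP); the helper name and the standalone form are illustrative, not the actual patch, which lives in the relay path of src/main.cpp:

```cpp
static const int MIN_BLOCKS_TO_KEEP = 288; // blocks a pruning node keeps on disk

// True if it is safe to announce our new tip to a peer whose best known height
// is nPeerBestHeight (the max of its nStartingHeight and the height of its
// pindexBestKnownBlock): everything the peer could then request still lies
// inside the window of blocks we keep.
bool SafeToRelayWhilePruning(int nPeerBestHeight, int nOurHeight)
{
    return nPeerBestHeight >= nOurHeight - MIN_BLOCKS_TO_KEEP;
}
```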
@cozz Whoops, yeah, brainfart on my part. That exception is reasonable, although the fact we need it is annoying. :( Pity that we still deal with stalling peers so badly. Definitely agree re: the total disabling. Heck, I even argue that SPV nodes should be forwarding blocks. |
src/main.cpp (Outdated)
Comment above needs to be updated.
I made a couple code comments, but in reviewing this I noticed another issue. I think pruning nodes should respond to a "getblocks" message differently than non-pruning nodes, so that they never generate an inv for a block they don't have on disk. I haven't generated this scenario in a test yet, but I believe it's possible to be in a situation where a 0.9 (or earlier) node could send a getblocks message to a pruning node, and if the 0.9 node is far enough behind, that could result in the pruning node serving up invs for blocks it doesn't have (which the 0.9 node would then request, never receive, and eventually disconnect over). Assuming we agree that behavior should change, then I think perhaps this pull could be much simpler, where we don't need to bother with the special case here in the event that we're pruning (i.e. we just remove the
(I'm not all that familiar with the p2p behavior of 0.9 and 0.8 nodes though; hopefully someone more familiar can confirm that the above is correct.) One thing I am not sure how to account for is the p2p behavior of even older Bitcoin Core clients (which I haven't looked at) and non-Bitcoin-Core clients. But my first thought is that if we're advertising ourselves as not being NODE_NETWORK and we're only generating invs for blocks that we have (and are likely to still have for some reasonable time window), this ought to be acceptable network behavior.
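A rough sketch of the getblocks-side check being suggested, assuming the era's CBlockIndex::nStatus / BLOCK_HAVE_DATA convention and the fPruneMode global; the helper name is illustrative, and the real handler would simply stop walking the chain once this returns false:

```cpp
// Illustrative helper for the "getblocks" handler in src/main.cpp: a pruning
// node should only generate an inv for a block it can actually serve.
static bool ShouldInvFromGetBlocks(const CBlockIndex* pindex)
{
    // If we are pruning and the block data is no longer on disk, don't
    // advertise it; a 0.9 peer would otherwise request it, never receive it,
    // and eventually disconnect.
    if (fPruneMode && !(pindex->nStatus & BLOCK_HAVE_DATA))
        return false;
    return true;
}
```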
@sdaftuar Sounds very reasonable to me. 0.8 and 0.9 should never fetch anything that was not inv'ed, as far as I know.

For 0.10, we should only inv in case the last common ancestor of pindexLastCommonBlock and the block being inv'ed is not pruned (otherwise we may trigger an async fetch that crosses pruned block ranges). That is probably a safe mechanism in any case. Sorry if the code already does this; reviewing is currently hard for me.

@cozz Yes, I understand. I think the current pruning facility is extremely limited in usefulness (e.g. you can't even -addnode within a private network to it and expect a useful result), and I agree the relaying issue is independent from the discovery/announcement.
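A rough sketch of the check described for 0.10 peers, assuming this era's LastCommonAncestor() helper and the CNodeState::pindexLastCommonBlock field in src/main.cpp; the wrapper name is illustrative:

```cpp
// Only announce a block to a 0.10 peer if the fork point between what that
// peer is known to have and the block being announced is still on disk, so a
// follow-up asynchronous fetch cannot cross a pruned block range.
static bool SafeToAnnounceTo010Peer(CNodeState* state, CBlockIndex* pindexToAnnounce)
{
    if (state->pindexLastCommonBlock == NULL)
        return false; // we know nothing about the peer's chain yet; be conservative
    CBlockIndex* pfork = LastCommonAncestor(state->pindexLastCommonBlock, pindexToAnnounce);
    return pfork != NULL && (pfork->nStatus & BLOCK_HAVE_DATA);
}
```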
Fixed nits. So I believe this pruning feature might better be declared "experimental" for 0.11, given such limited usefulness and potential negative side effects. For example, besides block relay, if a lot of nodes decide to use this feature, the address-manager gets "spammed" with pruned nodes. As for this pull, I'd also say we can just always inv, because we don't call FindNextBlocksToDownload for pruned nodes. But then again, it's unclear why it was even disabled in the first place.
Hmm, @sdaftuar mentions that 0.10 does not download from fClient nodes. That's not entirely true: the direct fetching loop (in immediate response to an inv) may fetch from anyone who sends an inv, but the asynchronous fetching won't. That means that even with this code, 0.10 and up will not be able to retrieve a reorg from a pruned peer.
@sipa Sorry if I was unclear, but yes, that was what I was referring to when I said 0.10 wouldn't download non-inv'ed blocks except from NODE_NETWORK peers. I agree that it seems unfortunate that 0.10 clients specifically wouldn't be able to retrieve a reorg from a pruned peer (while 0.8/0.9 would, via their getblocks logic). Perhaps we should consider #5307 with some small cap on the number of blocks to inv, so that small reorgs would be successfully relayed from a pruning node to a 0.10 node (though with the n^2 overhead mentioned in that pull)?
EDIT: the suggestion I posted in this comment doesn't actually work; invs received for blocks that are already in mapBlockIndex don't get directly fetched, so this approach wouldn't help 0.10/0.11 nodes receive reorgs after all. #5307 may still be an option if we think this is important...
Continued in #6148
Oh, misunderstood: #6148 is not based on this, but is a different implementation. Reopening.
Closing for #6148. We only announce what we have and will not delete soon anyway. And the asynchronous fetching is not called for pruned nodes. So what I did here is not even necessary.
Currently we don't relay blocks in prune mode, which seems very bad, as such a negative side effect is not even mentioned in the help message.
So with this PR we still relay if the peer has at least up to our pruning threshold.
I added another 6 blocks of margin to prevent the unlikely case that we prune a block while the request is in flight. This means that we still relay if the peer is not more than 282 blocks behind.
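A hedged sketch of how this description could translate into the tip-announcement path, loosely modeled on (and simplified from) this era's relay loop in src/main.cpp; the margin constant's name and the free function form are illustrative, not the actual patch:

```cpp
static const int PRUNE_RELAY_MARGIN = 6; // illustrative name for the extra 6 blocks

// Announce the new tip, but when pruning, skip peers more than
// MIN_BLOCKS_TO_KEEP - PRUNE_RELAY_MARGIN = 288 - 6 = 282 blocks behind us;
// they might otherwise ask for blocks we have already deleted.
void RelayNewTip(const uint256& hashNewTip, int nOurHeight)
{
    LOCK(cs_vNodes);
    BOOST_FOREACH(CNode* pnode, vNodes) {
        // Height the peer advertised in its version message; the patch also
        // considers pindexBestKnownBlock when it is available.
        int nPeerHeight = pnode->nStartingHeight;
        if (fPruneMode && nPeerHeight < nOurHeight - (MIN_BLOCKS_TO_KEEP - PRUNE_RELAY_MARGIN))
            continue;
        pnode->PushInventory(CInv(MSG_BLOCK, hashNewTip));
    }
}
```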