
Conversation

@cozz
Contributor

@cozz cozz commented May 12, 2015

Currently we don't relay blocks in prune mode, which seems very bad, and such a negative side effect is not even mentioned in the help message.

So with this PR we still relay if the peer's tip is at least up to our pruning threshold.
I added another 6 blocks of headroom to guard against the unlikely case that we prune the block while the request is in flight. This means we still relay if the peer is not more than 282 blocks behind.
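
In other words, the condition boils down to something like this (just an illustrative sketch, not the exact diff; MIN_BLOCKS_TO_KEEP is the existing 288-block pruning threshold, the helper name is made up):

```cpp
// Sketch only: relay the new tip to a peer only if the peer is close enough
// that anything it might then request from us is still on disk.
static const int MIN_BLOCKS_TO_KEEP = 288; // existing pruning threshold

bool PeerCloseEnoughToRelay(int nOurHeight, int nPeerHeight)
{
    // 288 - 6 = 282 blocks: the extra 6 guard against pruning a block while
    // the peer's request is in flight.
    return nPeerHeight >= nOurHeight - (MIN_BLOCKS_TO_KEEP - 6);
}
```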

@sipa
Member

sipa commented May 12, 2015 via email

@cozz
Contributor Author

cozz commented May 12, 2015

@sipa Can you point me to the code where we don't connect to pruned nodes, please?
I assumed we don't distinguish on service bits in the connection logic.

@sipa
Member

sipa commented May 12, 2015 via email

@cozz
Contributor Author

cozz commented May 12, 2015

So you don't know the code? Didn't you write the address manager?

Well, I will go and test now whether a node without NODE_NETWORK announces its IP, and whether or not we add it to the address manager and connect to it, because that is exactly what I was assuming.

@laanwj laanwj added the P2P label May 12, 2015
@cozz
Contributor Author

cozz commented May 12, 2015

So I don't know how to test this NODE_NETWORK thing.

The address manager does not care about nServices, so from my understanding there is no difference between NODE_NETWORK nodes and non-NODE_NETWORK nodes.
We only make a distinction after we have connected, via the version message, but not before.

In any case, this pull request does not hurt. Block relay is the highest priority, and I believe we really should always announce the block whenever possible.

Let's say someone runs a bunch of full nodes because they want to contribute to the network. Then our -prune feature is really bad if suddenly all those nodes stop relaying blocks. The operator thinks pruning is doing something good for the network and for themselves, while it is not.

If what you are saying is true, that nobody connects to pruned nodes, that would also be bad, because it is not even mentioned in the help message. If so, there is a big fat warning missing.
But again, I cannot even find this behavior in the source code.

@petertodd
Contributor

ACK concept.

I don't think this pull-req should be gated on whether or not we advertise that other nodes should connect to us; relaying is always valuable to ensure the P2P network is well connected. Note how if you start a node with -listen=0 you'll never have anyone connect to you, yet we still relay blocks. The same logic should apply here.

What I don't get is why make this dependent on the peer's advertised starting height? Why should their height (which may be out of date) have any relevance to whether or not we relay a block? Instead I think this should be based only on our height, with an appropriate window to prevent stalls.

As for the size of that window... isn't this code only triggered when we get a new block? You'd have to get a burst of >288 blocks in a row in non-initial-download mode to run into that problem - seems rather unlikely. I'm not sure it's really worth having extra code for such an unlikely case.

@cozz
Contributor Author

cozz commented May 12, 2015

@petertodd The code I added only cares about the other node's height, via both nStartingHeight and pindexBestKnownBlock. The reason is that this avoids the other node asking us for blocks we don't have. If we know the peer already has at least what we have (minus 288), nothing can go wrong.
The code is already only executed for (!fInitialDownload), which is based on our height, so there must have been a reason why relay was disabled here in the first place. That is why I added the check on the other node's height as well.
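
In sketch form (not the actual diff; assumes pfrom is the peer's CNode carrying nStartingHeight from its version message, and state its CNodeState with pindexBestKnownBlock):

```cpp
// Sketch only: pick the best height we know for the peer, then apply the
// same 282-block window as above.
int nPeerHeight = (state->pindexBestKnownBlock != NULL)
                      ? state->pindexBestKnownBlock->nHeight
                      : pfrom->nStartingHeight;

// Announce the new tip only if the peer cannot end up asking us for a block
// that we may already have pruned.
bool fRelay = nPeerHeight >= chainActive.Height() - (288 - 6);
```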

I am just very unhappy with block relay being completely disabled for pruning. To me, this is a bug.

@petertodd
Contributor

@cozz Whoops, yeah, brainfart on my part. That exception is reasonable, although the fact we need it is annoying. :( Pity that we still deal with stalling peers so badly.

Definitely agree re: the total disabling. Heck, I even argue that SPV nodes should be forwarding blocks.

src/main.cpp Outdated
Member


Comment above needs to be updated.

@sdaftuar
Member

I made a couple of code comments, but in reviewing this I noticed another issue.

I think pruning nodes should respond to a "getblocks" message differently from non-pruning nodes, so that they never generate an inv for a block they don't have on disk. I haven't reproduced this scenario in a test yet, but I believe it's possible to be in a situation where a 0.9 (or earlier) node could send a getblocks message to a pruning node, and if the 0.9 node is far enough behind, that could result in the pruning node serving up inv's for blocks it doesn't have (which the 0.9 node would then request, but never be able to receive, and then eventually disconnect).
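
Roughly what I have in mind (untested sketch; fPruneMode and BLOCK_HAVE_DATA are the existing global and block-status flag, and pindex walks the active chain as in the current getblocks handler):

```cpp
// Untested sketch: in the getblocks handler, stop generating inv's as soon
// as we reach a block whose data is no longer on disk.
for (; pindex != NULL; pindex = chainActive.Next(pindex))
{
    if (fPruneMode && !(pindex->nStatus & BLOCK_HAVE_DATA)) {
        // Don't advertise blocks we cannot serve; the peer would request
        // them, never receive them, and eventually give up on us.
        break;
    }
    pfrom->PushInventory(CInv(MSG_BLOCK, pindex->GetBlockHash()));
    // ... existing hashStop / 500-block limit handling continues here ...
}
```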

Assuming we agree that behavior should change, I think this pull could be much simpler: we don't need to bother with the special case here in the event that we're pruning (i.e. we just remove the if (nLocalServices & NODE_NETWORK) check on line 2286). My reasoning:

  • 0.10 and later nodes will not download non-inv'ed blocks except from NODE_NETWORK peers (there's a guard around the call to FindNextBlocksToDownload)
  • 0.9 and 0.8 nodes rely on inv's, including those generated in response to a getblocks, to determine what to ask for from a peer

(I'm not all that familiar with the p2p behavior of 0.9 and 0.8 nodes though, hopefully someone more familiar can confirm that the above is correct.)

One thing I am not sure how to account for is the p2p behavior of even older Bitcoin Core clients (which I haven't looked at) and of non-Bitcoin-Core clients. But my first thought is that if we're advertising ourselves as not being NODE_NETWORK and we're only generating inv's for blocks that we have (and are likely to still have for some reasonable time window), this ought to be acceptable network behavior.

@sipa
Member

sipa commented May 12, 2015 via email

@cozz
Contributor Author

cozz commented May 12, 2015

Fixed nits.

So I believe this pruning feature might better be declared "experimental" for 0.11, given its limited usefulness and potential negative side effects. For example, besides block relay: if a lot of nodes decide to use this feature, the address manager gets "spammed" with pruned nodes.
Non-pruned nodes should have priority over pruned ones for address relay and address selection, which is not even implemented currently.

As for this pull, I'd also say we can just always inv, because we don't call FindNextBlocksToDownload for pruned nodes. But then again, it's unclear why it was even disabled in the first place.

@sipa
Member

sipa commented May 13, 2015

Hmm, @sdaftuar mentions that 0.10 does not download from fClient nodes. That's not entirely true - the direct fetching loop (in immediate response to an inv) may fetch from anyone who sends an inv, but the asynchronous fetching won't. That means that even with this code, 0.10 and up will not be able to retrieve a reorg from a pruned peer.

@sdaftuar
Member

@sipa Sorry if I was unclear, but yes, that is what I was referring to when I said 0.10 wouldn't download non-inv'ed blocks except from NODE_NETWORK peers. I agree that it seems unfortunate that 0.10 clients specifically wouldn't be able to retrieve a reorg from a pruned peer (while 0.8/0.9 would, via their getblocks logic). Perhaps we should consider #5307 with some small cap on the number of blocks to inv, so that small reorgs would be successfully relayed from a pruning node to a 0.10 node (though with the n^2 overhead mentioned in that pull)?

@sdaftuar
Member

EDIT: the suggestion I posted in this comment doesn't actually work; inv's received for blocks that are already in mapBlockIndex don't get directly fetched, so this approach wouldn't help 0.10/0.11 nodes receive reorgs after all. #5307 may still be an option if we think this is important...

Another idea: perhaps pruning nodes could behave differently when they're responding to a "getheaders" message, so that in addition to returning headers, they would also send an inv if the headers being requested are for blocks that are all on disk and unlikely to be pruned (just as we would do for getblocks in #6130)? This could serve as a hint to a peer that it can download those blocks from us, and would assist a peer in either handling a reorg or completing a small-ish sync (after, say, being down for less than 2 days).
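
Roughly (untested sketch; assumes the existing getheaders handler loop, where pindex walks chainActive and vHeaders collects the headers, with nLimit and hashStop as in the current code; fCanServeAll is an illustrative name):

```cpp
// Untested sketch: while building the headers response, remember whether
// every requested block is still on disk.
bool fCanServeAll = true;
for (; pindex; pindex = chainActive.Next(pindex))
{
    vHeaders.push_back(pindex->GetBlockHeader());
    if (!(pindex->nStatus & BLOCK_HAVE_DATA))
        fCanServeAll = false;
    if (--nLimit <= 0 || pindex->GetBlockHash() == hashStop)
        break;
}
pfrom->PushMessage("headers", vHeaders);

// New part: when pruning but all requested blocks are on disk (and unlikely
// to be pruned soon), also inv them as a hint that they can be fetched here.
if (fPruneMode && fCanServeAll)
    BOOST_FOREACH(const CBlock& block, vHeaders)
        pfrom->PushInventory(CInv(MSG_BLOCK, block.GetHash()));
```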

This feels a little ugly, and I'm guessing not how we'd want to behave in the long-run, but maybe this could be a workaround for the time being.

@laanwj
Member

laanwj commented May 16, 2015

Continued in #6148

@laanwj laanwj closed this May 16, 2015
@laanwj
Member

laanwj commented May 16, 2015

Oh, misunderstood: #6148 is not based on this but is a different implementation. Reopening.

@laanwj laanwj reopened this May 16, 2015
@cozz
Contributor Author

cozz commented May 16, 2015

Closing in favor of #6148.

We only announce what we have and won't delete soon anyway, and the asynchronous fetching is not called for pruned nodes, so what I did here is not even necessary.

@cozz cozz closed this May 16, 2015
@bitcoin bitcoin locked as resolved and limited conversation to collaborators Sep 8, 2021