Conversation

@morcos
Contributor

@morcos morcos commented Jul 16, 2015

Nodes can have divergent policies on which transactions they will accept and relay. This can cause you to repeatedly request and reject the same tx after it's INVed to you from various peers that have accepted it.

This commit puts a 15-minute timer on re-requesting the same tx hash if you just rejected it. I think the proper value for this can be debated; it might be almost as effective with a significantly smaller timer, like 1 minute.
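A minimal Python sketch of the idea described above (Bitcoin Core itself is C++ and the PR uses a limitedmap; the `RejectTimer` class and its method names here are invented purely for illustration):

```python
import time

REJECT_WINDOW = 15 * 60  # seconds; the 15-minute timer proposed in the PR

class RejectTimer:
    """Remember recently rejected tx hashes and suppress re-requests
    for a fixed window. Hypothetical sketch, not the PR's actual code."""

    def __init__(self, window=REJECT_WINDOW, now=time.time):
        self.window = window
        self.now = now          # injectable clock, for testing
        self.rejected = {}      # txid -> time of rejection

    def note_rejected(self, txid):
        self.rejected[txid] = self.now()

    def should_request(self, txid):
        t = self.rejected.get(txid)
        if t is None:
            return True
        if self.now() - t >= self.window:
            del self.rejected[txid]  # expired: allow a fresh request
            return True
        return False
```

With a fake clock you can see the window behavior: a just-rejected txid is suppressed, and becomes requestable again once the window elapses.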

@petertodd
Contributor

I've noticed this issue too on my full-RBF nodes, doubly so on the full-RBF nodes that have a higher min-relay fee set. In some cases that resulted in huge amounts of bandwidth being wasted, as those nodes are otherwise well connected with nearly the full number of peers. So strong ACK on the intent.

I think we could make the table quite a bit bigger though. Each entry is about 40 bytes, so 2000 replacements is only about 80k. Equally, for the purpose of rejecting txs it'd be perfectly acceptable to only store 64-bit partial txids if we first hash them with a per-node nonce, so that gets us 16 bytes/entry.
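The salted partial-txid trick might look roughly like this in Python (SHA-256 here is illustrative; a real implementation would more likely use a keyed short hash like SipHash, and the function name is invented):

```python
import hashlib
import os

# Per-node secret nonce, generated once at startup. Because an attacker
# doesn't know it, they can't grind txids that collide in the table.
NONCE = os.urandom(16)

def partial_txid(txid_bytes, nonce=NONCE):
    """Salt the full 32-byte txid with a per-node nonce and keep only
    8 bytes, shrinking each table entry from 32 bytes to 8."""
    h = hashlib.sha256(nonce + txid_bytes).digest()
    return h[:8]
```

The key point is that collisions are only meaningful relative to one node's nonce, so a globally crafted collision attack doesn't work.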

If someone is trying to do a bandwidth attack by flooding the network, right now the re-requests are a pretty big multiplier - if half your ~100 incoming peers relay a tx and half don't, you've increased your incoming bandwidth used by the flood by about 50x. With the patch that's reduced drastically.

Now, to defeat this protection you basically need to relay enough txs to overflow the limitedmap before all your peers send you the associated INVs (modulo cases where the attacker is directly connected to you). The minimum tx size is about 60 bytes, so at 1 MB/second you might receive 16k txs in a really extreme case - but that overflows the map multiple times over.

Spending more like, say, 5MB of RAM to mitigate that kind of bandwidth attack seems reasonable to me. With the existing design, that's a 125,000-entry limitedmap; if you implement the 64-bit hash idea, that's a 312,000-entry limitedmap.
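The sizing arithmetic above checks out as straightforward division (decimal megabytes assumed, matching the round numbers in the comment):

```python
# 5 MB budget for the rejects table, as suggested in the comment.
BUDGET = 5 * 1000 * 1000

entries_full = BUDGET // 40      # ~40-byte entries (full txid + overhead)
entries_partial = BUDGET // 16   # ~16-byte entries (salted 64-bit partial txid)
```

That gives 125,000 entries with the existing design and 312,500 with the partial-txid scheme, matching the rounded figures quoted above.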

Also, would the attack be harder to pull off if dropping entries from the map was probabilistic? I suspect yes.

@petertodd
Contributor

One more idea: How about discarding the entire recent rejects map every time the best block hash changes? I suspect the most likely scenario where a previously rejected transaction is now again valid will probably be validity changes due to new/reorged blocks. For instance the main reason why my #6216 pull-req causes no issues w/ relaying is that if a node receives a transaction that it rejects due to the tx being non-final, it will be accepted again later as it winds its way around the network; this pull-req might break that behavior.

If you do this it'd probably be fine to remove the time expiration entirely, saving RAM, and implement the map with a simple set with txids dropped from it randomly.
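A sketch of the "simple set with txids dropped from it randomly" idea, cleared wholesale on each new best block (the class and method names are invented; this is not #6451's or #6452's actual code):

```python
import random

class RandomDropSet:
    """Bounded set that evicts a uniformly random element when full."""

    def __init__(self, max_size, rng=None):
        self.max_size = max_size
        self.rng = rng or random.Random()
        self.items = []   # backing list gives O(1) swap-and-pop eviction
        self.index = {}   # item -> position in self.items

    def add(self, item):
        if item in self.index:
            return
        if len(self.items) >= self.max_size:
            # evict a uniformly random existing entry
            pos = self.rng.randrange(len(self.items))
            victim = self.items[pos]
            last = self.items.pop()
            if pos < len(self.items):      # move last element into the hole
                self.items[pos] = last
                self.index[last] = pos
            del self.index[victim]
        self.index[item] = len(self.items)
        self.items.append(item)

    def __contains__(self, item):
        return item in self.index

    def clear(self):
        # the suggestion above: drop everything when the best block changes
        self.items.clear()
        self.index.clear()
```

Random eviction (rather than oldest-first) is what makes it harder for an attacker to predict which entries survive, and clearing on each block removes the need for per-entry timestamps entirely.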

@petertodd
Contributor

(or for that matter, a simple bloom filter with a bit randomly cleared on each insertion to keep the fp rate steady)
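That variant might look like this: a fixed-size bloom filter where each insertion also clears one randomly chosen bit, so the number of set bits (and hence the false-positive rate) settles at a steady state instead of climbing toward saturation. Sizes, hash construction, and names are all illustrative, not what any real implementation uses:

```python
import hashlib
import random

class DecayingBloom:
    def __init__(self, nbits=1 << 16, nhashes=4, rng=None):
        self.nbits = nbits
        self.nhashes = nhashes
        self.bits = bytearray(nbits // 8)
        self.rng = rng or random.Random()

    def _positions(self, item):
        # derive k bit positions per item (SHA-256 used for simplicity)
        for i in range(self.nhashes):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:4], "little") % self.nbits

    def add(self, item):
        # decay: clear one random bit per insertion to keep fp rate steady
        pos = self.rng.randrange(self.nbits)
        self.bits[pos // 8] &= ~(1 << (pos % 8))
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all((self.bits[p // 8] >> (p % 8)) & 1
                   for p in self._positions(item))
```

One consequence of the decay is that old entries are forgotten probabilistically (a random clear may hit one of their bits), which is acceptable here since a false negative just means one extra re-request.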

Contributor

Actually, this will relay invalid transactions from whitelisted peers as well.

@petertodd
Contributor

Created #6452 with a bloom filter instead of limitedmap. Thoughts?

@morcos
Contributor Author

morcos commented Jul 17, 2015

@ajweiss had suggested a bloom filter when I created this, but I was concerned about false positives. I hadn't thought it through that much. I'm not tied to a specific approach. I was looking at this less as DoS attack prevention and more as a response to the unfortunate p2p feedback loop of different acceptance policies, but I guess you're right that it's important to make sure it still works under load.
I do like the idea of clearing when a block comes in.

@petertodd
Contributor

Yeah, it's quite cheap to get extremely low fp rates - my bloom filter approach has a one in a million FP rate. The (bloom filter) code also already exists and is well tested, as the netaddr stuff uses it.
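A quick check of the "cheap to get extremely low fp rates" claim, using the standard bloom filter relationship that an optimally loaded filter has a false-positive rate of 0.5^k with k hash functions, at about k/ln(2) bits per entry:

```python
import math

k = 20                              # hash functions
fp = 0.5 ** k                       # false-positive rate at optimal load
bits_per_entry = k / math.log(2)    # bits of filter per stored element
```

So roughly 29 bits (under 4 bytes) per entry buys a one-in-a-million false-positive rate, notably cheaper than even the 16-byte partial-txid entries discussed earlier.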

@laanwj
Member

laanwj commented Jul 18, 2015

Concept ACK.
Using a bloom filter is slightly more risky, but if we at least make sure that every node uses an (unpredictable) tweak, the impact of false positives should be minimized.

@morcos
Contributor Author

morcos commented Jul 19, 2015

Closing in favor of #6452

@morcos morcos closed this Jul 19, 2015
@bitcoin bitcoin locked as resolved and limited conversation to collaborators Sep 8, 2021