Fluffy: Implement offer cache to hold content ids of recent offers #3233
Conversation
I'm fine with this cache addition, as offers do typically come in fairly close to each other and the change is quite minimal.
Two things however:
- Curious to see any actual data on this.
- I was also wondering how often this occurs versus the scenario where offers come in so close to each other that the content of the first offer has not been stored yet. For that scenario we cannot really add a cache, as the offered content could fail to send or be invalid.
```nim
for k, v in p.offerCache.mpairs():
  v = false
```
Probably faster to just reinitialize the cache? And that way you also don't need the boolean, I think?
> Probably faster to just reinitialize the cache? And that way you also don't need the boolean, I think?
I was looking through the minilru code and I didn't find a `clear` function, so I went with this method. But yes, reinit is probably better. I'll update.
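The trade-off discussed above can be sketched as follows. This is an illustrative Python model of an LRU-style offer cache (the real code uses Nim's minilru, whose API differs; all names here are hypothetical): flagging every entry `false` keeps the keys around and forces a boolean check on lookup, whereas simply replacing the cache drops everything and removes the need for the boolean.

```python
from collections import OrderedDict

class OfferCache:
    """Minimal LRU-style cache keyed by content id (illustrative sketch only;
    not Fluffy's actual minilru-based implementation)."""

    def __init__(self, capacity: int = 256):
        self.capacity = capacity
        self.entries: "OrderedDict[bytes, bool]" = OrderedDict()

    def put(self, content_id: bytes) -> None:
        self.entries[content_id] = True
        self.entries.move_to_end(content_id)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used

    def contains(self, content_id: bytes) -> bool:
        return content_id in self.entries

def invalidate_by_flagging(cache: OfferCache) -> None:
    # Approach from the diff: walk the cache and flip every value to False.
    # Keys stay in the cache, so lookups must also check the boolean.
    for k in cache.entries:
        cache.entries[k] = False

def invalidate_by_reinit(cache: OfferCache) -> None:
    # Approach suggested in the review: drop all entries at once.
    # No boolean needed, and no per-entry iteration.
    cache.entries = OrderedDict()
```

With reinitialization, a plain membership check is sufficient afterwards, which is the point made in the review.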
I guess I can add some cache hit/miss metrics for this and gossip some data in a local testnet to see the results.
This is something we could use metrics to get data on as well. To address this problem, I think we should put a limit on the max number of concurrent offers per content id. The current limits are per content id and per peer, but there is no limit on multiple peers sending the same content concurrently.
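The proposed limit could look roughly like the sketch below: track how many transfers are in flight per content id and reject further offers for the same id once a cap is reached. This is a hypothetical illustration in Python (the limit value, names, and bookkeeping are assumptions, not Fluffy code):

```python
from collections import defaultdict

# Hypothetical cap on simultaneous transfers of the same content id.
MAX_CONCURRENT_OFFERS_PER_CONTENT_ID = 2

# content id -> number of in-flight transfers
in_flight: "defaultdict[bytes, int]" = defaultdict(int)

def try_accept_offer(content_id: bytes) -> bool:
    """Reject an offer when too many transfers for the same content id
    are already in flight; otherwise register the new transfer."""
    if in_flight[content_id] >= MAX_CONCURRENT_OFFERS_PER_CONTENT_ID:
        return False
    in_flight[content_id] += 1
    return True

def offer_finished(content_id: bytes) -> None:
    """Release the slot once a transfer completes (or fails)."""
    in_flight[content_id] -= 1
    if in_flight[content_id] <= 0:
        del in_flight[content_id]
```

This would complement the per-peer limits by bounding the total work caused by many peers offering the same content at once.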
@kdeme Here are some metrics showing the usage of the offer cache when running a local testnet with 16 nodes, gossiping content using 20 workers for around 10 minutes. All content is sent to one of the fluffy instances (node 2), which gossips the content to its peers.

Node 1: [metrics screenshot]

Node 2 (the node the portal bridge is connected to): [metrics screenshot]

Node 3: [metrics screenshot]

Node 4: [metrics screenshot]

It appears that at least 50% of the content lookups hit the cache during the gossip process. The other benefit of this change is DoS protection: rejecting recently offered content is much faster and does not require a database lookup.
Reduces load on the database during the gossip process, as it is very common to receive multiple copies of the same content from different peers while content is gossiped through the network.
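The fast path this enables can be sketched as follows: consult the in-memory offer cache first, and only fall back to the database when the content id has not been seen recently. This is an illustrative Python sketch using plain sets; the function and parameter names are assumptions, not Fluffy's actual API:

```python
def should_store_offered_content(content_id: bytes,
                                 offer_cache: set,
                                 db: set) -> bool:
    """Return True when an offered piece of content should be accepted
    and stored. Checks the cheap in-memory cache before the database."""
    if content_id in offer_cache:
        # Recently offered: reject without touching the database at all.
        return False
    if content_id in db:
        # Already stored: remember the id so repeat offers skip the DB
        # lookup next time (hypothetical bookkeeping for illustration).
        offer_cache.add(content_id)
        return False
    return True
```

Each cache hit replaces a database lookup, which is where the reduced load during gossip comes from.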
Changes in this PR: