Problem
Sending wantlists to peers reuses streams when possible: https://github.com/ipfs/go-bitswap/blob/2b51297a0b68198b6c4bcacdd8868a6df8dcd182/network/ipfs_impl.go#L105-L108
However, responding to messages involves opening a new stream for each response: https://github.com/ipfs/go-bitswap/blob/2b51297a0b68198b6c4bcacdd8868a6df8dcd182/network/ipfs_impl.go#L326-L334
The main issues with this AFAICT are:
- Reduced utility of backpressure: instead of messages getting queued up and handled one at a time, the multiplexer has to deal with all the responses together
- Extra resource consumption from opening new streams. Opening a stream isn't terribly expensive, but it's a waste
- Multiplexer window resetting:
  - Similar to protocols like TCP, multiplexers like Yamux have window sizes governing how many bytes they're willing to send at a time without acknowledgement
  - The Yamux multiplexer has a maximum window size of 16MiB https://github.com/libp2p/go-libp2p-yamux/blob/90fad39d3b2797e48d3a400d077f3cf282ed25d1/transport.go#L22
  - but an initial window of only 256KiB https://github.com/libp2p/go-yamux/blob/e7b88075cff76b8c8372efdc93884a0d1c3cfe9d/const.go#L118
  - Since each response opens a fresh stream, the window is reset to the initial size for every block
  - An example of where this would be brutal is sequentially (i.e. strictly verifiably) downloading a linear DAG of 1MiB blocks, since each block requires multiple round trips to grow the window each time
  - This also affects default go-ipfs UnixFS DAG transfers, since the default chunker creates 256KiB blocks (with protobuf wrapping pushing their size above 256KiB) and extra Bitswap message padding increases the response size further above 256KiB, meaning each block requires two round trips to transfer
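To make the cost concrete, here is a rough model of how many round trips a block takes on a fresh stream. This is an assumption-laden sketch, not yamux's actual algorithm (real window growth is driven by the receiver's reads and update frames); it just assumes the sender may transmit up to the current window per round trip, with the window doubling afterward up to the maximum:

```go
package main

import "fmt"

// roundTrips estimates the round trips needed to send blockSize bytes over a
// stream, assuming up to one window of data per round trip and a window that
// doubles (capped at maxWindow) after each one. Simplified model, not the
// real yamux flow-control algorithm.
func roundTrips(blockSize, initialWindow, maxWindow int) int {
	window, sent, trips := initialWindow, 0, 0
	for sent < blockSize {
		sent += window
		trips++
		if window < maxWindow {
			window *= 2
			if window > maxWindow {
				window = maxWindow
			}
		}
	}
	return trips
}

func main() {
	// A 1MiB block on a fresh stream starting at the 256KiB initial window.
	fmt.Println(roundTrips(1<<20, 256<<10, 16<<20)) // 3 under this model
	// The same block on a long-lived stream whose window has already grown.
	fmt.Println(roundTrips(1<<20, 16<<20, 16<<20)) // 1
}
```

Under this model, a long-lived stream pays the window-growth cost once, while a stream-per-block server pays it on every block.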
Discovered while doing some performance benchmarking in ipfs/test-plans#4.
Proposed Solution
It turns out that the client-side code is more than happy to continuously process multiple messages per stream:
https://github.com/ipfs/go-bitswap/blob/dbfc6a1d986e18402d8301c7535a0c39b2461cb7/network/ipfs_impl.go#L406
Some basic inspection suggests the code supporting this has been around since ancient times (2016, ipfs/go-bitswap@58d1750) and so should be safe to rely on.
Reusing the sending stream for responses is not supported and would be a protocol-breaking change, so we need two long-lived streams per peer.
We should be able to reuse or copy most of the stream-reuse code from the streamMessageSender used by the wantlists for sending responses. However, some care will be needed to ensure backpressure is handled properly when the stream is backed up, so we don't pull tasks off the task queue and load blocks from disk when we have nowhere to send them.
- Advantages:
  - Non-protocol-breaking change
  - On the surface seems not too bad to implement (although the backpressure code might end up being more involved)
  - Solves the problem
- Disadvantages:
  - Clients can't safely assume this is happening everywhere until all servers have upgraded
  - Without a protocol-break signal, or gross things like leveraging UserAgent strings, clients can't prioritize fetching from updated peers (although, to be fair, there are no codepaths for this anyway, so probably not much is lost here, at least in go-bitswap)