[ipfs/go-bitswap] Bitswap servers should reuse streams for sending responses #80

@aschmahmann

Description

Problem

Sending wantlists to peers reuses streams when possible: https://github.com/ipfs/go-bitswap/blob/2b51297a0b68198b6c4bcacdd8868a6df8dcd182/network/ipfs_impl.go#L105-L108

Responding to messages, however, opens a new stream each time: https://github.com/ipfs/go-bitswap/blob/2b51297a0b68198b6c4bcacdd8868a6df8dcd182/network/ipfs_impl.go#L326-L334
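The contrast can be sketched with a toy model (this is an illustrative simplification, not the real libp2p API; `Host` and `Stream` here are stand-ins):

```go
package main

// Stream stands in for a libp2p network stream.
type Stream struct{ id int }

// Host counts how many streams it has opened.
type Host struct{ opened int }

func (h *Host) NewStream() *Stream {
	h.opened++
	return &Stream{id: h.opened}
}

// sendResponsePerStream mirrors the current server behavior:
// every response opens (and later closes) a fresh stream.
func sendResponsePerStream(h *Host, msgs []string) int {
	for range msgs {
		s := h.NewStream()
		_ = s // write one message, then close the stream
	}
	return h.opened
}

// sendResponsesReused mirrors the proposed behavior:
// one long-lived stream carries all responses to a peer.
func sendResponsesReused(h *Host, msgs []string) int {
	s := h.NewStream()
	for range msgs {
		_ = s // write each message on the same stream
	}
	return h.opened
}
```

With N responses the first pattern opens N streams while the second opens one, which is where the wasted work and the multiplexer pressure come from.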

The main issues with this AFAICT are:

  1. Reduced utility of backpressure: instead of messages being queued up and handled one at a time, the multiplexer has to deal with all the responses at once
  2. Extra resource consumption from opening new streams. Opening a stream isn't terribly expensive, but it's wasted work
  3. Multiplexer window resetting

Discovered while doing some performance benchmarking in ipfs/test-plans#4.

Proposed Solution

It turns out that client-side code is already happy to process multiple messages per stream in a loop:
https://github.com/ipfs/go-bitswap/blob/dbfc6a1d986e18402d8301c7535a0c39b2461cb7/network/ipfs_impl.go#L406

Some basic inspection suggests the code supporting this has been around since ancient times (2016, ipfs/go-bitswap@58d1750), so it should be safe to rely on.

Reusing the sending stream for responses is not supported and would be a protocol-breaking change, so we need two long-lived streams per peer: one for wantlists and one for responses.

We should be able to reuse or copy most of the stream reuse code from the streamMessageSender used by the wantlists and apply it to sending responses. However, some care is needed to handle backpressure when the stream is backed up, so that we don't pull tasks off the task queue and load blocks from disk when we have nowhere to send them.
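A minimal sketch of that backpressure handling, assuming a bounded per-peer queue in front of the long-lived response stream (names here are illustrative, not from go-bitswap): when the stream is backed up the queue fills, and the server loop declines to pull more tasks rather than loading blocks with nowhere to send them.

```go
package main

// peerSender fronts the single long-lived response stream to one peer.
type peerSender struct {
	queue chan []byte // bounded; fullness signals a backed-up stream
}

func newPeerSender(depth int) *peerSender {
	return &peerSender{queue: make(chan []byte, depth)}
}

// TrySend enqueues a response if there is room. A false return tells
// the server loop to leave the task on the task queue (and skip
// loading the block from disk) until the stream drains.
func (p *peerSender) TrySend(msg []byte) bool {
	select {
	case p.queue <- msg:
		return true
	default:
		return false
	}
}

// drain would normally run as a goroutine writing p.queue onto the
// stream; here it pops one message to simulate the stream making
// progress.
func (p *peerSender) drain() ([]byte, bool) {
	select {
	case m := <-p.queue:
		return m, true
	default:
		return nil, false
	}
}
```

The key design choice is that the queue's fullness, not the stream write itself, is what the task-scheduling code observes, so expensive block loads only happen once there is somewhere for the result to go.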

  • Advantages:
    • Not a protocol-breaking change
    • Seems straightforward to implement on the surface (although the backpressure handling might end up being more involved)
    • Solves the problem
  • Disadvantages:
    • Clients can't safely assume this behavior everywhere until all servers have upgraded
    • Without a protocol version bump, or hacks like inspecting UserAgent strings, clients can't prioritize fetching from updated peers (although, to be fair, there are no codepaths for this anyway, so probably not much is lost here, at least in go-bitswap)

Metadata


Labels

P1 — High: Likely tackled by core team if no one steps up
kind/enhancement — A net-new feature or improvement to an existing feature
