
buffer: Box the inner service's response future #586

Merged
olix0r merged 1 commit into main from ver/buffer-opaque on Jun 30, 2020

Conversation

@olix0r
Member

@olix0r olix0r commented Jun 29, 2020

The buffer middleware is parameterized on the inner service's response
future, which "leaks" the inner service's type to the outer components
(which slows down compilations).

This changes the signature to instead be parameterized by the response
type, boxing the inner service's response future to avoid exposing its
type signature.

Relates to linkerd/linkerd2#4676

@olix0r olix0r requested a review from a team June 29, 2020 17:35
@olix0r
Member Author

olix0r commented Jun 29, 2020

This appears to shave ~4mins off of the integration test build in CI.

@hawkw
Contributor

hawkw commented Jun 29, 2020

Are we able to run the performance tests with this change? My assumption is that the added Box is not going to have a noticeable impact, but it would be nice to have a measurement regardless --- could serve as a useful data point for similar changes in the future.

@olix0r
Member Author

olix0r commented Jun 29, 2020

@hawkw I'll run some tests.

My hunch is that no single Box is going to influence the overall numbers substantially. If so, my general preference would be to bias for our ability to make changes to the proxy rather than trying to shave off µs until we know where/why they matter.

@hawkw
Contributor

hawkw commented Jun 29, 2020

@olix0r sure, i totally agree --- i'm just thinking it would be nice for that hunch to be validated by numbers. it's not a blocker for merging this!

@hawkw hawkw left a comment
Contributor

lgtm

 type Response = Rsp;
 type Error = Error;
-type Future = ResponseFuture<F>;
+type Future = Pin<Box<dyn Future<Output = Result<Rsp, Error>> + Send + 'static>>;
Contributor

i'd be curious to see if switching to nightly and returning impl Future resulted in the same build time improvement or not --- but we can do that separately. it would be good to know eventually, since that could help guide future changes...

@olix0r olix0r force-pushed the ver/buffer-opaque branch from f4612a5 to d64c96e on June 29, 2020 23:29
@hawkw hawkw requested a review from a team June 30, 2020 17:28
@kleimkuhler kleimkuhler left a comment
Contributor

👍

@olix0r olix0r merged commit 9f8b3fc into main Jun 30, 2020
@olix0r olix0r deleted the ver/buffer-opaque branch June 30, 2020 18:50
hawkw added a commit that referenced this pull request Jul 1, 2020
The use of a buffer hides some of the type complexity of the inner
service (all of the type complexity, after #586). However, Linkerd
currently only uses buffers at the `Service` level, not at the
`MakeService` level. Since `MakeService`/`NewService`s tend to be
generic over both inner `MakeService` types _and_ over the produced
`Service`s (as well as potentially two future types), they probably have
much longer concrete types than `Service`s in most cases.

As @olix0r suggested, we can replace the `Lock` that's currently used to
ensure exclusive access to `Cache`s (which live at the `MakeService`
layer) with `Buffer`s. The `Lock` is currently essentially doing something
quite similar to adding a `Buffer` anyway. Introducing buffers around
all caches erases the inner type for everything layered around the
cache, which should make overall type length much shorter.

This seems to have a fairly noticeable impact on build time and memory
use (see linkerd/linkerd2#4676). On my machine, running `make docker` on
#586 gets SIGKILLed (presumably by the OOM killer) after 7m53s. After
this change, `make docker` completes successfully after 1m44s.

Also, the `linkerd2-lock` crate can be removed, as it was used only by 
`Cache`.

To work around increased tail latencies when the buffer fills up, this
branch also sets the default buffer capacity to be equal to the 
default concurrency limit, rather than 10. This is fine, since the buffer
capacity isn't _actually_ what enforces a bound on proxy memory use.
The spawned tasks waiting on a full buffer are still sitting on the Tokio
executor, and it's actually the in-flight limit that stops us from accepting
any more requests when we have too many in flight.

Depends on #586.
Fixes linkerd/linkerd2#4676.

Signed-off-by: Eliza Weisman <[email protected]>
olix0r added a commit to linkerd/linkerd2 that referenced this pull request Jul 2, 2020
This release increases the default buffer size to match the proxy's
in-flight request limit. This reduces contention in overload (especially
high-concurrency) situations, substantially reducing tail latency.

---

* update test-support clients and servers to be natively async (linkerd/linkerd2-proxy#580)
* Print build diagnostics in docker (linkerd/linkerd2-proxy#583)
* update test controllers to std::future/Tonic; remove threads (linkerd/linkerd2-proxy#585)
* buffer: Box the inner service's response future (linkerd/linkerd2-proxy#586)
* Eliminate Bind & Listen traits (linkerd/linkerd2-proxy#584)
* cache: replace Lock with Buffer (linkerd/linkerd2-proxy#587)
cpretzer pushed a commit to linkerd/linkerd2 that referenced this pull request Jul 2, 2020