concurrency-limit: Share a limit across Services #429
Merged
Conversation
Tower's `ConcurrencyLimit` middleware instantiates a new semaphore for each service created by the layer. However, in an upcoming proxy change, I'd like to avoid having a single `Service` instance that all requests must flow through and, instead, wrap a service created for each connection. In order to enforce a global concurrency limit, the middleware needs to be changed to share a semaphore across all services created by the layer. This presents no functional change to how the concurrency limit is applied today.
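The change described above can be sketched as follows: the layer owns a single `Arc<Semaphore>` and clones it into every service it creates, so all services draw permits from one shared pool. This is a minimal, self-contained illustration — the `Semaphore`, `ConcurrencyLimitLayer`, and `ConcurrencyLimit` types here are stand-ins for the real middleware (which uses an async semaphore), not the actual proxy code.

```rust
use std::sync::{
    atomic::{AtomicUsize, Ordering},
    Arc,
};

// Stand-in for an async semaphore (e.g. tokio::sync::Semaphore in the
// real proxy). `try_acquire`/`release` are illustrative names.
struct Semaphore {
    permits: AtomicUsize,
}

impl Semaphore {
    fn new(permits: usize) -> Self {
        Self {
            permits: AtomicUsize::new(permits),
        }
    }

    fn try_acquire(&self) -> bool {
        let mut cur = self.permits.load(Ordering::Acquire);
        loop {
            if cur == 0 {
                return false;
            }
            match self.permits.compare_exchange(
                cur,
                cur - 1,
                Ordering::AcqRel,
                Ordering::Acquire,
            ) {
                Ok(_) => return true,
                Err(actual) => cur = actual,
            }
        }
    }

    fn release(&self) {
        self.permits.fetch_add(1, Ordering::Release);
    }
}

// The layer holds the semaphore; every service it creates clones the
// Arc, so all services share one global pool of permits.
#[derive(Clone)]
struct ConcurrencyLimitLayer {
    semaphore: Arc<Semaphore>,
}

impl ConcurrencyLimitLayer {
    fn new(max: usize) -> Self {
        Self {
            semaphore: Arc::new(Semaphore::new(max)),
        }
    }

    fn layer(&self) -> ConcurrencyLimit {
        ConcurrencyLimit {
            semaphore: self.semaphore.clone(),
        }
    }
}

struct ConcurrencyLimit {
    semaphore: Arc<Semaphore>,
}

fn main() {
    let layer = ConcurrencyLimitLayer::new(2);
    let a = layer.layer();
    let b = layer.layer();
    // Permits are shared: two acquires across *distinct* services
    // exhaust the global limit of 2.
    assert!(a.semaphore.try_acquire());
    assert!(b.semaphore.try_acquire());
    assert!(!a.semaphore.try_acquire());
    // Releasing through one service frees a permit for the other.
    b.semaphore.release();
    assert!(a.semaphore.try_acquire());
}
```

In the original tower middleware, by contrast, `new` on the layer would stamp out a fresh semaphore per service, so each per-connection service would get its own independent limit.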
hawkw
approved these changes
Feb 19, 2020
@@ -0,0 +1,172 @@
//! A layer that limits the number of in-flight requests to the inner service.
//!
//! Modified from tower-concurrency-limit so that the Layer holds a semaphore
Contributor
is this a change we want to get into tower eventually? why/why not?
Member, Author
IDK. Both behaviors are useful. In general, I think tower's impl would be more flexible if the service constructor took a Semaphore, though.
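The author's suggestion — a service constructor that takes the semaphore as an argument — would let the caller choose either behavior. A hypothetical sketch (the types and names here are illustrative, not tower's actual API):

```rust
use std::sync::Arc;

// Stand-in for an async semaphore type (e.g. tokio::sync::Semaphore).
struct Semaphore;

// If the service constructor takes the semaphore, the *caller* decides
// whether limits are shared or per-service.
#[allow(dead_code)]
struct ConcurrencyLimit<S> {
    inner: S,
    semaphore: Arc<Semaphore>,
}

impl<S> ConcurrencyLimit<S> {
    fn new(inner: S, semaphore: Arc<Semaphore>) -> Self {
        Self { inner, semaphore }
    }
}

fn main() {
    // Shared: both services draw permits from one pool.
    let shared = Arc::new(Semaphore);
    let a = ConcurrencyLimit::new("svc-a", shared.clone());
    let b = ConcurrencyLimit::new("svc-b", shared);
    assert!(Arc::ptr_eq(&a.semaphore, &b.semaphore));

    // Per-service: this service gets its own independent pool.
    let c = ConcurrencyLimit::new("svc-c", Arc::new(Semaphore));
    assert!(!Arc::ptr_eq(&a.semaphore, &c.semaphore));
}
```

With this shape, both the original per-service behavior and the shared behavior in this PR become one-line choices at the call site.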
olix0r added a commit to linkerd/linkerd2 that referenced this pull request on Feb 19, 2020

This release includes the results from continued profiling & performance analysis. In addition to modifying internals to prevent unwarranted memory growth, we've introduced new metrics to aid in debugging and diagnostics: a new `request_errors_total` metric exposes the number of requests that receive synthesized responses due to proxy errors; and a suite of `stack_*` metrics expose proxy internals that can help us identify unexpected behavior.

---

* trace: update `tracing-subscriber` dependency to 0.2.1 (linkerd/linkerd2-proxy#426)
* Reimplement the Lock middleware with tokio::sync (linkerd/linkerd2-proxy#427)
* Add the request_errors_total metric (linkerd/linkerd2-proxy#417)
* Expose the number of service instances in the proxy (linkerd/linkerd2-proxy#428)
* concurrency-limit: Share a limit across Services (linkerd/linkerd2-proxy#429)
* profiling: add benchmark and profiling scripts (linkerd/linkerd2-proxy#406)
* http-box: Box HTTP payloads via middleware (linkerd/linkerd2-proxy#430)
* lock: Generalize to protect a guarded value (linkerd/linkerd2-proxy#431)
This is basically copy/pasted from https://github.com/tower-rs/tower/blob/v0.1.x/tower-limit/src/concurrency/service.rs
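Whether the semaphore is per-service or shared, tower-style concurrency limits rely on the same permit-as-guard pattern: a permit is acquired before the request is dispatched and released only when the response completes. A hedged sketch of that pattern, with the permit released via `Drop` (names are illustrative; the linked tower source uses an async semaphore rather than this atomic stand-in):

```rust
use std::sync::{
    atomic::{AtomicUsize, Ordering},
    Arc,
};

struct Semaphore {
    available: AtomicUsize,
}

impl Semaphore {
    // Associated fn rather than a method so the returned Permit can
    // hold its own clone of the Arc.
    fn try_acquire(sem: &Arc<Semaphore>) -> Option<Permit> {
        let mut cur = sem.available.load(Ordering::Acquire);
        while cur > 0 {
            match sem.available.compare_exchange(
                cur,
                cur - 1,
                Ordering::AcqRel,
                Ordering::Acquire,
            ) {
                Ok(_) => return Some(Permit { sem: sem.clone() }),
                Err(actual) => cur = actual,
            }
        }
        None
    }
}

struct Permit {
    sem: Arc<Semaphore>,
}

impl Drop for Permit {
    fn drop(&mut self) {
        // Returning the permit makes room for the next request, even
        // if the response future is dropped early.
        self.sem.available.fetch_add(1, Ordering::Release);
    }
}

fn main() {
    let sem = Arc::new(Semaphore {
        available: AtomicUsize::new(1),
    });
    let permit = Semaphore::try_acquire(&sem).expect("one permit free");
    assert!(Semaphore::try_acquire(&sem).is_none()); // limit reached
    drop(permit); // response finished; permit returned
    assert!(Semaphore::try_acquire(&sem).is_some());
}
```

Tying the release to `Drop` is what makes the limit robust against cancelled requests: the permit is returned no matter how the response future ends.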