
Introduce a SkipDetect layer to preempt detection #620

Merged

olix0r merged 10 commits into main from ver/accept-split on Aug 5, 2020
Conversation


@olix0r olix0r commented Aug 5, 2020

This change introduces a new `SkipDetect` layer that configures whether
protocol detection should be attempted. This module will later be
replaced/augmented by discovery.

Furthermore, this change eliminates the `Accept` trait. Instead of
modeling the accept stack as a simple service whose response is a
future, we now model the stack as, effectively, a
`MakeService<Meta, TcpStream>`. This is intended to support caching of
the service that handles the TCP stream (i.e., to hold discovery
responses).

This change also removes the `Detect` trait; it is no longer useful.

Detection timeouts have been moved from a dedicated layer into the
detection modules.

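The shape described above can be sketched in plain Rust. This is a dependency-free, illustrative model, not the actual linkerd2-proxy types: the `Meta`, `Protocol`, and `SkipDetect` names and the port-set heuristic are assumptions made for the example. The key idea it shows is the accept stack acting like a `MakeService<Meta, TcpStream>`: given connection metadata (available before any bytes are read), it decides whether to skip detection and forward the stream opaquely, or fall through to the detection stack.

```rust
use std::collections::HashSet;
use std::net::SocketAddr;

/// Connection metadata known at accept time, before reading the stream.
/// (Hypothetical; stands in for the proxy's per-connection metadata.)
struct Meta {
    target_addr: SocketAddr,
}

/// What the accept stack decides to do with a connection.
#[derive(Debug, PartialEq)]
enum Protocol {
    /// Forward raw TCP without peeking at the stream.
    Opaque,
    /// Attempt protocol detection (HTTP/1, HTTP/2, ...).
    Detect,
}

/// A `SkipDetect`-like layer: consults a configured set of ports for
/// which detection should be skipped; everything else is detected.
struct SkipDetect {
    skip_ports: HashSet<u16>,
}

impl SkipDetect {
    /// Acts like `MakeService<Meta, TcpStream>`: given metadata,
    /// produce the decision (in the real stack, a service) that will
    /// handle the TCP stream. Because the input is metadata rather
    /// than the stream itself, the result can be cached per target.
    fn make(&self, meta: &Meta) -> Protocol {
        if self.skip_ports.contains(&meta.target_addr.port()) {
            Protocol::Opaque
        } else {
            Protocol::Detect
        }
    }
}

fn main() {
    // e.g. skip detection for a MySQL port, detect everything else.
    let skip = SkipDetect { skip_ports: [3306].into_iter().collect() };

    let mysql = Meta { target_addr: "10.0.0.1:3306".parse().unwrap() };
    let http = Meta { target_addr: "10.0.0.1:8080".parse().unwrap() };

    assert_eq!(skip.make(&mysql), Protocol::Opaque);
    assert_eq!(skip.make(&http), Protocol::Detect);
}
```

In the real stack, `make` would return a cached service holding discovery responses rather than a bare enum, but the control flow is the same: the skip decision happens before detection is ever attempted.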
@olix0r olix0r requested a review from a team August 5, 2020 17:29

hawkw commented Aug 5, 2020

Looks like the tap test failed, but that may be flakiness? Restarting it.


olix0r commented Aug 5, 2020

@hawkw tests pass locally...

@hawkw hawkw left a comment


Overall, this change seems good. However, I noticed that the tracing span is used incorrectly in the accept loop, which will produce garbled traces; we should fix that.

Beyond that, I had a few other minor nits and questions, but no other real blockers.


@hawkw hawkw left a comment


looks good to me now!


@kleimkuhler kleimkuhler left a comment


Looks good! I have a non-blocking comment.

@olix0r olix0r merged commit a233e1a into main Aug 5, 2020
@olix0r olix0r deleted the ver/accept-split branch August 5, 2020 20:53
olix0r added a commit to linkerd/linkerd2 that referenced this pull request Aug 5, 2020
This release enables a multi-threaded runtime. Previously, the proxy
would only ever use a single thread for data plane processing; now, when
the proxy is allocated more than 1 CPU share, the proxy allocates a
thread per available CPU. This has shown substantial latency
improvements in benchmarks, especially when the proxy is serving
requests for many concurrent connections.

---

* Add a `multicore` feature flag (linkerd/linkerd2-proxy#611)
* Add `multicore` to default features (linkerd/linkerd2-proxy#612)
* admin: add an endpoint to dump spawned Tokio tasks (linkerd/linkerd2-proxy#595)
* trace: roll `tracing` and `tracing-subscriber` dependencies (linkerd/linkerd2-proxy#615)
* stack: Add NewService::into_make_service (linkerd/linkerd2-proxy#618)
* trace: tweak tracing & test support for the multithreaded runtime (linkerd/linkerd2-proxy#616)
* Make FailFast cloneable (linkerd/linkerd2-proxy#617)
* Move HTTP detection & server into linkerd2_proxy_http (linkerd/linkerd2-proxy#619)
* Mark tap integration tests as flakey (linkerd/linkerd2-proxy#621)
* Introduce a SkipDetect layer to preempt detection (linkerd/linkerd2-proxy#620)
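The release note above describes allocating one data-plane thread per available CPU, with a single-thread fallback. A minimal sketch of that sizing logic, using only the standard library (the real proxy uses a Tokio multi-threaded runtime; `worker_threads` here is a hypothetical helper, not a linkerd2-proxy API):

```rust
use std::num::NonZeroUsize;
use std::thread;

/// Pick a data-plane thread count from the CPUs available to this
/// process, falling back to a single thread if that can't be
/// determined — mirroring "a thread per available CPU" above.
fn worker_threads() -> usize {
    thread::available_parallelism()
        .map(NonZeroUsize::get)
        .unwrap_or(1)
}

fn main() {
    let n = worker_threads();
    assert!(n >= 1);

    // Spawn one worker per available CPU; in a real runtime each
    // would drive an event loop. Here each just returns its index.
    let handles: Vec<_> = (0..n).map(|i| thread::spawn(move || i)).collect();
    let ids: Vec<usize> = handles.into_iter().map(|h| h.join().unwrap()).collect();
    assert_eq!(ids.len(), n);
    println!("spawned {n} worker thread(s)");
}
```

Note that `available_parallelism` respects cgroup CPU limits on Linux, which is what makes "more than 1 CPU share" translate into more worker threads in a container.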
adleong pushed a commit to linkerd/linkerd2 that referenced this pull request Aug 6, 2020