add header-based routing using the outbound policy API #1992
Conversation
It looks like in the "inbound" API, there is metadata for each authorization and each httproute. This makes sense because they correspond to k8s resources (e.g. to AuthorizationPolicy and HttpRoute). For "outbound", we attach HttpRoute resources to Services directly. We could have a top level metadata field which describes the Service resource, but this doesn't seem super useful to me.
hmm, yeah, i suppose the service would already be recorded in other labels..
Food for thought: If I understand correctly, in the proxy we do a profile lookup in order to get a logical addr (i.e. Service DNS name), but then throw that away and instead do a client policy lookup using the orig dst. The controller uses that target ip address and does a lookup in a Service index to find the Service name with that IP. This is redundant. Would it make sense to move the client policy lookup earlier in the proxy: before the lookup of the logical addr? I think this helps us move toward a world where the service profile API isn't used at all when a client policy exists.
Depends on #1992 This branch adds a very rough implementation of traffic splitting based on client policy backends. Client policy traffic splitting is implemented by moving the existing traffic split implementation into the `linkerd-client-policy` crate and changing it to operate on the `Backend` type. The `Target` structs stored in a ServiceProfile are replaced with the `Backend` type, so that the same traffic split layer can work with a list of backends provided by either a client policy or a ServiceProfile. The `Logical` target type has new impls of `Param<Vec<Backend>>` and `Param<BackendStream>` that choose whether to return backends from the client policy discovery or the ServiceProfile, based on whether a client policy was discovered for that logical destination. This could use some polish, and some cases aren't yet implemented (client policy backends may be either a named address _or_ a direct endpoint address, which is not supported by ServiceProfiles, so we'll need to add additional switching code to handle that case). However, this should be a decent working steel thread.
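To make the backend-selection behavior described above concrete, here is a minimal, self-contained sketch. The `Backend`, `ClientPolicy`, `Profile`, and `Logical` types below are simplified stand-ins for the proxy's real types (there is no `Param` machinery here), so treat it as an illustration of the fallback logic rather than the actual implementation:

```rust
// Hypothetical, simplified types standing in for the proxy's real ones.
#[derive(Clone, Debug)]
struct Backend {
    addr: String,
    weight: u32,
}

#[derive(Clone, Debug)]
struct ClientPolicy {
    backends: Vec<Backend>,
}

#[derive(Clone, Debug)]
struct Profile {
    targets: Vec<Backend>,
}

/// A stand-in for the `Logical` target type: it may have a discovered client
/// policy, and always has a profile-derived backend list to fall back on.
#[derive(Clone, Debug)]
struct Logical {
    policy: Option<ClientPolicy>,
    profile: Profile,
}

impl Logical {
    /// Prefer client-policy backends when a policy was discovered for this
    /// logical destination; otherwise use the ServiceProfile's targets.
    fn backends(&self) -> Vec<Backend> {
        match &self.policy {
            Some(policy) => policy.backends.clone(),
            None => self.profile.targets.clone(),
        }
    }
}
```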
Signed-off-by: Eliza Weisman <[email protected]>
these are served on separate ports
This branch adds a rough draft of a router for selecting client policy routes. Right now, once a route is selected, it just gets logged as a demo that route matching works. A follow-up PR will implement HTTPRoute traffic splitting once a route is matched. Depends on #2021
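As a rough illustration of the route-selection-and-logging behavior described above, here is a hedged sketch. `RoutePolicy` and `select_route` are hypothetical simplifications (the real router is a stack module operating on the proxy's target types), but the shape of the matching and the debug logging is the same idea:

```rust
use http::Request;

// Hypothetical, simplified route representation.
#[derive(Debug, Clone)]
struct RoutePolicy {
    name: String,
    /// Header name/value pairs that must all be present for the route to match.
    headers: Vec<(String, String)>,
}

/// Select the first route whose header matches are all satisfied by `req`.
/// Once a route is selected it is only logged, mirroring the demo behavior
/// described above; traffic splitting comes in a follow-up.
fn select_route<'r, B>(routes: &'r [RoutePolicy], req: &Request<B>) -> Option<&'r RoutePolicy> {
    let route = routes.iter().find(|route| {
        route.headers.iter().all(|(name, value)| {
            req.headers()
                .get(name.as_str())
                .map(|v| v.as_bytes() == value.as_bytes())
                .unwrap_or(false)
        })
    });
    if let Some(route) = route {
        tracing::debug!(route = %route.name, "selected client policy route");
    }
    route
}
```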
updated the PR description to reflect the current state of this branch
impl<T, N> NewService<T> for NewServiceRouter<N>
where
    T: Param<Option<Receiver>> + Clone,
    N: NewService<(Option<RoutePolicy>, T)> + Clone,
{
Do we really need to handle these as optional? We should use the stack to enforce that these are present; and all requests must have routes that go through this service, so I'd rather not promulgate optionality to the inner stack.
I know this is different from how it's handled in service profiles; but I think we want to drive towards a state where there is a route description on all requests.
the route is no longer optional as of fc32de7; the policy rx is still an Option, but i could change this to operate on a separate target type (there's already a TODO about this)...
otherwise, we still build a policy stack. perhaps this should change, and we should have a default route always, that builds a profile stack...but that seems Complicateder...
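For illustration only, here is a sketch of the non-optional shape being discussed. The `Param`/`NewService` traits and the `Receiver`, `RoutePolicy`, and `NewServiceRouter` types are local stand-ins that mirror the snippet above, not the real `linkerd-stack` items:

```rust
// Self-contained stand-ins for the stack traits used in the snippet above;
// these mirror the shape of `Param`/`NewService` but are not the real traits.
trait Param<P> {
    fn param(&self) -> P;
}

trait NewService<T> {
    type Service;
    fn new_service(&self, target: T) -> Self::Service;
}

#[derive(Clone, Debug)]
struct Receiver;

#[derive(Clone, Debug, Default)]
struct RoutePolicy;

#[derive(Clone)]
struct NewServiceRouter<N>(N);

// With the route required rather than optional, the inner stack is built for
// `(RoutePolicy, T)` and never has to handle a missing route.
impl<T, N> NewService<T> for NewServiceRouter<N>
where
    T: Param<Receiver> + Clone,
    N: NewService<(RoutePolicy, T)> + Clone,
{
    type Service = N::Service;

    fn new_service(&self, target: T) -> Self::Service {
        // In the real router the route would be selected via the policy
        // receiver; here we just illustrate the non-optional target shape.
        let _rx: Receiver = target.param();
        self.0.new_service((RoutePolicy::default(), target))
    }
}
```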
This reverts commit b2b0bd1. That was a work-in-progress change that I didn't mean to commit, my bad!
// for now, the client-policy route stack is just a fixed traffic split
let policy = concrete
    .check_new_service::<(ConcreteAddr, Logical), _>()
    .push_map_target(|(concrete, PolicyRoute { route: _, logical }): (ConcreteAddr, PolicyRoute)| {
        (concrete, logical)
    })
    .push(policy::split::layer())
I may be wrong, but my reading of this is that the policy split creates an inner stack for every backend of every route. Does this mean that multiple routes to a single backend service do not share load balancers/connections? Or where are the load balancers cached if not here?
Yes, that's correct; we should probably add a caching layer here so that each concrete address's client stack is shared across multiple routes that share that backend.
When we add more forms of per-route policy (e.g. different retry policies, HTTPRoute filters, and per-route metrics), though, we'll want to ensure that the portions of the stack that may differ based on which route is used sit before the client stack cache.
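A rough sketch of the caching idea discussed here, assuming a hypothetical `CachedNewClient` keyed by concrete address (the real proxy would use its existing caching layers rather than a hand-rolled map):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Hypothetical stand-in for the proxy's concrete (backend) address type.
#[derive(Clone, Debug, Hash, PartialEq, Eq)]
struct ConcreteAddr(String);

/// Builds an inner client service per concrete address the first time it is
/// requested, and reuses it afterwards so that multiple routes sharing a
/// backend also share its balancer/connections.
#[derive(Clone)]
struct CachedNewClient<S> {
    build: Arc<dyn Fn(&ConcreteAddr) -> S + Send + Sync>,
    cache: Arc<Mutex<HashMap<ConcreteAddr, S>>>,
}

impl<S: Clone> CachedNewClient<S> {
    fn new(build: impl Fn(&ConcreteAddr) -> S + Send + Sync + 'static) -> Self {
        Self {
            build: Arc::new(build),
            cache: Arc::new(Mutex::new(HashMap::new())),
        }
    }

    fn new_service(&self, addr: ConcreteAddr) -> S {
        let mut cache = self.cache.lock().unwrap();
        cache
            .entry(addr)
            .or_insert_with_key(|addr| (self.build)(addr))
            .clone()
    }
}
```

As noted above, any per-route policy would still have to be layered outside such a cache, since it must not be shared across routes that happen to use the same backend.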
// Any per-route policy has to go between the split layer and
// the concrete stack cache, since we _don't_ want to share
// client policy across routes that have the same backend!
.push_map_target(
    |(concrete, PolicyRoute { route: _, logical }): (ConcreteAddr, PolicyRoute)| {
        (concrete, logical)
    },
)
.push(policy::split::layer())
.push_on_service(
    svc::layers()
        .push(svc::layer::mk(svc::SpawnReady::new))
        .push(
            rt.metrics
                .proxy
                .stack
                .layer(stack_labels("http", "HTTPRoute")),
        )
        .push(svc::FailFast::layer("HTTPRoute", dispatch_timeout))
        .push_spawn_buffer(buffer_capacity),
)
.check_new_clone::<PolicyRoute>()
.push_map_target(|(route, logical)| {
    tracing::debug!(?route);
    PolicyRoute { route, logical }
})
.push(policy::http::NewServiceRouter::layer())
If the concrete stack is cached, can we omit the cache/buffer around the split so that every route gets its own split?
If so, perhaps it makes sense to pull it all into one layer that is Logical-outside and Concrete-inside... We'll need to consider how/where per-route functionality is added for things like retries/timeouts.
This branch implements header-based routing using the new client policy
API added in linkerd/linkerd2-proxy-api#165.
Overview
In particular, this branch does the following:
Client policy discovery:
- Adds a `linkerd-client-policy` crate with internal types representing the client policies and code for converting protobuf policy messages to those representations
- Adds a `policy` module in `linkerd-app-outbound` with an API client that is wired into the `push_discover` stack. This means that profile discovery and client policy discovery are coupled: client policy discovery is dropped when no profile is resolved or if the profile is a direct endpoint (a sketch of this follows the list)
- Propagates original destination addresses to new parts of the outbound stack
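A minimal sketch of the discovery coupling described in the list above, assuming hypothetical `lookup_profile`/`lookup_policy` functions in place of the proxy's real discovery clients:

```rust
// Hypothetical stand-ins for profile and client policy discovery results.
#[derive(Clone, Debug)]
struct Profile {
    is_direct_endpoint: bool,
}

#[derive(Clone, Debug)]
struct ClientPolicy;

async fn lookup_profile(_addr: &str) -> Option<Profile> {
    // In the real proxy this is a destination/profile API call.
    None
}

async fn lookup_policy(_addr: &str) -> Option<ClientPolicy> {
    // In the real proxy this is an outbound policy API call.
    None
}

/// Client policy discovery is dropped when no profile is resolved, or when
/// the profile describes a direct endpoint.
async fn discover(addr: &str) -> (Option<Profile>, Option<ClientPolicy>) {
    let profile = lookup_profile(addr).await;
    let policy = match &profile {
        Some(p) if !p.is_direct_endpoint => lookup_policy(addr).await,
        _ => None,
    };
    (profile, policy)
}
```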
Client policy HTTPRoute matching:
- Adds code in `linkerd-client-policy` for recognizing when a request matches an HTTPRoute defined in a client policy
- Adds a `push_http_logical` stack that is switched to if a client policy is present, rather than the current Service Profile-based HTTP logical stack (a sketch of this switch follows the list)
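A minimal sketch of the logical-stack switch described in the list above; the enum and the `build_http_logical` function are illustrative stand-ins, not the proxy's real stack builders:

```rust
// Hypothetical stand-ins for the discovered client policy and the profile.
#[derive(Clone, Debug)]
struct ClientPolicy;

#[derive(Clone, Debug)]
struct Profile;

// Which HTTP logical stack handles the request's logical destination.
#[derive(Debug)]
enum HttpLogical {
    Policy(ClientPolicy),
    Profile(Profile),
}

/// A discovered client policy takes precedence over the Service
/// Profile-based HTTP logical stack.
fn build_http_logical(policy: Option<ClientPolicy>, profile: Profile) -> HttpLogical {
    match policy {
        Some(policy) => HttpLogical::Policy(policy),
        None => HttpLogical::Profile(profile),
    }
}
```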
Client policy traffic splitting:
- Moves the traffic split code from `linkerd-service-profiles` into `linkerd-client-policy`, and updates `linkerd-service-profiles` to depend on it from `client-policy`
- Client policy traffic splits are not dynamic (updated from discovery), because updates to the set of routes in a client policy will be handled by the policy router
- Builds a traffic split for each HTTPRoute rule in the client policy stack (a weighted-selection sketch follows the list)
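A minimal sketch of weighted backend selection of the kind a per-rule traffic split performs, using the `rand` crate (0.8-style API); `Backend` and `pick_backend` are simplified stand-ins for the proxy's split layer:

```rust
use rand::Rng;

// Hypothetical, simplified backend representation.
#[derive(Clone, Debug)]
struct Backend {
    addr: String,
    weight: u32,
}

/// Pick a backend with probability proportional to its weight, or `None` if
/// all weights are zero.
fn pick_backend<'a, R: Rng>(backends: &'a [Backend], rng: &mut R) -> Option<&'a Backend> {
    let total: u64 = backends.iter().map(|b| u64::from(b.weight)).sum();
    if total == 0 {
        return None;
    }
    let mut roll = rng.gen_range(0..total);
    for backend in backends {
        let weight = u64::from(backend.weight);
        if roll < weight {
            return Some(backend);
        }
        roll -= weight;
    }
    None
}
```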
Not Currently Implemented
- backendRefs that are a direct endpoint rather than a logical name are currently ignored
- … can be set)
- Profile and client policy discovery are currently performed sequentially, but could be parallelized (with some rewriting of the current profile discovery code); a concurrent-lookup sketch follows the list
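A minimal sketch of the parallelization suggested above, assuming hypothetical lookup functions and using `tokio::join!` to run both lookups concurrently:

```rust
// Hypothetical stand-ins for the two discovery results.
#[derive(Clone, Debug)]
struct Profile;

#[derive(Clone, Debug)]
struct ClientPolicy;

async fn lookup_profile(_addr: &str) -> Option<Profile> {
    None
}

async fn lookup_policy(_addr: &str) -> Option<ClientPolicy> {
    None
}

/// Issue the profile and client policy lookups concurrently instead of
/// sequentially; the caller can still drop the policy result when no profile
/// is resolved.
async fn discover(addr: &str) -> (Option<Profile>, Option<ClientPolicy>) {
    let (profile, policy) = tokio::join!(lookup_profile(addr), lookup_policy(addr));
    (profile, policy)
}
```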