proxy fails inbound requests when --ha control plane is degraded #4624

@listenerme

Description

Bug Report

What is the issue?

The linkerd-proxy sidecar attached to application containers intermittently returns 503 errors.

How can it be reproduced?

It does not happen every time; it occurs during a node shutdown or cluster upgrade. Evicted pods are recreated on another node and remain at (1/2) in READY status. The same applies to the Linkerd control-plane pods, whose own linkerd-proxy containers fail with 503 errors:

linkerd-controller-64cf8696f7-d79tx 1/2 Running 1 26m
linkerd-destination-57847f975b-8sj7v 1/2 Running 0 26m
linkerd-proxy-injector-5689db6474-mzxjg 1/2 Running 1 19m
linkerd-sp-validator-84d86bf697-w5746 1/2 Running 1 26m
linkerd-tap-799498f9f8-kqzcv 1/2 Running 1 26m
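A minimal sketch of reproducing this by simulating a node shutdown with a cordon-and-drain, assuming kubectl access to the cluster. The node name below is a placeholder taken from the probe events; substitute a node that hosts control-plane pods.

```shell
# Placeholder node name; replace with an actual node hosting Linkerd pods.
NODE=aks-devpool-XXXXXXX-vmss000000

# Simulate a node shutdown: mark the node unschedulable, then evict its pods.
kubectl cordon "$NODE"
kubectl drain "$NODE" --ignore-daemonsets --delete-emptydir-data

# Watch the evicted pods get rescheduled; affected pods stay at 1/2 READY.
kubectl get pods -n linkerd -o wide --watch
```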

Logs, error output, etc

Warning Unhealthy 2m58s kubelet, aks-devpool-XXXXXXX-vmss000000 Readiness probe failed: Get http://10.X.X.101:4191/ready: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Warning Unhealthy 29s (x15 over 2m49s) kubelet, aks-devpool-XXXXXXX-vmss000000 Readiness probe failed: HTTP probe failed with statuscode: 503
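To see whether the proxy itself is reporting unready, one can query its readiness endpoint directly (port 4191, the same port the failing probe above targets) and inspect the proxy logs of an affected pod. This is a diagnostic sketch; the pod name is a placeholder copied from the pod listing above.

```shell
# Placeholder pod name; replace with any affected 1/2 READY pod.
POD=linkerd-destination-57847f975b-8sj7v

# Forward the proxy admin port locally and hit /ready (503 = unready).
kubectl port-forward -n linkerd "$POD" 4191:4191 &
curl -sv http://localhost:4191/ready

# Check the proxy's own logs for errors around the eviction window.
kubectl logs -n linkerd "$POD" -c linkerd-proxy --tail=50
```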

linkerd check output

your output here ...

Environment

  • Kubernetes Version:
  • Cluster Environment: (GKE, AKS, kops, ...)
  • Host OS:
  • Linkerd version:

Possible solution

Additional context
