Parallel pod management with maxUnavailable set should not ignore minReadySeconds #112307
Closed
Labels
kind/bug: Categorizes issue or PR as related to a bug.
priority/important-longterm: Important over the long term, but may not be staffed and/or may need multiple releases to complete.
priority/important-soon: Must be staffed and worked on either currently, or very soon, ideally in time for the next release.
sig/apps: Categorizes an issue or PR as relevant to SIG Apps.
triage/accepted: Indicates an issue or PR is ready to be actively worked on.
What happened?
The StatefulSet's spec.template was updated.
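The original spec is not reproduced here, so below is a minimal sketch of a StatefulSet consistent with this report: 5 replicas, Parallel pod management, maxUnavailable: 1, a non-zero minReadySeconds, and a startup probe. The name, image, probe endpoint, and the exact minReadySeconds value are placeholder assumptions, not the reporter's actual values:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example-app            # placeholder name
spec:
  replicas: 5                  # availableReplicas dropped from 5 to 0 during the rollout
  serviceName: example-app     # placeholder headless Service name
  podManagementPolicy: Parallel
  minReadySeconds: 30          # assumed value; any non-zero setting is enough to show the problem
  selector:
    matchLabels:
      app: example-app
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1        # explicitly set to 1, per the report
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: app
          image: registry.example.com/app:v1   # placeholder image
          startupProbe:                        # the next pod was updated once this probe passed
            httpGet:
              path: /healthz                   # placeholder endpoint
              port: 8080
            periodSeconds: 5
            failureThreshold: 30
```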
Kubernetes updated all pods one by one in quick succession: the next pod was updated as soon as the previous one had passed its startup probe. status.availableReplicas dropped from 5 to 0 over the course of the rollout (which took ≈3 minutes).

What did you expect to happen?
I expected Kubernetes to update the pods one by one while taking minReadySeconds into account: the next pod should only have been updated minReadySeconds seconds after the previous one passed its startup probe. status.availableReplicas should not have dropped below 4 at any point, since maxUnavailable is explicitly set to 1.

How can we reproduce it (as minimally and precisely as possible)?
Create a StatefulSet with the spec above and update its spec.template field to trigger a rollout. The MaxUnavailableStatefulSet and StatefulSetMinReadySeconds feature gates must be enabled; one way to enable them is sketched below.
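The report does not say how the in-house cluster is configured, so the following is only a sketch assuming a kubeadm-managed cluster. The StatefulSet controller runs in kube-controller-manager, and the maxUnavailable field must also be accepted by kube-apiserver, so both components get the feature gates:

```yaml
# kubeadm ClusterConfiguration sketch (assumption: the cluster is kubeadm-managed).
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    feature-gates: "MaxUnavailableStatefulSet=true,StatefulSetMinReadySeconds=true"
controllerManager:
  extraArgs:
    feature-gates: "MaxUnavailableStatefulSet=true,StatefulSetMinReadySeconds=true"
```

With the gates enabled, any change under spec.template (for example, bumping the container image tag) triggers a rolling update that should reproduce the behaviour described above.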
Anything else we need to know?
No response
Kubernetes version
Cloud provider
In-house Kubernetes setup.

OS version
Install tools
Container runtime (CRI) and version (if applicable)
Related plugins (CNI, CSI, ...) and versions (if applicable)