[WIP] Server-side Apply: Track ownership for scale subresource for Deployments #83294
Conversation
Hi @julianvmodesto. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test` on its own line. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: julianvmodesto

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
This is a good start, but I think it might be difficult to implement the necessary logic to update the managedFields for scale here, since the code for parsing and encoding it is defined in k8s.io/apiserver/pkg/endpoints/handlers/fieldmanager/internal. I had an idea for how to get around that; let me know if you think it makes sense.

The idea is to create a "scale fieldmanager" (one that only understands how to update the scale field's ownership of an object) in the k8s.io/apiserver/.../fieldmanager package, and add it to the scale request scope, so the scale subresource handler automatically passes it into ScaleREST.Update as a transformer for the updatedObjectInfo.

To wire this from the subresource to the main resource, autoscaling/v1 Scale already has a metadata field which includes managedFields. We could use that to adjust the managedFields: modify scaleFromDeployment to populate that field, let the transformer we passed into the updatedObjectInfo earlier update the ownership for us, and then make sure we also copy the value from the scale's managedFields back into the deployment object.

The "scale fieldmanager" would operate on objects of type autoscaling/v1 Scale, but with managedFields representing field management on the underlying resource. It then just needs to know which fieldpath in the underlying resource corresponds to Scale.spec.replicas. In a Deployment's case it would be the same fieldpath, but I think CRDs can define the path themselves, so we would have to construct the scale fieldmanager with that path as an argument. A sketch of the idea follows below.
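To make the shape of that idea concrete, here is a minimal sketch. It is not the actual fieldmanager API: the constructor name NewScaleUpdateTransformer and its arguments are invented for illustration, and the ownership rewrite is deliberately naive.

```go
// A minimal sketch of the "scale fieldmanager" idea. NewScaleUpdateTransformer
// is a hypothetical name; nothing here is the real fieldmanager package API.
package fieldmanager

import (
	"context"
	"fmt"

	autoscalingv1 "k8s.io/api/autoscaling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apiserver/pkg/registry/rest"
)

// NewScaleUpdateTransformer returns a rest.TransformFunc suitable for the
// scale subresource's updatedObjectInfo chain. It only knows how to record
// that `manager` owns the field path backing Scale.spec.replicas in the
// underlying resource. For a Deployment that path is .spec.replicas, but a
// CRD can declare its own, so the FieldsV1 set and the underlying resource's
// API version are passed in as arguments.
func NewScaleUpdateTransformer(manager, apiVersion string, replicasFieldsV1 []byte) rest.TransformFunc {
	return func(ctx context.Context, newObj, oldObj runtime.Object) (runtime.Object, error) {
		scale, ok := newObj.(*autoscalingv1.Scale)
		if !ok {
			return nil, fmt.Errorf("expected *autoscalingv1.Scale, got %T", newObj)
		}
		now := metav1.Now()
		// Naive for the sketch: append an entry for this manager. A real
		// implementation would also remove the replicas path from every
		// other manager's field set so ownership actually transfers.
		scale.ManagedFields = append(scale.ManagedFields, metav1.ManagedFieldsEntry{
			Manager:    manager,
			Operation:  metav1.ManagedFieldsOperationUpdate,
			APIVersion: apiVersion, // version of the underlying resource, e.g. "apps/v1"
			Time:       &now,
			FieldsType: "FieldsV1",
			FieldsV1:   &metav1.FieldsV1{Raw: replicasFieldsV1},
		})
		return scale, nil
	}
}
```

For a Deployment, replicasFieldsV1 would be `{"f:spec":{"f:replicas":{}}}`; scaleFromDeployment would first copy the Deployment's managedFields onto the Scale, and the update path would copy the adjusted managedFields back onto the Deployment after the transformer runs.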
Another option is to pass a Deployment FieldManager into the sub-update as a transformer here.
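For either variant, the natural wiring point is where ScaleREST.Update receives its rest.UpdatedObjectInfo: an extra transformer can be layered on with rest.WrapUpdatedObjectInfo. The transformer and manager name below are the hypothetical ones from the sketch above, shown only to illustrate the hookup.

```go
// Hypothetical wiring inside ScaleREST.Update: layer the ownership
// transformer on top of the incoming UpdatedObjectInfo so it runs on the
// Scale object before it is mapped back onto the Deployment.
objInfo = rest.WrapUpdatedObjectInfo(objInfo,
	NewScaleUpdateTransformer("kubectl-scale", "apps/v1",
		[]byte(`{"f:spec":{"f:replicas":{}}}`)))
```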
@jennybuckley oh nice! Thank you, this makes sense! I'll give it a shot |
@jennybuckley would your suggestion mean that we would have to have a fieldManager for every subresource?
What type of PR is this?
/kind bug
/wg apply
What this PR does / why we need it:
Track the owner for `.spec.replicas` for the Deployment scale subresource.

Not sure if this will actually work, or if we'll have to fix scale somehow.
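For illustration, assuming the tracking works end to end, scaling a Deployment through the scale subresource would leave a managedFields entry on the Deployment scoped to just the replicas field, along these lines (manager name illustrative):

```yaml
managedFields:
- manager: kubectl        # whoever issued the scale update; illustrative
  operation: Update
  apiVersion: apps/v1
  fieldsType: FieldsV1
  fieldsV1:
    f:spec:
      f:replicas: {}
```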
Which issue(s) this PR fixes:
Fixes #82046
Does this PR introduce a user-facing change?: