fix: preserve policy status across multiple gatewayclasses #6153
zhaohuabing wants to merge 5 commits into envoyproxy:main
Conversation
Codecov Report

❌ Your patch status has failed because the patch coverage (0.00%) is below the target coverage (60.00%). You can increase the patch coverage or adjust the target coverage.

```
@@            Coverage Diff             @@
##             main    #6153      +/-   ##
==========================================
- Coverage   70.80%   70.69%   -0.12%
==========================================
  Files         220      220
  Lines       37132    37189      +57
==========================================
- Hits        26291    26289       -2
- Misses       9300     9358      +58
- Partials     1541     1542       +1
```
```go
ancestorsFromOtherGatewayClasses := make([]gwapiv1a2.PolicyAncestorStatus, 0)
for _, ancestor := range policyStatus.Ancestors {
	belongToCurrentGatewayClass := false
	if ancestor.AncestorRef.Group == nil || *ancestor.AncestorRef.Group == "gateway.networking.k8s.io" {
```
This assumes all xPolicy types use Gateway as the status ancestor, as recommended by the Gateway API spec.
We may need to clean up the xPolicy status in a follow-up PR for cases where other resource types are used as the policy status ancestor.
This fix takes a different approach from #4587 because #4587 has a loophole: the status merging logic can't delete an old status ancestor after a target is removed. I plan to raise a follow-up PR to fix that using the same strategy if we agree on this one. We still have another issue: if the selector changes and no targets are selected anymore, the existing status can't be deleted, because the current translator ignores an xPolicy without targets and won't update its status. This will be addressed in a separate PR.
@zhaohuabing don't xRoute statuses have the same issue? I believe we solved those in the provider status updater layer?
@arkodg Yes, we fixed a similar issue for xRoute in #4587, but I prefer this solution because the xRoute fix has a loophole; more in this comment: #6153 (comment)
hey @zhaohuabing, thanks for sharing the past history :) this looks like two issues
Agree.
I have no strong opinion on this. Things we may need to consider regarding Gateway vs GatewayClass as the policy status ancestor:
With Gateway as the ancestor, we cannot create unique status conditions, because the conditions may vary across Gateways linked to different GatewayClasses.
Thought about it again: this can't be solved by ObservedGeneration.

```yaml
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  generation: 11
  name: test
spec:
  connection:
    bufferLimit: 100M
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: gw
status:
  ancestors:
  - ancestorRef:
      group: gateway.networking.k8s.io
      kind: Gateway
      name: gw
      namespace: default
    conditions:
    - lastTransitionTime: "2025-06-25T00:55:28Z"
      message: Policy has been accepted.
      observedGeneration: 11
      reason: Accepted
      status: "True"
      type: Accepted
    controllerName: gateway.envoyproxy.io/gatewayclass-controller
```
@zhaohuabing the same is true for xRoute and linked resources (backendRef).
You mean issue #774? I'm planning to work on that after fixing the xPolicy status.
fixes: #6125
Release note: yes
This is a pretty major change on the policy status, so let's hold off for a bit before cherry-picking it into 1.4 and 1.3, just to be safe.