Revert "policy: Replace versioned with part.Map" #43237
jrajahalme merged 1 commit into cilium:main
Conversation
|
Commits 0d76100, 89f30d6, 17f077a, ea7b381 do not match "(?m)^Signed-off-by:". Please follow instructions provided in https://docs.cilium.io/en/stable/contributing/development/contributing_guide/#developer-s-certificate-of-origin |
|
/test |
|
Locally I'm able to repro the following error with latest main; with this revert I'm not able to reproduce - so it's clearly some kind of race here. I think the reason for the CI failure was that some other test took a long time, causing the Go testing tooling to notice that the selectorcache is leaking the handleUserNotifications goroutine. It looks like it's only 0d76100 causing issues here, so I'll revert only that one and add the sign-off message 😄 |
Force-pushed from ea7b381 to 88838be
|
/test |
|
Hitting #42922. Will rerun |
|
/ci-clustermesh |
|
cc @jrajahalme |
|
/ci-e2e-upgrade |
Force-pushed from 88838be to 5ff22c4
jrajahalme
left a comment
OK with the revert, thanks for the repro, will investigate ASAP.
|
Thanks! We should probably also deal with the "leaked" handleUserNotifications goroutine separately. When running lots of tests, each selector cache creates a separate leaked goroutine, which causes a lot of noise if tests time out - and it's very hard to understand what the actual problem is. |
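For the separate fix, the usual Go pattern is to give the cache a stop channel and a WaitGroup so the notification goroutine can be shut down. A minimal sketch below - the names (`cache`, `newCache`, `Close`) are illustrative, not Cilium's actual selector cache API:

```go
package main

import (
	"fmt"
	"sync"
)

// Minimal sketch, not Cilium's actual API: a cache whose constructor spawns
// a notification goroutine must also own a way to stop it, otherwise every
// cache created in a test leaks one goroutine for the test binary's lifetime.
type cache struct {
	events chan string
	stop   chan struct{}
	wg     sync.WaitGroup
}

func newCache() *cache {
	c := &cache{
		events: make(chan string, 16),
		stop:   make(chan struct{}),
	}
	c.wg.Add(1)
	go c.handleUserNotifications()
	return c
}

// handleUserNotifications drains events until told to stop.
func (c *cache) handleUserNotifications() {
	defer c.wg.Done()
	for {
		select {
		case <-c.events: // deliver to subscribers in the real code
		case <-c.stop:
			return
		}
	}
}

// Close signals the goroutine to exit and waits until it has.
func (c *cache) Close() {
	close(c.stop)
	c.wg.Wait()
}

func main() {
	c := newCache()
	c.events <- "policy-update"
	c.Close()
	fmt.Println("closed cleanly")
}
```

With this shape, tests can `defer c.Close()` and the goroutine count returns to baseline instead of growing with every cache created.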
|
/test |
|
Another instance of #42922. Will retry |
Looks like most of the failures in |
This reverts commit 4c4c7c9.
Signed-off-by: Odin Ugedal <[email protected]>
Head branch was pushed to by a user without write access
Force-pushed from 6b94d0b to 3e1ee89
|
/test |
|
/ci-e2e-upgrade |
|
Merge keeps failing due to some unrelated infrastructure issue on image build workflows, trying again: |
|
Looks like some github.com issues. All tests are passing now. |
Reverts the last commit from #42992.
Seeing pretty consistent failures of the TestTransactionalUpdate test on main now. Looks like this can be bisected back to this change. Locally I'm able to repro the following error with latest main;
With this revert I'm not able to reproduce so far - so it's clearly some kind of race here, but it's not fully clear to me why. I think we should revert for now until we fully understand what's going on. It's not deterministically failing, and it only does so very rarely - but there has to be a race somewhere.
I think the reason for the CI failure in #43231 was that some other test took a long time, causing the Go testing tooling to notice that the selectorcache is leaking the goroutine
handleUserNotifications that's never cleaned up. So if all tests take more time than the test timeout, it can hit that. Reproducing that test failure is easy locally, since you can do something like go test -v -run "TestTransactionalUpdate" -count=100000 -timeout=1s. This is also probably something we should fix at some point.
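To show why the timeout dump gets so noisy, here is a standalone sketch of the leak shape (names are illustrative only, not the real selector cache): each cache spawns a goroutine blocked on a channel nobody ever closes, so it can never exit, and with `-count=100000` every run adds one more goroutine that ends up in the `-timeout` stack dump.

```go
package main

import (
	"fmt"
	"runtime"
)

// leakyCache mimics the problem shape (illustrative names): the background
// goroutine blocks receiving from a channel that is never closed, so it can
// never exit and is reported in the stack dump when `go test` times out.
type leakyCache struct {
	events chan string
}

func newLeakyCache() *leakyCache {
	c := &leakyCache{events: make(chan string)}
	go func() {
		for range c.events { // blocks forever: events is never closed
		}
	}()
	return c
}

// leakedAfter creates n caches and reports how many goroutines they left behind.
func leakedAfter(n int) int {
	before := runtime.NumGoroutine()
	for i := 0; i < n; i++ {
		newLeakyCache()
	}
	return runtime.NumGoroutine() - before
}

func main() {
	// One leaked goroutine per cache; repeated test runs keep stacking them
	// up until the timeout expiry dumps them all at once.
	fmt.Println(leakedAfter(100))
}
```

The same counting trick (comparing runtime.NumGoroutine before and after a test) is a cheap way to catch this class of leak without waiting for a timeout.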