helm: move admission out of crds folder #1548

Closed
zirain wants to merge 3 commits into envoyproxy:main from zirain:improve-helm

Conversation

@zirain
Member

@zirain zirain commented Jun 19, 2023

What this PR does / why we need it:

This PR introduces a new tool named gtwapi-manifests, which parses manifests from the upstream release.

Which issue(s) this PR fixes:

Fixes #1506

Alternative: move Gateway API project resources out of EG's chart and let users manage those resources themselves.

@zirain zirain marked this pull request as ready for review June 19, 2023 02:51
@zirain zirain requested a review from a team as a code owner June 19, 2023 02:51
Member Author

find a way to keep this

@codecov

codecov bot commented Jun 19, 2023

Codecov Report

Merging #1548 (6a61072) into main (69c767f) will increase coverage by 0.13%.
The diff coverage is n/a.

@@            Coverage Diff             @@
##             main    #1548      +/-   ##
==========================================
+ Coverage   61.65%   61.78%   +0.13%     
==========================================
  Files          81       81              
  Lines       12131    12131              
==========================================
+ Hits         7479     7495      +16     
+ Misses       4184     4171      -13     
+ Partials      468      465       -3     

see 2 files with indirect coverage changes

Member

auto read current gwapi version?


hejianpeng added 3 commits June 19, 2023 22:36
Signed-off-by: hejianpeng <[email protected]>
Signed-off-by: hejianpeng <[email protected]>
Signed-off-by: hejianpeng <[email protected]>
@Alice-Lilith
Member

IMO we shouldn't move this stuff into the main chart directory since removing these items on a helm uninstall is not desirable and cleaning them up manually when you really want to fully uninstall EG from your cluster is pretty easy. In multitenancy situations where you have multiple instances of EG per-cluster, you wouldn't want these resources getting removed when you helm uninstall only one instance so I think it makes sense for them to remain in the crds directory.

@zirain
Member Author

zirain commented Jun 28, 2023

there are two concerns:

  1. installing CRDs has a constant timeout (60s), which causes flakiness; see helm: move admission out of crds folder #1548
  2. IIRC, it's not good practice to put Namespace resources in a helm chart

what about moving these out of the EG chart?

@Alice-Lilith
Member

Alice-Lilith commented Jul 5, 2023

IIRC, it's not good practice to put Namespace resources in a helm chart

Generally speaking, IMO it's better to helm install example --namespace foo --create-namespace so that users know what namespace everything is going into and have control over it, unless you're using the default ns. Putting anything other than CRDs inside Helm's crds directory is probably "bad practice", but I can think of many legitimate reasons why you might need to. I think the reason we have a Namespace object is the same reason we have this stuff in the crds directory: so that Helm won't try to install them again if you're running multiple instances per cluster. Likewise, you won't want these objects/namespace removed when doing a helm uninstall.

what about moving these out of the EG chart?

That would probably be better practice than any of the above helm-related options, and I'd be a +1 for that. Or moving the CRDs into their own helm chart along with the admission stuff, so that you can do one install of the admission server/CRDs per cluster and then the main EG install/chart can be done multiple times per cluster in a clean way if needed. I'm personally a fan of that approach wrt handling CRD installations. I don't think it's that big a deal, but IMO helm's handling of the crds directory causes more problems than it solves.

@arkodg
Contributor

arkodg commented Jul 5, 2023

Will creating a sub chart for gateway-api help here ?

@zirain
Member Author

zirain commented Jul 6, 2023

Will creating a sub chart for gateway-api help here ?

I think that chart should be provided by gateway-api project.

@arkodg
Contributor

arkodg commented Jul 6, 2023

Will creating a sub chart for gateway-api help here ?

I think that chart should be provided by gateway-api project.

theoretically I agree with you :)

@arkodg
Contributor

arkodg commented Jul 12, 2023

@AliceProxy and I discussed this issue in the community meeting today and we were +1 on moving the gateway-api resources into a separate chart with chart values such as

gateway-api:
  enabled: true

or

gateway-api:
  crds:
    enabled: true
  admissionServer:
    enabled: true

this would imply not adding the CRDs in a /crds dir, so they can be easily opted in and out during install and uninstall
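Under the values layout sketched above, install-time toggling might look like the following. This is a hypothetical sketch: the sub-chart, chart paths, and value names do not exist yet and are assumptions based on the proposal.

```shell
# One-time, cluster-scoped install of the Gateway API pieces
# (hypothetical chart path and value names, matching the layout above)
helm install gateway-api ./charts/gateway-api \
  --set crds.enabled=true \
  --set admissionServer.enabled=true

# Per-instance EG installs could then opt out of the shared resources,
# so a helm uninstall of one instance leaves the CRDs in place
helm install eg-tenant-a ./charts/gateway-helm \
  --namespace tenant-a --create-namespace \
  --set gateway-api.enabled=false
```

Because the CRDs would live in regular templates rather than a /crds dir, they would be upgraded and removed through the normal Helm lifecycle instead of Helm's install-only CRD handling.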

@Alice-Lilith
Member

@AliceProxy and I discussed this issue in the community meeting today and we were +1 on moving the gateway-api resources into a separate chart with chart values such as

gateway-api:
  enabled: true

or

gateway-api:
  crds:
    enabled: true
  admissionServer:
    enabled: true

this would imply not adding the CRDs in a /crds dir, so they can be easily opted in and out during install and uninstall

+1, I would prefer two charts (one for the CRDs + admission server and one for the controller). However, a sub-chart with the installation of the admission server and CRDs being opt-in would be an acceptable solution as well; that IMO is better than the current way we're handling this.

@zirain
Member Author

zirain commented Jul 27, 2023

I'd like to close this now; this will be handled in v0.6.



Development

Successfully merging this pull request may close these issues.

flaky: make kube-deploy fails

4 participants