Resources override #362
Conversation
[APPROVALNOTIFIER] This PR is APPROVED.

This pull-request has been approved by: jcrossley3

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Details
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
knative-prow-robot left a comment
@jcrossley3: 1 warning.
Details
In response to this:
Proposed Changes

- Introduce `resources` field in the `KnativeServing` spec, which is a mapping of deployment names to a list of `ResourceRequirements`. This will override the containers' resources specified in the deployment's spec.
- If not specified in `KnativeServing`, the resource requests will be removed from the deployment's spec. The effect of this is that all of the resource limits specified in the embedded manifest will be removed by default. This is arguably the correct behavior anyway, as the limits are platform-specific and often lead to OOM errors that frustrate new users.

To prevent their removal, the following `KnativeServing` example would be required:

```yaml
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
spec:
  resources:
    activator:
    - requests:
        cpu: 300m
        memory: 60Mi
      limits:
        cpu: 1000m
        memory: 600Mi
    autoscaler:
    - requests:
        cpu: 30m
        memory: 40Mi
      limits:
        cpu: 300m
        memory: 400Mi
    controller:
    - requests:
        cpu: 100m
        memory: 100Mi
      limits:
        cpu: 1000m
        memory: 1000Mi
    webhook:
    - requests:
        cpu: 20m
        memory: 20Mi
      limits:
        cpu: 200m
        memory: 200Mi
    autoscaler-hpa:
    - requests:
        cpu: 30m
        memory: 40Mi
      limits:
        cpu: 300m
        memory: 400Mi
    networking-istio:
    - requests:
        cpu: 30m
        memory: 40Mi
      limits:
        cpu: 300m
        memory: 400Mi
```

Release Note
By default, no resource limits will be specified for any knative deployment containers. You must specify them in the `KnativeServing` custom resource using the new `resources` field.
It's a map of deployment names to a `[]v1.ResourceRequirements`. The array corresponds to the order of the containers in the deployment spec.
Added a dummy container to the webhook just to show that capability, even if as yet unused.
Force-pushed from 0e39182 to 2601e2b
This removes any assumption of container order in the deploy spec, with the minor trade-off that container names are expected to be unique within the manifest.
I think this bit should be discussed and implemented upstream first 🤔. It doesn't feel right to have the defaults of both installation modes differ that vastly.
markusthoemmes left a comment
Code LGTM modulo the requests thing.
I'm going to change the logic to not delete the upstream config by default, so...
We now merge the resource overrides with the existing specs, so that the upstream defaults remain intact if there are no overrides in the KnativeServing CR. We also converted the map to an array to abide by k8s conventions.
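For illustration, a sketch of what the array-based form could look like after this change. The `container` key used to select the target is an assumption for this example, not a field name taken from this PR:

```yaml
spec:
  resources:
  - container: controller   # assumed selector key; matches by container name
    requests:
      cpu: 100m
      memory: 100Mi
    limits:
      cpu: 1000m
      memory: 1000Mi
```

Entries omitted from the array leave the corresponding containers untouched, which is what preserves the upstream defaults under the new merge behavior.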
The following is the coverage report on the affected files.
/hold cancel
/lgtm |
Fixes #161
Related #302 #329
Proposed Changes

- Introduce `resources` field in the `KnativeServing` spec, which is a list of `ResourceRequirementsOverrides`, enabling users to override the resources consumed by any of the containers listed in the embedded manifest.

This exposes a "knob" in the `KnativeServing` CR that would enable users to work around issues like knative/serving#7195

Release Note