
Pod SecurityContext Changes in 1.12.0-rc2 do not have backwards compatible defaults #69647

Closed
MarcPow opened this issue Oct 11, 2018 · 6 comments · Fixed by #69694
Assignees
jessfraz
Labels
kind/bug Categorizes issue or PR as related to a bug. needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one.

Comments

@MarcPow
Contributor

MarcPow commented Oct 11, 2018

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:

Manifests that are compatible with 1.11.3 no longer work when deployed against a newly provisioned 1.12.0-rc2 cluster.

2018-10-11T01:00:01.3680048Z error: error validating: error validating data: field spec.template.spec.containers[0].securityContext.procMount for v1.SecurityContext is required; if you choose to ignore these errors, turn validation off with --validate=false

What you expected to happen:

1.11.3-compatible manifests should work against 1.12.0-rc2.

How to reproduce it (as minimally and precisely as possible):

kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
      name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
      - image: traefik
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
          hostPort: 5001
        - name: admin
          containerPort: 8080
        securityContext:
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version):

Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.9", GitCommit:"57729ea3d9a1b75f3fc7bbbadc597ba707d47c8a", GitTreeState:"clean", BuildDate:"2018-06-29T01:07:01Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

kubectl delivered by VSTS Build.

2018-10-11T00:59:54.7599385Z ##[section]Starting: Deploy Traefik Ingress
2018-10-11T00:59:54.7604992Z ==============================================================================
2018-10-11T00:59:54.7606132Z Task : Deploy to Kubernetes
2018-10-11T00:59:54.7606242Z Description : Deploy, configure, update your Kubernetes cluster in Azure Container Service by running kubectl commands.
2018-10-11T00:59:54.7606322Z Version : 0.1.31
2018-10-11T00:59:54.7606416Z Author : Microsoft Corporation
2018-10-11T00:59:54.7606512Z Help : More Information
2018-10-11T00:59:54.7606590Z ==============================================================================

  • Cloud provider or hardware configuration: ACS-Engine 0.22.4. Windows 1803.
@k8s-ci-robot k8s-ci-robot added kind/bug Categorizes issue or PR as related to a bug. needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Oct 11, 2018
@k8s-ci-robot
Contributor

@MarcPow: There are no sig labels on this issue. Please add a sig label by either:

  1. mentioning a sig: @kubernetes/sig-<group-name>-<group-suffix>
    e.g., @kubernetes/sig-contributor-experience-<group-suffix> to notify the contributor experience sig, OR

  2. specifying the label manually: /sig <group-name>
    e.g., /sig scalability to apply the sig/scalability label

Note: Method 1 will trigger an email to the group. See the group list.
The <group-suffix> in method 1 has to be replaced with one of these: bugs, feature-requests, pr-reviews, test-failures, proposals.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@MarcPow
Contributor Author

MarcPow commented Oct 11, 2018

My expectation would be that this would default to "securityContext.procMount: default"

@jessfraz - This appears to be related to 30dcca6.

I notice that you've put this behind a feature gate. I'll admit that I'm relatively new to Kubernetes. My expectation would be that new fields would default - but is there some convention that says for features of a certain size, or of a certain complexity, we default them to on, and then force backwards compatibility via feature disable? It's not my intent to buck standard approach here.
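For illustration, a minimal sketch of what the same container securityContext would look like with the field supplied explicitly is below; the spelling of the value (Default) is an assumption and is not confirmed anywhere in this thread:

securityContext:
  procMount: Default   # assumed value; supplied only to satisfy client-side validation
  capabilities:
    drop:
    - ALL
    add:
    - NET_BIND_SERVICE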

@jessfraz
Contributor

jessfraz commented Oct 11, 2018 via email

@MarcPow
Contributor Author

MarcPow commented Oct 11, 2018

It's not defaulting at all. In the presence of this older manifest definition:

securityContext:
  capabilities:
    drop:
    - ALL
    add:
    - NET_BIND_SERVICE
args:
- --api
- --kubernetes
- --logLevel=INFO

It's yelling at me that procMount is required.

2018-10-11T01:00:01.3680048Z error: error validating: error validating data: field spec.template.spec.containers[0].securityContext.procMount for v1.SecurityContext is required; if you choose to ignore these errors, turn validation off with --validate=false

@jessfraz
Contributor

ah ok I will do a fix

@jessfraz jessfraz self-assigned this Oct 11, 2018
@liggitt liggitt changed the title PodSecurityPolicy Changes in 1.12.0-rc2 do not have backwards compatible defaults Pod SecurityContext Changes in 1.12.0-rc2 do not have backwards compatible defaults Oct 12, 2018
@alexellis

I'm also getting this error for the OpenFaaS helm chart with K8s 1.12. It's not clear from this thread what I need to do to fix this. Can anyone make a suggestion? cc @LucasRoesler @stefanprodan
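Going only by what appears earlier in this thread, the stop-gaps until the fix from #69694 ships seem to be either adding procMount to each container securityContext (as sketched above) or disabling client-side validation, which the error message itself suggests. A hedged example of the latter, with a hypothetical manifest file name:

kubectl apply -f traefik-daemonset.yaml --validate=false

--validate=false only skips kubectl's client-side schema check; the API server still validates the object on admission.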
