KEP-2579: PodSecurity beta / 1.23 updates #2895
Conversation
@@ -609,39 +609,23 @@ pod_security_evaluations_total
The metric will use the following labels:
cc @dashpole (listed in the KEP as the instrumentation reviewer) and @logicalhan (who had questions about cardinality in the implementing PR)
lgtm
@@ -630,22 +630,21 @@ The metric will use the following labels:
The following audit annotations will be added:
made coherent with the current implementation and the audit tracking issue in kubernetes/kubernetes#103923
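For concreteness, a hypothetical sketch of how such annotations might appear on an audit event. The annotation keys and the message format here are assumptions based on this discussion and the tracking issue kubernetes/kubernetes#103923, not a verbatim copy of the implementation:

```yaml
# Hypothetical audit event fragment; keys and values are illustrative.
annotations:
  pod-security.kubernetes.io/enforce-policy: "baseline:latest"
  pod-security.kubernetes.io/audit-violations: 'would violate "baseline:latest" policy: ...'
```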
/lgtm
LGTM for Windows-related updates
added benchmark stats to PRR resource question
`escalate-privilege` verb on the ephemeral containers subresource).

<<[/UNRESOLVED]>>
This means that an existing pod which is not valid according to the current
Existing pods too? And in the case where an ephemeral container tries to add CAP_SYS_PTRACE, will the baseline policy reject it?
Not sure I parsed your question correctly, but:
- ephemeral containers themselves must pass the current enforce level
- adding/updating an ephemeral container is considered a relevant update which requires the whole pod (existing containers) to pass the current enforce level
I expect CAP_SYS_PTRACE to be a common request for debugging, and I was even considering making it the default if we decide to adopt debugging profiles (#2834). Does PTRACE enable a privilege escalation?
Since ephemeral containers cannot be removed, it sounds like it's possible for a pod to fail the current enforce level even if the ephemeral container is no longer running, blocking future updates that would otherwise be allowed at this level. Is that desirable?
I think it's fine for my intended use of breaking glass to debug a pod and then deleting the pod soon afterwards, but I don't think I've thought through all the scenarios yet.
> Since ephemeral containers cannot be removed, it sounds like it's possible for a pod to fail the current enforce level even if the ephemeral container is no longer running, blocking future updates that would otherwise be allowed at this level. Is that desirable?
I think so? Some of the changes an escalated ephemeralContainer could make could persist even after the ephemeral container is no longer running. Presence of an ephemeral container in a pod that doesn't fit within the current enforce level seems like it should be rejected, even if it is not running.
There are basically 3 operations that would trigger a pod to be reevaluated, and potentially block the update:
- Changing the seccomp annotation (deprecated behavior that will be removed soon)
- Updating the container image (goes against best practices)
- Adding an ephemeral container
So, if an exempt user adds a privileged ephemeral container, the pod enters a "violating" state, and non-exempt users can no longer perform any of those updates. This seems reasonable to me.
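To make the third operation concrete, here is a hypothetical patch to the pods/ephemeralcontainers subresource (the container name and image are made up). Under the behavior described above, applying it forces the whole pod to be re-evaluated against the namespace's current enforce level:

```yaml
# Hypothetical ephemeral container addition via the pods/ephemeralcontainers subresource.
spec:
  ephemeralContainers:
  - name: debugger        # illustrative name
    image: busybox        # illustrative image
    securityContext:
      capabilities:
        add: ["SYS_PTRACE"]  # baseline forbids capabilities beyond its allowlist,
                             # so a non-exempt requester would be rejected
```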
One way this could be handled under the current model is:
- Exempt a special username (not one that can be authenticated directly) from policy enforcement, e.g. ops:privileged-debugger
- Grant the special user permission to ONLY operate on the ephemeral containers subresource (it is critical that they cannot create or update pods directly).
- Grant (real) users that should have privileged debug capability the ability to impersonate the exempt user.

I still think it's useful to document this workflow somewhere. This behavior would be a bit more intuitive if we had group exemptions (impersonate the breakglass group), but the workflow doesn't change that much as long as RBAC bindings are mostly granted to groups.
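A minimal sketch of that setup, assuming the PodSecurity admission configuration's username exemption list plus standard RBAC. The ops:privileged-debugger name comes from the comment above; the role names are illustrative, and the config API version may differ by release:

```yaml
# PodSecurity admission plugin configuration: exempt the special username.
apiVersion: pod-security.admission.config.k8s.io/v1alpha1
kind: PodSecurityConfiguration
exemptions:
  usernames: ["ops:privileged-debugger"]
---
# RBAC: the exempt user may ONLY touch the ephemeral containers subresource.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ephemeral-debugger              # illustrative name
rules:
- apiGroups: [""]
  resources: ["pods/ephemeralcontainers"]
  verbs: ["update", "patch"]
---
# RBAC: real break-glass users may impersonate the exempt identity.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: impersonate-privileged-debugger # illustrative name
rules:
- apiGroups: [""]
  resources: ["users"]
  verbs: ["impersonate"]
  resourceNames: ["ops:privileged-debugger"]
```

A debug session would then run as the exempt identity via impersonation, e.g. `kubectl --as=ops:privileged-debugger debug ...`.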
I like that this proposal applies the same policy to all of the container types, keeping the default implementation simple.
I'd like to ensure that kubectl debug is useful with the default cluster configuration, ideally with a break glass option. I'm not sure yet how we'll implement that, but it's an explicit goal in this KEP and I'm confident we'll be able to work out the details before the default policy changes to baseline.
/lgtm
prr and the updates for beta lgtm
/approve
/cc @enj
@deads2k: GitHub didn't allow me to request PR reviews from the following users: mo. Note that only kubernetes members and repo collaborators can review this PR, and authors cannot review their own PRs. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: deads2k, liggitt, verb. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/hold cancel
One-line PR description: beta updates for PodSecurity
Issue link: PodSecurity admission (PodSecurityPolicy replacement) #2579
Best reviewed commit-by-commit (for most reviewers, there's only a single relevant commit)
/sig auth