Kubernetes Audit Logging Introduction
An explanation of Kubernetes audit logging, with examples of some policy configurations.
Overview
Kubernetes auditing is built into the kube-apiserver, and it records the requests that the API server processes so you have a trail for audit purposes.
This is roughly what an audit log event looks like (a simplified Metadata-level example; the exact fields depend on the audit level and API version):
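```json
{
  "kind": "Event",
  "apiVersion": "audit.k8s.io/v1",
  "level": "Metadata",
  "auditID": "6b2e2c5a-1c2f-4f8a-9c1d-0b7e1e2d3f4a",
  "stage": "ResponseComplete",
  "requestURI": "/api/v1/namespaces/default/pods",
  "verb": "list",
  "user": {
    "username": "admin",
    "groups": ["system:masters", "system:authenticated"]
  },
  "sourceIPs": ["10.0.0.1"],
  "objectRef": {
    "resource": "pods",
    "namespace": "default",
    "apiVersion": "v1"
  },
  "responseStatus": {
    "metadata": {},
    "code": 200
  },
  "requestReceivedTimestamp": "2019-05-01T20:27:48.412278Z",
  "stageTimestamp": "2019-05-01T20:27:48.413955Z"
}
```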
These logs can give very useful information about what is happening in your cluster, and can even be required for compliance purposes.
This is what a basic audit logging policy might look like (a minimal sketch using the audit.k8s.io/v1 API):
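```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
# Don't generate audit events while the request is still being received.
omitStages:
  - "RequestReceived"
rules:
  # Log pod reads and writes with full request and response bodies.
  - level: RequestResponse
    resources:
      - group: ""
        resources: ["pods"]
  # Log everything else at the Metadata level.
  - level: Metadata
```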
level? omitStages? What the heck are those? Let's explain what those terms mean!
Stages
- RequestReceived: generated as soon as the audit handler receives the request, before it's delegated down the handler chain.
- ResponseStarted: generated once the response headers are sent, but before the response body is sent. This stage only occurs for long-running requests (e.g. watch).
- ResponseComplete: generated once the response body has been completed and no more bytes will be sent.
- Panic: generated when a panic occurs.
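omitStages can be set globally, as in the policy above, or per rule. A hedged sketch of the per-rule form:

```yaml
rules:
  # Log Secret access at the Metadata level, and skip the
  # RequestReceived stage for this rule only.
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets"]
    omitStages:
      - "RequestReceived"
```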
Levels
- None: don’t log events that match this rule.
- Metadata: log request metadata (requesting user, timestamp, resource, verb, etc.) but not request or response body.
- Request: log event metadata and request body but not response body.
- RequestResponse: log event metadata, request and response bodies.
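To make the levels concrete, here's a sketch of rules that mix them (the resource choices are just illustrative):

```yaml
rules:
  # None: drop health-check noise entirely.
  - level: None
    nonResourceURLs: ["/healthz*", "/version"]
  # Metadata: Secret bodies are sensitive, so record who touched them
  # but never the contents.
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets"]
  # RequestResponse: capture full bodies for RBAC changes.
  - level: RequestResponse
    resources:
      - group: "rbac.authorization.k8s.io"
        resources: ["clusterroles", "clusterrolebindings"]
```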
Configuration
Auditing is configurable at two levels:
- Policy: what is recorded.
- Backends: how records are persisted and exported (e.g. to a log file or an external webhook).
This is a basic policy that would log everything at the Metadata level:
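```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # A single catch-all rule: log all requests at the Metadata level.
  - level: Metadata
```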
The policy below is adapted (and abbreviated here) from the one Kubernetes ships for GCE, but it's a great starting point for an audit policy. It produces far less log volume than the simple policy above, which can be costly if you're shipping everything to a SaaS centralized logging solution.
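```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
  - "RequestReceived"
rules:
  # Don't log read-only requests to these non-resource URLs.
  - level: None
    nonResourceURLs:
      - "/healthz*"
      - "/version"
      - "/swagger*"
  # Events are high-volume and low-value; drop them.
  - level: None
    resources:
      - group: ""
        resources: ["events"]
  # Secrets, ConfigMaps, and TokenReviews can contain sensitive data,
  # so never log their request or response bodies.
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets", "configmaps"]
      - group: "authentication.k8s.io"
        resources: ["tokenreviews"]
  # Reads can have large responses; log the request but not the response.
  - level: Request
    verbs: ["get", "list", "watch"]
  # Default: log everything else with full request and response bodies.
  - level: RequestResponse
```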
Something to note about audit policies: when an event is processed, it is compared against the rules in order, and the first matching rule sets the event's audit level. It's pretty weird. It would make a lot more sense to read the entire policy and apply the most restrictive matching rule, as is done with Kubernetes RBAC policies.
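For example, in this sketch the ordering of the two rules is the only thing keeping kube-proxy's watch traffic out of the logs:

```yaml
rules:
  # Matches first: kube-proxy's watch requests are never logged...
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
  # ...even though this catch-all rule would otherwise match them.
  - level: Metadata
```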
Once we've defined a policy like the ones above, we need to apply it to the Kubernetes API server.
For Kops, we can apply the changes with `kops edit cluster <cluster>`.
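A sketch of the relevant part of the cluster spec (the file paths and rotation values here are assumptions; adjust them for your cluster):

```yaml
spec:
  fileAssets:
    # Ship the audit policy onto the master nodes.
    - name: audit-policy
      path: /srv/kubernetes/audit-policy.yaml
      roles: [Master]
      content: |
        apiVersion: audit.k8s.io/v1
        kind: Policy
        rules:
          - level: Metadata
  kubeAPIServer:
    auditPolicyFile: /srv/kubernetes/audit-policy.yaml
    auditLogPath: /var/log/kube-apiserver-audit.log
    auditLogMaxAge: 10      # days to keep old audit log files
    auditLogMaxBackups: 1   # rotated files to retain
    auditLogMaxSize: 100    # megabytes before rotation
```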
NOTE: If you're using a cluster management tool other than Kops, you'll need to find an equivalent way to get the audit policy file onto the Kubernetes master nodes.
In the kops configuration above, we're using a log backend. To read more about log backends and webhook backends, check the official Kubernetes documentation.
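For non-Kops setups, the kops fields above map onto kube-apiserver flags roughly like this (the paths are the same assumptions as above):

```
--audit-policy-file=/srv/kubernetes/audit-policy.yaml
--audit-log-path=/var/log/kube-apiserver-audit.log
--audit-log-maxage=10
--audit-log-maxbackup=1
--audit-log-maxsize=100
```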
OK! Once we've rotated our master nodes with the new configuration, we can pick up the audit logs from the auditLogPath using some kind of log exporter and send them to a centralized logging solution. My preference is LogDNA.
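As one example of the exporter piece, here's a minimal Fluent Bit sketch that tails the audit log (swap the stdout output for your logging provider's plugin; the path matches the auditLogPath assumed above):

```
[INPUT]
    Name  tail
    Path  /var/log/kube-apiserver-audit.log

[OUTPUT]
    Name  stdout
    Match *
```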
You should be getting some pretty audit logs now. Have fun!