CFP: Enable/disable L3/L4 policy enrichment in hubble flows via flag or configmap option #37528
Description
Cilium Feature Proposal
Currently the L3/L4 network policy enrichment for hubble flows is on by default.
The parser responsible is here: threefour
To turn this feature off today, the relevant code has to be commented out:

cilium/pkg/hubble/parser/threefour/parser.go, lines 234 to 236 in daea10f:

```go
if p.endpointGetter != nil {
	correlation.CorrelatePolicy(p.endpointGetter, decoded)
}
```
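A minimal sketch of how the correlation call could be gated behind an option instead of requiring a code change. The option name, the `Options` struct, and the stand-in types here are hypothetical, not the actual parser API; the real hook would keep the existing `endpointGetter != nil` check and additionally consult the new setting:

```go
package main

import "fmt"

// Options is a hypothetical parser option struct; the real flag name
// would be decided during review of the CFP.
type Options struct {
	EnablePolicyCorrelation bool
}

// decodedFlow stands in for the Hubble flow being enriched.
type decodedFlow struct {
	IngressAllowedBy []string
}

// correlatePolicy is a stand-in for correlation.CorrelatePolicy: it
// appends one policy entry to the flow.
func correlatePolicy(f *decodedFlow) {
	f.IngressAllowedBy = append(f.IngressAllowedBy, "allow-good-to-bad")
}

// enrich mirrors the parser hook: correlation runs only when the
// option is enabled (today it runs whenever endpointGetter != nil).
func enrich(opts Options, f *decodedFlow) {
	if opts.EnablePolicyCorrelation {
		correlatePolicy(f)
	}
}

func main() {
	on := &decodedFlow{}
	enrich(Options{EnablePolicyCorrelation: true}, on)

	off := &decodedFlow{}
	enrich(Options{EnablePolicyCorrelation: false}, off)

	fmt.Println(len(on.IngressAllowedBy), len(off.IngressAllowedBy))
}
```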
Is your proposed feature related to a problem?
The default L3/L4 network policy enrichment can impact memory and CPU consumption, especially with a large number of policies. The flow object size increases significantly as more policies are added and evaluated.
Describe the feature you'd like
Introduce a flag/configmap option to enable or disable the enrichment based on consumer needs. This would allow flexibility when testing and cater to specific needs, such as storage-size constraints driven by flow size.
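For illustration, the option could surface as a key in the `cilium-config` ConfigMap, the same way other agent toggles do. The key name `hubble-flow-policy-correlation` below is a hypothetical placeholder, not an existing setting:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cilium-config
  namespace: kube-system
data:
  # Hypothetical key: disable L3/L4 policy enrichment in Hubble flows.
  hubble-flow-policy-correlation: "false"
```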
Performance Testing carried out:
Environment:
BYOC AKS cluster with 50 nodes and 10k pods (pods firing periodic requests at 2 req/sec).
Setup:
1k allow-all NetPols added to the kube-system namespace.
Comparison:
L3/L4 NetPol enrichment ON vs OFF (commented out).
Results:
| | NetPol | No NetPol |
|---|---|---|
| CPU | 1.66 cores | 1.53 cores |
| MEM | 1.13 GB | 952 MB |
Flow object example with just one policy:
```json
"ingress_allowed_by": [
  {
    "name": "allow-good-to-bad",
    "namespace": "kube-system",
    "labels": [
      "k8s:io.cilium.k8s.policy.derived-from=NetworkPolicy",
      "k8s:io.cilium.k8s.policy.name=allow-good-to-bad",
      "k8s:io.cilium.k8s.policy.namespace=kapinger-ns",
      "k8s:io.cilium.k8s.policy.uid=a6afcfcc-3efb-4041-8b89-2e84c943f5ed"
    ],
    "revision": "5"
  }
]
```
As NetPols are evaluated, an entry is added to the flow object for each policy and revision.