THREESCALE-10167: Formula for cgroups v2 #3565
Conversation
d1185b2 to 4b30539 (compare)
lib/kubernetes/tools.rb (Outdated)

end

def kubernetes_cpu_request
  available_cpus if running_on_kubernetes?
Maybe we shouldn't care whether we run on k8s. If we see a cgroups CPU limit, then use that; otherwise don't. Even if we run directly under podman/docker or locally, it makes sense to use the cgroups value if one is present.
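A rough sketch of what that suggestion could look like (the paths are the standard cgroup mount points, but the helper names are assumptions, not the PR's actual code):

# Prefer whatever cgroup CPU information is present, regardless of the platform.
def available_cpus
  if File.exist?('/sys/fs/cgroup/cpu.weight')        # cgroups v2 (unified hierarchy)
    cpus_from_cgroups_v2                             # hypothetical helper
  elsif File.exist?('/sys/fs/cgroup/cpu/cpu.shares') # cgroups v1
    cpus_from_cgroups_v1                             # hypothetical helper
  end
end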
looks great overall!
Yeah, I guess it makes sense.
I'll refactor
@akostadinov take a look at 742b520 please
One potential drawback is that if the image is run outside of K8s, /sys/fs/cgroup/cpu.weight could be missing (even if cgroups v2 is used) because of how the cgroup directory is structured, so the following warning can be observed:
WARNING: Caught exception getting CPU shares: No such file or directory @ rb_sysopen - /sys/fs/cgroup/cpu.weight
But I think it's OK; we'll rarely run the bare container outside of K8s, and only for testing purposes.
When running locally we won't see this warning because a different unicorn config is used.
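A minimal sketch of reading cpu.weight defensively, so a missing file only results in the warning above rather than a crash (an assumed helper, not the literal code from 742b520):

def cgroups_v2_cpu_weight
  Integer(File.read('/sys/fs/cgroup/cpu.weight').strip)
rescue SystemCallError, ArgumentError => e
  warn "WARNING: Caught exception getting CPU shares: #{e.message}"
  nil
end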
test/unit/lib/cgroups/tools_test.rb (Outdated)

# 1024 shares is equivalent to 1 cpu
File.stubs(:read).returns('2048')
assert_equal 2, Cgroups::Tools.available_cpus
A nitpick: perhaps use another value so that it doesn't match the default of 2. But everything is great!
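A hypothetical version of that test following the nitpick, assuming the 1024-shares-per-CPU semantics from the comment above (the value 4096 is illustrative, not necessarily what the PR ended up using):

# 1024 shares is equivalent to 1 cpu; 4096 stubbed shares should yield 4 CPUs,
# a result that cannot be confused with the default of 2.
File.stubs(:read).returns('4096')
assert_equal 4, Cgroups::Tools.available_cpus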
Co-authored-by: Aleksandar N. Kostadinov <[email protected]>
Thank you!
* CPU share formula for cgroups v2
* Refactor
* Change how the kubernetes lib is required
* Add a comment
* Refactor Kubernetes tools to Cgroups tools
* Update the test as suggested

Co-authored-by: Aleksandar N. Kostadinov <[email protected]>
What this PR does / why we need it:
Support Cgroups v2 for auto-detection of unicorn workers
Which issue(s) this PR fixes
https://issues.redhat.com/browse/THREESCALE-10167
Verification steps
Run on OCP 4.14 (which uses cgroups v2 by default) and make sure that the number of unicorn workers is as expected (according to the CPU requests/limits).
Special notes for your reviewer:
References:
https://linuxera.org/cpu-memory-management-kubernetes-cgroupsv2/#cpu-bandwidth-configuration-on-the-node
https://github.com/kubernetes/kubernetes/blob/release-1.27/pkg/kubelet/cm/helpers_linux.go#L45
https://github.com/kubernetes/kubernetes/blob/release-1.27/pkg/kubelet/cm/cgroup_manager_linux.go#L570-L574
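For context, a hedged sketch of the conversion performed by the referenced Kubernetes code, and how it can be inverted to recover a CPU count from cpu.weight (the constants come from the linked cgroup_manager_linux.go; the method names and the rounding choice are illustrative, not the PR's actual API):

# Kubernetes maps cgroups v1 CPU shares onto a cgroups v2 cpu.weight:
#   weight = (shares - 2) * 9999 / 262142 + 1   # shares in [2, 262144] -> weight in [1, 10000]
def shares_to_weight(shares)
  (shares - 2) * 9999 / 262142 + 1
end

# Inverting that mapping recovers the shares, and 1024 shares correspond to 1 CPU,
# so a CPU count can be derived from /sys/fs/cgroup/cpu.weight.
def weight_to_cpus(weight)
  shares = (weight - 1) * 262142 / 9999 + 2
  (shares / 1024.0).round
end

# Example: a pod requesting 2 CPUs gets 2048 shares, shares_to_weight(2048) == 79,
# and weight_to_cpus(79) == 2, so the worker auto-detection sees 2 CPUs.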