
Remove NUMA node boundary for ResourceSlice splitting#55

Merged
k8s-ci-robot merged 1 commit into kubernetes-sigs:main from pravk03:resource-slice-limit
Feb 9, 2026

Conversation

@pravk03
Contributor

@pravk03 pravk03 commented Feb 6, 2026

Removes the self-imposed restriction that confined each ResourceSlice to a single NUMA node when the 128-device limit is exceeded (set here). Slices are now created by simply chunking the devices, so a single slice may span NUMA nodes.

Fixes: #5

@k8s-ci-robot k8s-ci-robot requested review from klueska and pohly February 6, 2026 01:04
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: pravk03

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes, approved, and size/M labels Feb 6, 2026
@pravk03
Contributor Author

pravk03 commented Feb 6, 2026

/cc @ffromani
/cc @catblade

@k8s-ci-robot k8s-ci-robot requested a review from ffromani February 6, 2026 01:04
@k8s-ci-robot
Contributor

@pravk03: GitHub didn't allow me to request PR reviews from the following users: catblade.

Note that only kubernetes-sigs members and repo collaborators can review this PR, and authors cannot review their own PRs.


In response to this:

/cc @ffromani
/cc @catblade

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@pravk03
Contributor Author

pravk03 commented Feb 6, 2026

/assign @ffromani

Contributor

@ffromani ffromani left a comment


LGTM with one possible improvement, please see the inline comment

Comment thread on pkg/driver/dra_hooks.go (Outdated)
Removes the self-imposed restriction of confining each ResourceSlice to a single NUMA node when we exceed the 128 device limit.
Slices will now be created by simply chunking the devices, potentially spanning NUMA nodes.
@pravk03 pravk03 force-pushed the resource-slice-limit branch from 4c7d151 to 837aed2 Compare February 9, 2026 19:57
Contributor

@ffromani ffromani left a comment


/lgtm

thanks!

@k8s-ci-robot k8s-ci-robot added the lgtm label Feb 9, 2026
@k8s-ci-robot k8s-ci-robot merged commit a32f1be into kubernetes-sigs:main Feb 9, 2026
6 checks passed
catblade pushed a commit to catblade/dra-driver-cpu that referenced this pull request Feb 10, 2026
Remove NUMA node boundary for ResourceSlice splitting

Labels

approved: Indicates a PR has been approved by an approver from all required OWNERS files.
cncf-cla: yes: Indicates the PR's author has signed the CNCF CLA.
lgtm: "Looks good to me", indicates that a PR is ready to be merged.
size/M: Denotes a PR that changes 30-99 lines, ignoring generated files.


Development

Successfully merging this pull request may close these issues.

Handle ResourceSlice 128-device limit per ResourceSlice

3 participants