
initial kep for qos of storage v0.1 #1907

Closed
pacoxu wants to merge 2 commits into kubernetes:master from pacoxu:storage/qos

Conversation

@pacoxu
Member

@pacoxu pacoxu commented Jul 27, 2020

based on kubernetes/kubernetes#92287 proposal 2.
The initial discussion and user requirements can be found in kubernetes/kubernetes#27000

Fixes #3520

@kubernetes/sig-storage-feature-requests
/sig storage

@k8s-ci-robot k8s-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. sig/storage Categorizes an issue or PR as relevant to SIG Storage. kind/feature Categorizes issue or PR as related to a new feature. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Jul 27, 2020
@k8s-ci-robot
Contributor

Welcome @pacoxu!

It looks like this is your first PR to kubernetes/enhancements 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes/enhancements has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot
Contributor

Hi @pacoxu. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Jul 27, 2020
@k8s-ci-robot
Contributor

@pacoxu: Reiterating the mentions to trigger a notification:
@kubernetes/sig-storage-feature-requests

Details

In response to this:

based on kubernetes/kubernetes#92287 proposal 2.

@kubernetes/sig-storage-feature-requests
/sig storage

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: pacoxu
To complete the pull request process, please assign msau42
You can assign the PR to them by writing /assign @msau42 in a comment when ready.

The full list of commands accepted by this bot can be found here.

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot requested review from jsafrane and msau42 July 27, 2020 09:30
@k8s-ci-robot k8s-ci-robot added kind/kep Categorizes KEP tracking issues and PRs modifying the KEP directory size/L Denotes a PR that changes 100-499 lines, ignoring generated files. sig/architecture Categorizes an issue or PR as relevant to SIG Architecture. labels Jul 27, 2020
@pacoxu pacoxu changed the title [WIP] initial kep for qos of storage initial kep for qos of storage v0.1 Sep 14, 2020
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Sep 14, 2020
@pacoxu
Member Author

pacoxu commented Sep 14, 2020

/sig storage
@kubernetes/sig-storage-pr-reviews

Can someone give comments? Or should we discuss this in the storage SIG?

@pacoxu
Member Author

pacoxu commented Sep 14, 2020

/reassign

@pacoxu
Member Author

pacoxu commented Sep 14, 2020

/retest



### Risks and Mitigations
I think iops limiting would be a great idea, but I'm not sure if the current cgroups implementation will effectively implement it. For example, with non-direct device access, writes are buffered in the kernel and something like 80% of them will not be accounted to a cgroup (instead they're all aggregated together). I did a little bit of experimentation here: https://gitlab.com/mattcary/blkio_cgroups/-/blob/master/data/blkio_cgroup.md (sorry that the writeup is not very polished).
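A minimal way to reproduce that effect on a cgroup v1 host is sketched below; the device, mount path, rate, and cgroup name are assumptions for illustration and are not taken from the linked writeup.

```bash
# Illustrative only: cgroup v1 blkio throttling applies to direct I/O
# but largely misses buffered writes (flushed by kernel writeback outside the cgroup).
DEV=/dev/sdb                               # assumed block device backing /mnt/data
MAJMIN=$(lsblk -ndo MAJ:MIN "$DEV")

mkdir -p /sys/fs/cgroup/blkio/iops-demo
echo "$MAJMIN 1048576" > /sys/fs/cgroup/blkio/iops-demo/blkio.throttle.write_bps_device  # ~1 MiB/s
echo $$ > /sys/fs/cgroup/blkio/iops-demo/cgroup.procs

# Direct I/O is throttled to roughly the configured rate...
dd if=/dev/zero of=/mnt/data/direct.img bs=1M count=64 oflag=direct
# ...while buffered writes land in the page cache first, so most of them are
# not accounted to this cgroup and are barely throttled on v1.
dd if=/dev/zero of=/mnt/data/buffered.img bs=1M count=64
```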
Member

That's actually the problem with limiting IOPS: with the current Linux kernel it's not possible to limit it reliably, which is the reason we don't have it yet in Kubernetes.

Member Author

As docker supports it, it seems to be a general requirement for storage, especially block volumes. As I mentioned, the disk device that docker is running on is so important that we need to protect it from being overused by some pods.

      --device-read-bps value       Limit read rate (bytes per second) from a device (default [])
      --device-read-iops value      Limit read rate (IO per second) from a device (default [])
      --device-write-bps value      Limit write rate (bytes per second) to a device (default [])
      --device-write-iops value     Limit write rate (IO per second) to a device (default [])
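For example, a docker run invocation using these flags might look like the sketch below; the device path and rates are placeholders.

```bash
# Placeholder device and rates; adjust to the block device actually backing the volume.
docker run -it \
  --device-read-iops /dev/sda:2000 \
  --device-write-iops /dev/sda:1000 \
  --device-read-bps /dev/sda:1mb \
  --device-write-bps /dev/sda:1mb \
  ubuntu /bin/bash
```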

Member Author

If dockershim is deprecated, I would look into other CRIs like containerd. Is this supported there?

Contributor

It looks like there is some level of cgroups v2 support in containerd? containerd/containerd#3726

containerd is well supported; it's available on GKE and is becoming the default.

https://cloud.google.com/kubernetes-engine/docs/concepts/using-containerd

Comment on lines +116 to +119
annotations:
  qos.volume.storage.daocloud.io: >-
    {"pvc": "snap-03", "iops": {"read": 2000, "write": 1000}, "bps":
    {"read": 1000000, "write": 1000000}}
Member

We don't use alpha annotations in Kubernetes; we directly declare fields in structs (as a beta proposal).

Member Author

I will only include the final design later.

Comment on lines +171 to +176
iops:
  read: 2000
  write: 1000
bps:
  read: 1000000
  write: 1000000
Member

  1. This assumes that cluster admin can trust users to create pods with the right IOPS requirements. A rogue user can claim all / most of the IOPS for itself. In Kubernetes, we don't trust users (much) and we have quotas and similar concepts to limit them. I don't know how to apply IOPS there, perhaps via StorageClass & IOPS defined there? Then the cluster admin can define quota of volumes to each user / namespace.

  2. How does it work when user sets different IOPS for volumes based by the same device (e.g. each Secret / EmptyDir / HostPath volume having different IOPS)?

  3. Not all volume types support IOPS, e.g. NFS / GlusterFS / CephFS / tmpfs. Will the IOPS setting be silently ignored for them?

Member Author

  1. To limit users, we may add it to the LimitRange of a namespace.
    IOPS can be applied to the storage class as well, but that may be a limitation from the storage side. It may be a physical limitation.

  2. The IOPS limitation may only apply to volumes that are block devices.
    cgroups can only be applied per device id.
    If ConfigMaps and Secrets are mounted from the rootfs of the host, the rootfs iops limitation will take effect.

  3. That's the limitation.
    Just ignore those in kubelet. kubelet can get the mount type.

Contributor

Would it make sense, instead of adding absolute IOPS limits, to add more of a QoS class to the StorageClass? e.g.,

  • unlimited, no restriction on IOPS, which is what we currently do

  • fair, all pods using the PVC/volume split IOPS fairly. This does require a pretty complicated controller to track PV usage and update all the nodes with how usage should split, especially if we want to base throttling on actual usage (eg, pods that aren't hammering the device don't get the same share as a more active pod), but this may be what users actually want

  • priority, pods with some sort of annotation get unlimited IOPS, all other pods get a fixed share

OK, I'll admit this is all very vague and not well thought-out, but a direction like this has a few advantages:

  • Current state of the art means we're not going to be able to hit all IOPS limits anyway AFAIU

  • This gives leeway for implementation differences and means we wouldn't have to change the API when cgroups v2 comes out or handle all sorts of complicated docker vs containerd differences

  • Managing IOPS at a pod level is not going to scale for large deployments, but it's (probably) only in large deployments where this sort of thing is really needed. So a higher-level API is maybe ultimately much more useful.

#27000 already was discussing along these lines.

Maybe another way to see this is that QoS is really a volume characteristic and not a pod characteristic. That is, we are concerned about guaranteeing a certain performance for all users of a volume more than we are concerned about setting what performance a pod sees. So setting limits at some higher level than the pod is probably the right way to go about this.
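As a rough sketch of how such a class could surface to cluster admins, a StorageClass could carry an opaque parameter; the qosClass parameter and provisioner below are hypothetical and not part of any existing API.

```bash
# Hypothetical only: StorageClass parameters are free-form strings, so a QoS class
# could be expressed as a parameter that a controller or CSI driver interprets.
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd-fair
provisioner: csi.example.com    # placeholder provisioner
parameters:
  qosClass: fair                # hypothetical parameter: unlimited | fair | priority
EOF
```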


- The limit is a runtime limitation for a block device when it is mounted to the pod. The limit is not a volume limitation from the IaaS's perspective. If the device is used by multiple pods, each pod should limit the iops by itself.
- The volume's iops capability is a physical limitation of the device (PV), and this is not the same as the QoS of a PVC in this proposal.
- As the cgroup implementation has its limitations, we will not cover kernel buffered-write issues with cgroup limitations here. This should be fixed or optimized on the kernel side. Details will be mentioned in the `risks and limitations` below.
Contributor

Given this limitation, how useful is this feature for node stability? In my experience, most writes are buffered, so this would only limit some pods.

The proposal would benefit in the two scenarios below.

* Local Storage (local disk devices), better speed:
For instance, I want to use local storage (a local device) to gain local storage speed.
Contributor

How does this feature improve speed? It seems to only address limiting speed.

Member Author

It is an advantage of local volumes. I may have put it in the wrong place. Thanks.


- The limit is a runtime limitation for a block device when it is mounted to the pod.
- Only block devices will be limited, using the specified volume device id.
- This should be implemented with cgroups, so it will only work within cgroup capabilities.
Contributor

Windows has been GA for a while now. Given that you are proposing a user-facing API, we should at minimum evaluate feasibility of implementing this feature in windows.



Then kubelet can get the mount point of the PVC and the device id. Then we use cgroups to limit the iops and bps of the pod.
We can just edit the cgroup limit files under the pod's /sys/fs/cgroup/blkio/kubepods/pod/<Container_ID>/...
Contributor

nit: This is the container cgroup, but you say below we aren't managing containers. Can you remove <Container_ID> from the path?

Member Author

ok

echo "<block_device_maj:min> <value>" > /sys/fs/cgroup/blkio/kubepods/pod<UID>/blkio.throttle.read_iops_device
```
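Expanding that into a rough end-to-end sketch, using the example limits from this KEP (read/write iops 2000/1000, bps 1000000); the pod UID, device, and the exact kubepods cgroup layout are placeholders and vary by cgroup driver.

```bash
POD_UID="<pod-uid>"                                   # placeholder: taken from the Pod object
POD_CGROUP="/sys/fs/cgroup/blkio/kubepods/pod${POD_UID}"
DEV_MAJMIN=$(lsblk -ndo MAJ:MIN /dev/sdb)             # /dev/sdb: assumed PVC block device

# iops limits from the example spec
echo "$DEV_MAJMIN 2000"    > "$POD_CGROUP/blkio.throttle.read_iops_device"
echo "$DEV_MAJMIN 1000"    > "$POD_CGROUP/blkio.throttle.write_iops_device"
# bps limits from the example spec
echo "$DEV_MAJMIN 1000000" > "$POD_CGROUP/blkio.throttle.read_bps_device"
echo "$DEV_MAJMIN 1000000" > "$POD_CGROUP/blkio.throttle.write_bps_device"
```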

To manage the QoS of containers, this can be supported in the future if there is more than one container in the pod, and we may add different iops limits for each container.
Contributor

We have discussed having both container-level and pod-level control for other resources in the past, but it can be quite complex: #1592. I would recommend assuming we will only get either pod or container-level limiting.

Better resource management for the iops or bps of a block device in a PVC.
In some scenarios, we need to limit a PV's iops or a Pod's iops on a device or volume at runtime.

For the container runtime, dockerd provides blkio-related parameters in docker run to limit a device's iops and bps. [docker reference #block-io-bandwidth-blkio-constraint](https://docs.docker.com/engine/reference/run/#block-io-bandwidth-blkio-constraint)
Contributor

- The limit is a runtime limitation for a block device when it is mounted to the pod.
- Only block devices will be limited, using the specified volume device id.
- This should be implemented with cgroups, so it will only work within cgroup capabilities.
- An extended feature that can be provided later is limiting the iops of the rootfs for the container runtime. This will reduce the influence between pods running on the same host.
Contributor

This is already a key problem, I think we need to design something for that here.

eg with GCP PD, IOPS are pooled across all devices attached to a node so that limiting IOPS on a single device may not work as expected (eg, accessing the rootfs may end up throttling an attached PV).

Member Author

  • cgroups can limit iops for local block volume and rootfs.

I may change the goal to this. QoS for all types of volumes may be a much bigger topic.


- The limit is a runtime limitation for a block device when it is mounted to the pod. The limit is not a volume limitation from the IaaS's perspective. If the device is used by multiple pods, each pod should limit the iops by itself.
- The volume's iops capability is a physical limitation of the device (PV), and this is not the same as the QoS of a PVC in this proposal.
- As the cgroup implementation has its limitations, we will not cover kernel buffered-write issues with cgroup limitations here. This should be fixed or optimized on the kernel side. Details will be mentioned in the `risks and limitations` below.
Contributor

Will cgroups v2 change any of this?

If so we should mention that and link in (or start...) related work in docker/containerd.

I think if cgroups is fundamentally limited we want to be careful about adding in a feature that won't work very well but will get baked into deployments and require long-term support.
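For reference, cgroup v2 replaces the separate blkio.throttle.* files with a single io.max entry per device; a rough illustration follows, with the pod cgroup path and the device/values as placeholders.

```bash
# cgroup v2 unified interface: one line per device with rbps/wbps/riops/wiops keys.
POD_CGROUP="/sys/fs/cgroup/kubepods.slice/<pod-cgroup>"   # placeholder; layout depends on the cgroup driver
echo "8:16 rbps=1000000 wbps=1000000 riops=2000 wiops=1000" > "$POD_CGROUP/io.max"
```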

```
annotations:
  qos.volume.storage.daocloud.io: >-
    {"pvc": "snap-03", "iops": {"read": 2000, "write": 1000}, "bps":
    {"read": 1000000, "write": 1000000}}
```
Contributor

If this is to be volume-based, why not use the pod volume name rather than the PVC? That would incidentally allow for rootfs throttling.

This might not be effective for all kinds of volumes depending on how they're mounted, I suppose, but since the throttling has to be best-effort anyway that might not be a problem in practice.


@k8s-ci-robot
Contributor

k8s-ci-robot commented Sep 16, 2020

@pacoxu: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name | Commit | Details | Rerun command
pull-enhancements-verify | 4ff9368 | link | /test pull-enhancements-verify

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 15, 2020
@keyingliu

@pacoxu any updates about this kep?

@pacoxu
Member Author

pacoxu commented Apr 23, 2021

I prefer to wait for cgroup v2 and discuss it later.

For cgroup v2, refer to #2254.

According to the 1.22 KEP prioritization
https://docs.google.com/document/d/1U10J0WwgWXkdYrqWGGvO8iH2HKeerQAlygnqgDgWv4E/edit#, cgroup v2 may target alpha in 1.22 or soon after.

@mattcary
Contributor

+1 it's good to see that cgroups v2 has gotten some momentum.

It would be a bit of a mess to do storage qos in both cgroups v1 and v2 in a consistent way.

@mlsorensen

mlsorensen commented May 7, 2021

This just popped up on my radar. One thing I'd mention is that a lot of storage solutions have their own implementations of QoS; it might make sense to consider the implementation and API separately (perhaps there is no need to wait on cgroups v2 if the API can be finalized and CSI drivers can begin implementing it how they see fit). Or perhaps the only supported means of QoS is intended to be at the node and in cgroups?

In this proposal, is QoS being considered for anything that is not a PVC - emptyDir, etc? I've heard of use cases where it's the node's FS that needs to be guarded.

@pacoxu
Member Author

pacoxu commented May 8, 2021

@mlsorensen correct. This proposal only focuses on cgroup limitations for QoS.
The implementation in the PR kubernetes/kubernetes#92287 only supports docker as the container runtime.

Per https://github.com/kubernetes/enhancements/pull/1907/files#r553451844

It looks like there is some level of cgroups v2 support in containerd? containerd/containerd#3726

containerd is supported well, it's available on GKE and is becoming the default.

https://cloud.google.com/kubernetes-engine/docs/concepts/using-containerd

To set the QoS of a PVC, it has to be a block device. An emptyDir may be limited via the rootfs of the node?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 9, 2021
@aslafy-z

aslafy-z commented Dec 9, 2021

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 9, 2021
@pacoxu
Member Author

pacoxu commented Jan 12, 2022

See kubernetes/kubernetes#92287 (comment) for related support update in cri-o and containerd.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 12, 2022
@aslafy-z

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 12, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 11, 2022
@pacoxu
Member Author

pacoxu commented Jul 15, 2022

/remove-lifecycle stale
See current status in kubernetes/kubernetes#92287 (comment)

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 15, 2022
@kikisdeliveryservice
Member

@pacoxu this PR is using an old version of the KEP template, please update and also open an issue which is also required for all KEPs :)

@pacoxu pacoxu marked this pull request as draft September 16, 2022 02:40
@k8s-ci-robot k8s-ci-robot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Sep 16, 2022
@pacoxu
Member Author

pacoxu commented Sep 16, 2022

@pacoxu this PR is using an old version of the KEP template, please update and also open an issue which is also required for all KEPs :)

/close
There is a new solution in kubernetes/kubernetes#92287 (comment).

I opened #3520 for future tracking of this feature design.

@k8s-ci-robot
Contributor

@pacoxu: Closed this PR.

Details

In response to this:

@pacoxu this PR is using an old version of the KEP template, please update and also open an issue which is also required for all KEPs :)

/close
There is a new solution in kubernetes/kubernetes#92287 (comment).

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@pacoxu
Member Author

pacoxu commented Sep 21, 2022

See the new proposal in #3004.


Labels

cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. kind/feature Categorizes issue or PR as related to a new feature. kind/kep Categorizes KEP tracking issues and PRs modifying the KEP directory needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. sig/architecture Categorizes an issue or PR as relevant to SIG Architecture. sig/storage Categorizes an issue or PR as relevant to SIG Storage. size/L Denotes a PR that changes 100-499 lines, ignoring generated files.


Development

Successfully merging this pull request may close these issues.

iops limit for pod/pvc/pv