
WIP: Design of local storage dynamic provisioning #1914

Closed
wants to merge 8 commits into from

Conversation


@lichuqiang lichuqiang commented Mar 9, 2018

Still in progress, more details need to be added
The change relies on #1857

@msau42 @NicolasT

@k8s-ci-robot k8s-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Mar 9, 2018
@k8s-ci-robot k8s-ci-robot requested review from childsb and saad-ali March 9, 2018 02:16
@k8s-github-robot k8s-github-robot added kind/design Categorizes issue or PR as related to design. sig/storage Categorizes an issue or PR as relevant to SIG Storage. labels Mar 9, 2018
@ianchakeres
Contributor

FYI: @dhirajh

which depends on the newly introduced changes in [Volume Topology-aware Scheduling](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/volume-topology-scheduling.md),
that is:

1. When feature `DynamicProvisioningScheduling` enabled, scheduler would verify a pod's volume requirement,
Contributor

Reword:

When the feature gate DynamicProvisioningScheduling is enabled, the scheduler will verify a pod's volume requirements,

that is:

1. When feature `DynamicProvisioningScheduling` enabled, scheduler would verify a pod's volume requirement,
and set `annScheduledNode` on PVCs that need provisioning.
Contributor

/and set/and set an annotation/


1. When feature `DynamicProvisioningScheduling` enabled, scheduler would verify a pod's volume requirement,
and set `annScheduledNode` on PVCs that need provisioning.
2. New fields would be added in `StorageClass` to allow storage providers to report
Contributor

What new fields need to be added?

2. New fields would be added in `StorageClass` to allow storage providers to report
capacity limits per topology domain.
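As a purely hypothetical illustration of what such StorageClass fields could look like (nothing below is defined by the proposal; the `capacities` field and the provisioner name are made up for the sketch):

```
# Hypothetical sketch only: the proposal has not defined these fields.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-slow
provisioner: kubernetes.io/local-dynamic   # placeholder provisioner name
# Hypothetical extension: capacity reported per topology domain (here, per node).
capacities:
  - topology:
      kubernetes.io/hostname: node-1
    available: 100Gi
  - topology:
      kubernetes.io/hostname: node-2
    available: 250Gi
```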

The local volume provisioner need to be updated in several aspects to support the scenario.
Contributor

nit: /need/needs/
nit: /aspects/areas/
nit: /the scenario/dynamic provisioning/


### Phase 3: 1.11 alpha

#### Dynamic provisioning
Contributor

@ianchakeres ianchakeres Mar 9, 2018

Please capitalize all words in headings ###, ####, and #####.

- "/dev/sdb"
- "/dev/sdc"
```
For LVM, its items are disk partitions, and for other mechanisms, they could be other things.
Contributor

For example for an LVM StorageManager, the list of storageSource could contain disk partitions. For other StorageManagers, the list could contain other storage sources (e.g. unpartitioned disks).

- "/dev/sdc"
```
For LVM, its items are disk partitions, and for other mechanisms, they could be other things.
StorageManager should know the scope and what to do with the input.
Contributor

/StorageManager/Each StorageManager/

* As provisioners running on all nodes share the same configuration ConfigMap,
storage sources on their nodes are required to be configured in the same way.
Take LVM as an example: for the ConfigMap above, disks used for provisioning are required
to be partitioned as "/dev/sdb" and "/dev/sdc".
Contributor

I think this situation could be handled via multiple means.

Cluster operators could leverage multiple daemonset manifests and various selectors. For example, if you had two different types of hardware with a different number of partitions and different partition locations, then you could label the nodes accordingly and deploy two daemonsets, where each daemonset config was tailored toward the appropriately labelled nodes.

Alternatively, another earlier process in the node's lifecycle (e.g. nodeprep - https://engineering.salesforce.com/provisioning-kubernetes-local-persistent-volumes-61a82d1d06b0) could perform operations before the lv dynamic provisioner is used, and the node preparation could make nodes fit the common configuration via symlinks or bindmounts.

Member

Agree, we should support the use case of a different number of disks, and different paths to them, on every node, along with disks being dynamically added and removed. Maybe another discovery directory or local file is needed.

Contributor

Supporting wildcards or regexes in the listed storageSources could provide some additional flexibility to support multiple nodes and their individual configurations.

Author

Really appreciate your suggestions, guys; I will look into that.

hostDir: "/mnt/hdds"
local_dynamic:
storageSource:
- "/dev/sdb"
Contributor

I recommend that people reference bind-mounted filesystems by their by-uuid paths and raw block device symlinks by their by-id paths, and that we do something similar in our examples. https://wiki.archlinux.org/index.php/persistent_block_device_naming

This scheme helps prevent problems that can occur when volumes are accessed via names that are assumed to be static while the underlying devices can move.
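For illustration, a sketch of the ConfigMap entry above rewritten to use persistent /dev/disk/by-id paths (the device IDs and indentation are assumptions, not part of the proposal):

```
# Sketch: same structure as the example above, but with persistent by-id paths.
local-slow:
  hostDir: "/mnt/hdds"
  local_dynamic:
    storageSource:
      - "/dev/disk/by-id/ata-EXAMPLEDISK_SERIAL1"   # placeholder device ID
      - "/dev/disk/by-id/ata-EXAMPLEDISK_SERIAL2"   # placeholder device ID
```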

Member

Carefully consider the case where a disk might be added to the system and on reboot, the ordering in /dev changes. Would be good to add a workflow detailing adding/removing capacity, and how to replace a failed disk.

Member

Another use case to consider is configuring raid underneath. We should decide where to draw the line for the provisioner. Should it:

  • manage raid? The lvm input could just be the raid device path instead of the physical disks
  • manage lvm? The storage source could just take the vg name


##### Backend storage management

Mechanisms to manage local storage sources are varied. They could be LVM, Ceph, or something like that.
Member

For Ceph, why can't the Ceph plugin be used?

##### Change in populator

In addition to PVs, the `populator` will also be extended to watch PVC objects of all namespaces.
When it finds that a PVC is newly annotated with `annScheduledNode`, and its node name matches the value
Member

It also needs to watch for the provisioner name to match itself.

Author

Oh, you are right. At the beginning I thought the provisioner annotation was set by the provisioner itself, and I forgot to correct this later when I realized that it's actually set by the controller.
The permissions below are there for the same reason.
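To make the matching concrete, here is a sketch of a PVC that a node-local provisioner would pick up. The annotation keys shown follow later upstream conventions; the design text refers to them as the provisioner annotation and `annScheduledNode`, and the provisioner name is a placeholder:

```
# Sketch: a PVC the provisioner on node-1 would act on.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-0
  annotations:
    # Set by the controller; must match this provisioner's name (placeholder shown).
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/local-dynamic
    # Set by the scheduler; must match the node this provisioner runs on.
    volume.kubernetes.io/selected-node: node-1
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-slow
  resources:
    requests:
      storage: 10Gi
```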


For this purpose, the following permissions are required:

* update/list/get PersistentVolumeClaims
Member

Why does it need to update PVC?


2. Check if the physical volume exists in volume group exStorageGroup; if not, add it:
```
vgextend exStorageGroup /dev/new
```
Member

Who creates/configures the vg?

Author

@lichuqiang lichuqiang Mar 12, 2018

Still the LVM storageManager: when a new source disk comes, it should check whether the related VG exists, and create one first if not.
The description here is indeed not detailed enough; I will update it.

hostDir: "/mnt/hdds"
local_dynamic:
storageSource:
- "/dev/sdb"
Copy link
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Carefully consider the case where a disk might be added to the system and on reboot, the ordering in /dev changes. Would be good to add a workflow detailing adding/removing capacity, and how to replace a failed disk.

* As provisioners running on all nodes shall a same configuration ConfigMap,
storage sources of their nodes are required to be configured in the same way.
Take LVM as an example, for configmap above, disks used for provisioning are required
to be partitioned as "/dev/sdb" and "/dev/sdc".
Copy link
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Agree, we should support the use case of different number and path to disks on every node, along with disks being dynamically added and removed. Maybe another discovery directory or local file is needed.

hostDir: "/mnt/hdds"
local_dynamic:
storageSource:
- "/dev/sdb"
Copy link
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Another use case to consider is configuring raid underneath. We should decide where to draw the line for the provisioner. Should it:

  • manage raid? The lvm input could just be the raid device path instead of the physical disks
  • manage lvm? The storage source could just take the vg name

local-slow:
hostDir: "/mnt/hdds"
local_dynamic:
storageSource:
Contributor

Should storageSource be storageSources, since there is a list below it?

@lichuqiang
Author

@ianchakeres Comments on wording addressed, thanks for your patience.

@lichuqiang
Author

Another use case to consider is configuring raid underneath. We should decide where to draw the line for the provisioner. Should it:

  • manage raid? The lvm input could just be the raid device path instead of the physical disks
  • manage lvm? The storage source could just take the vg name

@msau42 I hope to manage RAID, e.g. pvcreate /dev/md0 and add it to an existing VG.
This way the behavior could be the same, no matter whether the path passed in is a physical disk or a RAID device.
But as I don't have enough knowledge of RAID, could you help clarify whether it is possible to mix physical disks and RAID devices in a VG?

@lichuqiang
Author

lichuqiang commented Mar 12, 2018

Carefully consider the case where a disk might be added to the system and on reboot, the ordering in /dev changes. Would be good to add a workflow detailing adding/removing capacity, and how to replace a failed disk.

Yes, failure recovery is an issue that we haven't considered well in our internal release at HUAWEI.
For example, if we have three disks /dev/sda, /dev/sdb and /dev/sdc in the VG, and /dev/sdc fails, what do you think users would like to see?
Update its capacity to 0 to inform the admin for recovery, or just remove the failed device from the VG and leave the remaining two disks for provisioning?


##### Backend storage management

Mechanisms to manage local storage sources are varied. They could be LVM, Ceph, or something like that.


Why would Ceph be 'local storage'?

`storageManager` is a general interface; each kind of storage should have its own implementation.
During start-up, a new environment variable `STORAGE_BACKEND` is set in the provisioner DaemonSet,
to determine which StorageManager instance to start.
(TBD: Should we support the scenario of multiple backends on a node?)
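A minimal sketch of how `STORAGE_BACKEND` could be wired into the provisioner DaemonSet (the image name and labels are placeholders; only the environment variable comes from the text above):

```
# Sketch: selecting the StorageManager implementation via STORAGE_BACKEND.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: local-volume-provisioner
spec:
  selector:
    matchLabels:
      app: local-volume-provisioner
  template:
    metadata:
      labels:
        app: local-volume-provisioner
    spec:
      containers:
        - name: provisioner
          image: local-volume-provisioner:dev   # placeholder image
          env:
            - name: STORAGE_BACKEND
              value: "lvm"   # which StorageManager implementation to start
```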


Wouldn't it be much simpler if a cluster operator just deploys multiple DaemonSets, each for a different STORAGE_BACKEND?

Author

Yes, I didn't describe it clearly; I meant supporting multiple backends in the same plugin, which seems unnecessary :)

Take LVM as an example: when a new source disk "/dev/new" is added to its StorageClass `exStorage`,
the StorageManager should:

1. Check if the related PV (physical volume in LVM) exists; if not, create one with the command:


IMHO it's problematic if the provisioner is responsible for this. As mentioned below, disk naming etc could be problematic. Instead, I believe storage pool provisioning is a responsibility of the cluster operator (using e.g. Ansible or whatever), only volume provisioning should be performed by the Kubernetes infrastructure.

As such, in the LVM case: the Kubernetes provider creates and manages LVs in a given and pre-existing VG, an admin creates PVs and VGs using whichever mechanism seems suitable.

Member

Agree, there are too many ways an admin can configure and manage vgs. It will be hard to support all of them.

Member

This will also simplify the issue where all the disks on every node must be named the same.

Author

@lichuqiang lichuqiang Mar 13, 2018

Seems you guys have strong opinions on this point.

We made it this way in our internal release because we mainly use it in shared clusters, and we aimed to make things simpler for users/admins. Our use case may also be much simpler: only physical disks are used for now, and the machines are created in batches with the same specifications.

Anyway, since we have seen several defects in it as a general solution, I'll make managing VGs directly the first choice in the doc, and keep the existing approach as an alternative for more feedback.

Author

@lichuqiang lichuqiang Mar 13, 2018

@msau42 @NicolasT By the way, in that case, do you think it is necessary to support a list of VGs, or would a single VG be enough?

If we have more than one VG, we might hit some edge cases. For example: if we have two VGs on a node, VG1 and VG2, each with 10G left, then the reported capacity would be 20G, but a pod with a storage request of 15G could not actually be satisfied.


2. Check if volume group exStorageGroup exists; if not, create it:
```
vgcreate exStorageGroup /dev/new
```


See above. If the VG doesn't exist, the provisioner should bail out IMHO.

3. If the VG exists, check if the physical volume exists in volume group exStorageGroup;
if not, add it:
```
vgextend exStorageGroup /dev/new
```


Nope ;-)

local-slow:
hostDir: "/mnt/hdds"
local_dynamic:
storageSources:


I'd say only the name of a VG should be specified for things to work. Nothing more required.
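A sketch of what the ConfigMap entry could look like under this suggestion, with only a pre-created VG name as the storage source (reusing the keys and the exStorageGroup VG from the examples above; indentation assumed):

```
# Sketch: the provisioner only consumes a VG created and managed by the admin.
local-slow:
  hostDir: "/mnt/hdds"
  local_dynamic:
    storageSources:
      - "exStorageGroup"   # LVM volume group name; PVs/VG are managed outside the provisioner
```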

vgextend exStorageGroup /dev/new
```

4. Update capacity in StorageClass if there is a change in VG size.


How does this work for thinly-provisioned VGs?

Author

It becomes complicated if we take overcommit into consideration; I'd like to leave it for future releases.
@msau42 What's your opinion?

@lichuqiang
Author

lichuqiang commented Sep 19, 2018

With the existing static discovery pattern, a PV will be created once you expose volumes to the plugin;
and for the upcoming dynamic provisioning pattern, as I described above, we won't trigger provisioning if an available PV already exists.

So for your case, you can:
1. pre-create PVs across the nodes (the static discovery pattern makes sense for this);
2. ensure no unbound PVs are left in the cluster, and rely on the dynamic plugin to provision on demand.

Forget that: what I described is the traditional volume binding behavior, where the volume controller is responsible for binding.
But if you need dynamic provisioning for local PVs, you'll need to enable the volume scheduling feature.
That way, the scheduler is responsible for triggering volume binding/provisioning. And for your case, it will schedule the pod to node B and trigger volume provisioning, instead of binding the PVC to the PV on node A.
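For reference, the volume scheduling behavior described above maps to delayed binding on the StorageClass; a minimal sketch (class and provisioner names are placeholders, and at the time this also required the VolumeScheduling feature gate):

```
# Sketch: delayed binding so the scheduler picks a node before binding/provisioning.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-dynamic                       # placeholder name
provisioner: kubernetes.io/local-dynamic    # placeholder provisioner name
volumeBindingMode: WaitForFirstConsumer
```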

@msau42
Member

msau42 commented Sep 19, 2018

Hi @cpfeiffer, it sounds a little bit like you don't need the persistence part of the feature, i.e. you want to be sticky to the data.

If all you need is ephemeral local storage, then you can use emptyDir.

@kfox1111

Hmm... actually, thinking about it a little bit: while emptyDir seems to be a good fit for @cpfeiffer's stated issue, I think there is one case it doesn't fit. The dynamic provisioning mentioned here can allocate local volumes as needed in order to not over-allocate host storage. emptyDir currently doesn't guarantee an amount of storage. Maybe emptyDir should be reimplemented on top of local dynamic storage, so that it can reserve an amount of storage for its lifecycle? Or alternatively, maybe a second, sizedEmptyDir (localEmptyDir?) driver is added that does this instead.

@msau42
Member

msau42 commented Sep 19, 2018

You can request a limit on emptyDir size now, and I believe we do soft eviction if you exceed it. I think there are plans to support project quotas for hard enforcement too.

The other use case that @cpfeiffer mentioned was to avoid having to load data into the volume every time, but still have the flexibility to schedule on different nodes. In that case, hostPath may be the simplest solution here.
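For reference, the emptyDir size limit mentioned above is expressed with the `sizeLimit` field; a minimal pod sketch (image and names are placeholders):

```
# Sketch: an emptyDir volume capped at 1Gi; exceeding it can trigger eviction.
apiVersion: v1
kind: Pod
metadata:
  name: scratch-pod
spec:
  containers:
    - name: app
      image: busybox            # placeholder image
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: scratch
          mountPath: /scratch
  volumes:
    - name: scratch
      emptyDir:
        sizeLimit: 1Gi
```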

@kfox1111

ah, I didn't realize it did that yet. thanks for the info. :)

@cpfeiffer

@msau42 emptyDir doesn't fit us because we want to keep the contents for longer than just a single run.

And indeed hostpath is the best we figured out so far. I just thought that dynamic provisioning might save us from manually provisioning the host volumes.

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
To fully approve this pull request, please assign additional approvers.
We suggest the following additional approver: saad-ali

If they are not already assigned, you can assign the PR to them by writing /assign @saad-ali in a comment when ready.

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added size/M Denotes a PR that changes 30-99 lines, ignoring generated files. and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Oct 13, 2018
@k8s-ci-robot
Contributor

@lichuqiang: The following test failed, say /retest to rerun them all:

| Test name | Commit | Details | Rerun command |
| --- | --- | --- | --- |
| pull-community-verify | 4ff4283 | link | /test pull-community-verify |

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@lichuqiang lichuqiang changed the title Design of local storage dynamic provisioning WIP: Design of local storage dynamic provisioning Oct 13, 2018
@k8s-ci-robot k8s-ci-robot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Oct 13, 2018
@lichuqiang
Author

Updated to list the thoughts in flight, to avoid readers being misled by the old information. It's far from a complete design, but more of a set of rough thoughts on which we hope to receive feedback from readers.


##### CSI Driver Details

Different from other storage types, we'll need to deploy a CSI provisioner on each of the nodes. We'll rely on the volumeScheduling feature as a dependency. For local storage, the provisioner is only responsible for PVCs marked with the related plugin name and the selected-node annotation. That is, it only provisions the PVCs that are "scheduled" to its node.

Will all these provisioners list/watch all PV/PVCs? If so, maybe we need to take care of the pressure on apiserver/etcd.

Author

The existing static discovery plugin also behaves like this, so I suppose this is acceptable.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 24, 2019
@KlavsKlavsen

How's this progressing? It sure would be nice to have automatic LVM provisioning for local storage :)

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 23, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closed this PR.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Labels
cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. kind/design Categorizes issue or PR as related to design. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. sig/storage Categorizes an issue or PR as relevant to SIG Storage. size/M Denotes a PR that changes 30-99 lines, ignoring generated files.