This issue is part of splitting #1185 for tracking purposes.
See #1356 for some context on why LCOW filesystems cannot be expanded to disk the way a traditional Linux container's filesystem can.
This issue is to discuss how to handle operations such as `ADD` in the builder, `docker cp`, etc. Anything that needs to access the filesystem of the container will need some coordination with containerd.
As I currently see it, there are two options here.
The first option is for containerd to expose a new service, such as a FilesystemService, with methods such as `open`, `stat`, etc., which, in the case of LCOW, are proxied to a remote continuity driver (https://github.com/containerd/continuity/blob/master/driver/driver.go). This puts more load on containerd and adds more code to manage, but it simplifies client interactions with active layers: clients do not need to deal with service VM management, or treat `Mounts[]` as anything but a transparent reference to the layer that containerd understands.
The second option is to not allow clients of containerd to access the filesystem of the container at all. This means that all operations on the container filesystem layer must be done via a container. This simplifies the containerd changes, since the new container is no different from existing containers, but it complicates operations on a running container (should the same active snapshot be able to have multiple containers accessing it? What does that mean for Windows Hyper-V and LCOW containers?)
Last I checked, https://github.com/moby/buildkit currently expects the Mounts returned by containerd to expand to disk. There has been some discussion about mixed-platform workers in moby/buildkit#41 related to LCOW, but I haven't followed buildkit much, so I need to catch up on context here.
The current PR for similar functionality in moby/moby is moby/moby#34252. It creates a `remoteFS` interface in the graphdriver which proxies calls to the service VM. This is most similar to option 1 above.
/cc @stevvooe @gupta-ak