
Proxy graph driver and proxydaemon #15594

Closed
mpatlasov wants to merge 2 commits into moby:master from
mpatlasov:proxy-graph-driver-and-proxydaemon

Conversation

@mpatlasov

proxy - a storage backend transparently redirecting all requests to another backend

Rationale

It would be nice to run docker inside a container, but a docker graphdriver
sometimes needs access to critical system-wide resources. For example,
devicemapper needs access to /dev/loop and /dev/mapper/control, and it wants
to format and mount ext4 filesystems over devicemapper devices.

Also, the container's superuser may mistakenly or maliciously modify the
content of the raw devices that the inner filesystem and dm-thin target work on.
Such modifications may wreak havoc on the whole host system and cannot be easily
isolated as per-container errors.

Solution

Run a docker daemon in proxy mode on the host system (outside the container). It
listens on a unix socket (accessible from inside the container) for commands from
a docker daemon running inside the container. This command set is exactly the docker
graphdriver API wrapped in Go's net/rpc.

Then run a docker daemon on top of the proxy graphdriver inside the container. The
proxy graphdriver gets the address of the remote server (i.e. the host docker daemon
in proxy mode) via the "--storage-opt" options interface and connects to the server
on startup. Then it transparently passes all incoming requests (in the form of
graphdriver API calls) to the docker daemon in proxy mode on the host system.

The latter processes the incoming Init request intelligently while passing all
other requests transparently to the actual graphdriver.

Such an approach factors out potentially dangerous operations to the trusted
host environment (assuming that the container superuser cannot modify binaries on
the host system) and also allows keeping per-container docker graphdriver files in
a protected area dedicated to the container.

Step-by-step example

The example assumes that the docker daemon is already running on the host
system and a docker development container was started by:

docker run --privileged --rm --name devcon -ti -v /root/repos/docker-fork:/go/src/github.com/docker/docker devcon-image /bin/bash

Then, on the host system:

docker proxydaemon -R /protected -C devcon -S unix:///root/repos/docker-fork/sock

The proxydaemon command starts the daemon in proxy mode.

Here -R /protected specifies the prefix to be added to the Init request. So, if
the container user requested /var/lib/docker as the home of docker files, the
host system will actually keep them in /protected/var/lib/docker.

-C devcon specifies the name of the container (devcon) that the
given instance of the proxy daemon works for.

-S unix:///root/repos/docker-fork/sock specifies the path
to the unix socket for daemon-to-daemon communication. It must be visible
from inside the container.

Secondly, inside container:

docker daemon -s proxy --storage-opt graphdriver=devicemapper --storage-opt proxyserver=unix:///go/src/github.com/docker/docker/sock

Here -s proxy forces use of the proxy graphdriver. --storage-opt passes the next
argument as an option for the graphdriver. graphdriver=devicemapper specifies the
graphdriver to use on the remote side (on the host system), and
proxyserver=unix:///go/src/github.com/docker/docker/sock specifies the path
to the communication unix socket. Of course, this must exactly match the
unix-socket path set for the host docker daemon in proxy mode.

From now on, a person can run the full variety of docker run/pull/etc. inside the
devcon container. The communication path to the host graphdriver must work
transparently for the person. In other words, the presence of the proxy server
must be invisible to the docker client inside the container.

The patch implements a "--syscall" option for the "docker exec" command.
The flag modifies "docker exec" behavior to issue a syscall in the
container instead of executing a command. Currently, only three
syscalls are supported: mount, umount, and mkdir:

$ docker exec --syscall <container> mount -t <type> <device> <mount_point>
$ docker exec --syscall <container> umount <mount_point>
$ docker exec --syscall <container> mkdir <path>

They are useful when we do not trust the "mount"/"umount"/"mkdir" binaries
residing inside the container.

Signed-off-by: Maxim Patlasov <[email protected]>
It would be nice to run docker inside a container, but a docker graphdriver
sometimes needs access to critical system-wide resources. It's dangerous
to grant that access to a container if we do not fully trust it. The patch
implements a proxy moving those potentially dangerous operations from the
container to the host system.

See README.md in daemon/graphdriver/proxy/ for details.

Signed-off-by: Maxim Patlasov <[email protected]>
@mpatlasov mpatlasov force-pushed the proxy-graph-driver-and-proxydaemon branch from 49152fb to d4f8240 on August 14, 2015 21:08
@cpuguy83
Member

Please see #13777 instead.

@mpatlasov
Author

Please see #13777 instead.

#13777 is to create out-of-process graphdrivers and run them in the same namespace as the proxy graphdriver (otherwise, the ret.Dir that it gets from d.client.Call("GraphDriver.Get", args, &ret) won't be valid where the client tries to use it).

#15594 - in contrast - is to run existing graphdrivers in a separate (e.g. trusted) namespace. For example, #15594 lets you run the devicemapper graphdriver on the host system on behalf of a docker daemon running inside a container.

Unfortunately, both use the same name and collide at least on daemon/graphdriver/proxy.go. Any ideas on how to handle this properly?

@cpuguy83
Member

I think we should focus on running out-of-process graphdrivers, and this use-case can be resolved.
This can work just fine using shared or slave mounts (i.e. ret.Dir will work fine).

@mpatlasov
Author

I think we should focus on running out-of-process graphdrivers,

That makes sense to me. I posted a comment on #13777 - let's continue the discussion there.

@cpuguy83
Member

Ok, thank you! I'll close this for now.

@cpuguy83 cpuguy83 closed this Aug 18, 2015