container hostConfig not checked on container start #15159

Closed · runcom opened this issue Jul 30, 2015 · 5 comments
runcom commented Jul 30, 2015

On container start, specifically in daemon/start.go, only the hostConfig provided to the API request is checked.
If you've previously created a container with a specific hostConfig and then change the system, something can break.
I hit this by unmounting a cgroup (memory, in my case), but I think any other hostConfig field could have the same problem.
Steps to reproduce on Ubuntu 15.04:

$ docker run -d --memory-swappiness=0 debian:jessie true  # the memory cgroup is mounted
ba61c3bbe0198643994956aca074976e157ef2521425ac527b1a0b6016e61851
$ sudo umount /sys/fs/cgroup/memory  # unmount the memory cgroup
$ docker start ba61c3bbe0198643994956aca074976e157ef2521425ac527b1a0b6016e61851
Error response from daemon: Cannot start container ba61c3bbe0198643994956aca074976e157ef2521425ac527b1a0b6016e61851: [8] System error: no such directory for memory.swappiness.
Error: failed to start containers: [ba61c3bbe0198643994956aca074976e157ef2521425ac527b1a0b6016e61851]

This happens because the start API only checks the hostConfig provided in the request, not the container's actual stored hostConfig (see verifyContainerSettings).
Be aware also that verifyContainerSettings will, for instance, set MemorySwappiness to nil if it detects that the system doesn't have the memory cgroup enabled.
Should we check the hostConfig before starting the container (showing a nicer error and setting MemorySwappiness in the hostConfig to nil), or is this libcontainer error fine to show if you unmount cgroups while the daemon is running? I have a fix to also check the container's hostConfig on start.

EDIT: Digging deeper, daemon.sysInfo is only populated on daemon start, so even calling verifyContainerSettings in container start still produces the error above.
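
For reference, this is the shape of the fix I have in mind in daemon/start.go: fall back to the container's stored hostConfig when the request doesn't carry one, then validate it. A sketch only; verifyContainerSettings' signature is from memory, and per the EDIT above it would still need fresh sysinfo to actually help:

// daemon/start.go (sketch, not the actual patch)
if hostConfig == nil {
    // Start request carried no hostConfig: validate the one the
    // container was created with instead of skipping validation.
    hostConfig = container.hostConfig
}
if _, err := daemon.verifyContainerSettings(hostConfig); err != nil {
    return err
}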

ping @LK4D4 (I've seen that libcontainer ignores the cgroup-path-not-found error in apply_systemd, then tries to write the file, and errors out as above; wondering if that should be fixed as well)
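
To make the libcontainer part concrete, here's a simplified, self-contained illustration of that pattern (helper names are made up; only the shape matches libcontainer): the subsystem path lookup silently yields an empty directory when the memory cgroup isn't mounted, and the later file write is what finally errors out:

package main

import (
    "fmt"
    "io/ioutil"
    "path/filepath"
)

// setSwappiness mimics the shape of libcontainer's write helper.
// dir arrives empty when the memory cgroup isn't mounted, because
// the earlier path lookup ignored the not-found error.
func setSwappiness(dir, value string) error {
    if dir == "" {
        return fmt.Errorf("no such directory for %s.", "memory.swappiness")
    }
    return ioutil.WriteFile(filepath.Join(dir, "memory.swappiness"), []byte(value), 0700)
}

func main() {
    // With the memory cgroup unmounted, the lookup yields "":
    fmt.Println(setSwappiness("", "0")) // no such directory for memory.swappiness.
}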


hqhq commented Aug 3, 2015

@runcom I can't reproduce this on the latest master branch; I tried Ubuntu 14.04 using upstart and RHEL 7 using systemd. What Docker version are you using?


runcom commented Aug 3, 2015

@hqhq master branch on Ubuntu 15.04 :( I'll try on Fedora 22 later today.


hqhq commented Aug 3, 2015

@runcom It's weird that I can't reproduce it, but for the libcontainer part you mentioned, the rule is:

  • cgroup subsystem not mounted and no related options in use: the error is ignored
  • cgroup subsystem not mounted but related values assigned: it errors out

So if the memory cgroup was not mounted and --memory-swappiness was assigned, that error message is expected.
I think the fix should be on the Docker side; maybe daemon.sysinfo should be updated more often?
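
Something like this, as a minimal sketch: re-read system info at validation time instead of trusting the snapshot taken at daemon startup (the call site is hypothetical; pkg/sysinfo is the package the daemon already uses):

package main

import (
    "fmt"

    "github.com/docker/docker/pkg/sysinfo"
)

func main() {
    // Re-scan cgroup mounts now, rather than reusing the daemon.sysInfo
    // populated at startup; container start would validate against this.
    si := sysinfo.New(false) // quiet=false logs any missing features
    fmt.Println("memory limit support:", si.MemoryLimit)
}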


runcom commented Aug 3, 2015

@hqhq indeed, Docker is fixed (I saw your patch), but sysinfo is not updated while the daemon is running, so if you unmount the cgroup while it's running, you get that error whenever a container was created with cgroup constraints and you try to start it.

I think the problem here also lies in having hostConfig in start :/


runcom commented Aug 3, 2015

Working on a fix; I'll send the PR later today.
