Expected behavior
I would expect cache volumes to be working during image builds.
Actual behavior
With the following RUN instruction in my Dockerfile:
RUN --mount=id=root_.cache,type=cache,target=/root/.cache aws codeartifact login --tool pip --repository python --domain s1-packages --domain-owner REDACTED && pip3 install -r /tmp/requirements3.txt && rm -f ~/.config/pip/pip.conf
the image build fails with:
ERROR: error committing fz55z5fyasu7npprdw7t4cfwz: invalid mutable ref 0xc0044e2a00: invalid: executor failed running [/bin/sh -c aws codeartifact login --tool pip --repository python --domain s1-packages --domain-owner REDACTED && pip3 install -r /tmp/requirements3.txt && rm -f ~/.config/pip/pip.conf]: stat /var/lib/docker/overlay2/fz55z5fyasu7npprdw7t4cfwz: no such file or directory
If I add sharing=private to the cache mount, the error disappears.
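For reference, a minimal sketch of the workaround. This is a stripped-down stand-in for my actual Dockerfile (the CodeArtifact login is omitted and the base image and paths are placeholders), showing only where sharing=private goes on the cache mount:

```dockerfile
# syntax=docker/dockerfile:1
FROM python:3.9-slim

COPY requirements3.txt /tmp/requirements3.txt

# With sharing=private, each build step gets its own instance of the
# cache instead of all steps sharing one mutable ref. In my case this
# makes the "invalid mutable ref" error disappear.
RUN --mount=id=root_.cache,type=cache,sharing=private,target=/root/.cache \
    pip3 install -r /tmp/requirements3.txt
```

The default sharing mode is shared, which allows concurrent writers to the same cache; locked serializes them; private gives each writer its own copy.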
Steps to reproduce the behavior
I suspect that placing the above into a Dockerfile on its own may not trigger this, so some other conditions must be met.
I started seeing this after upgrading to the current Docker version (previously 19.x). I first observed it with the aufs storage driver, but switched to overlay2 before reporting.
Output of docker version:
Client: Docker Engine - Community
Version: 20.10.2
API version: 1.41
Go version: go1.13.15
Git commit: 2291f61
Built: Mon Dec 28 16:17:32 2020
OS/Arch: linux/amd64
Context: default
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 20.10.2
API version: 1.41 (minimum version 1.12)
Go version: go1.13.15
Git commit: 8891c58
Built: Mon Dec 28 16:15:09 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.4.3
GitCommit: 269548fa27e0089a8b8278fc4fc781d7f65a939b
runc:
Version: 1.0.0-rc92
GitCommit: ff819c7e9184c13b7c2607fe6c30ae19403a7aff
docker-init:
Version: 0.19.0
GitCommit: de40ad0
Output of docker info:
Client:
Context: default
Debug Mode: false
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
buildx: Build with BuildKit (Docker Inc., v0.5.1-docker)
Server:
Containers: 13
Running: 12
Paused: 0
Stopped: 1
Images: 18
Server Version: 20.10.2
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 1
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 269548fa27e0089a8b8278fc4fc781d7f65a939b
runc version: ff819c7e9184c13b7c2607fe6c30ae19403a7aff
init version: de40ad0
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.15.0-135-generic
Operating System: Ubuntu 18.04.5 LTS
OSType: linux
Architecture: x86_64
CPUs: 16
Total Memory: 61.47GiB
Name: ip-10-150-14-172
ID: ZJFE:DJF7:DUGP:WOGK:A66E:IVQS:X6HB:CKP4:SBRG:VROU:ZIBO:GWXF
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No swap limit support
Additional environment details (AWS, VirtualBox, physical, etc.)
AWS EC2 node