Description
I apologize if I'm raising this bug in the wrong place. There are many layers to this issue, but the hack I found to fix it is in containerd:
I have an armv8 AWS Graviton instance that I use to build armhf containers. I've created a 32-bit armhf LXC container that runs Docker. Things worked fine when using classic docker build commands. However, I've recently been trying out BuildKit to take advantage of its ability to pass secrets to the container's build context. I'm now building containers with a command like DOCKER_BUILDKIT=1 docker build ....
This works fine on amd64 and aarch64 servers. However, it appears both the docker CLI and Engine get tripped up in my 64-bit kernel / 32-bit user-space setup. The problem seems to be in getCPUVariant, which reads /proc/cpuinfo and returns "v8" (the 64-bit kernel's value) even though user space is 32-bit. This messes everything up, as linux/arm/v8 is not a valid platform combination.
Describe the results you received:
The error message is failed to solve with frontend dockerfile.v0: failed to solve with frontend gateway.v0: rpc error: code = Unknown desc = failed to load LLB: runtime execution on platform linux/arm/v7 not supported
Describe the results you expected:
I don't know how you could handle this correctly. I'm curious how it even works under the classic docker build.
Output of containerd --version:
containerd github.com/containerd/containerd v1.2.10 b34a5c8af56e510852c35414db4c1f4fa6172339
Any other relevant information:
I've added this hack/patch into my setup to work around the issue:
 func getCPUVariant() string {
+	variant := os.Getenv("GOARCH_VARIANT")
+	if len(variant) > 0 {
+		log.L.Debugf("Overriding CPU Variant as: %s", variant)
+		return variant
+	}
+
 	if runtime.GOOS == "windows" {
 		// Windows only supports v7 for ARM32 and v8 for ARM64 and so we can use
 		// runtime.GOARCH to determine the variants
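With the patch above applied, the override could be used like this (GOARCH_VARIANT is the variable name chosen by the patch, not an upstream containerd setting):

```shell
# Force the detected CPU variant to v7 for this containerd process.
export GOARCH_VARIANT=v7
# containerd   # with the patch, logs "Overriding CPU Variant as: v7"
echo "$GOARCH_VARIANT"
```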
Maybe there's a more sensible way to work around this?