Description
Given an image that has a label
$ docker image inspect bad:latest --format '{{.Config.Labels}}'
map[foo:bar]
I would expect a filter that looks for the presence of that label to show the image (which it does)
$ docker image ls --filter label=foo
REPOSITORY TAG IMAGE ID CREATED SIZE
bad latest b8b5f859b19a 10 months ago 6.56MB
and would expect it to be omitted when negating that filter (but it remains present)
$ docker image ls --filter label!=foo
REPOSITORY TAG IMAGE ID CREATED SIZE
<snip>
bad latest b8b5f859b19a 10 months ago 6.56MB
This incorrect behavior doesn't happen for all images; the image's 'source' (how it was built or loaded) appears to be the determining factor.
Reproduce
These parent images are all different to minimize the chance of storage optimizations being a factor (though I'm fairly certain they aren't). None of them have labels themselves. Even though 'maintainer' is no longer special (I think?), I only used it because nginx was the first of my lightweight/go-to images from the registry that had labels of any kind; the label name itself is not relevant.
Baseline; the two images have running containers but neither has the labels being used in this reproduction:
$ docker image ls -a
REPOSITORY TAG IMAGE ID CREATED SIZE
moby/buildkit buildx-stable-1 832fa7aa1eb3 2 months ago 318MB
registry 3 1fc7de654f2a 4 months ago 77.9MB
$ docker image inspect moby/buildkit:buildx-stable-1 registry:3 --format '{{.Config.Labels}}'
map[]
map[]
Example image which exhibits the problem. The first two images in the negative list are supposed to be there, but not the third; an image can't/shouldn't be in both lists at the same time.
$ echo -e 'FROM busybox\nLABEL maintainer=me' | docker buildx build -q -t bk-default -f- --builder default .
sha256:8e57045f98e39d6c8ebb24ba2c5cabe36342f119694f6a019a109b15ef7f2ff7
$ docker image inspect bk-default:latest --format '{{.Config.Labels}}'
map[maintainer:me]
$ docker image ls --filter label!=maintainer
REPOSITORY TAG IMAGE ID CREATED SIZE
moby/buildkit buildx-stable-1 832fa7aa1eb3 2 months ago 318MB
registry 3 1fc7de654f2a 4 months ago 77.9MB
bk-default latest 8e57045f98e3 10 months ago 6.56MB
$ docker image ls --filter label=maintainer
REPOSITORY TAG IMAGE ID CREATED SIZE
bk-default latest 8e57045f98e3 10 months ago 6.56MB
However, images created by the docker-container builder, by the classic builder, pulled from a registry, or loaded from a tarball (produced by either buildkit builder) work correctly:
$ docker buildx create --name temp --driver docker-container --bootstrap
<snip>
temp
$ echo -e 'FROM debian\nLABEL maintainer=me' | docker buildx build -q -t bk-cont -f- --builder temp --load .
sha256:dda9bc6f17450bd86345856e458c2713784ee5b3d86e8b31bc59aa3739eac5af
$ echo -e 'FROM alpine\nLABEL maintainer=me' | DOCKER_BUILDKIT=0 docker build -q -t classic -f- .
<snip>
sha256:8867a992af6daa62bf4b8c7560d7b42ac95d680e5b85ea00c060381cd1f849ef
$ docker pull -q nginx:alpine
docker.io/library/nginx:alpine
$ echo -e 'FROM busybox\nLABEL maintainer=you' | docker buildx build -q -f- --builder default --output type=docker,name=bk-def-loaded,dest=bk-def.tar .
sha256:c5e71e63804b9381c07d0a088ee5e08b598c6917055d3f442d3ade0233c88a51
$ echo -e 'FROM busybox\nLABEL maintainer=you' | docker buildx build -q -f- --builder temp --output type=docker,name=bk-cont-loaded,dest=bk-cont.tar .
sha256:c5e71e63804b9381c07d0a088ee5e08b598c6917055d3f442d3ade0233c88a51
$ docker image load -q -i bk-def.tar
Loaded image: bk-def-loaded:latest
$ docker image load -q -i bk-cont.tar
Loaded image: bk-cont-loaded:latest
Those five variants are all correctly omitted; the only invalid entry is the image produced by the buildkit default driver:
$ docker image ls --filter label!=maintainer
REPOSITORY TAG IMAGE ID CREATED SIZE
alpine latest 4bcff63911fc 2 weeks ago 12.8MB
moby/buildkit buildx-stable-1 832fa7aa1eb3 2 months ago 318MB
registry 3 1fc7de654f2a 4 months ago 77.9MB
bk-default latest 8348feaa1c43 10 months ago 6.56MB
The four other filter variants ("label=key", "label=key=val", "label=key!=val", "label!=key!=val") all seem to work as expected; since the data is clearly there, I guessed this was a docker issue rather than a buildkit issue.
Expected behavior
docker image ls --filter label=key and docker image ls --filter label!=key, assuming no other filters or errors/warnings, should never both contain the same image.
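To make that invariant concrete, here's a minimal Go sketch (mine, not part of the repro) that queries both filters through the official client and flags any overlap. It assumes `--filter label!=maintainer` reaches the API as filter key "label!" with value "maintainer" (my reading of how the CLI splits filter terms); the label name just mirrors the reproduction above.

// Minimal sketch (mine, not part of the repro): list with both filters via the Go
// client and flag any image that appears in both results.
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types/filters"
	"github.com/docker/docker/api/types/image"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}

	// list returns the IDs matching a single filter term. For `--filter label!=maintainer`
	// the CLI splits on the first '=', so the key sent to the API is "label!".
	list := func(key, value string) map[string]bool {
		summaries, err := cli.ImageList(ctx, image.ListOptions{
			Filters: filters.NewArgs(filters.Arg(key, value)),
		})
		if err != nil {
			panic(err)
		}
		ids := make(map[string]bool)
		for _, s := range summaries {
			ids[s.ID] = true
		}
		return ids
	}

	with := list("label", "maintainer")     // docker image ls --filter label=maintainer
	without := list("label!", "maintainer") // docker image ls --filter label!=maintainer

	for id := range with {
		if without[id] {
			fmt.Println("image appears in BOTH lists (bug):", id)
		}
	}
}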
docker version
Client: Docker Engine - Community
Version: 28.3.3
API version: 1.51
Go version: go1.24.5
Git commit: 980b856
Built: Fri Jul 25 11:34:09 2025
OS/Arch: linux/amd64
Context: default
Server: Docker Engine - Community
Engine:
Version: 28.3.3
API version: 1.51 (minimum version 1.24)
Go version: go1.24.5
Git commit: bea959c
Built: Fri Jul 25 11:34:09 2025
OS/Arch: linux/amd64
Experimental: true
containerd:
Version: 1.7.27
GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
runc:
Version: 1.2.5
GitCommit: v1.2.5-0-g59923ef
docker-init:
Version: 0.19.0
GitCommit: de40ad0
docker info
Client: Docker Engine - Community
Version: 28.3.3
Context: default
Debug Mode: false
Plugins:
ai: Docker AI Agent - Ask Gordon (Docker Inc.)
Version: v1.9.3
Path: /home/robertovillarreal/.docker/cli-plugins/docker-ai
buildx: Docker Buildx (Docker Inc.)
Version: v0.25.0-desktop.1
Path: /home/robertovillarreal/.docker/cli-plugins/docker-buildx
cloud: Docker Cloud (Docker Inc.)
Version: v0.4.2
Path: /home/robertovillarreal/.docker/cli-plugins/docker-cloud
compose: Docker Compose (Docker Inc.)
Version: v2.38.2-desktop.1
Path: /home/robertovillarreal/.docker/cli-plugins/docker-compose
debug: Get a shell into any image or container (Docker Inc.)
Version: 0.0.41
Path: /home/robertovillarreal/.docker/cli-plugins/docker-debug
desktop: Docker Desktop commands (Docker Inc.)
Version: v0.1.11
Path: /home/robertovillarreal/.docker/cli-plugins/docker-desktop
extension: Manages Docker extensions (Docker Inc.)
Version: v0.2.29
Path: /home/robertovillarreal/.docker/cli-plugins/docker-extension
init: Creates Docker-related starter files for your project (Docker Inc.)
Version: v1.4.0
Path: /home/robertovillarreal/.docker/cli-plugins/docker-init
mcp: Docker MCP Plugin (Docker Inc.)
Version: v0.9.9
Path: /home/robertovillarreal/.docker/cli-plugins/docker-mcp
model: Docker Model Runner (EXPERIMENTAL) (Docker Inc.)
Version: v0.1.36
Path: /usr/libexec/docker/cli-plugins/docker-model
sbom: View the packaged-based Software Bill Of Materials (SBOM) for an image (Anchore Inc.)
Version: 0.6.0
Path: /home/robertovillarreal/.docker/cli-plugins/docker-sbom
scan: Docker Scan (Docker Inc.)
Version: v0.23.0
Path: /usr/libexec/docker/cli-plugins/docker-scan
scout: Docker Scout (Docker Inc.)
Version: v1.18.1
Path: /home/robertovillarreal/.docker/cli-plugins/docker-scout
Server:
Containers: 5
Running: 5
Paused: 0
Stopped: 0
Images: 9
Server Version: 28.3.3
Storage Driver: overlayfs
driver-type: io.containerd.snapshotter.v1
Logging Driver: json-file
Cgroup Driver: systemd
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
CDI spec directories:
/etc/cdi
/var/run/cdi
Swarm: inactive
Runtimes: io.containerd.runc.v2 runc sysbox-runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 05044ec0a9a75232cad458027ca83437aae3f4da
runc version: v1.2.5-0-g59923ef
init version: de40ad0
Security Options:
apparmor
seccomp
Profile: builtin
cgroupns
Kernel Version: 6.8.0-60-generic
Operating System: Ubuntu 24.04.2 LTS
OSType: linux
Architecture: x86_64
CPUs: 16
Total Memory: 30.56GiB
Name: <snip>
ID: B6K2:2BOW:BSIE:WIGE:RODV:GC2B:JMYF:6XP4:25AT:3S4Q:6634:3OII
Docker Root Dir: /var/lib/docker
Debug Mode: true
File Descriptors: 73
Goroutines: 134
System Time: 2025-08-02T18:16:47.543558447-06:00
EventsListeners: 1
Experimental: true
Insecure Registries:
<snip>
192.168.1.0/24
::1/128
127.0.0.0/8
Registry Mirrors:
http://localhost:5005/
http://localhost:5006/
http://localhost:5007/
Live Restore Enabled: false
Default Address Pools:
Base: 172.25.0.0/16, Size: 24
Additional Info
There have been about four times I was ready to post this, but then decided to dig further. I'm fairly certain I understand the issue, but in case I'm wrong, I'm hiding my previous write-up below rather than deleting it. This is not a new issue; I replicated it in 24.x, around the time this functionality was introduced.
Most of my original write-up
I'm hoping/assuming the reproduction will be much more helpful than listing all the things I learned/tried leading up to this, so I'll keep this brief.
1. I have not found anything in the docs saying this is supported. It's explicitly not supported/implemented for the default image store (fails with error), but was intentionally and explicitly added for containerd (#45289). I interpreted these three facts as "this is intended to be used, but don't want to (or can't) backport it to the default store since it's going away, and don't want to complicate docs any further with things that work differently between the stores".
2. I noticed the CLI "knows" about the negative filters for `docker container prune` (merging filters specified on the CLI with those in the user's config), but not the image list operation. I thought maybe the CLI wasn't passing them along correctly, but the CLI is innocent.
3. I have not systematically tested to see if specifying the label via `--label` vs. `LABEL` matters. But I've come across one instance where they are *not* interchangeable; ironically, it was dealing with that nuance that led to discovering this bug. I can expand on this if you think it's relevant, but I assumed it was a separate buildkit thing.
4. This likely never worked. I didn't do the whole gamut of tests from my reproduction above, but I did a subset against 24.0.9 (around the time that functionality was merged), in a d-in-d setup. Namely, images built with the default buildkit builder behave incorrectly, but images pulled from a registry are fine.
As a quick experiment, I hacked some of the images and test cases from moby/daemon/containerd/image_list_test.go (lines 210 to 226 at 0f9c087):
func TestImageList(t *testing.T) {
	ctx := namespaces.WithNamespace(context.TODO(), "testing")

	blobsDir := t.TempDir()

	toContainerdImage := func(t *testing.T, imageFunc specialimage.SpecialImageFunc) c8dimages.Image {
		idx, err := imageFunc(blobsDir)
		assert.NilError(t, err)

		return imagesFromIndex(idx)[0]
	}

	multilayer := toContainerdImage(t, specialimage.MultiLayer)
	twoplatform := toContainerdImage(t, specialimage.TwoPlatform)
	emptyIndex := toContainerdImage(t, specialimage.EmptyIndex)
	configTarget := toContainerdImage(t, specialimage.ConfigTarget)
	textplain := toContainerdImage(t, specialimage.TextPlain)
(adding labels to the valid images and adding a negative filter to the test setup). All of the tests I hacked failed (the newly labeled images were filtered out, breaking the original expectations), which means the negative filtering was working correctly for those synthetic images.
After putting the pieces together, I have a pretty good idea of the issue. Note this is just a result of one manual comparison/investigation. In comparing the two images produced by buildkit (default and container builders), there is a glaring difference: the former (the one which exhibits the bug) has two image configs, vs. one for the latter.
The 'bad' image has this layout:
application/vnd.oci.image.index.v1+json
  application/vnd.oci.image.manifest.v1+json
    application/vnd.oci.image.config.v1+json
      {
        "config": {
          "Labels": {
            "maintainer": "me"
          }
        },
      }
  application/vnd.oci.image.manifest.v1+json
    application/vnd.oci.image.config.v1+json
      {
        "config": {}
      }
The second config is related to attestations (and very easy to identify if you're looking for it). Of note is that it's likely the 'bad' config will always be referenced last (regardless of BFS or DFS).
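For reference, here is a small sketch (mine, not moby code) of how those attestation manifests can be spotted when reading the image's OCI index document; the file path is a placeholder, and the vnd.docker.reference.* annotation keys are my understanding of what buildx currently writes, so treat them as an assumption rather than a spec guarantee.

// Sketch (mine): identifying buildx attestation manifests in an OCI image index
// that has been written out as JSON (hypothetical path "image-index.json").
package main

import (
	"encoding/json"
	"fmt"
	"os"

	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

func main() {
	raw, err := os.ReadFile("image-index.json")
	if err != nil {
		panic(err)
	}
	var idx ocispec.Index
	if err := json.Unmarshal(raw, &idx); err != nil {
		panic(err)
	}
	for _, m := range idx.Manifests {
		// Attestation manifests carry this annotation (and report an
		// "unknown/unknown" platform); their config is the empty one.
		if m.Annotations["vnd.docker.reference.type"] == "attestation-manifest" {
			fmt.Printf("attestation manifest %s (attests %s)\n",
				m.Digest, m.Annotations["vnd.docker.reference.digest"])
			continue
		}
		fmt.Printf("regular manifest %s\n", m.Digest)
	}
}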
The check for a "config" (moby/daemon/containerd/image_list.go, lines 655 to 657 at 0f9c087)
if !c8dimages.IsConfigType(desc.MediaType) {
	return nil, nil
}
is based solely on media type with no other criteria, so the 'bad' config will always be visited. The flow of the code and the comments imply there are legitimate scenarios where multiple valid configs are found. (I just did a quick test... multi-platform is one. But when done with the default builder, a two-platform build results in four configs: two "valid" ones and two bogus ones representing attestation data.)
Right now I am assuming the good config is visited first (but I have not proven it). Assuming it is, a check for the negative existence of a label flows through this block (moby/daemon/containerd/image_list.go, lines 669 to 674 at 0f9c087)
if check.onlyExists {
	// label! given without value, check if doesn't exist
	if check.negate {
		// Label exists, config doesn't match
		if exists {
			return nil, nil
and results in a return that says "don't look any deeper into this config's children, but don't give up on the image". As a result, the 'bad' config is visited. That config literally has no labels, thus hits this return (moby/daemon/containerd/image_list.go, lines 695 to 696 at 0f9c087)
// This config matches the filter so we need to show this image, stop dispatch.
return nil, errFoundConfig
which says "I found what I'm looking for, so stop immediately".
At first I thought the bug was looking at invalid configs. Then, knowing multiple configs (multi-platform) was legit, I thought changing the logic was the fix... in my scenario, if you've discovered a label that you don't want, there's no point continuing to look. But I now see you can create a legitimate multi-platform image where a label has value X in one arch, value Y in another, and not defined in yet another.
In short, I believe the problem is that my 'bad' image actually has two configs: the conventional one that holds the label values, and a second, totally empty config that belongs to an attestation manifest. The filtering is effectively written as "accept the image as soon as any one config satisfies the filter; a config that fails the assertion just keeps the search going". Bottom line, in my scenario I'm looking for a missing label (label!=maintainer). In the first (valid) config the label is present, so the assertion fails and the search continues. The empty config is then visited; the label is not present there, so that config satisfies the filter and the image shows up in the listing. When doing the opposite filter (label=maintainer), the first valid config satisfies it, so the search terminates immediately, and the image shows up in that listing as well.
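To pin down the semantics I mean, here is a conceptual sketch (mine, not the actual moby logic) contrasting "the image matches if some config lacks the label" (roughly what I'm observing) with "the image matches only if no config has the label" (what I would expect label!=key to mean):

// Conceptual sketch of two possible semantics for `label!=key`,
// evaluated over all of an image's configs.
package main

import (
	"fmt"

	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

// anyConfigLacksLabel mirrors the behaviour I'm observing: the image matches
// as soon as one config is found without the label (the empty attestation
// config always satisfies this).
func anyConfigLacksLabel(configs []ocispec.Image, key string) bool {
	for _, cfg := range configs {
		if _, ok := cfg.Config.Labels[key]; !ok {
			return true
		}
	}
	return false
}

// noConfigHasLabel is what I would expect `label!=key` to mean: every config
// must lack the label for the image to match.
func noConfigHasLabel(configs []ocispec.Image, key string) bool {
	for _, cfg := range configs {
		if _, ok := cfg.Config.Labels[key]; ok {
			return false
		}
	}
	return true
}

func main() {
	labeled := ocispec.Image{Config: ocispec.ImageConfig{Labels: map[string]string{"maintainer": "me"}}}
	empty := ocispec.Image{} // stand-in for the attestation config

	configs := []ocispec.Image{labeled, empty}
	fmt.Println("current-style match:", anyConfigLacksLabel(configs, "maintainer")) // true  -> image shows up
	fmt.Println("expected match:     ", noConfigHasLabel(configs, "maintainer"))    // false -> image omitted
}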
The obvious solution is to ignore attestation configs. That would (superficially) fix my problem. While that empty config is clearly worthless, you can run into the same thing with multi-platform images; even without bogus attestation configs, you can have one platform config with the label and another without.
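If ignoring attestation configs were the chosen route, my naive mental model of it (a sketch against containerd's image-walk handlers, not a tested patch; the package name is hypothetical and the import path is for the containerd 1.7 client this daemon version reports) would be to prune attestation manifests before their configs are ever dispatched into:

// Sketch only (not a tested patch): wrap an image-walk handler so that buildx
// attestation manifests, and therefore their empty configs, are never visited.
package imagefilter

import (
	"context"

	c8dimages "github.com/containerd/containerd/images"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

// skipAttestations prunes the subtree under any descriptor annotated as an
// attestation manifest; everything else is handed to the wrapped handler.
func skipAttestations(next c8dimages.HandlerFunc) c8dimages.HandlerFunc {
	return func(ctx context.Context, desc ocispec.Descriptor) ([]ocispec.Descriptor, error) {
		if desc.Annotations["vnd.docker.reference.type"] == "attestation-manifest" {
			return nil, c8dimages.ErrSkipDesc
		}
		return next(ctx, desc)
	}
}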
Introducing docker image ls --platform could conceivably help (even in my case, since the attestation config is for an "unknown" platform), or introducing an "any", "none", or "all" component to the filter. This is a weird one and may not have a simple solution, but at the very least it may warrant a snippet in the docs...?