
When setting collectors.containerd.socket=/run/k3s/containerd/containerd.sock, the socket is mounted at the wrong place. #840

Closed
Darkness4 opened this issue Feb 9, 2025 · 5 comments
Labels
kind/bug Something isn't working

Comments

@Darkness4

Darkness4 commented Feb 9, 2025

Describe the bug

When setting collectors.containerd.socket=/run/k3s/containerd/containerd.sock, the socket is mounted at the wrong place, causing <NA> values in the Kubernetes fields of Falco's output.

How to reproduce it

  1. helm template falco falcosecurity/falco --version 4.20.0 --set collectors.containerd.socket=/run/k3s/containerd/containerd.sock

  2. Configmap shows:

    # Source: falco/templates/configmap.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: falco
      # ...
    data:
      falco.yaml: |-
        # ...
        cri:
          enabled: true
          sockets:
          - /run/k3s/containerd/containerd.sock # <---
        # ...
  3. But the socket directory is mounted at /host/run/containerd/ instead. See the DaemonSet:

    # Source: falco/templates/daemonset.yaml
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: falco
      # ...
    spec:
      # ...
      template:
        # ...
        spec:
          # ...
          containers:
            - name: falco
              # ...
              env:
                - name: HOST_ROOT
                  value: /host
              volumeMounts:
                - mountPath: /host/run/containerd/ # <---
                  name: containerd-socket
                # ...
              # ...
          volumes:
            - name: containerd-socket
              hostPath:
                path: /run/k3s/containerd
          # ...

Expected behaviour

Either .cri.sockets in falco.yaml should be set to /run/containerd/containerd.sock (to match the mount point), or the socket directory should be mounted at /host/run/k3s/containerd/ (to match the configuration).

I'm using this as a workaround for now:

# charts/falco/templates/pod-template.tpl
@@ -175,7 +175,7 @@ spec:
           name: docker-socket
         {{- end }}
         {{- if .containerd.enabled }}
-        - mountPath: /host/run/containerd/
+        - mountPath: /host{{ dir .containerd.socket }}
           name: containerd-socket
         {{- end }}
         {{- if .crio.enabled }}
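The patch relies on Sprig's `dir` template function, which strips the final path element, much like POSIX dirname. A minimal Python sketch of the transformation the patched template performs (the function name is illustrative, not part of the chart):

```python
import os.path

def containerd_mount_path(socket: str) -> str:
    """Mimic the template expression `/host{{ dir .containerd.socket }}`:
    mount the directory containing the socket under the HOST_ROOT prefix."""
    return "/host" + os.path.dirname(socket)

# With the k3s socket path, the mount now lands where falco.yaml expects it:
print(containerd_mount_path("/run/k3s/containerd/containerd.sock"))
# -> /host/run/k3s/containerd
```

This keeps falco.yaml's cri.sockets entry and the DaemonSet's volumeMounts derived from the same value, so they can no longer drift apart.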

Screenshots

Before patching the configmap:

2025-02-09T02:20:01.307755967+0000: Notice Redirect stdout/stdin to network connection (gparent=containerd-shim ggparent=systemd gggparent=<NA> fd.sip=<NA> connection= lport=<NA> rport=<NA> fd_type=ipv4 fd_proto=raw evt_type=dup3 user=root user_uid=0 user_loginuid=-1 process=ping proc_exepath=/bin/busybox parent=ash command=ping -c 1 -W 5 192.168.0.61 terminal=0 container_id=cf342f90f2de container_image=<NA> container_image_tag=<NA> container_name=<NA> k8s_ns=<NA> k8s_pod_name=<NA>)

After patching the configmap manually by setting /run/containerd/containerd.sock in cri.sockets:

2025-02-09T03:00:02.834600255+0000: Notice Redirect stdout/stdin to network connection (gparent=containerd-shim ggparent=systemd gggparent=<NA> fd.sip=<NA> connection= lport=<NA> rport=<NA> fd_type=ipv4 fd_proto=raw evt_type=dup3 user=root user_uid=0 user_loginuid=-1 process=ping proc_exepath=/bin/busybox parent=ash command=ping -c 1 -W 5 192.168.0.61 terminal=0 container_id=7933d14dc3ad container_image=docker.io/alpine/curl container_image_tag=8.11.1 container_name=ping-check k8s_ns=default k8s_pod_name=ping-check-28984500-h6nkq)

Environment

  • Falco version: 0.40.0
  • System info:
{"machine":"aarch64","nodename":"falco-drgrq","release":"6.6.62+rpt-rpi-v8","sysname":"Linux","version":"#1 SMP PREEMPT Debian 1:6.6.62-1+rpt1 (2024-11-25)"}
  • Cloud provider or hardware configuration: Raspberry Pi 4 with k3s
  • OS: Debian 12
  • Kernel: 6.6.62+rpt-rpi-v8
  • Installation method: Helm
Darkness4 added the kind/bug (Something isn't working) label on Feb 9, 2025
@michaelSchmidMaloon

We're facing the same issue with Rancher (rke2).

@alacuku
Member

alacuku commented Feb 14, 2025

Hey @Darkness4, thanks for reporting the issue. I'm working on the fix.

@alacuku
Member

alacuku commented Feb 14, 2025

While waiting for the fix, this is the correct way to mount the k3s containerd path:

# values.yaml
mounts:
  # -- A list of volumes you want to add to the Falco pods.
  volumes:
    - name: k3s-containerd-socket
      hostPath:
        path: /run/k3s/containerd
  # -- A list of volume mounts you want to add to the Falco containers.
  volumeMounts:
    - mountPath: /host/run/k3s/containerd
      name: k3s-containerd-socket
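
Putting the two pieces together, a complete values file for a k3s node might look like the sketch below (key paths assumed from chart version 4.20.0; verify against your chart's values.yaml):

```yaml
# values.yaml -- sketch for k3s: the cri socket path plus the extra mount
collectors:
  containerd:
    enabled: true
    socket: /run/k3s/containerd/containerd.sock
mounts:
  volumes:
    - name: k3s-containerd-socket
      hostPath:
        path: /run/k3s/containerd
  volumeMounts:
    - mountPath: /host/run/k3s/containerd
      name: k3s-containerd-socket
```

Apply it with something like helm upgrade --install falco falcosecurity/falco -f values.yaml.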

@alacuku
Member

alacuku commented Feb 14, 2025

Fixed in #843

/close

@poiana poiana closed this as completed Feb 14, 2025
@poiana
Contributor

poiana commented Feb 14, 2025

@alacuku: Closing this issue.

In response to this:

Fixed in #843

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
