Incorrect CPU resource enumeration in LXD containers  #21057

@ilhaan

Description

What kind of request is this (question/bug/enhancement/feature request):
Bug

Steps to reproduce (least amount of steps as possible):
1. When creating the cluster, make sure it is set to use the "Custom" provider.
2. Create an LXD container with Docker installed, as per the instructions here. The base image used was ubuntu:18.04.
3. Once the container is ready, run the "Custom Node Run Command" to enlist the container as a node in Rancher. To find this command, navigate to "Cluster" in the top navbar, click the vertical ellipsis button on the cluster page > Edit, and scroll to the bottom of the page.
4. Copy this command and run it in the LXD container created earlier.

Result:
Nodes successfully enlist in the cluster and can schedule pods without any issues. However, the nodes report the CPU and memory resources of the bare-metal host the LXD containers run on, not the CPU cores and memory that have been allocated to each LXD container.

Other details that may be helpful:
Each LXD container was limited to 4 CPU cores with lxc config set my-container limits.cpu 4. This was verified inside the container by running grep -c processor /proc/cpuinfo, which returned 4 as expected.
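Different interfaces can disagree about the CPU count inside a container like this: lxcfs masks /proc/cpuinfo to reflect the configured limit, while other APIs may still report the host's CPUs. A minimal Python sketch (assuming a Linux host; on an unrestricted machine both numbers will simply match) of comparing the two views:

```python
import os

def cpuinfo_processor_count():
    """Count 'processor' entries in /proc/cpuinfo (the view lxcfs masks)."""
    with open("/proc/cpuinfo") as f:
        return sum(1 for line in f if line.startswith("processor"))

# os.cpu_count() is backed by sysconf(_SC_NPROCESSORS_ONLN), which can
# bypass the lxcfs-masked /proc/cpuinfo and report the host's online CPUs.
print("/proc/cpuinfo processors:", cpuinfo_processor_count())
print("os.cpu_count():          ", os.cpu_count())
```

Inside the LXD container described above, the first number was 4; a monitoring agent that uses the second mechanism could plausibly see the host's full core count instead.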

Rancher, however, shows each of these nodes as having 56 cores, which is the number of cores on the bare-metal machine where the LXD containers run. This is shown in the screenshot below:

(Screenshot, 2019-06-21: Rancher UI node list showing 56 cores per node)

All nodes shown in the screenshot are individual LXD containers running on the same machine. They have each been assigned 4 cores and 8GB of memory using LXD config settings.
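As I understand it, LXD's integer limits.cpu pins the container to a subset of cores via the cpuset controller rather than a CFS quota, so an agent that wants the container's actual allowance could ask the scheduler for the process's CPU affinity (which honours cpusets) instead of counting online CPUs. This is a hypothetical illustration on Linux, not Rancher's actual code:

```python
import os

# sched_getaffinity() returns the set of CPUs this process may actually
# be scheduled on, which respects cpuset restrictions such as those set
# by LXD's limits.cpu. os.cpu_count() reports online CPUs and can show
# the host's full core count from inside a container.
allowed = len(os.sched_getaffinity(0))
online = os.cpu_count()
print(f"schedulable CPUs: {allowed}, online CPUs: {online}")
```

In the setup reported here, the schedulable count inside each container should be 4 while the online count could be 56, which would match the discrepancy seen in the Rancher UI.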

Environment information

  • Rancher version (rancher/rancher or rancher/server image tag, or shown bottom left in the UI): 2.2.4 (from bottom left in the UI)
  • Installation option (single install/HA): Single install of the Rancher server, using the instructions from here.

Cluster information

  • Cluster type (Hosted/Infrastructure Provider/Custom/Imported): Custom
  • Machine type (cloud/VM/metal) and specifications (CPU/memory): metal
  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-07T09:55:27Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.5", GitCommit:"2166946f41b36dea2c4626f90a77706f426cdea2", GitTreeState:"clean", BuildDate:"2019-03-25T15:19:22Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
  • Docker version (use docker version): Below is the output from Docker running inside the LXD container:
Client:
 Version:           18.09.6
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        481bc77
 Built:             Sat May  4 02:35:57 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.6
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.8
  Git commit:       481bc77
  Built:            Sat May  4 01:59:36 2019
  OS/Arch:          linux/amd64
  Experimental:     false
