
Workload VM in 'RunContainerError' state after Kubernetes cluster reboot #783

Description

@ramukima

I am running a single-node Kubernetes cluster with Virtlet on a VM (say A). I am able to run VMs with Virtlet, including its ubuntu-vm example (say B).
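
For reference, the deployment amounts to something like the following (a minimal sketch; the manifest path and pod name assume the stock ubuntu-vm example from the virtlet repository):

  # Deploy the Virtlet example VM pod (assumed manifest path)
  kubectl apply -f examples/ubuntu-vm.yaml
  # Confirm the VM pod is Running before the reboot
  kubectl get pod ubuntu-vm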

However, when I reboot VM A, all Kubernetes services recover and become available, but the VM (B) I had deployed remains in the 'RunContainerError' state. Here is sample kubectl describe pod output when this happens:

  Warning  Failed   19s               kubelet, kubemaster  Error: "/run/virtlet.sock": rpc error: code = 2 desc = failed to create domain "b3c4e0f9-1184-5394-6dee-fb381ca95c45": virError(Code=1, Domain=10, Message='internal error: process exited while connecting to monitor: I1016 15:42:14.853945    6474 vmwrapper.go:66] Obtaining PID of the VM container process...
E1016 15:42:14.854254    6474 vmwrapper.go:89] Failed to obtain tap fds for key "b8b9cbfe-d153-11e8-870e-0050563d373c": server returned error: bad fd key: "b8b9cbfe-d153-11e8-870e-0050563d373c"')
  Normal  Pulled  6s (x9 over 1m)  kubelet, kubemaster  Container image "virtlet.cloud/ubuntu" already present on machine
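
The reproduction steps are roughly the following (a sketch; the pod name ubuntu-vm is assumed from the example manifest):

  # With the example VM pod running, reboot the node VM (A)
  sudo reboot
  # After the node comes back, Kubernetes services recover,
  # but the VM pod stays in RunContainerError
  kubectl get pods
  # Shows the "bad fd key" error quoted above
  kubectl describe pod ubuntu-vm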
