VFIO initial support #60
Conversation
Nice to see this getting started :)
I reworked the vfio-bindings crate, addressed @sboeuf's comments on the vfio one, and improved the crate overall. Next we will make the PCI implementation work with our PCI and VFIO crates.
```toml
vmm-sys-util = { git = "https://github.com/rust-vmm/vmm-sys-util" }

[features]
default = ["v5_0_0"]
```
I note that default features are a bit sticky. See rust-lang/cargo#3126 (comment) for some background, but if we want people to be able to selectively disable these features, it may be better not to make this one a default.
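For illustration, this is roughly how a downstream crate would have to opt out of the default and pick the bindings version explicitly (a hypothetical consumer's Cargo.toml entry, not part of this PR; the path is made up):

```toml
# Hypothetical consumer: opt out of the sticky default, then re-enable the version explicitly.
vfio-bindings = { path = "../vfio-bindings", default-features = false, features = ["v5_0_0"] }
```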
```rust
pub mod bindings {
    #[cfg(feature = "v5_0_0")]
    pub use super::v5_0_0::*;
```
I could maybe use an explanation here, but is the idea that the bindings mod can only have a single bindgen version configured for use at a time? I am assuming that's because of the name collisions that would almost certainly come up otherwise.
> I could maybe use an explanation here, but is the idea that the bindings mod can only have a single bindgen version configured for use at a time?

Yes, that's the idea.
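As a sketch of how that "exactly one version" expectation could be made explicit rather than left to glob-import collisions (the v4_14_0 feature below is hypothetical, purely to illustrate the guard):

```rust
// Sketch only: `v4_14_0` is a hypothetical second feature used for
// illustration; it is not something this PR adds.
pub mod bindings {
    #[cfg(feature = "v5_0_0")]
    pub use super::v5_0_0::*;

    #[cfg(feature = "v4_14_0")]
    pub use super::v4_14_0::*;

    // Fail the build early if two versions are enabled together, instead
    // of surfacing name collisions from the glob re-exports.
    #[cfg(all(feature = "v5_0_0", feature = "v4_14_0"))]
    compile_error!("enable exactly one vfio-bindings version feature");
}
```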
vfio/src/vfio_device.rs
```rust
fn new(id: u32, vm: &VmFd) -> Result<Self> {
    let mut group_path = String::from("/dev/vfio/");
    let s_id = &id;
    group_path.push_str(s_id.to_string().as_str());
```
Same comment about Path construction here.
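The earlier comment isn't shown here, but it presumably suggests building the group path with std::path instead of manual string pushes; a minimal sketch of that alternative (the free function is just for illustration):

```rust
use std::path::PathBuf;

// Minimal sketch of the suggested alternative: let PathBuf handle the
// separator instead of concatenating strings by hand.
fn group_path(id: u32) -> PathBuf {
    PathBuf::from("/dev/vfio").join(id.to_string())
}
```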
The default bindings are generated from the 5.0.0 Linux userspace API.

Signed-off-by: Samuel Ortiz <[email protected]>
@chao-p @rbradford @bryteise This is ready to be reviewed, finally.
The Virtual Function I/O (VFIO) kernel subsystem exposes a vast and relatively complex userspace API. This commit abstracts and simplifies that API into both an internal and an external API. The external API is to be consumed by VFIO device implementations through the VfioDevice structure.

A VfioDevice instance can:

- Enable and disable all interrupts (INTX, MSI and MSI-X) on the underlying VFIO device.
- Read and write all of the VFIO device memory regions.
- Set the system's IOMMU tables for the underlying device.

Signed-off-by: Zhang, Xiong Y <[email protected]>
Signed-off-by: Chao Peng <[email protected]>
Signed-off-by: Samuel Ortiz <[email protected]>
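For a rough idea of what such an external surface could look like from a device implementation's point of view, here is an illustrative sketch; the method names and signatures are assumptions, not the crate's actual API:

```rust
use std::io;

// Illustrative sketch only: names and signatures are assumptions.
pub struct VfioDevice {
    // Wraps the VFIO group and device file descriptors internally.
}

impl VfioDevice {
    /// Enable interrupt delivery (INTX, MSI or MSI-X) for the device.
    pub fn enable_irq(&self, irq_index: u32) -> io::Result<()> {
        unimplemented!()
    }

    /// Disable a previously enabled interrupt.
    pub fn disable_irq(&self, irq_index: u32) -> io::Result<()> {
        unimplemented!()
    }

    /// Read from one of the device memory regions at the given offset.
    pub fn region_read(&self, region_index: u32, buf: &mut [u8], offset: u64) {
        unimplemented!()
    }

    /// Write to one of the device memory regions at the given offset.
    pub fn region_write(&self, region_index: u32, buf: &[u8], offset: u64) {
        unimplemented!()
    }

    /// Program the IOMMU with the guest memory layout so the device can
    /// DMA into guest memory.
    pub fn setup_dma_map(&self) -> io::Result<()> {
        unimplemented!()
    }
}
```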
This brings the initial PCI support to the VFIO crate. The VfioPciDevice is the main structure and holds an inner VfioDevice. VfioPciDevice implements the PCI trait, leaving the IRQ assignments empty, as these will be driven by both the guest and the VFIO PCI device, not by the VMM. Since we must trap BAR programming from the guest (we don't want to program the actual device with guest addresses), we use our local PCI configuration cache to read and write BARs.

Signed-off-by: Zhang, Xiong Y <[email protected]>
Signed-off-by: Chao Peng <[email protected]>
Signed-off-by: Samuel Ortiz <[email protected]>
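As an illustration of the BAR trapping idea (not the actual implementation; all names here are made up): guest writes that land in the BAR range of config space are absorbed into a local cache rather than forwarded to the physical device.

```rust
// Illustrative sketch: absorb guest BAR programming into a local PCI
// configuration cache instead of forwarding guest addresses to the
// physical device. All names are assumptions.
struct PciConfigCache {
    bars: [u32; 6],
}

impl PciConfigCache {
    const BAR_START: u64 = 0x10; // offset of BAR0 in PCI config space
    const BAR_END: u64 = 0x28;   // first offset past BAR5

    /// Returns true when the write was handled locally and must not be
    /// forwarded to the device.
    fn write_config_dword(&mut self, offset: u64, value: u32) -> bool {
        if (Self::BAR_START..Self::BAR_END).contains(&offset) {
            let index = ((offset - Self::BAR_START) / 4) as usize;
            self.bars[index] = value;
            true
        } else {
            false
        }
    }
}
```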
In order to properly manage the VFIO device interrupt settings, we need to keep track of both MSI and MSI-X PCI config capability changes. When the guest programs the device for interrupt delivery, it writes to the MSI and MSI-X capabilities. This information must be trapped and cached in order to map the physical device interrupt delivery path to the guest one. In other words, tracking MSI and MSI-X capabilities will allow us to accurately build the KVM interrupt routes.

Signed-off-by: Sebastien Boeuf <[email protected]>
Signed-off-by: Samuel Ortiz <[email protected]>
We track all MSI and MSI-X capability changes, which allows us to also track all MSI and MSI-X table changes. With both pieces of information we can build KVM IRQ routing tables and map the physical device MSI/MSI-X vectors to the guest ones. Once that mapping is in place, we can toggle the VFIO IRQ API accordingly and enable or disable MSI or MSI-X interrupts, from the physical device up to the guest.

Signed-off-by: Sebastien Boeuf <[email protected]>
Signed-off-by: Samuel Ortiz <[email protected]>
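Conceptually, the cached MSI-X table entries the guest programmed become the input for those routes; a very small sketch of that mapping (all names are assumptions, and the actual KVM plumbing is not shown):

```rust
// Conceptual sketch only: turn guest-programmed MSI-X entries into
// per-vector routes. The KVM calls that consume these routes are omitted.
#[derive(Clone, Copy)]
struct MsixTableEntry {
    msg_addr_lo: u32,
    msg_addr_hi: u32,
    msg_data: u32,
}

struct InterruptRoute {
    gsi: u32,
    entry: MsixTableEntry,
}

fn build_routes(entries: &[MsixTableEntry], first_gsi: u32) -> Vec<InterruptRoute> {
    entries
        .iter()
        .enumerate()
        .map(|(index, entry)| InterruptRoute {
            gsi: first_gsi + index as u32,
            entry: *entry,
        })
        .collect()
}
```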
VFIO explicitly tells us whether an MMIO region can be mapped into the guest address space or not. Except for MSI-X table BARs, we try to map those regions into the guest whenever VFIO allows us to do so. This avoids unnecessary VM exits when the guest accesses those regions.

Signed-off-by: Zhang, Xiong Y <[email protected]>
Signed-off-by: Chao Peng <[email protected]>
Signed-off-by: Samuel Ortiz <[email protected]>
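A small sketch of that decision, assuming a region description like the one below (the VFIO_REGION_INFO_FLAG_MMAP flag comes from the VFIO userspace API; the rest of the names are made up):

```rust
// From the VFIO userspace API: region info flags advertise mmap support.
const VFIO_REGION_INFO_FLAG_MMAP: u32 = 1 << 2;

// Made-up region description, for illustration only.
struct RegionInfo {
    flags: u32,
    contains_msix_table: bool,
}

/// Map the region straight into the guest only when VFIO says it can be
/// mmap-ed and it does not hold the MSI-X table (which must stay trapped).
fn should_map_into_guest(region: &RegionInfo) -> bool {
    region.flags & VFIO_REGION_INFO_FLAG_MMAP != 0 && !region.contains_msix_table
}
```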
With the VFIO crate, we can now support directly assigned PCI devices in cloud-hypervisor guests. We support assigning multiple host devices through the --device command line parameter. This parameter takes the host device sysfs path.

Fixes: cloud-hypervisor#60

Signed-off-by: Samuel Ortiz <[email protected]>
The VFIO integration test first boots a QEMU guest and then assigns the QEMU virtio-pci networking device into a nested cloud-hypervisor guest. We then check that we can ssh into the nested guest and verify that it's running with the right kernel command line.

Signed-off-by: Samuel Ortiz <[email protected]>
Signed-off-by: Sebastien Boeuf <[email protected]>
No description provided.