Add MSI-X support #38
Conversation
This PR depends on the pending PR rust-vmm/kvm#42. @rbradford @sameo the PR still needs a bit of cleanup, but I've already done most of it, which is why I'd like you to take a look and let me know what you think. BTW, it is fully functional ;)
Force-pushed from c205063 to c8950d2
@sameo @rbradford comments addressed, let me know what you think. In the meantime, I'll split the PR into proper commits.
Force-pushed from efe6862 to 6b6d1ed
@sameo @rbradford ok, the code is ready now. I think I addressed all the comments. Please let me know if I missed anything.
pub type IrqClosure = Box<Fn() -> std::result::Result<(), std::io::Error> + Send + Sync>;
pub type MsixClosure =
    Box<Fn(MsixTableEntry) -> std::result::Result<(), std::io::Error> + Send + Sync>;
We're up to 3 different closure types. For clarity's sake, I would suggest defining:

pub type PciInterruptClosure =
    Box<Fn(Option<MsixTableEntry>) -> std::result::Result<(), std::io::Error> + Send + Sync>;

and using the same type for both assign_irq and assign_msix.
Also, we need to document the closure prototype.
Regarding your proposal about PciInterruptClosure, I'm not sure this is the right approach, because those closures should be passed through dedicated functions (assign_irq or assign_msix) depending on the use case, hence the need for different closures. In the context of IRQ, why would we have an Option<MsixTableEntry> as a parameter?
The VMM (vm.rs) can set up different closures for its PCI devices based on the type of interrupt they want to set; this does not need to be generic. On the other hand, VirtioInterruptClosure is specific to virtio and has to be generic.
Actually, the thing is that IrqClosure could be defined somewhere other than the pci crate, but I'm not sure where.
Right, the IrqClosure does not have to be PCI specific and can live in another place, probably in vm-device once it's ready. But I'm really not a fan of including the type as part of the type name, e.g. from the type I already know this is a closure.
> Regarding your proposal about PciInterruptClosure, I'm not sure this is the right approach because those closures should be passed through dedicated functions (assign_irq or assign_msix) depending on the use case,
As a matter of fact, I even considered commenting on the assign_msix addition :)
What we're trying to do is extend the idea of assigning an interrupt to a device. Those interrupts can be delivered from the device in several ways: Pin based interrupts or memory based ones.
So I would go further and say we should define a generic InterruptParameters structure containing an Option<MsixTableEntry> and an InterruptType fields.
Then have your closure essentially defined as:

pub type InterruptDelivery =
    Box<Fn(InterruptParameters) -> std::result::Result<(), std::io::Error> + Send + Sync>;

And you and Chao are right: this shouldn't be PCI prefixed, as it could potentially belong to the device model crate. For now we can keep it here.
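A minimal compiling sketch of this proposal (the struct fields, enum variants, and `MsixTableEntry` stand-in below are illustrative assumptions, not the final API; it also uses modern `dyn Fn` syntax rather than the PR-era bare `Fn`):

```rust
use std::io;

// Simplified stand-in for the pci crate's MsixTableEntry.
#[derive(Clone, Default)]
pub struct MsixTableEntry {
    pub msg_addr_lo: u32,
    pub msg_addr_hi: u32,
    pub msg_data: u32,
    pub vector_ctl: u32,
}

pub enum InterruptType {
    PciPin,
    Msix,
}

pub struct InterruptParameters {
    pub kind: InterruptType,
    pub msix: Option<MsixTableEntry>,
}

pub type InterruptDelivery =
    Box<dyn Fn(InterruptParameters) -> Result<(), io::Error> + Send + Sync>;

fn main() -> Result<(), io::Error> {
    // The VMM installs a single closure that dispatches on the interrupt type.
    let deliver: InterruptDelivery = Box::new(|params| match params.kind {
        InterruptType::PciPin => {
            // e.g. write to the device's irqfd
            Ok(())
        }
        InterruptType::Msix => {
            // e.g. build an MSI message from the entry and inject it
            let _entry = params.msix.ok_or_else(|| {
                io::Error::new(io::ErrorKind::InvalidInput, "missing MSI-X entry")
            })?;
            Ok(())
        }
    });

    deliver(InterruptParameters {
        kind: InterruptType::Msix,
        msix: Some(MsixTableEntry::default()),
    })
}
```

The single type keeps the device side uniform, at the cost of MSI-X callers having to unwrap an `Option` that pin-based callers always pass as `None`.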
> hence the need to have different closures. In the context of IRQ, why would we have an Option<MsixTableEntry> as parameter?
> The VMM (vm.rs) can setup different closures for its PCI devices, based on the type of interrupt they want to set, this does not need to be generic.

Yes, but otoh the VMM is only trying to assign an interrupt to a device. And it wants to tell the device manager, or for now the virtio layer, how the device should deliver the interrupt.

> On the other hand, VirtioInterruptClosure is specific to virtio and has to be generic. Actually the thing is that IrqClosure could be defined somewhere else than the pci crate but I'm not sure where.
Closely related discussion here: rust-vmm/vm-virtio#4
A DeviceInterrupt trait seems to be the trend of the discussion, and is something we may want to consider as well.
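A rough sketch of what a DeviceInterrupt trait could look like along the lines of that discussion (the trait name comes from the linked thread; everything else here, including `LegacyIrq`, is an illustrative assumption):

```rust
use std::io;

// Hypothetical shape of the trait; illustrative only.
pub trait DeviceInterrupt: Send + Sync {
    /// Deliver this device's interrupt to the guest.
    fn trigger(&self) -> io::Result<()>;
}

// A pin-based implementation would write to an irqfd; an MSI-X
// implementation would look up its table entry and inject an MSI instead.
pub struct LegacyIrq;

impl DeviceInterrupt for LegacyIrq {
    fn trigger(&self) -> io::Result<()> {
        // e.g. write 1 to the EventFd registered as a KVM irqfd
        Ok(())
    }
}

fn main() -> io::Result<()> {
    // Devices hold a boxed trait object instead of a closure.
    let irq: Box<dyn DeviceInterrupt> = Box::new(LegacyIrq);
    irq.trigger()
}
```

Compared with the closure approach, a trait object lets each interrupt flavor carry its own state (irqfd, MSI-X table reference) while the device only ever sees `trigger()`.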
arch/Cargo.toml (outdated)

byteorder = "=1.2.1"
kvm-bindings = "0.1"
-kvm-ioctls = "0.1.0"
+kvm-ioctls = { git = "https://github.com/sboeuf/kvm-ioctls", branch = "kvm_signal_msi" }
Obviously fix this before merging
Of course :)
Force-pushed from b2ed03e to d4cfd2d
@sameo @rbradford @chao-p I've updated the PR (please review), but the part about vendoring is not right... I spent some time trying to fix the vendoring but nothing works...
Because we cannot always assume the irq fd will be the way to send an IRQ to the guest, we cannot assume that every virtio device implementation should expect an EventFd to trigger an IRQ. This commit reorganizes the code related to virtio devices so that it now expects a Rust closure instead of a known EventFd. This lets the caller decide what should be done whenever a device needs to trigger an interrupt to the guest. The closure will allow for other types of interrupt mechanisms, such as MSI, to be implemented. From the device perspective, it could be a pin based interrupt or an MSI; it does not matter, since the device simply calls into the provided callback, passing the appropriate Queue as a reference. This design keeps the device model generic. Signed-off-by: Sebastien Boeuf <[email protected]>
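A minimal sketch of the closure-based design this commit describes, with simplified stand-ins for the real Queue and device types (all names here are assumptions for illustration, not the actual cloud-hypervisor API):

```rust
use std::io;

// Simplified stand-in for vm-virtio's Queue type.
pub struct Queue {
    pub vector: u16,
}

// The device-facing callback type: an opaque closure instead of an EventFd.
pub type VirtioInterrupt = Box<dyn Fn(&Queue) -> io::Result<()> + Send + Sync>;

pub struct Device {
    interrupt_cb: VirtioInterrupt,
}

impl Device {
    // The device signals "used buffers" without knowing whether the VMM
    // backs this with an irqfd write, an MSI injection, or something else.
    pub fn signal_used_queue(&self, queue: &Queue) -> io::Result<()> {
        (self.interrupt_cb)(queue)
    }
}

fn main() -> io::Result<()> {
    // The VMM glue decides what delivering an interrupt actually means.
    let cb: VirtioInterrupt = Box::new(|q: &Queue| {
        println!("interrupt requested for queue vector {}", q.vector);
        Ok(())
    });
    let dev = Device { interrupt_cb: cb };
    dev.signal_used_queue(&Queue { vector: 0 })
}
```

The key point is the inversion of control: the virtio device only holds the callback, so swapping pin-based delivery for MSI-X never touches device code.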
In order to support MSI-X, this commit adds to the pci crate a new module called "msix". This module brings all the necessary pieces to let any PCI device implement MSI-X support. Signed-off-by: Sebastien Boeuf <[email protected]>
In order to have access to the newly added signal_msi() function from the kvm-ioctls crate, this commit updates the version of kvm-ioctls to the latest one. Because set_user_memory_region() has been switched to "unsafe", we also need to handle this small change in our cloud-hypervisor code directly. Signed-off-by: Sebastien Boeuf <[email protected]>
In order to allow virtio-pci devices to use MSI-X messages instead of legacy pin based interrupts, this patch implements the MSI-X support for cloud-hypervisor. The VMM code and virtio-pci bits have been modified based on the "msix" module previously added to the pci crate. Fixes cloud-hypervisor#12 Signed-off-by: Sebastien Boeuf <[email protected]>
In order to factor out the complexity brought by the closures, this commit merges IrqClosure and MsixClosure into a generic InterruptDelivery one. Signed-off-by: Sebastien Boeuf <[email protected]>
I've rebased your branch for you to drop the vendoring related change, as that is no longer necessary (and also to pull in integration testing). Shall we add a test to check MSI-X is in use? What would that involve?
Signed-off-by: Rob Bradford <[email protected]>
@rbradford thanks for rebasing and adding the appropriate test :)
In order to allow virtio-pci devices to use MSI-X messages instead of legacy pin based interrupts, this patch implements the MSI-X support for cloud-hypervisor.
There are three main changes carried by this patch:
- … to be able to leverage MSI-X support.
- … provided through the pci crate.
- … the interrupt_evt instead of a very specific EventFd. This is an important part that lets the vmm glue decide what should be done when a virtio device wants to send an IRQ to the driver. This would let any VMM support multiple types of interrupts, without having to let the virtio device know about those.
In the case of cloud-hypervisor, we don't need to provide the VmFd to the virtio device implementation, so that it does not need to be aware of KVM-specific bits.
Fixes #12
Signed-off-by: Sebastien Boeuf [email protected]