vmm: support PCI I/O regions on all architectures #6871
Conversation
Fixes: def98fa ("vmm, vm-allocator: Introduce an allocator for platform devices")
Signed-off-by: Alyssa Ross <[email protected]>
The ARM failure is because we're in a kernel transition.
While non-Intel CPU architectures don't have a special concept of I/O address space, support for PCI I/O regions is still needed to be able to handle PCI devices that use them. With this change, I'm able to pass through an e1000e device from QEMU to a cloud-hypervisor VM on aarch64 and use it in the cloud-hypervisor guest. Previously, it would hit the unimplemented!().
Signed-off-by: Alyssa Ross <[email protected]>
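Since the diff itself isn't quoted here, a minimal self-contained sketch of the idea may help. Everything below is illustrative (the enum, `BumpAllocator`, and `allocate_bar` are stand-ins, not Cloud Hypervisor's actual types): on x86_64 an I/O BAR is satisfied from the port I/O address space, while on other architectures the same request falls through to the platform (MMIO) allocator instead of hitting unimplemented!().

```rust
#[allow(dead_code)]
#[derive(Debug, Clone, Copy)]
enum PciBarRegionType {
    IoRegion,
    Memory32BitRegion,
    Memory64BitRegion,
}

/// Illustrative stand-in for an address-space allocator.
struct BumpAllocator {
    next: u64,
    end: u64,
}

impl BumpAllocator {
    fn new(base: u64, end: u64) -> Self {
        Self { next: base, end }
    }

    /// Hand out the next naturally aligned `size`-byte range, if it fits.
    /// BAR sizes are nonzero powers of two, so aligning to `size` is enough.
    fn allocate(&mut self, size: u64) -> Option<u64> {
        let base = self.next.checked_add(size - 1)? / size * size;
        let end = base.checked_add(size)?;
        if end > self.end {
            return None;
        }
        self.next = end;
        Some(base)
    }
}

fn allocate_bar(
    io: &mut BumpAllocator,
    mmio: &mut BumpAllocator,
    region_type: PciBarRegionType,
    size: u64,
) -> Option<u64> {
    match region_type {
        // The arm that previously ended in unimplemented!() on non-x86_64:
        PciBarRegionType::IoRegion => {
            if cfg!(target_arch = "x86_64") {
                // x86_64 has a genuine port I/O address space.
                io.allocate(size)
            } else {
                // No separate I/O space elsewhere, but the BAR is still
                // valid PCI config, so satisfy it from the platform
                // (MMIO) allocator.
                mmio.allocate(size)
            }
        }
        _ => mmio.allocate(size),
    }
}

fn main() {
    let mut io = BumpAllocator::new(0x1000, 0x1_0000);
    let mut mmio = BumpAllocator::new(0x1000_0000, 0x2000_0000);
    // An e1000e-style device exposes a small I/O BAR alongside its memory
    // BARs; on aarch64 this request no longer panics.
    let bar = allocate_bar(&mut io, &mut mmio, PciBarRegionType::IoRegion, 32);
    println!("I/O BAR placed at {bar:#x?}");
}
```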
Thanks @alyssais - your changes are fine and correct. I'm a little bit perplexed how this works in practice but I found this quote:
I guess that because these are VFIO devices, the real host bridge on the system takes care of all this. Did you verify that the I/O port BAR on that particular PCI device was required and working (e.g. with a print statement in the QEMU code for the I/O BAR region)?
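The print-statement suggestion is about QEMU's C code; purely as an illustration of the same check on a VMM's Rust side, a hypothetical logging wrapper around a region handler might look like this (`RegionOps`, `LoggingRegion`, and `NullRegion` are invented names, not Cloud Hypervisor's or QEMU's API):

```rust
/// Hypothetical region-handler trait (not the real device bus trait).
trait RegionOps {
    fn read(&mut self, offset: u64, data: &mut [u8]);
    fn write(&mut self, offset: u64, data: &[u8]);
}

/// Wrapper that logs every access, to confirm whether a guest driver
/// actually touches a device's I/O BAR.
struct LoggingRegion<T: RegionOps> {
    name: &'static str,
    inner: T,
}

impl<T: RegionOps> RegionOps for LoggingRegion<T> {
    fn read(&mut self, offset: u64, data: &mut [u8]) {
        eprintln!("{}: read {} bytes at {:#x}", self.name, data.len(), offset);
        self.inner.read(offset, data);
    }

    fn write(&mut self, offset: u64, data: &[u8]) {
        eprintln!("{}: write {} bytes at {:#x}", self.name, data.len(), offset);
        self.inner.write(offset, data);
    }
}

/// Dummy backing region so the example runs standalone.
struct NullRegion;

impl RegionOps for NullRegion {
    fn read(&mut self, _offset: u64, data: &mut [u8]) {
        data.fill(0);
    }
    fn write(&mut self, _offset: u64, _data: &[u8]) {}
}

fn main() {
    let mut bar = LoggingRegion { name: "e1000e io-bar", inner: NullRegion };
    let mut buf = [0u8; 4];
    bar.read(0x0, &mut buf); // would show up in the log if the driver did this
    println!("read back {buf:?}");
}
```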
These are the two relevant functions: https://github.com/qemu/qemu/blob/master/hw/net/e1000e.c#L137-L183
I'll have a look.
I don't see those being called. It does have a non-zero size set, though; that's how I ran into this.
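For background on where that non-zero size comes from: standard PCI BAR sizing writes all ones to the BAR, reads the value back, masks off the flag bits, and takes the two's complement of the rest. A tiny sketch of the decode step, assuming a hypothetical helper name:

```rust
/// Decode an I/O BAR's size from the value read back after writing all
/// ones to it. Bits 1:0 of an I/O BAR are flag bits (bit 0 set = I/O
/// space), so they are masked off before taking the two's complement.
fn io_bar_size(readback: u32) -> u32 {
    let masked = readback & !0x3;
    (!masked).wrapping_add(1)
}

fn main() {
    // A readback of 0xffff_ffe1 corresponds to a 32-byte I/O BAR.
    assert_eq!(io_bar_size(0xffff_ffe1), 32);
    println!("size = {} bytes", io_bar_size(0xffff_ffe1));
}
```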
The Linux driver only uses the I/O port for certain device types, and the device type QEMU advertises isn't in that list. What about trying ne2k-pci? That seems to be only I/O BAR based!
On aarch64 in the cloud-hypervisor guest I get an error. On x86_64, it doesn't work either, but for a different reason: a message is printed repeatedly. It does work on the cloud-hypervisor host (the QEMU guest).
Ah, I can see it uses INTx rather than MSI!
rbradford left a comment:
I'm happy to go ahead since it does resolve the use case where a device has I/O BARs but doesn't need them for operation.
Because I was already testing it: if I replace Cloud Hypervisor with QEMU, it does work on aarch64.
It's probably because it requires INTx interrupts, not something we support (or maybe we did add support but weren't able to test it regularly!).
50bac16:
The default is virtio-net on aarch64. virtio-net isn't very realistic, and it also causes guest kernels to get stuck in a lock when it's passed through.
Link: cloud-hypervisor/cloud-hypervisor#6871
Signed-off-by: Alyssa Ross <[email protected]>
I'd appreciate some checking here, as while this does fix a problem, I've been learning about PCI as I've gone. In particular: