For gaming? I haven’t really run into any issues. If you’re trying to virtualize your GPU for VMs and the like, Nvidia is a lot more locked down. I use the proprietary drivers; the open-source ones don’t seem to perform as well. Most distributions will just give you a prompt where you select which drivers you’d prefer to use.
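If you’re not sure which driver you actually ended up on, here’s a rough sketch that prints the kernel driver bound to each GPU. It assumes `lspci` (from pciutils) is installed; the exact output format can vary a bit between distros:

```python
import subprocess

# Sketch: show which kernel driver each GPU is bound to (nvidia vs nouveau).
# Assumes `lspci` from pciutils is available.
out = subprocess.run(["lspci", "-k"], capture_output=True, text=True, check=True).stdout

device = None
for line in out.splitlines():
    if not line.startswith("\t"):
        device = line  # a new PCI device entry
    elif "Kernel driver in use:" in line and device and (
        "VGA" in device or "3D controller" in device
    ):
        print(device)
        print("   ", line.strip())
```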
Currently, PCI passthrough only works out of the box with two graphics cards. There is a workaround for passing through a single graphics card, but the problem with that approach is that you have to detach the card from the host and use SSH to control the host from the guest.
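For reference, a minimal sketch of that detach/reattach dance using virsh. The PCI address below is a placeholder, not yours; find the real one with `lspci`, and note that `virsh nodedev-*` needs root or libvirt group rights:

```python
import subprocess

# Sketch of the single-GPU workaround: unbind the card from the host driver
# so the guest can claim it, then hand it back after the VM shuts down.
# "pci_0000_01_00_0" is a placeholder address; check yours with `lspci`.
GPU = "pci_0000_01_00_0"

def detach_gpu():
    # Detaches the device from its host driver (requires root/libvirt rights)
    subprocess.run(["virsh", "nodedev-detach", GPU], check=True)

def reattach_gpu():
    subprocess.run(["virsh", "nodedev-reattach", GPU], check=True)
```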
When you start the virtual machine, all of your GUI apps will be force-terminated. As a workaround, you can use Xpra to detach them to another display before starting the virtual machine and reattach them to your display after shutting it down.
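Roughly, the Xpra juggle looks like this. A sketch only: the display number `:100` and the domain name `win10` are made-up examples, and it assumes your apps already live inside an Xpra session:

```python
import subprocess, time

# Sketch: detach the Xpra client before the VM takes the GPU, poll until
# the guest powers off, then reattach. Display and domain are examples.
DISPLAY = ":100"
DOMAIN = "win10"

def vm_running(domain):
    state = subprocess.run(["virsh", "domstate", domain],
                           capture_output=True, text=True).stdout.strip()
    return state == "running"

subprocess.run(["xpra", "detach", DISPLAY], check=True)
subprocess.run(["virsh", "start", DOMAIN], check=True)
while vm_running(DOMAIN):
    time.sleep(5)  # wait for the guest to shut down
subprocess.run(["xpra", "attach", DISPLAY], check=True)
```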
If you have an NVIDIA GPU, you may need to dump your GPU’s vBIOS using nvflash (available in the AUR) and patch it using vBIOS Patcher.
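If you’d rather not use nvflash, there’s also the sysfs route. A sketch, assuming the card sits at `0000:01:00.0` (substitute your own address from `lspci`) and you run it as root while nothing is actively using the card:

```python
# Sketch: dump the vBIOS through the PCI ROM interface in sysfs.
# Run as root; the address below is an example -- adjust to match `lspci`.
ROM = "/sys/bus/pci/devices/0000:01:00.0/rom"

with open(ROM, "w") as f:
    f.write("1\n")            # enable reading the ROM
with open(ROM, "rb") as f:
    vbios = f.read()
with open(ROM, "w") as f:
    f.write("0\n")            # disable it again

with open("vbios.rom", "wb") as f:
    f.write(vbios)
print(f"dumped {len(vbios)} bytes to vbios.rom")
```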
NVIDIA vGPU
By default, NVIDIA disables vGPU on its consumer cards (if you own an enterprise card, go ahead). However, you can unlock vGPU for your consumer card.
You will also need a vGPU license, though there are some workarounds.
Follow this guide to manually set up a Windows 10 guest with NVIDIA vGPU.
Once I got my virtualization settings set up correctly in UEFI, and KVM was my hypervisor instead of QEMU TCG, my performance did seem pretty good. Maybe it’s just working correctly without having to follow these steps?
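If anyone wants to double-check the same thing, here’s a quick sketch of the two checks I’d do; it only reads /dev/kvm and /proc/cpuinfo, no extra tools assumed:

```python
import os

# Sketch: verify hardware virtualization is actually available, i.e. QEMU
# can use KVM rather than falling back to slow TCG emulation.
def cpu_has_virt():
    # vmx = Intel VT-x, svm = AMD-V; missing flags usually mean
    # virtualization is disabled in UEFI firmware settings.
    with open("/proc/cpuinfo") as f:
        flags = f.read()
    return "vmx" in flags or "svm" in flags

def kvm_usable():
    # /dev/kvm only exists once the kvm module is loaded
    return os.path.exists("/dev/kvm") and os.access("/dev/kvm", os.R_OK | os.W_OK)

print("CPU virtualization flags present:", cpu_has_virt())
print("/dev/kvm usable:", kvm_usable())
```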
Looks like that wiki page is out of date; you no longer need to dump your vBIOS and patch it. I’ve never really found a need to control the host when running a VM, but SSH is a decent option if you only plan to use terminal apps.
Have you set up a VM with KVM, and is it working? There shouldn’t be much else to do: just install your GPU drivers and play some games, or run your Windows application :)
You don’t need workarounds for NVIDIA GPUs in VMs anymore; it works pretty much the same as AMD.
You likely know more than me about doing it, but this is my source
https://wiki.archlinux.org/title/QEMU/Guest_graphics_acceleration