Dual GPU Passthrough on Proxmox
This post explains how I passed two old NVIDIA GTX 1060 cards through to two Rocky Linux VMs on Proxmox 9. One of those virtual machines can even use a local monitor and keyboard/mouse, while the other is for remote desktop access only. In hindsight this was pretty straightforward, but the tutorials on the web were often a few years old and did different things, so at first I was unsure which pieces of advice were still valid.
Skip down to “How To” if you’re in a hurry.
In VFX studios, people who are not working on beefy workstations might still have to play back 4K video or review full-resolution files (OpenEXR in our case). Mini PCs (HP EliteDesk, Zotac, Forum Mini and whatnot) with often just 16 GB of RAM and onboard graphics are more than enough for spreadsheet and database work, but playing back video once or twice a day really benefits from more power.
Setup
We had an old HP Z820 workstation around and old GPUs that were no longer up to the task of actual VFX work in Nuke. The workstation, however, has dual 16-core CPUs, 128 GB of RAM and server-grade hardware inside (I don’t get any money from HP for saying this, but I like their workstations and we’ve been using refurbished models from the Z800 to the Z8 G4). There’s enough room to fit two consumer-grade graphics cards from MSI in there, and the PSU can handle them easily (each one needs a single 6-pin PCIe power cable).
I installed a default no-frills Proxmox PVE 9 on a 500 GB SSD, which has enough space for two VM images that each contain a full Rocky Linux 9 installation with a graphical desktop environment. Smarter storage setups are beyond the scope of this tutorial.
How To
First, Proxmox itself needs to be modified a bit: it must not load any graphics drivers during boot, as that would prevent the graphics cards from being passed through to a VM. It’s best to install all available system updates first and reboot in case of a kernel update. Also check that SSH access works and that the Proxmox web GUI is reachable.
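If you prefer doing the update round on the command line, it boils down to roughly this (standard Debian/Proxmox commands):

# update package lists and install all pending updates
apt update
apt full-upgrade
# reboot if a new kernel was installed
reboot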
Create /etc/modprobe.d/blacklist.conf and add the following lines to prevent Proxmox from loading these kernel modules:
blacklist nouveau
blacklist nvidia
Modify the GRUB_CMDLINE_LINUX_DEFAULT line in /etc/default/grub (you can duplicate it and put a # in front of the original to preserve it in case you want to undo this later):
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
Run update-grub afterwards!
IOMMU is the feature that makes passing physical devices through to VMs possible; Intel’s implementation is called VT-d, which is also how it shows up in the HP BIOS. It can only work if it is supported and enabled, which was already the case for an old machine like my HP Z820 workstation, so it should be supported on practically all modern hardware.
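After the reboot further down you can verify that the kernel parameter took effect and that the IOMMU is active (these are generic checks, nothing Proxmox-specific):

# the intel_iommu=on parameter should show up here
cat /proc/cmdline
# look for lines like "DMAR: IOMMU enabled"
dmesg | grep -e DMAR -e IOMMU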
Create /etc/modules-load.d/vfio.conf and add these lines to load some more required modules (on recent kernels vfio_virqfd has been merged into vfio, so that last entry may be reported as missing, which is harmless):
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
Once this is done, you can reboot your Proxmox PVE and you’re done meddling with the command line. If a monitor is connected it should stay blank and no longer show the Proxmox terminal login.
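To double-check after the reboot that the VFIO modules are loaded and the cards are free, the usual commands are:

# the vfio modules should be listed
lsmod | grep vfio
# the NVIDIA cards should no longer be claimed by nouveau or nvidia
lspci -nnk | grep -A3 -i nvidia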
Note:
The article I mostly followed contains more information about IOMMU groups. I didn’t have to deal with them in my case, probably thanks to the server-grade hardware.
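If you do want to check how your devices are grouped, this little loop lists every IOMMU group with its devices (plain sysfs, nothing Proxmox-specific); ideally each GPU sits in its own group, possibly together with its own audio function:

# list all IOMMU groups and the PCI devices they contain
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    lspci -nns "${d##*/}"
  done
done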
VM Creation
Create a new Virtual Machine. The required settings for GPU passthrough are as follows:
- Graphic Card: Default
- Machine: q35
- BIOS: OVMF (UEFI)
- Add EFI Disk checked
- Pre-Enroll keys unchecked (important: otherwise Secure Boot is enabled inside the VM, which forces you to do some extra things to get the NVIDIA driver working. See this forum post.)
- SCSI disks, vCPUs and memory as you like. I’ve assigned half of my cores to each VM and a bit less than half the RAM each.
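For reference, the resulting VM configuration in /etc/pve/qemu-server/<vmid>.conf looked roughly like the sketch below; the VM ID, storage name, disk sizes and core/memory counts are placeholders for my setup, not required values:

bios: ovmf
machine: q35
efidisk0: local-lvm:vm-101-disk-1,efitype=4m,pre-enrolled-keys=0,size=4M
scsihw: virtio-scsi-single
scsi0: local-lvm:vm-101-disk-0,size=200G
cores: 16
memory: 57344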
Once the VM has been created you need to pass through the GPUs and optionally some peripherals. I installed Rocky Linux 9 in it before doing the next steps, but you can probably also continue right away. Just remember that the browser-based virtual console will stop working once the graphics card has been passed to the VM, so if you are not in front of the host PC you should install Linux first.
Make sure the VM is stopped and go to the VM’s Hardware section. Click Add → PCI Device, select Raw Device and pick your NVIDIA card from the list. I had two identical GPUs and assigned one to the first VM and the other to the second. I also checked the boxes for Primary GPU, All Functions, ROM-Bar and PCI-Express. The card probably also provides an audio device; I didn’t pass it through as I didn’t need it.
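The same can be done from the host’s shell with qm set; the VM ID and PCI address below are placeholders, so look up your own card’s address with lspci first:

# find the PCI address of the card on the host
lspci -nn | grep -i nvidia
# attach it to VM 101: omitting the function number (.0) passes all functions,
# x-vga=1 corresponds to "Primary GPU", pcie=1 to "PCI Express"
qm set 101 -hostpci0 0000:01:00,pcie=1,x-vga=1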
To pass through a local USB keyboard and mouse, connect the peripherals and click Add → USB Device. You can either pick the devices by vendor/device ID or pass through a physical USB port. The latter was a problem for me, as a reboot of Proxmox seemed to jumble the port IDs around. The vendor/device IDs were stable, but if you ever swap your keyboard the new one will need to be added to the VM again.
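Again, this can also be done with qm set; the IDs below are just examples from a Logitech keyboard and mouse, yours will differ:

# on the Proxmox host, note the vendor:product IDs of keyboard and mouse
lsusb
# pass them to VM 101 by ID (survives replugging into a different port)
qm set 101 -usb0 host=046d:c31c
qm set 101 -usb1 host=046d:c077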
Inside the VM, the NVIDIA device should now show up in the output of lspci. All that’s left is installing the drivers, which are available from the third-party repositories ELRepo (kmod-nvidia) and RPM Fusion (akmod-nvidia). After a reboot of the VM, check nvidia-smi or btop to confirm that the GPU is actually working.
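For the RPM Fusion route, this is roughly what the commands look like on Rocky Linux 9 (a sketch; double-check the URLs against RPM Fusion’s own configuration page):

# enable EPEL and the CRB repository, which RPM Fusion packages rely on
sudo dnf install epel-release
sudo dnf config-manager --set-enabled crb
# add the RPM Fusion free and nonfree repositories
sudo dnf install https://mirrors.rpmfusion.org/free/el/rpmfusion-free-release-9.noarch.rpm https://mirrors.rpmfusion.org/nonfree/el/rpmfusion-nonfree-release-9.noarch.rpm
# install the NVIDIA driver and reboot
sudo dnf install akmod-nvidia
sudo reboot
# after the reboot, the card should report its status here
nvidia-smi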