QEMU GPU Passthrough
Requirements
- A CPU that supports virtualization and IOMMU virtualization.
- A motherboard that supports IOMMU virtualization.
- A minimum of two graphics cards; integrated graphics can serve as one of them.
When booting into Linux, make sure the computer initializes video on the GPU that will be used for the host, not the GPU that will be passed through to the virtual machine. If one of the GPUs is integrated, the first device to initialize can usually be set in the BIOS.
This article will not walk through creating a VM with QEMU; it is assumed a virtual machine is readily available. The Arch Wiki has an in-depth article on how to set up QEMU.
IOMMU
A requirement for device passthrough is IOMMU virtualization support from the CPU and motherboard. This is called VT-d on Intel and AMD-Vi on AMD.
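Whether the CPU advertises hardware virtualization can be checked from a running Linux system before digging through BIOS menus. Below is a minimal sketch; note that the `vmx`/`svm` flags indicate VT-x/AMD-V support, while IOMMU (VT-d/AMD-Vi) support must still be enabled in the BIOS and confirmed via `dmesg` as shown later.

```shell
# Look for hardware virtualization flags in the CPU feature list.
# vmx = Intel VT-x, svm = AMD-V. Prints a message either way.
if grep -Eq 'vmx|svm' /proc/cpuinfo; then
    echo "CPU virtualization extensions present"
else
    echo "no vmx/svm flags found - check the BIOS or CPU model"
fi
```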
Enable
A special boot flag, `intel_iommu=on` or `amd_iommu=on` depending on the CPU vendor, needs to be added to the boot loader's kernel command line. Edit the kernel entry of the Linux boot loader; on Arch Linux with systemd-boot, the configuration is in `/boot/loader/entries/arch.conf`.
```
title   Arch Linux
linux   /vmlinuz-linux
initrd  /initramfs-linux.img
options root=<PARTUUID> <options> intel_iommu=on
```
Reboot the computer to initialize IOMMU.
Verify
After rebooting, verify that IOMMU has been enabled.
```
dmesg | grep -e DMAR -e IOMMU
...
[    0.000000] DMAR: IOMMU enabled
...
```
Isolate the GPU
Locate the target card’s PCI bus addresses and device IDs.
```
lspci -nn | grep -i "nvidia\|radeon"
01:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Pitcairn XT [Radeon HD 7870 GHz Edition] [1002:6818]
01:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Cape Verde/Pitcairn HDMI Audio [Radeon HD 7700/7800 Series] [1002:aab0]
```
In this case, the two PCI device IDs wanted are `1002:6818` and `1002:aab0`. Their bus addresses are `01:00.0` and `01:00.1` respectively.
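Before isolating the card, it is worth confirming that the GPU and its HDMI audio function sit in their own IOMMU group, since devices sharing a group generally must be passed through together. A small sketch for listing the groups (output will differ per machine):

```shell
# List every PCI device grouped by IOMMU group, if IOMMU is active.
# Each symlink under /sys/kernel/iommu_groups/<n>/devices/ is one device.
if [ -d /sys/kernel/iommu_groups ]; then
    for dev in /sys/kernel/iommu_groups/*/devices/*; do
        [ -e "$dev" ] || continue
        group=$(basename "$(dirname "$(dirname "$dev")")")
        echo "IOMMU group $group: $(basename "$dev")"
    done
else
    echo "IOMMU not enabled - check the intel_iommu/amd_iommu boot flag"
fi
```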
vfio-pci Module
To allow the VM access to the devices, vfio needs to claim them before the host does. This can be achieved with `vfio-pci`, which is available in kernel v4.1+ and is the recommended option if the kernel supports it. Check whether this module is available by using `modprobe`.
```
modprobe vfio-pci
```
If there is no output, the module loaded without issues. If instead modprobe errors with `modprobe: FATAL: Module vfio-pci not found`, then a newer Linux kernel is needed.
Blacklisting Modules
If the host does not require the driver of the PCI device intended to be passed through (i.e. the host and VM are not using the same GPU vendor), blacklist the driver. Edit `/etc/modprobe.d/00-modprobe.conf`, or create the file if it doesn't exist. Add `radeon` and/or `nvidia` to the blacklist if they are not needed by the host.
```
blacklist radeon
blacklist nvidia
```
Bind Device to vfio
`vfio` will need to bind to any device that will be passed to a VM. First, let's find the full bus addresses of the devices.
```
ls /sys/bus/pci/devices | grep -i '01:00.0\|01:00.1'
0000:01:00.0@
0000:01:00.1@
```
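It can also be useful to see which driver currently owns each function, both before and after binding to vfio. A minimal sketch using the example bus addresses from above (adjust them for your hardware):

```shell
# Show the driver currently bound to each PCI function, if any.
# The bus addresses below are from the example above and are placeholders.
for dev in 0000:01:00.0 0000:01:00.1; do
    drv="/sys/bus/pci/devices/$dev/driver"
    if [ -e "$drv" ]; then
        echo "$dev -> $(basename "$(readlink "$drv")")"
    else
        echo "$dev -> no driver bound (or device not present)"
    fi
done
```

After a successful bind, each line should report `vfio-pci`.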
Create the file `/usr/local/etc/vfio-devices`. Add the bus addresses of the required devices to the file, excluding the trailing `@`.
```
DEVICES="0000:01:00.0 0000:01:00.1"
```
We will use this as an environment file to pass in the devices we want to bind.
vfio-bind Script
The `vfio-bind` script will automate rebinding the PCI devices. Create a file called `/usr/local/bin/vfio-bind` and add the following script.
```
#!/bin/sh

modprobe vfio-pci

for dev in "$@"; do
    vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
    device=$(cat /sys/bus/pci/devices/$dev/device)
    if [ -e /sys/bus/pci/devices/$dev/driver ]; then
        echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
    fi
    echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
done
```
Make the script executable.
```
chmod +x /usr/local/bin/vfio-bind
```
This script loads the `vfio-pci` module, locates the devices passed in as arguments, and rebinds them to `vfio-pci`.
systemd Unit File
The unit file will kick off the `vfio-bind` script using the list of devices in the `vfio-devices` file. Create the unit file as `/etc/systemd/system/vfio-bind.service` and add the following to it.
```
#!systemd
[Unit]
Description=Binds devices to vfio-pci
After=syslog.target

[Service]
EnvironmentFile=-/usr/local/etc/vfio-devices
Type=oneshot
RemainAfterExit=yes
ExecStart=-/usr/local/bin/vfio-bind $DEVICES

[Install]
WantedBy=multi-user.target
```
The unit file loads `/usr/local/etc/vfio-devices` to define the `$DEVICES` environment variable. It then runs the `/usr/local/bin/vfio-bind` script, passing in the list of devices.
Finally, enable the service to start on boot up.
```
systemctl enable vfio-bind.service
```
Restart the system and verify the script ran. The device IDs will be listed in `dmesg`.
```
dmesg | grep -i vfio
[    0.362578] VFIO - User Level meta-driver version: 0.3
[    0.378419] vfio_pci: add [1002:6818[ffff:ffff]] class 0x000000/00000000
[    0.391754] vfio_pci: add [1002:aab0[ffff:ffff]] class 0x000000/00000000
```
QEMU Script
The following options will need to be added to the QEMU virtual machine script.
```
-enable-kvm \
-device vfio-pci,host=01:00.0,x-vga=on,multifunction=on \
-device vfio-pci,host=01:00.1 \
```
The `host=` value will need to be set to the bus address of each device being passed through. The video device will need `x-vga=on` in order to display video.
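For context, a full invocation might look something like the sketch below. The disk image path, memory size, and bus addresses are placeholders, and real setups often need more (for example OVMF/UEFI firmware, CPU pinning, and input devices); treat this as a starting point, not a complete configuration.

```shell
qemu-system-x86_64 \
    -enable-kvm \
    -m 8G \
    -cpu host \
    -drive file=/path/to/disk.img,format=qcow2 \
    -device vfio-pci,host=01:00.0,x-vga=on,multifunction=on \
    -device vfio-pci,host=01:00.1
```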
Because the VM's video output is on separate hardware, seamless keyboard/mouse integration will not work. There are ways around losing keyboard and mouse integration, such as Synergy. Another option is to set up a dumb video device on the QEMU VM.
```
-device qxl \
-vga none \
```
This creates a `qxl` video device that is not connected to any video driver. When the QEMU VM is started, QEMU will open a blank video window. Clicking on the window will pass the keyboard and mouse to the VM.