QEMU GPU Passthrough


Requirements

  • A CPU that supports virtualization and an IOMMU.
  • A motherboard that supports virtualization.
  • A minimum of two graphics cards; integrated graphics can be one of them.

When booting into Linux, make sure the computer initializes video on the GPU that will be used by the host, not the GPU that will be passed through to the virtual machine. If one of the GPUs is integrated, the first device to initialize can usually be set in the BIOS.

This article will not walk through creating a VM with QEMU; it is assumed a virtual machine is readily available. The Arch Wiki has an in-depth article on how to set up QEMU.


Enable IOMMU

A requirement for device passthrough is IOMMU virtualization support from the CPU and motherboard. This is called VT-d on Intel and AMD-Vi on AMD.


A kernel boot parameter, intel_iommu=on or amd_iommu=on depending on the CPU vendor, needs to be added to the boot loader. Edit the kernel entry of the Linux bootloader; on Arch Linux with systemd-boot, the configuration is in /boot/loader/entries/arch.conf.
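For an Intel CPU on systemd-boot, the entry might look something like this (the kernel images and root device shown are only examples for illustration):

```
title   Arch Linux
linux   /vmlinuz-linux
initrd  /initramfs-linux.img
options root=/dev/sda2 rw intel_iommu=on
```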

Reboot the computer to initialize IOMMU.


After rebooting, verify that IOMMU has been enabled.
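One way to check is to search the kernel log for IOMMU messages; the exact wording varies by vendor and kernel version, but on an Intel system it may look like:

```shell
$ dmesg | grep -e DMAR -e IOMMU
[    0.000000] DMAR: IOMMU enabled
```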

Isolate the GPU

Locate the target card’s PCI bus addresses and device IDs.
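lspci can list the VGA and audio functions along with their vendor:device IDs in brackets. The output below is abbreviated and illustrative, matching the example card used in this article:

```shell
$ lspci -nn | grep -iE "vga|audio"
01:00.0 VGA compatible controller [0300]: AMD/ATI Pitcairn [1002:6818]
01:00.1 Audio device [0403]: AMD/ATI HDMI Audio [1002:aab0]
```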

In this case, the two PCI device IDs wanted are 1002:6818 and 1002:aab0. Their bus addresses are 01:00.0 and 01:00.1, respectively.

vfio-pci Module

To give the VM access to a device, vfio needs to claim it before the host does. This can be achieved with vfio-pci, which is available in kernel v4.1+ and is the recommended option if your kernel supports it. Check whether the module is available using modprobe.
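Loading the module looks like:

```shell
$ sudo modprobe vfio-pci
```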

If there is no output, the module loaded without issues. If instead modprobe fails with modprobe: FATAL: Module vfio-pci not found, a newer Linux kernel is needed.

Blacklisting Modules

If the host does not require the driver of the PCI device intended to be passed through (i.e. the host and VM are not using the same GPU vendor), blacklist the driver. Edit /etc/modprobe.d/00-modprobe.conf, or create the file if it doesn't exist. Add radeon and/or nvidia to the blacklist if they are not needed by the host.
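For example, assuming an AMD card is being passed through while the host runs on integrated graphics, the file could contain:

```
blacklist radeon
```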

Bind Device to vfio

vfio will need to bind to any device that will be passed to a VM.

First, let's find the full bus addresses of the devices.
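lspci can print the full, domain-qualified addresses with the -D flag. Restricting the query to slot 01:00 from earlier (output abbreviated):

```shell
$ lspci -D -s 01:00
0000:01:00.0 VGA compatible controller: ...
0000:01:00.1 Audio device: ...
```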

Create the file /usr/local/etc/vfio-devices.

Now add the bus addresses of the required devices to the file, excluding @.

We will use this as an environment file to pass the devices we want to bind.
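Using the example devices above and the $DEVICES variable name that the systemd unit expects, the file could contain:

```
DEVICES="0000:01:00.0 0000:01:00.1"
```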

vfio-bind Script

The vfio-bind script automates rebinding the PCI devices. Create a file called /usr/local/bin/vfio-bind and add the following script.
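A version of this script commonly circulated in VFIO guides unbinds each device from its current driver and registers its vendor/device ID with vfio-pci:

```shell
#!/bin/bash
# Load the vfio-pci module.
modprobe vfio-pci

for dev in "$@"; do
    vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
    device=$(cat /sys/bus/pci/devices/$dev/device)
    # Unbind the device from its current driver, if any.
    if [ -e /sys/bus/pci/devices/$dev/driver ]; then
        echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
    fi
    # Tell vfio-pci to claim devices with this vendor/device ID.
    echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
done
```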

Now make the script executable.
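For example:

```shell
$ sudo chmod +x /usr/local/bin/vfio-bind
```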

This script loads the vfio-pci module, locates the devices passed in as arguments and rebinds them to vfio-pci.

systemd Unit File

The unit file will kick off the vfio-bind script using the list of devices in the vfio-devices file. Create the unit file at /etc/systemd/system/vfio-bind.service and add the following to it.
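A minimal oneshot unit matching the paths used so far might look like this (the Description text is only a suggestion):

```
[Unit]
Description=Bind PCI devices to vfio-pci

[Service]
Type=oneshot
EnvironmentFile=/usr/local/etc/vfio-devices
ExecStart=/usr/local/bin/vfio-bind $DEVICES

[Install]
WantedBy=multi-user.target
```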

The unit file loads /usr/local/etc/vfio-devices to define the $DEVICES environment variable. Next, it runs /usr/local/bin/vfio-bind $DEVICES, passing in the list of devices.

Finally, enable the service to start on boot up.
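For example:

```shell
$ sudo systemctl enable vfio-bind.service
```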

Restart the system and verify the script is running. The device IDs will be listed in dmesg.
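Lines mentioning the registered device IDs, along the lines of vfio_pci: add [1002:6818...], should appear when searching the kernel log:

```shell
$ dmesg | grep -i vfio
```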

QEMU Script

The following options will need to be added to the QEMU virtual machine script.
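Using the bus addresses found earlier, the passthrough options might look like this (backslashes continue the QEMU command line):

```shell
-device vfio-pci,host=01:00.0,x-vga=on \
-device vfio-pci,host=01:00.1 \
-vga none
```

-vga none disables QEMU's emulated display adapter, which is the usual companion to x-vga=on unless an emulated display is deliberately kept around.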

The host= option will need to be set to the bus addresses of the devices being passed through. The video device will need x-vga=on in order to display video.

Because the video of the VM is on separate hardware, seamless keyboard/mouse integration will not work.

There are ways around losing keyboard and mouse integration, such as Synergy. Another option is to set up a dumb video device on the QEMU VM.

This will create a qxl video device that is not connected to any video driver. When the QEMU VM starts, QEMU will open a blank video window. Clicking on the window passes the keyboard and mouse through to the VM.
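As a sketch, the extra option can be as simple as (replacing -vga none if it was used):

```shell
-vga qxl
```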