ipaqmaster / Vfio

A script I've made to make passing PCI devices to a VM less of a chore. I use it for single-gpu VM gaming and other PCI/LiveCD/PXE/VM/RawImage testing given the script's accessibility.


vfio

What is this

A bash script that starts my Win10 VM directly with qemu-system-x86_64 while automatically handling optional network bridging, hugepage allocation, USB passthrough arguments and PCI device rebinding + arguments (plus rebinding back to the original drivers when done), minimizing my headaches.

It makes starting the Win10 VM quick and easy, then sends me back to the lightdm login screen once it shuts down. Hoping the script is helpful to others as I continuously tinker with it. I plan to add dual-GPU support (one for host, one for guest) once I dust off my old PC.

What does it do

It starts a VM using qemu-system-x86_64 directly but it also:

Takes a regular expression of PCI and USB devices to pass to the VM when starting it and generates qemu arguments for both.

Automatically unbinds specified PCI devices from their current driver and attaches them to the vfio-pci driver. (without any need for driver blocking or early vfio binds in the boot options) Then rebinds them to the driver they were originally used by after VM shutdown.
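
The rebind dance above can be sketched with the kernel's `driver_override` mechanism. The device address is hypothetical, and the `run` helper only prints each command so the sketch is safe to execute; the real script would perform these writes as root against real hardware.

```shell
# Print-only stand-in for executing a command as root.
run() { printf '%s\n' "$*"; }

dev=0000:01:00.0   # hypothetical GPU address

# Detach the device from its current driver and hand it to vfio-pci
run "echo $dev > /sys/bus/pci/devices/$dev/driver/unbind"
run "echo vfio-pci > /sys/bus/pci/devices/$dev/driver_override"
run "echo $dev > /sys/bus/pci/drivers_probe"

# ...VM runs, then shuts down...

# Clear the override and reprobe so the original driver reclaims it
run "echo $dev > /sys/bus/pci/devices/$dev/driver/unbind"
run "echo > /sys/bus/pci/devices/$dev/driver_override"
run "echo $dev > /sys/bus/pci/drivers_probe"
```

Because the override is per-device and cleared afterwards, no driver blacklisting or early-boot vfio binds are needed.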

Optionally makes a network bridge on the host during VM runtime, giving the VM its own Layer 2 MAC-address presence on the existing home LAN. (Useful for setting a DHCP reservation, connecting to services running on the VM such as RDP, and avoiding host-NAT nightmares.)

It can optionally pin the VM's qemu process to specific host CPU threads. (Useful if core isolation is configured in the host's boot parameters or a cpu-set is defined for improved VM performance.)

On VM exit it rebinds PCI devices back to their original drivers and starts your display manager back up if it was running.

What hosts and installs are supported?

This is an ongoing discovery, but most modern boards should be fine. As long as your motherboard supports virtualization (and preferably IOMMU, for best performance) it seems to work.

This script has worked for me this year using Archlinux. I play Overwatch on it a lot and, as for performance, if the game were presented to me without context I wouldn't be able to tell it's a VM. The fps and input latency have seriously been great.

On Archlinux it's worked on the two hardware configurations below; however, I'm certain other distros and hardware configurations will work too.

Why

WINE and Proton have gotten me far for the past few years but sometimes:

  1. I lack the time to help a stubborn title run,
  2. A title is known not to work despite the efforts of Proton or Wine, or
  3. A title employs a driver-level anti-cheat solution (which you cannot just throw at WINE), leaving a VM as the next best option.

Modern IOMMU support has made playing these incompatible titles in a VM easy.

With a second GPU present the Looking Glass project could be implemented, leaving the VM headless instead. (This flag is being worked on.)

The script, arguments, and examples

Arguments this script will take [and script Gotchas]

Arguments

-image /dev/zvol/zpoolName/windows -imageformat raw

If set, attaches a flatfile, partition, whole disk or zvol to the VM with QEMU's -drive parameter. -imageformat is optional and applies to the most recent -image argument specified. -image and -imageformat can be used multiple times to add more disks.
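
Presumably each -image/-imageformat pair maps onto a qemu -drive argument along these lines. This is a sketch, not the script's actual code, and the if=virtio bus choice is my assumption:

```shell
# Hypothetical mapping of -image / -imageformat to a qemu -drive argument.
image=/dev/zvol/zpoolName/windows
imageformat=raw
drive_args="-drive file=$image,format=$imageformat,if=virtio"  # if=virtio assumed
echo "$drive_args"
```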

-iso /path/to/a/diskimage.iso

If set, attaches an ISO to qemu with an incrementing index id. Can be specified as many times as needed for multiple CDs. Good for liveCDs or installing an OS with an optional driver CD.

-bridge br0,tap0 (Attach vm's tap0 to existing br0)

-bridge br0,tap0,eth0 (Create br0, attach vm's tap0 and host's eth0 to br0, use dhclient for a host IP.)

Takes a bridge name, a tap interface name and optionally a host interface name as arguments.

Using example 1:

  1. Checks br0 exists first.

  2. Creates tap0 for the vm as usual and attaches it to the pre-existing br0 (No dhclient, assumes pre-existing bridge is configured)

  3. During cleanup unslaves and deletes tap0.

Using example 2:

  1. Creates br0 + tap0.

  2. Slaves tap0 and eth0 to br0.

  3. Copies the mac from eth0 to the br0 (To preserve any DHCP Reservation in a network)

  4. Removes any lingering IPs from eth0. (flush)

  5. Brings all 3 up and runs dhclient on br0

  6. During cleanup, unslaves the interface and deletes br0+tap0. Restores NetworkManager if it were found to be running.

    If this argument isn't specified the default qemu NAT adapter will be used (NAT may be desirable for some setups)
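
Example 2's steps correspond roughly to the following ip commands. This is a print-only sketch (the real operations need root), using the README's example interface names:

```shell
# Print-only stand-in for executing a command as root.
run() { printf '%s\n' "$*"; }

br=br0 tap=tap0 eth=eth0

run "ip link add $br type bridge"
run "ip tuntap add $tap mode tap"
run "ip link set $tap master $br"
run "ip link set $eth master $br"
# Copy the host NIC's MAC to the bridge to preserve DHCP reservations
run "ip link set dev $br address \$(cat /sys/class/net/$eth/address)"
run "ip addr flush dev $eth"
run "ip link set $br up"
run "ip link set $tap up"
run "ip link set $eth up"
run "dhclient $br"
```

Cleanup would be the reverse: unslave the host interface, delete the tap and bridge, and bring networking management back up.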

-memory 8192M / -memory 8G / -mem 8G

Sets how much memory the VM gets for this run. The argument assumes megabytes unless you explicitly use a suffix such as K, M, or G. If this argument isn't specified the default value is HALF of the host's total memory.
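
The suffix handling could be sketched like this (my reconstruction for illustration, not the script's actual code):

```shell
# Normalize a -memory value to megabytes; bare numbers are taken as MB.
to_mb() {
  case $1 in
    *[Gg]) echo $(( ${1%[Gg]} * 1024 )) ;;
    *[Mm]) echo "${1%[Mm]}" ;;
    *[Kk]) echo $(( ${1%[Kk]} / 1024 )) ;;
    *)     echo "$1" ;;
  esac
}

to_mb 8G      # prints 8192
to_mb 8192M   # prints 8192
```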

-hugepages / -huge

Tries to mount (if not already mounted) and allocate hugepages based on the VM's total memory defined with -memory (or the default). If successful, qemu is given the arguments to use them. They're deallocated after VM exit in the cleanup routine.
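
The page math behind this is roughly as follows, assuming the typical 2 MiB x86_64 hugepage size (print-only; the real writes need root, and -mem-path/-mem-prealloc are standard qemu flags):

```shell
# Print-only stand-in for executing a command as root.
run() { printf '%s\n' "$*"; }

vm_mem_mb=8192   # from -memory (or the default)
page_kb=2048     # typical x86_64 hugepage size; check Hugepagesize in /proc/meminfo
pages=$(( vm_mem_mb * 1024 / page_kb ))   # 4096 pages for an 8G guest

run "echo $pages > /proc/sys/vm/nr_hugepages"
run "qemu-system-x86_64 ... -mem-path /dev/hugepages -mem-prealloc"
```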

-bios '/path/to/that.fd'

An optional bios path. If not set the script will try /usr/share/ovmf/x64/OVMF_CODE.fd if available.

-usb 'AT2020USB|SteelSeries|Ducky|Xbox|1425:5769'

If set, the script enumerates lsusb with this regex and generates qemu arguments for passing the matching devices through when the VM starts.

This example would catch any:

  1. Audio-Technica AT2020 (USB microphone + headphone DAC),

  2. SteelSeries mouse,

  3. Xbox controller,

  4. Ducky brand keyboard, and

  5. any USB device with ID 1425:5769, whatever that may be.
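
Per match, the generated passthrough argument presumably looks something like the following. The sketch uses a canned lsusb line (with a made-up vendor name) so it runs anywhere; vendorid/productid is standard qemu usb-host syntax:

```shell
# Turn one lsusb line into a usb-host passthrough argument.
line='Bus 003 Device 007: ID 1425:5769 Example Vendor Example Device'
id=$(printf '%s\n' "$line" | grep -oE '[0-9a-f]{4}:[0-9a-f]{4}' | head -n1)
vendor=${id%%:*}    # 1425
product=${id##*:}   # 5769
args="-device usb-host,vendorid=0x$vendor,productid=0x$product"
echo "$args"   # -device usb-host,vendorid=0x1425,productid=0x5769
```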

-pci 'Realtek|NVIDIA|10ec:8168'

If set, the script enumerates lspci and generates arguments like the above, but also unbinds the matching devices from their current drivers (if any) and binds them to vfio-pci. It remembers what they were bound to beforehand for rebinding after the VM shuts down.

This example would catch any:

  1. PCI devices by Realtek,

  2. NVIDIA cards (including children of the GPU such as its audio device and USB-C controller), and

  3. any PCI device with ID 10ec:8168.
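
For each PCI match the script would need both a qemu argument and the sysfs path for rebinding, roughly like this (a sketch on a canned lspci line; -device vfio-pci,host=... is standard qemu syntax):

```shell
# Turn one lspci line into a vfio-pci passthrough argument.
line='01:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1070]'
addr=${line%% *}                        # 01:00.0
args="-device vfio-pci,host=$addr"
sysfs=/sys/bus/pci/devices/0000:$addr   # where the unbind/bind writes would go
echo "$args"   # -device vfio-pci,host=01:00.0
```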

-taskset 0,1,2,3,4,5 / -taskset 0,2,4,8

The taskset argument takes the threads you give it and only lets the VM execute on those threads. It also creates only that many threads on the VM (6 and 4 in the examples respectively). This can significantly reduce latency if the guest is having trouble, even if you haven't configured any host pinning.
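
Expanded, the pinning presumably boils down to something like this (print-only sketch; taskset -c and qemu's -smp are real flags, the exact invocation is assumed):

```shell
# Print-only stand-in for executing a command.
run() { printf '%s\n' "$*"; }

threads=0,1,2,3,4,5
count=$(IFS=,; set -- $threads; echo $#)   # 6 guest threads

run "taskset -c $threads qemu-system-x86_64 -smp $count ..."
```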

-colortest

A quick terminal color test then exits.

-iommugroups / -iommugrouping

Prints IOMMU groupings if available, then exits.

-extras '-device xyz -device abc -device ac97 -display gtk -curses'

If set, appends extra arbitrary arguments to the qemu command line when invoked. Useful for -device ac97 to get some quick and easy audio support if the host is running pulseaudio.

-run

Actually run. Without this flag the script does a dry run, outputting qemu arguments and environment information. Especially useful for testing PCI regexes without unbinding things on the first try, and good for general safety.

Notes and Gotchas.

  • If you don't set any -usb or -pci arguments the VM will run in a window on your desktop as is normal for Qemu. Useful for testing the VM actually boots, installing OSes or using liveCDs.
    • If you don't have $DISPLAY set the guest will run headless but the terminal will attach to the guest's serial. Make sure you put something like console=ttyS0 in the guest boot arguments if you actually want to interact with it while headless.
  • The absolute minimum requirement to get started is the -image and -iso arguments with OVMF available. You can install an OS, VirtIO+Nvidia drivers if needed, and have it ready for a passthrough on the next boot.
  • The default networking mode is QEMU's user-mode networking (NAT through host IP).
    • It's fine but if you want to talk to the guest from the outside you'll want to consider using -bridge.
  • This script makes use of VirtIO for networking. Unless you're passing through a USB/PCI network adapter, you'll want to install the VirtIO drivers into the guest. (e.g. Boot into the Windows ISO to install, then reboot the VM this time with the VirtIO driver iso attached)
  • The CPU topology is 'host' by default. The VM will think it has the host's CPU model.
  • By default the VM's CPU topology uses ALL host cores and HALF the host's total memory. You can use -taskset to cherry-pick host threads for the VM to execute on; it will also set the VM's thread count to match. This is very useful if the host has enough load to interrupt the VM during normal operation. If your host doesn't have the CPU load headroom for a gaming guest, or the VM experiences stuttering, consider using -taskset to isolate guest cores.
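
For the headless case mentioned above, the relevant pieces on each side could look roughly like this (print-only sketch; -nographic is a standard qemu flag that puts the guest serial on your terminal, and the guest kernel argument is the console=ttyS0 setting the notes describe):

```shell
# Print-only stand-in for executing a command.
run() { printf '%s\n' "$*"; }

# Host side: no display, guest serial attached to this terminal
run "qemu-system-x86_64 -nographic ..."

# Guest side: kernel command line so output actually lands on the serial port
run "linux /boot/vmlinuz root=/dev/vda2 console=ttyS0"
```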

Examples

Note: I've omitted -run from these so they remain a dry-run.

If you aren't ready to do any passthrough and just want to start the VM in a regular window (installation, drivers, etc.): ./main -image /root/windows.img,format=raw -iso /root/Win10Latest.iso

If a host has been booted with isolated cores you can tell the script to pin the guest to those only: ./main -image /root/windows.img,format=raw -taskset 0,1,2,3,4,5

This example starts the VM with only host threads 0 to 5 (the first 3 cores on a multi-threaded host). Very useful if a VM experiences stuttering from host load.

An example run with passthrough could look like: ./main -image /dev/zvol/poolName/windows,format=raw -bridge br0,tap0,eth0 -usb 'SteelSeries|Audio-Technica|Holtek|Xbox' -pci 'NVIDIA'

This example would pass all regex-matching USB and PCI devices (if present) and rebind the PCI devices where applicable. It would also provision network bridge br0, attach tap0 and your host interface to the bridge, then give tap0 to the VM.
