What is SR-IOV (Single Root I/O Virtualization)?

SR-IOV is a technology, standardized by the PCI-SIG and heavily promoted by Intel, created to improve the networking performance of virtual machines.

Interrupts

First of all, it is important to explain how interrupts are involved in packet processing.

The process of receiving and reading packets from the wire is interrupt driven, in both virtualized and non-virtualized systems. When a packet is received on the NIC, an IRQ (interrupt request) is sent to the CPU, which then has to stop what it is doing in order to retrieve the data.
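On a Linux host you can observe this interrupt activity directly. A quick inspection might look like the following (the interface name eth0 is an assumption; substitute your own):

```shell
# List the IRQ lines registered by the NIC. On a multiqueue NIC each
# queue typically has its own IRQ line, with per-CPU counters shown.
grep eth0 /proc/interrupts

# Watch the counters climb as packets arrive on the wire
watch -n 1 'grep eth0 /proc/interrupts'
```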

However, on virtualized hosts an additional interrupt is introduced. When the NIC receives a packet, it sends an interrupt to the CPU running the hypervisor. The hypervisor examines the packet to determine which core and VM must process it. Another interrupt is then sent to the CPU running the VM, which stops what it is doing and copies the packet into the VM's user space.

VMDq

To reduce the number of interrupts, balance the workload more evenly across the CPUs, and increase network performance, Intel introduced VMDq (Virtual Machine Device Queues).

VMDq allows the hypervisor (VMM) to assign, within the NIC, a queue/classifier and a separate CPU interrupt for each VM. The NIC can then sort each packet based on MAC address/VLAN and send the appropriate interrupt to the VM's CPU, without having to interrupt the hypervisor's CPU. Even with VMDq there is still room for performance improvement, as the VMM must still hand the packet off to the VM[1]. However, because of this handoff, inline features such as the vSwitch remain available.
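While VMDq itself is enabled by the NIC driver and hypervisor, you can inspect and adjust the hardware queues a NIC exposes with ethtool. A minimal sketch (the interface name eth0 is an assumption):

```shell
# Show how many hardware queues (channels) the NIC supports and currently uses
ethtool -l eth0

# Increase the number of combined RX/TX queues, so traffic for different
# consumers can be steered into separate queues in hardware
ethtool -L eth0 combined 8

# Dump per-queue statistics to see how traffic is being distributed
ethtool -S eth0 | grep -i queue
```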

 

SR-IOV

SR-IOV (Single Root I/O Virtualization) is an extension to the PCI Express (PCIe) specification and offers greater performance benefits than VMDq.
Unlike VMDq, which creates a separate queue for each VM, SR-IOV creates a Virtual Function (VF) that acts like a separate physical NIC for each VM[2].
Each VF in the NIC is given a descriptor that tells it where the user-space memory owned by the VM it serves resides[3]. This allows data to be transferred directly between the NIC and the VM via DMA (Direct Memory Access), bypassing both the virtual switch and the VMM (Virtual Machine Monitor). This provides near interrupt-free operation and, in turn, makes packet processing extremely fast.

However, there is a downside: because of the direct access between the VM and the VF, anything in between, i.e. vSwitches etc., is bypassed and its features are unavailable.
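On a Linux host with an SR-IOV capable NIC, VFs are typically created through sysfs and then handed to VMs via PCI passthrough. A minimal sketch (the PF name enp3s0f0 and the MAC/VLAN values are assumptions; adjust for your system):

```shell
# Check how many VFs the physical function (PF) supports
cat /sys/class/net/enp3s0f0/device/sriov_totalvfs

# Create 4 Virtual Functions; each appears as its own PCIe device
echo 4 > /sys/class/net/enp3s0f0/device/sriov_numvfs
lspci | grep -i "Virtual Function"

# Pin a MAC address and VLAN to VF 0 so the NIC can classify its
# traffic in hardware before DMAing it into the VM's memory
ip link set enp3s0f0 vf 0 mac 52:54:00:aa:bb:cc vlan 100
```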

References

[1] https://www.linkedin.com/pulse/revisiting-network-io-virtualization-brad-beadles?trkSplashRedir=true&forceNoSplash=true
[2] http://windowsitpro.com/virtualization/q-are-vmdq-and-sr-iov-performing-same-function
[3] http://www.metaswitch.com/the-switch/accelerating-the-nfv-data-plane

Rick Donato
