I was going through all the exciting things that ESXi 5.1 supports, and one thing caught my eye: something I had never heard of before.
vQuicky – For the Impatient Like me 🙂
> SR-IOV stands for Single Root I/O Virtualization and is now supported in ESXi 5.1
> SR-IOV is different from VMDirectPath in that it allows multiple VMs to share the same NIC while still getting higher throughput, as they bypass the hypervisor layer
> An SR-IOV capable PCIe card can be presented to many virtual machines directly, without going through the hypervisor layer
> SR-IOV runs on the concept of physical functions (PFs) and virtual functions (VFs). PFs have configuration capability on the PCIe device while VFs do not. VFs are attached to virtual machines and are of the same device type as the PF/PCIe device.
> The PCI-SIG maintains the SR-IOV specification and says that one SR-IOV capable card can have as many as 256 VFs in theory.
> SR-IOV needs to be supported in the BIOS, the OS, and the hypervisor for it to work correctly.
> SR-IOV can cause issues with physical switches, as physical switches do not know about the VFs. VF switching is another capability that I believe is available, allowing traffic to flow directly between VFs.
> You cannot have a VF of a different device type from that of the PF/PCIe device.
> If you have an SR-IOV capable HBA, you will need NPIV (N_Port IDs) to manage the WWNs on the virtual HBAs.
> MR-IOV is multiple systems sharing one PCIe device through VFs.
inDepth
So what is Single Root I/O Virtualization, or SR-IOV for short? Here goes.
Single Root I/O Virtualization is a technology, or in fact a specification, that allows a single physical PCIe device to appear as multiple PCIe devices. For instance, if you have a single PCIe network card, with SR-IOV you can present that card to several virtual machines, and each of them will treat it as its own dedicated physical PCIe card. In other words, you now have the ability to present one PCIe card to multiple virtual machine instances. This is different from VMDirectPath: SR-IOV allows hypervisor bypass while still letting multiple virtual machines share the same network card, each attaching its own virtual function. More about virtual functions below.
SR-IOV works by introducing the concepts of the Physical Function and the Virtual Function, PF and VF for short. Physical functions are full-blown PCIe functions, while virtual functions are lightweight and are associated with a virtual machine. Once a virtual function is assigned to a virtual machine, traffic flows directly to it, bypassing the virtualization layer and allowing higher throughput. Basically, PFs have full configuration capability on the PCIe card; they are full-featured PCIe functions and can make configuration changes to the card. VFs lack that configuration capability and can only move data in and out.
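To make the PF/VF relationship a bit more concrete, here is a minimal sketch (Python, assuming a Linux host with an SR-IOV capable NIC, not ESXi itself) that walks sysfs and lists each physical function along with the virtual functions it currently exposes. The sysfs attribute names are standard Linux ones; the hypervisor exposes the same concept through its own management interfaces.

```python
#!/usr/bin/env python3
"""Minimal sketch: list SR-IOV physical functions (PFs) and their virtual
functions (VFs) on a Linux host by walking sysfs. Illustrates the PF/VF
relationship only -- this is not an ESXi tool."""
import glob
import os


def list_sriov_nics():
    for dev in sorted(glob.glob("/sys/class/net/*/device")):
        nic = dev.split("/")[-2]
        total_path = os.path.join(dev, "sriov_totalvfs")
        if not os.path.exists(total_path):
            continue  # this NIC (or its driver) does not expose SR-IOV
        with open(total_path) as f:
            max_vfs = int(f.read())
        # Each enabled VF shows up as a virtfnN symlink pointing at its own
        # PCI address: a lightweight PCIe function of the same device type.
        vfs = [os.path.basename(os.readlink(p))
               for p in sorted(glob.glob(os.path.join(dev, "virtfn*")))]
        print(f"PF {nic}: up to {max_vfs} VFs, enabled: {vfs or 'none'}")


if __name__ == "__main__":
    list_sriov_nics()
```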
SR-IOV requires support in the BIOS as well as in the operating system and/or the hypervisor. Since VFs cannot be treated as full PCIe devices because they lack configuration capability, the OS needs extra drivers to work with them.
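To illustrate that "support at every layer" point: on a plain Linux host the OS-side support shows up in sysfs, where you can ask a PF driver to spawn VFs by writing a count to sriov_numvfs. This is a rough sketch with a placeholder interface name (eth2) and VF count; if the BIOS (VT-d/IOMMU), kernel or driver does not play along, the write simply fails. On ESXi 5.1 itself you would, as far as I know, set the NIC driver's module parameters instead.

```python
#!/usr/bin/env python3
"""Rough sketch: enable a handful of VFs on a Linux PF by writing to
sriov_numvfs. The interface name (eth2) and the VF count are placeholders.
This only works if the BIOS, kernel, and PF driver all cooperate -- otherwise
the write fails, mirroring the point about needing support at every layer."""
import sys

PF_IFACE = "eth2"  # placeholder: your SR-IOV capable NIC
NUM_VFS = 4        # placeholder: how many VFs to create


def enable_vfs(iface, count):
    path = f"/sys/class/net/{iface}/device/sriov_numvfs"
    try:
        # The driver requires dropping back to 0 before changing the VF count.
        with open(path, "w") as f:
            f.write("0")
        with open(path, "w") as f:
            f.write(str(count))
    except OSError as err:
        sys.exit(f"Could not enable {count} VFs on {iface}: {err}")
    print(f"Enabled {count} VFs on {iface}")


if __name__ == "__main__":
    enable_vfs(PF_IFACE, NUM_VFS)
```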
The PCI-SIG (these guys maintain the SR-IOV specification) says that one instance can have as many as 256 VFs, so a quad-port SR-IOV PCIe card could have 1024 VFs in theory. Remember that a VF relies on the physical hardware of the card itself, so in practice the number of VFs may be lower. For SR-IOV Fibre Channel HBAs, you will have logical HBAs sharing a physical HBA, and you will need NPIV enabled so you can manage the WWNs and N_Port IDs on a single HBA.
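Since the practical VF count depends on the card, it is worth checking what your hardware actually advertises before trusting the theoretical numbers. A small sketch along the same lines as above (Linux sysfs, per PF):

```python
#!/usr/bin/env python3
"""Small sketch: compare the theoretical per-card VF figure quoted above (256)
with what each PF actually advertises via sriov_totalvfs on a Linux host."""
import glob

THEORETICAL_MAX = 256  # the in-theory figure quoted from the SR-IOV spec above

for total_path in sorted(glob.glob("/sys/class/net/*/device/sriov_totalvfs")):
    nic = total_path.split("/")[4]
    with open(total_path) as f:
        advertised = int(f.read())
    print(f"{nic}: hardware advertises {advertised} VFs "
          f"(theoretical ceiling {THEORETICAL_MAX})")
```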
One thing you have to be careful about is that physical switches are not aware of the VF ports, which can cause some issues. I read that SR-IOV allows some functionality where VF switching is possible, meaning you can send traffic between VFs without having to go through a physical switch. You also cannot have a VF of a type other than that of the PF.
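Because the upstream switch only ever sees the PF's physical port, per-VF settings such as MAC address and VLAN are applied on the host side at the PF. Here is a hedged sketch of what that looks like on a Linux host with iproute2; the interface name, VF index, MAC and VLAN are all placeholders, and ESXi manages this through its own configuration rather than these commands.

```python
#!/usr/bin/env python3
"""Hedged sketch: assign a MAC address and VLAN tag to a VF from the PF side
using iproute2 (`ip link set ... vf ...`). Shown for a Linux host; the point
is that per-VF settings are applied on the host, since the physical switch
never sees the VFs as separate ports."""
import subprocess

PF_IFACE = "eth2"             # placeholder PF interface
VF_INDEX = 0                  # placeholder VF index
VF_MAC = "52:54:00:12:34:56"  # placeholder locally administered MAC
VF_VLAN = 100                 # placeholder VLAN ID

cmd = ["ip", "link", "set", "dev", PF_IFACE,
       "vf", str(VF_INDEX), "mac", VF_MAC, "vlan", str(VF_VLAN)]
subprocess.run(cmd, check=True)
print(f"VF {VF_INDEX} on {PF_IFACE}: MAC {VF_MAC}, VLAN {VF_VLAN}")
```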
I picked up some of this information from comments on Scott Lowe's blog. The servers below support SR-IOV. Apparently there is also MR-IOV (Multi Root I/O Virtualization), where multiple systems share PCIe VFs. I heard it is terribly complicated, but Googling did not turn up much info.
Dell: PowerEdge 11G Gen II Servers: T410, R410, R510, R610, T610, R710, T710 and R910.
Further information is available at http://en.community.dell.com/techcenter/os-applications/w/wiki/3458.microsoft-windows-server-8.aspx
Fujitsu: PRIMERGY RX900 S2 using BIOS version 01.05. Further information is available at http://www.fujitsu.com/fts/products/computing/servers/primergy/rack/rx900/
HP: ProLiant DL580 G7 and ProLiant DL585 G7. Further information is available at http://www.hp.com/go/win8server
Comment if you liked it or if I said something wrong 🙂