What you need to know about Jumbo Frames

rjapproves QuickY (For the impatient like me!)

> Jumbo frames pack more data into each Ethernet frame: more payload per frame means fewer frames, less CPU overhead, and higher throughput.
> Anything with an MTU above 1500 is a jumbo frame; the maximum commonly supported is 9000 MTU.
> Can be used for inter-VM traffic, VMkernel traffic (iSCSI), and vMotion traffic.
> Must be enabled end-to-end across the entire stack to reap the benefits. Unconfigured L2 switches will drop jumbo frames; unconfigured L3 devices will fragment them.
> Enable jumbo frames – ESX(i) 4 – esxcfg-vswitch -m 9000 vSwitch0, and for a VMkernel interface – esxcfg-vmknic -m 9000 portgroup_name
> Enable jumbo frames – ESXi 5 – esxcli network vswitch standard set -m 9000 -v vSwitch0, and for a VMkernel interface – esxcli network ip interface set -m 9000 -i vmk0
> For a dvSwitch, use the client and go to Edit Settings – Advanced
> List all network info – ESX(i) 4 – esxcfg-vswitch -l or esxcfg-vmknic -l
> List all network info – ESXi 5 – esxcli network ip interface list

In Depth

I have been into VMware for some time but never really got to know what jumbo frames were all about, so here goes. Jumbo frames, in simple terms, allow you to send more data packed into each frame. This is beneficial for higher throughput and also for lower CPU overhead. Remember, the more frames you send, the more CPU resources you burn processing them. This holds throughout the stack, and by stack I mean your hypervisor, switch, router, and so on.

So a jumbo frame is an Ethernet frame with a payload larger than the standard 1500-byte MTU (Maximum Transmission Unit). Jumbo frames typically max out at an MTU of 9000. Roughly, that translates to 1500 bytes of payload for a standard frame versus 9000 bytes for a jumbo frame. The full on-wire size of a 1500-MTU frame is actually up to 1522 bytes, where the extra 22 bytes are the source and destination MAC addresses (12 bytes total) + an optional 802.1Q VLAN tag (4 bytes) + the EtherType field (2 bytes) + the CRC32 error-check trailer (4 bytes). A blog I read while researching claimed to have seen some Cisco switches that could go up to 9216 bytes.
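To get a feel for the savings, here is a back-of-the-envelope sketch using the 22-byte header/trailer figure from above (it ignores the Ethernet preamble and inter-frame gap, so the real savings are somewhat larger):

```shell
# How many frames does it take to move 9 MB of payload, and how much
# header/trailer overhead do jumbo frames save?
data=9000000    # payload bytes to send
overhead=22     # per-frame header/trailer bytes, from the breakdown above

std=$(( (data + 1500 - 1) / 1500 ))     # frames at MTU 1500 (round up)
jumbo=$(( (data + 9000 - 1) / 9000 ))   # frames at MTU 9000 (round up)
saved=$(( (std - jumbo) * overhead ))   # overhead bytes avoided on the wire

echo "standard frames: $std, jumbo frames: $jumbo, bytes saved: $saved"
# → standard frames: 6000, jumbo frames: 1000, bytes saved: 110000
```

Six times fewer frames also means six times fewer per-frame interrupts and checksum operations for the hypervisor to handle, which is where the CPU savings come from.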

As mentioned above, by enabling jumbo frames you pack more data into each frame, so the hypervisor sends fewer frames for the same amount of data, improving throughput and lowering CPU overhead. You can use jumbo frames for traffic between VMs, for vMotion, or even for iSCSI, which is supported in ESX(i) 4 and ESXi 5.

The only catch with jumbo frames is that they are an end-to-end setup: everything from your hypervisor through your switches to the far end must support them. Some routers, when they receive a jumbo frame, fragment it and reassemble it on the other side, which only adds overhead and defeats the purpose. And if a switch in the path is not configured for jumbo frames, it will simply drop them, so you will break connectivity!
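One way to verify the path really is jumbo-clean end to end is to send a large ping from the ESXi shell with fragmentation disabled (a sketch; the target IP is a placeholder for a host on your jumbo-enabled network):

```shell
# vmkping from the ESXi shell: -d sets the do-not-fragment bit,
# -s sets the ICMP payload size.
# 8972 payload + 8 (ICMP header) + 20 (IP header) = 9000-byte IP packet.
# Replace 192.168.1.10 with a host on your storage/vMotion network.
vmkping -d -s 8972 192.168.1.10
# If any hop in the path is not configured for jumbo frames, the ping
# fails instead of being silently fragmented, which is exactly what
# you want to find out before putting iSCSI traffic on the link.
```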

How to enable Jumbo Frames?


You have to enable jumbo frames on the vSwitch or the dvSwitch that you may be using. You can do that either from the client or from the command line:
For ESX(i) 4 – esxcfg-vswitch -m 9000 vSwitch0
For ESXi 5 – esxcli network vswitch standard set -m 9000 -v vSwitch0
Remember, the above commands set every NIC associated with that vSwitch to MTU 9000.
You can also enable jumbo frames on VMkernel interfaces as follows.
For ESX(i) 4 – esxcfg-vmknic -m 9000 portgroup_name  –> I read that on ESX(i) 4 you actually need to delete and recreate the vmk interface, but I did not test it.
For ESXi 5 – esxcli network ip interface set -m 9000 -i vmkernel_interface  –> vmkernel_interface as in vmk0 or vmk1
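The ESXi 5 steps above can be sketched as one sequence (vSwitch0 and vmk0 are the usual defaults but may differ on your host):

```shell
# ESXi 5: raise the vSwitch MTU first, then the VMkernel interface.
# vSwitch0 / vmk0 are typical defaults; adjust for your host.
esxcli network vswitch standard set -m 9000 -v vSwitch0
esxcli network ip interface set -m 9000 -i vmk0

# Confirm the new MTU took effect on both:
esxcli network vswitch standard list | grep -i mtu
esxcli network ip interface list
```

Setting the vmk MTU higher than its vSwitch MTU will not work, which is why the vSwitch goes first.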

To list all switch info:
For ESX(i) 4 – esxcfg-vswitch -l or esxcfg-vmknic -l
For ESXi 5 – esxcli network ip interface list

Comment to discuss more or correct me 🙂
