Installation & Configuration



Nutanix, VMware Networking, and Jumbo Frames

SOLVED
Pathfinder

Nutanix, VMware Networking, and Jumbo Frames

I'm trying to figure out whether there is a performance issue with Storage vMotion traffic going from one of our datacenters to the other (they're only about half a mile apart). We run Cisco ACI in our environment, and the two datacenters look like one huge switch/router. There are four 40Gb links between the datacenters dedicated to the ACI network, LAG'd together. Recently we moved a VM that is almost 6TB in size from one datacenter to the other. It took almost 24 hours to migrate, which works out to roughly 0.5 Gb/s, which I thought was poor. The hosts have two 10Gb NICs dedicated to management and vMotion traffic, and two more 10Gb NICs dedicated to VM traffic (connected to a vDS using LACP).
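To sanity-check my math, here's the back-of-the-envelope calculation (assuming a flat 24-hour transfer of 6 TB, decimal units):

```python
# Back-of-the-envelope check of the migration throughput.
# Assumptions: 6 TB moved in 24 hours, decimal terabytes (10^12 bytes).
size_bytes = 6 * 10**12      # 6 TB
duration_s = 24 * 60 * 60    # 24 hours in seconds

throughput_gbps = size_bytes * 8 / duration_s / 10**9
print(f"Effective throughput: {throughput_gbps:.2f} Gb/s")
# -> Effective throughput: 0.56 Gb/s -- a small fraction of one 10Gb link
```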

 

I verified that routing between the datacenters is as expected. The MTU on the management/vMotion vSwitch is set to 1500; the vDS is set to 9000, but the VMs using the vDS are set to 1500. It turns out the default for Cisco ACI is to use MTU 9000 jumbo frames. For context, Cisco ACI pushes the vDS (for VM traffic) into vCenter for all the hosts to use, so the VM traffic configuration is done by Cisco ACI.
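In case it helps anyone else chasing an MTU mismatch like this: the standard test is a don't-fragment ping sized to the path MTU (vmkping -d -s 8972 on ESXi for a 9000-byte path). A quick sketch of the payload math, assuming IPv4 (20-byte header) and ICMP echo (8-byte header):

```python
# Payload size for a don't-fragment ping that exactly fills a given MTU.
# Assumes IPv4 (20-byte header) plus ICMP echo (8-byte header).
def df_ping_payload(mtu: int) -> int:
    IP_HEADER = 20
    ICMP_HEADER = 8
    return mtu - IP_HEADER - ICMP_HEADER

print(df_ping_payload(9000))  # 8972 -- use with: vmkping -d -s 8972 <ip>
print(df_ping_payload(1500))  # 1472 -- the standard-MTU equivalent
```

If the 8972-byte ping fails with don't-fragment set but the 1472-byte one succeeds, something in the path is still at MTU 1500.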

 

In the vSphere networking guide there's a handy section (starting on page 26) about jumbo frames and how they can be beneficial (though by no means immensely so) for vMotion, Fault Tolerance, and iSCSI/NFS. There's a diagram showing the best possible setup when using jumbo frames:

[Image: BP-2074-vSphere-Networking.jpg]

If I'm reading the diagram correctly, the optimal setup would basically require six 10Gb NICs to do things properly: two for VMkernel management (MTU 1500), two for vMotion and Fault Tolerance (MTU 9000), and two for VM traffic (MTU 1500).

 

I just don't know whether this is the expected speed, or whether a Storage vMotion should go faster given that it's fighting for resources on the two 10Gb links. I can't imagine that's truly the issue, though, since Nutanix supports running the entire environment on only two 10Gb links. All of the data currently sits on SSDs, so I don't imagine it's an IOPS issue. Any thoughts? I'm just trying to figure out what to bring to my networking team on how best to set up our environment, since we only have four 10Gb NICs. Thanks!

1 ACCEPTED SOLUTION
Pathfinder

Re: Nutanix, VMware Networking, and Jumbo Frames

After working with Arthur, it turns out this is really an issue with VMware's Data Movers, which haven't been updated in ages: when the copy can't be offloaded to the storage via VAAI, the host-based data mover does the work, and that caps throughput regardless of link speed. This post, though older, gives a somewhat decent idea of what's going on:

 

http://longwhiteclouds.com/2013/12/25/vmware-storage-vmotion-data-movers-thin-provisioning-barriers-...

4 REPLIES
Nutanix Employee

Re: Nutanix, VMware Networking, and Jumbo Frames

Hello bmeyer17,

 

If I recall correctly, Storage vMotion still uses the management vmk interface for the data mover, so the diagram you shared would not help: the VMkernel Mgmt MTU would remain at 1500.

 

If you open a case with Nutanix, we can review the clusters and confirm that they're running optimally.

 

In any case, the Storage vMotion flow will only go across one of the 10GbE links, even if you are using LACP: the bond hashes each flow to a single member link, so one stream can never use more than one link's bandwidth.
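To illustrate the idea (a rough sketch only, not vSphere's actual hashing algorithm): an LACP-style hash maps each flow's address/port tuple to one uplink, so a single stream is always pinned to one physical link.

```python
# Illustrative sketch: LACP-style hashing pins each flow to one member link.
# NOT vSphere's exact hash -- it just shows why one stream (a fixed
# src/dst/port tuple) always lands on the same physical uplink.
def pick_uplink(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
                num_links: int) -> int:
    return hash((src_ip, dst_ip, src_port, dst_port)) % num_links

# A single Storage vMotion stream: same tuple for every packet,
# so the same uplink every time -- capped at ~10Gb no matter the LAG size.
print(pick_uplink("10.0.1.11", "10.0.2.12", 49152, 8000, 2))
```

Many distinct flows spread out across the LAG members; a single flow never does.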

 

DM me with your details and a case number, and I can speak with you regarding the setup and the environment.

 

 

Cheers,

Art

 

Arthur Perkins

Sr. SRE

Nutanix

Pathfinder

Re: Nutanix, VMware Networking, and Jumbo Frames

PM sent. Thanks!


Wayfarer

Re: Nutanix, VMware Networking, and Jumbo Frames

Thanks for coming back and providing some additional details. I was just spending time trying to figure out why SvMotions between Nutanix clusters were performing so poorly, and this got me pointed in the right direction. Perhaps a future release of Nutanix will support a wider range of VAAI primitives, but until then, at least I have an explanation. :)

 

Cheers

-John