I verified that routing between the datacenters is as expected and that the MTU on the management/vMotion vSwitch is set to 1500. The vDS is set to 9000, but the VMs using the vDS are set to 1500. It turns out the default for Cisco ACI is to use MTU 9000 jumbo frames. For context, Cisco ACI pushes the vDS (for VM traffic) into vCenter for all the hosts to use, so that part of the configuration is handled by Cisco ACI.
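For anyone wanting to double-check the same thing, here's a minimal pyVmomi sketch (the vCenter address and credentials are placeholders, not from my environment) that prints the configured MTU on each host's standard vSwitches, vDS proxy switches, and vmkernel adapters, which is where the 1500 vs. 9000 mismatch shows up:

```python
# Minimal pyVmomi sketch: report the MTU on every vSwitch, vDS proxy switch,
# and vmkernel adapter per host. Hostname/credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; validate certs in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(f"Host: {host.name}")
        net = host.config.network
        for vsw in net.vswitch:          # standard vSwitches (mgmt/vMotion)
            print(f"  vSwitch {vsw.name}: MTU {vsw.mtu}")
        for proxy in net.proxySwitch:    # vDS proxy switches (the ACI-pushed vDS)
            print(f"  vDS {proxy.dvsName}: MTU {proxy.mtu}")
        for vnic in net.vnic:            # vmkernel adapters
            print(f"  {vnic.device}: MTU {vnic.spec.mtu}")
    view.Destroy()
finally:
    Disconnect(si)
```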
In the vSphere networking guide there's a handy section (starting on page 26) about jumbo frames and how they can be beneficial (though by no means immensely better) for the environment when it comes to vMotion, Fault Tolerance, and iSCSI/NFS. There's a diagram showing the best possible setup when using jumbo frames:
If I'm reading this diagram correctly, its optimal setup would basically require six 10Gb NICs to do things properly: two for VMkernel management (1500), two for vMotion and Fault Tolerance (9000), and two for VM traffic (1500).
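If we did go the jumbo-frame route for the vMotion/FT links, the change on the ESXi side is just the vSwitch MTU and the vmkernel adapter MTU. Here's a rough pyVmomi sketch of that, using a host object obtained as in the sketch above ("vSwitch1" and "vmk1" are assumed names for illustration, not our actual layout):

```python
# Rough sketch: raise the MTU to 9000 on the vSwitch and vmkernel adapter used
# for vMotion/FT. "vSwitch1" and "vmk1" are assumed names -- adjust to your
# layout, and make sure the physical network carries jumbo frames end to end.
from pyVmomi import vim

def enable_jumbo_frames(host, vswitch_name="vSwitch1", vmk_device="vmk1"):
    ns = host.configManager.networkSystem

    # Reuse the existing vSwitch spec and only change the MTU.
    for vsw in host.config.network.vswitch:
        if vsw.name == vswitch_name:
            spec = vsw.spec
            spec.mtu = 9000
            ns.UpdateVirtualSwitch(vswitchName=vswitch_name, spec=spec)

    # Same approach for the vMotion vmkernel adapter.
    for vnic in host.config.network.vnic:
        if vnic.device == vmk_device:
            spec = vnic.spec
            spec.mtu = 9000
            ns.UpdateVirtualNic(device=vmk_device, nic=spec)
```

Even after that, the path still needs to be validated end to end (for example with a don't-fragment ping at jumbo size from the ESXi shell), since any hop at 1500 breaks it.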
I just don't know if this is the expected speed, or whether a Storage vMotion should be going faster given that it's fighting for resources on the two 10Gb links. I can't imagine that's truly the issue, though, as Nutanix supports running the entire environment on only two 10Gb links. All data on the storage currently sits on SSDs, so I don't imagine it's an IOPS issue. Any thoughts? I'm just trying to figure out what I need to bring to my networking team on how to best set up our environment, since we only have four 10Gb NICs. Thanks!
Best answer by bmeyer17
After working with Arthur, it turns out this is really an issue with VMware's Data Movers, which haven't been updated in ages. I found this post which, though older, gives a somewhat decent idea of what's going on: