Our Nutanix nodes have four 10Gbps NICs: two are uplinked to our main vDS, and the other two are set up to handle vMotion traffic only, with jumbo frames enabled. We have one HP blade that will be our transport host into the new environment; it currently has two NICs, one going to the main vDS and the other dedicated to vMotion, also with jumbo frames enabled. What we discovered after whitelisting the HP blade's IP and mounting the Nutanix container as an NFS datastore is that vMotion from the HP blade to Nutanix was going over the management NIC, not the network we separated out for vMotion. Our suspicion is that this is because the vMotion interfaces are on a private, non-routed network, while the CVMs are on a different network, the same one as the Nutanix and HP management interfaces.
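For anyone trying to reproduce the diagnosis, you can confirm from the ESXi shell which VMkernel interface is actually tagged for vMotion and what subnet it sits on (a sketch; `vmk1` is an example device name, not necessarily yours):

```shell
# List all VMkernel interfaces with their IPv4 addresses and subnets
esxcli network ip interface ipv4 get

# Show which services (Management, VMotion, ...) a given vmk is tagged for (ESXi 6.x+)
esxcli network ip interface tag get -i vmk1
```

If the only vMotion-tagged vmk is on a subnet with no route to the CVM, ESXi has no usable vMotion path on that network, which matches the behavior we saw.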
So instead of using the public CVM network, I had the idea of using the private CVM network. I added the two uplinks that were used for vMotion to the standard switch vSwitchNutanix and created a new VMkernel port for vMotion with the IP 192.168.5.51. I did the same on the HP blade, giving its vMotion VMkernel port the IP 192.168.5.52. I then added the Nutanix NFS datastore, which did not require a whitelist, and vMotion now works between the two environments. This is just a temporary setup to migrate the VMs to Nutanix, but are there any issues with the way we set this up? We have not begun migrations yet, so a little guidance is welcome.
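For reference, the setup described above can be sketched from the ESXi shell roughly like this. This is a sketch, not exactly what we ran: the uplink name `vmnic3`, port group name `vMotion-temp`, VMkernel device `vmk2`, netmask, and container path `/ctr1` are all assumptions, and `192.168.5.2` is the CVM's internal NFS address in a default Nutanix install:

```shell
# Add one of the former vMotion uplinks to the internal Nutanix standard switch
esxcli network vswitch standard uplink add -u vmnic3 -v vSwitchNutanix

# Create a port group and a VMkernel interface for vMotion on that switch
esxcli network vswitch standard portgroup add -p vMotion-temp -v vSwitchNutanix
esxcli network ip interface add -i vmk2 -p vMotion-temp
esxcli network ip interface ipv4 set -i vmk2 -I 192.168.5.51 -N 255.255.255.0 -t static

# Tag the new interface for vMotion traffic (ESXi 6.x+)
esxcli network ip interface tag add -i vmk2 -t VMotion

# Mount the Nutanix container over the internal CVM address (container name is hypothetical)
esxcli storage nfs add -H 192.168.5.2 -s /ctr1 -v ntnx-ctr1
```

The same steps run on the HP blade, substituting 192.168.5.52 for the VMkernel IP.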
Best answer by nicksegalle
The management network definitely does not have vMotion enabled on it. When you add the Nutanix NFS datastore to the existing ESXi server, you add it by the public IP of the CVM. The VMkernel port for vMotion is on subnet 10.1.2.0/24, while the ESXi management interfaces and the public CVM are on 10.3.39.0/24. There are no routes between the two networks, as we keep all vMotion traffic separated. When we ran a test vMotion, we noticed the NICs set for vMotion carried no traffic, while the NICs carrying the management interfaces were doing the transfer. Our other option would be to temporarily move the vMotion network onto the same network as the CVM, or at least onto a network routable to the CVM.
And to answer your question: we added the uplinks to the Nutanix vSwitch and added the VMkernel port to that switch. It was really a long shot, but it actually worked really well.
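One sanity check worth running before starting migrations is confirming that the two new vMotion interfaces can reach each other with full-size jumbo frames, forcing the traffic out the right vmk (a sketch; `vmk2` is an example device name, and this assumes jumbo frames are also enabled on vSwitchNutanix itself, not just the NICs):

```shell
# From the HP blade, ping the Nutanix host's new vMotion address.
# -I forces the source interface, -d sets don't-fragment, -s 8972 tests a full 9000-byte jumbo frame
vmkping -I vmk2 -d -s 8972 192.168.5.51
```

If this fails while a plain `vmkping -I vmk2 192.168.5.51` succeeds, an MTU mismatch somewhere on the path is the likely culprit, and vMotion would still run but perform poorly or stall.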