Solved

ESXi SAN data migration via NFS (vMotion) - question regarding CVM networks

  • 3 February 2017
  • 3 replies
  • 3194 views

Badge +2
This question is a little different from the typical mounting of a Nutanix NFS container on an ESXi host. We have that part figured out, but our scenario is unusual, and we just need to know whether our solution has any problems.

Our Nutanix nodes have four 10 Gbps NICs: two are configured on our main vDS, and the other two are set up to handle vMotion traffic only, with Jumbo Frames enabled. We have one HP Blade that will be our transport to the new environment; it currently has two NICs, one going to the main vDS and the other dedicated to vMotion, also with Jumbo Frames enabled. What we discovered after whitelisting the IP of the HP Blade and adding the Nutanix container as an NFS mount is that when we did the vMotion from the HP Blade to Nutanix, the traffic went over the management NIC, not the network we separated out for vMotion. Our suspicion is that this happened because the vMotion interface is on a private, non-routed network, while the CVMs are on a different network, the same network as the Nutanix and HP management interfaces.
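
NFS and vMotion traffic from ESXi follow the host's IP routing table, so if the CVM's external IP is only reachable through the management VMkernel port's subnet (or its default gateway), the storage traffic rides the management uplinks no matter which port is tagged for vMotion. A quick way to confirm which interface is being used (a hedged sketch; the vmk numbers and the 10.3.39.50 address are placeholders, not values from this thread):

    # List the VMkernel interfaces and their addresses
    esxcli network ip interface ipv4 get

    # Show the routing table; the Interface column is the vmk that will
    # carry traffic toward the CVM's subnet
    esxcli network ip route ipv4 list

    # Test reachability of the CVM from a specific VMkernel port
    vmkping -I vmk1 10.3.39.50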

So I had the idea: instead of using the public CVM network, what about using the private CVM network? I added the two uplinks that were used for vMotion to the standard switch 'vSwitchNutanix', added a new VMkernel port for vMotion, and gave it an IP of 192.168.5.51. I did the same on the HP Blade, and its vMotion IP is 192.168.5.52. Then I added the Nutanix NFS mount, which did not require a whitelist, and vMotion works between the two environments. This is just a temporary setup to migrate the VMs to Nutanix, but are there any issues with the way we set this up? We have not begun migrations yet, so a little guidance is welcome.
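
For reference, the steps described above would look roughly like the following on the Nutanix node (a sketch only, not the exact commands used in the thread; vmnic2/vmnic3, the vmotion-temp port group, vmk3, and container-name are assumed names, 192.168.5.2 is the CVM's internal address on a default Nutanix install, and the vMotion tag syntax applies to ESXi 5.5/6.x):

    # Add the two former vMotion uplinks to the internal Nutanix vSwitch
    esxcli network vswitch standard uplink add -u vmnic2 -v vSwitchNutanix
    esxcli network vswitch standard uplink add -u vmnic3 -v vSwitchNutanix

    # Raise the vSwitch MTU so jumbo frames can pass end to end
    esxcli network vswitch standard set -v vSwitchNutanix -m 9000

    # Create a port group and a VMkernel port for the migration traffic
    esxcli network vswitch standard portgroup add -p vmotion-temp -v vSwitchNutanix
    esxcli network ip interface add -i vmk3 -p vmotion-temp -m 9000
    esxcli network ip interface ipv4 set -i vmk3 -I 192.168.5.51 -N 255.255.255.0 -t static
    esxcli network ip interface tag add -i vmk3 -t VMotion

    # Validate jumbo frames against the blade's vMotion port (8972 = 9000 minus headers)
    vmkping -I vmk3 -s 8972 -d 192.168.5.52

    # On the HP Blade, after creating its own 192.168.5.52 VMkernel port,
    # mount the container from the CVM's internal address (no whitelist needed)
    esxcli storage nfs add -H 192.168.5.2 -s /container-name -v container-name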

Best answer by nicksegalle 4 February 2017, 01:10


3 replies

Userlevel 4
Badge +20
'Thou shalt not mess with the Nutanix vSwitch' was the mantra taught to us years ago. That vSwitch shouldn't have an uplink to the physical network, so I'm curious how a VMkernel port on that vSwitch is getting off the node.

http://nutanixbible.com/#anchor-networking-124

I think it would work, but I would really investigate whether someone had accidentally enabled the vMotion service on the management VMkernel interface.
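
One quick way to check for that (a short sketch; vmk0 as the management interface is an assumption) is to list the services tagged on the management VMkernel port:

    # Shows the services enabled on vmk0, e.g. Management, VMotion
    esxcli network ip interface tag get -i vmk0
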
Badge +1
Good to know, thanks for the reply.

The management network definitely does not have vMotion enabled on it. When you add the Nutanix NFS to an existing ESXi server, you add it by the public IP of the CVM. The VMkernel port for vMotion is on subnet 10.1.2.0/24, and the management network for the ESXi servers and the public CVM is 10.3.39.0/24. There are no routes between the two networks, as we keep all vMotion traffic separated. When we did a test vMotion, we noticed the NICs set for vMotion had no traffic, and the NICs carrying the management interfaces were doing the transfer. Our other option would be to temporarily move the vMotion network to the same network as the CVM, or at least to a network routable to the CVM.

And to answer your question, we added the uplink to the Nutanix vSwitch and added the VMkernel port to that switch. It was really a long shot, but it actually worked really well.
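
Since this is meant as a temporary bridge, here is roughly how it could be backed out after the migration (a sketch with the same assumed names as above, not steps taken from the thread), leaving vSwitchNutanix back in its stock, uplink-free state:

    # On the HP Blade: unmount the temporary NFS datastore
    esxcli storage nfs remove -v container-name

    # On the Nutanix node: remove the temporary VMkernel port and port group
    esxcli network ip interface remove -i vmk3
    esxcli network vswitch standard portgroup remove -p vmotion-temp -v vSwitchNutanix

    # Detach the physical uplinks so vSwitchNutanix is internal-only again
    esxcli network vswitch standard uplink remove -u vmnic2 -v vSwitchNutanix
    esxcli network vswitch standard uplink remove -u vmnic3 -v vSwitchNutanix
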
Userlevel 7
Badge +30
Interesting. Yes, this is doable. Definitely not your typical run-of-the-mill migration, but I've done a similar thing a time or two. Glad you were able to power through.

Reply