I'm running into an issue where vMotions intermittently fail between my Nutanix hosts. I have two vmkernels on each of our nodes, one for management and one for vMotion (separate VLANs for each). What I've seen is that after disabling vMotion on the original vmkernel, the option re-enables itself after an indeterminate amount of time. I'm not sure if the trigger is a reboot of the ESXi host during patching, but this has happened multiple times, so I can confirm it's not a case of missing it during initial configuration. Has anyone else run into this?
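For reference, this is roughly how I've been checking and toggling the vMotion tag from the host shell. In my setup vmk0 is management and vmk1 is vMotion; adjust the interface names for your environment:

```shell
# List the service tags currently assigned to each vmkernel interface
esxcli network ip interface tag get -i vmk0
esxcli network ip interface tag get -i vmk1

# Remove the VMotion tag from the management vmkernel...
esxcli network ip interface tag remove -i vmk0 -t VMotion

# ...and make sure it is set on the dedicated vMotion vmkernel
esxcli network ip interface tag add -i vmk1 -t VMotion
```

Running the `tag get` commands again after a host reboot would at least confirm whether the reboot is what flips the setting back.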
That's odd. Have you opened a ticket with Nutanix support yet?
I haven't yet. Since it happens so erratically, I didn't think I could give support good data on when it occurs or which specific nodes it happens on. I have pending patch installs for ESXi 6 and Acropolis (to go to 4.6.1 from 18.104.22.168), so I'll test vMotion again after those updates are complete and see if either of those operations can elicit the behavior.
hey @TM-Nut - I haven't seen this anywhere else myself. Please open a support ticket if this keeps happening for you.
I've personally done vMotion from the web GUI more times than I can count, and it works great. Shared Nothing vMotion is a lifesaver for migrations; it makes the "plumbing" between the old environment and Nutanix simple.
Definitely file a support ticket with us when you can so we can dig into this vMotion re-enablement issue.
This does appear to be something specific to Nutanix, as I can't find any reports of such behaviour outside of Nutanix environments. VMware has never heard of it either.
I haven't heard of this behavior anywhere, Nutanix or otherwise. I flipped through our bug database quickly this morning and haven't seen any similar reports at the engineering level yet either.
Next time this happens, please get support engaged and we can look into it.
I see that this is an old post, but I was wondering if there are any updates. I have a new installation of Nutanix with ESXi 6 U2 and am seeing the same issue.