Can you connect to the cluster directly on a CVM’s IP? You can also try this from the command line (my common initial go-to):

1. Run "cluster status" from any CVM to find out which CVM is the Zeus leader.
2. SSH to the Zeus leader and run "genesis restart", then "cluster start".

That’s what I usually do first, anyway. Please follow up if this does or doesn’t work.
This isn’t any sort of “official” way, but you can try doing it the way I’ve done it and have seen other customers do it in the past. Build a Windows VM attached to the production network, just like you would any other Windows VM, and patch it up to the latest patches from Microsoft. Then, using the console, switch the VM’s network to DHCP (if it wasn’t already getting its IP via DHCP), run "sysprep /generalize /shutdown /oobe" on the VM, and let it power down. I usually name the template something like Z_Windows_2022 so it sorts to the end and doesn’t clog up the list of VMs. Then, when you’re ready, clone the VM and make sure the clone gets a new MAC address. This has worked for a number of customers.

Reference: https://learn.microsoft.com/en-us/windows-hardware/manufacture/desktop/sysprep--generalize--a-windows-installation?view=windows-11
I’m wondering if there’s interest out there for a Greater Cincinnati NUG? I’m specifically thinking “Greater Cincinnati” to cover Northern Kentucky as well, and to host at a bourbon bar in Covington, KY.
Check the MTU on the switches, the NICs, and the vSwitches. To me this feels like an MTU problem. If it’s not that, my next guess would be a routing problem: Nutanix uses 192.168.5.0/24 internally, so if you are using that network outside of Nutanix, it will cause problems.
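One quick way to test for an MTU mismatch is a don’t-fragment ping sized to exactly fill one frame: the ICMP payload is the MTU minus 28 bytes (20-byte IPv4 header plus 8-byte ICMP header). A small sketch of that arithmetic (the target IP is a placeholder, and the ping flags shown are the Linux ones):

```python
def max_icmp_payload(mtu: int) -> int:
    """ICMP payload that exactly fills one frame: MTU minus the
    20-byte IPv4 header and the 8-byte ICMP header."""
    return mtu - 28

# For jumbo frames (MTU 9000) the don't-fragment test payload is 8972;
# for a standard 1500-byte MTU it is 1472.
payload = max_icmp_payload(9000)
print(payload)  # 8972

# Linux test command (placeholder target; -M do sets don't-fragment):
print(f"ping -M do -s {payload} <cvm-ip>")
```

If a ping at the full payload fails while a smaller one succeeds, some hop in the path is running a lower MTU than the endpoints.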
Mounting external storage on Nutanix isn’t supported. When you create storage on Nutanix, the CVMs create a storage pool that is presented to the ESXi cluster as an NFS datastore and can optionally be presented externally as iSCSI storage. So Nutanix can provide storage to other clusters via iSCSI, but mounting external datastores doesn’t end well.

If you’re running ESXi on other hardware, use Nutanix Move as others have suggested. It’s a great tool: you can set up and seed the conversion, then perform the cutover when you’re ready, with an outage only slightly longer than a reboot. Just be sure to have Move install the Nutanix Guest Tools when you set up the migration, since you will need the VirtIO drivers once the VM is running on Nutanix. Move provides that install as part of its functionality; you just have to provide login credentials. Best of all, if Move fails for some reason, just power down the target VM in Nutanix and power the source VM back on.
As Alexander noted above, these are the likely steps. You will have to interact with the cluster in some way, at a minimum to start the cluster back up. To add to what was said, the “start” process will require some work and proper hardware if you want to automate it: you will need to be able to log into the IPMI/iLO/iDRAC-type interface and power on the servers. I would write a script that runs from an external server. Shutdown would look like this:

1. Get the list of running VMs and store it in a file on the machine where the script is running.
2. Use that list to shut down the VMs.
3. Wait for all VMs to shut down.
4. Log into a CVM.
5. Get the lists of all CVMs, hypervisors, and IPMI interfaces using "ncli host list" and store each list in a file (CVM file, hypervisor file, IPMI file).
6. Shut down the cluster with "cluster stop" on the CVM.
7. Wait for the cluster to stop.
8. Log out of the CVM.
9. Using the CVM file, loop through the list of CVMs, connect to each one, and shut it down with "cvm_shutdown -P now".
10. Using the hypervisor file, connect to each hypervisor and shut it down.
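The ordering above is the part worth getting right, so here is a minimal sketch of just that: a plan builder that emits the steps in the required sequence (guest VMs, then the cluster, then each CVM, then each hypervisor). The run() helper is a stub standing in for however you reach each host (SSH, IPMI, etc.), and all host names are placeholders, not a tested implementation:

```python
def run(host: str, command: str) -> str:
    """Placeholder: execute `command` on `host` (e.g. over SSH) and
    return its output. Left unimplemented in this sketch."""
    raise NotImplementedError

def shutdown_plan(vms, cvms, hypervisors):
    """Build the ordered list of (target, command) steps: guest VMs
    first, then 'cluster stop' on any one CVM, then each CVM, then
    each hypervisor."""
    steps = []
    for vm in vms:
        steps.append((vm, "shutdown guest OS"))        # via hypervisor tools
    steps.append((cvms[0], "cluster stop"))            # any CVM will do
    for cvm in cvms:
        steps.append((cvm, "cvm_shutdown -P now"))
    for hv in hypervisors:
        steps.append((hv, "shutdown hypervisor"))      # hypervisor-specific
    return steps

# Print the plan for a toy inventory; a real script would call run()
# on each step and wait for the previous phase to complete.
for target, cmd in shutdown_plan(["vm1"], ["cvm1", "cvm2"], ["host1"]):
    print(target, cmd)
```

Startup is the reverse, driven from the IPMI file: power on the hosts, wait for the CVMs to boot, then run "cluster start".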