Topics started by JeroenTielen
On my previous blog (Link) I showed you how to build a metro availability setup. Now I want to "upgrade" both clusters to AHV and enable data protection with the help of Leap to achieve an RPO of zero (0). This blog post combines two posts: the first is the in-place conversion from ESXi to AHV, and the second is how to enable and configure Leap.

Conversion

Before we can change the hypervisor to AHV, there are a couple of requirements/things to remember:

- NGT (Nutanix Guest Tools) must be installed in each VM: this makes sure the guest VMs can boot after conversion.
- Each host must have an uplink NIC team. LACP-based load balancing is not supported, so make sure this is turned off on the dSwitch (and on the physical switch).
- HA and DRS should be enabled; DR activities will be paused during conversion.

As my setup has not been changed/optimized, I need to change some settings. My vSwitch0 has the following configuration:

This will give the following error during conversion: "external vswitch vSwitch0 does not have homogeneous uplinks".
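The prerequisites above can be modeled as a simple checklist. A minimal sketch (this is an illustrative helper, not a Nutanix tool; the host dictionary fields are assumptions for illustration):

```python
# Hypothetical pre-conversion checklist validator (not Nutanix tooling).
# Models the three requirements listed above so they can be checked per host:
# NGT in every VM, an uplink NIC team, and LACP disabled.

def conversion_blockers(host):
    """Return a list of reasons an ESXi host would fail conversion to AHV."""
    blockers = []
    if not all(vm.get("ngt_installed") for vm in host["vms"]):
        blockers.append("NGT missing in one or more VMs")
    if len(host["uplinks"]) < 2:
        blockers.append("no uplink NIC team")
    if host.get("lacp_enabled"):
        blockers.append("LACP-based load balancing enabled on dSwitch")
    return blockers

host = {
    "vms": [{"name": "vm1", "ngt_installed": True}],
    "uplinks": ["vmnic0", "vmnic1"],
    "lacp_enabled": True,   # must be turned off before conversion
}
print(conversion_blockers(host))  # → ['LACP-based load balancing enabled on dSwitch']
```

An empty list would mean the host passes these three checks; anything returned is something to fix before starting the conversion.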
With Nutanix Metro Availability you can achieve an RPO of zero (0). This blog post will explain how to set this up.

First let me explain my lab: I have two Nutanix clusters (Cluster-1 and Cluster-2), freshly foundationed with AOS 18.104.22.168 and ESXi 7.0 U2. DNS and NTP are configured, and I created storage containers with the following names (all of this is done on BOTH clusters, except the vCLS containers):

- METRO_1-2: for guest VMs running on Cluster-1 which are synced to Cluster-2. Set the advertised capacity to the real capacity.
- METRO_2-1: for guest VMs running on Cluster-2 which are synced to Cluster-1. Set the advertised capacity to the real capacity.
- vCLS-1 (only on Cluster-1): for the vCLS virtual machine created by vCenter.
- vCLS-2 (only on Cluster-2): for the vCLS virtual machine created by vCenter.

The three containers. Screenshot is from Cluster-2; on Cluster-1 there is a vCLS-1 container instead. For metro availability you need to have both Nutanix clusters in the same VM
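The container layout above is symmetric apart from the vCLS containers. A minimal sketch of that plan as data (illustrative only; the function name and capacity parameter are assumptions, not Nutanix API calls):

```python
# Hypothetical sketch of the storage-container plan described above.
# The two METRO containers exist on BOTH clusters and advertise the real
# capacity; each cluster additionally gets its own vCLS container.

def metro_container_plan(real_capacity_gib):
    """Return the containers to create on each cluster."""
    metro = {
        "METRO_1-2": {"advertised_capacity_gib": real_capacity_gib},  # VMs on Cluster-1, synced to Cluster-2
        "METRO_2-1": {"advertised_capacity_gib": real_capacity_gib},  # VMs on Cluster-2, synced to Cluster-1
    }
    return {
        "Cluster-1": {**metro, "vCLS-1": {}},  # vCLS container only on Cluster-1
        "Cluster-2": {**metro, "vCLS-2": {}},  # vCLS container only on Cluster-2
    }

plan = metro_container_plan(real_capacity_gib=10240)
print(sorted(plan["Cluster-2"]))  # → ['METRO_1-2', 'METRO_2-1', 'vCLS-2']
```

Setting the advertised capacity of the metro containers to the real capacity (rather than leaving it unset) is what the post calls out explicitly for the replicated containers.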
I changed the AHV and CVM IPs, but Nutanix File Analytics is still pointing to the old cluster IP. Does anyone have a guide on how to change this so File Analytics uses the new cluster IP? Deleting File Analytics and redeploying it does not work, as you first need to disable File Analytics, and that is not possible: it gives an error that it cannot access Prism (pointing to the old IP).
Hi all, I'm preparing a change on the cluster (a 6-node cluster running AHV). I need to re-IP the CVMs and AHV hosts and add them to a specific VLAN (currently they are not in a VLAN). Before doing this in production I decided to try it in the test lab (old baby, a 4-node G5). I followed this guide: https://next.nutanix.com/installation-configuration-23/physical-relocation-of-nutanix-clusters-38403 but got stuck at step 7, after booting the CVMs. The output of svmips shows the old CVM IPs, but they have all booted with the new ones (and are accessible via SSH on the new IPs). The output of hostips shows the old AHV IPs, but they have all booted with the new ones (and are accessible via SSH on the new IPs). I must also say that the external_ip_reconfig script hangs at a specific point, and the command "cluster start" now gives an error that the cluster is still in reconfigure mode. Anyone here who has the golden tip to get the commands svmips and hostips to show the correct IPs? (zk_server_config_file also had the
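Since svmips/hostips report addresses from the cluster's stored configuration rather than from the live interfaces, stale output like this suggests the stored config still holds the old IPs. A minimal diagnostic sketch under that assumption (the "server.N=ip:port:port" line format used here is an assumption for illustration, not the guaranteed layout of zk_server_config_file):

```python
# Hypothetical diagnostic sketch: compare the CVM IPs recorded in a
# Zookeeper server config against the IPs the CVMs actually booted with.
# If they differ, tools that read the stored config will report stale IPs.

def stale_ips(zk_config_text, live_ips):
    """Return recorded IPs that do not match any live CVM IP."""
    recorded = []
    for line in zk_config_text.splitlines():
        if line.startswith("server."):
            # assumed format: server.N=<ip>:<port>:<port>
            recorded.append(line.split("=", 1)[1].split(":")[0])
    return [ip for ip in recorded if ip not in live_ips]

zk_config = """server.1=10.0.0.11:2888:3888
server.2=10.0.0.12:2888:3888
server.3=10.0.0.13:2888:3888"""
live = {"192.168.10.11", "192.168.10.12", "192.168.10.13"}  # new VLAN addresses
print(stale_ips(zk_config, live))  # all three recorded IPs are stale
```

If the stored config is indeed stale, that would also explain external_ip_reconfig hanging mid-run and the cluster staying in reconfigure mode; the fix belongs in the reconfigure procedure, not in hand-editing files.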