5 Essential Tips for Bloggers to Maximize Your Experience at Nutanix .NEXT
With Nutanix Metro Availability you can achieve an RPO of zero (0). This blog post will explain how to set this up.

First, let me explain my lab: I have two Nutanix clusters (Cluster-1 and Cluster-2), freshly foundationed with AOS 5.20.3.5 and ESXi 7.0 U2. DNS and NTP are configured, and I created storage containers with the following names (all of this is done on BOTH clusters, except the vCLS containers):

- METRO_1-2: for guest VMs running on Cluster-1 which are synced to Cluster-2. Set the advertised capacity to the real capacity.
- METRO_2-1: for guest VMs running on Cluster-2 which are synced to Cluster-1. Set the advertised capacity to the real capacity.
- vCLS-1 (only on Cluster-1): for the vCLS virtual machine created by vCenter.
- vCLS-2 (only on Cluster-2): for the vCLS virtual machine created by vCenter.

The three containers. Screenshot is from Cluster-2; on Cluster-1 there is a vCLS-1. For metro availability you need to have both Nutanix clusters in the same VM
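The container layout above can be sketched from any CVM with ncli. Note this is a sketch, not the author's exact commands: the storage pool name and the flag names (`sp-name`, `advertised-capacity`) are assumptions from memory and may differ by AOS version, so verify with `ncli ctr create help` before running.

```shell
# Run on a CVM of Cluster-1 (repeat on Cluster-2, swapping METRO_2-1 / vCLS-2).
# ASSUMPTION: storage pool is named "default-storage-pool"; flag names may
# vary by AOS version -- check `ncli ctr create help`.
ncli ctr create name=METRO_1-2 sp-name=default-storage-pool
ncli ctr create name=METRO_2-1 sp-name=default-storage-pool
ncli ctr create name=vCLS-1    sp-name=default-storage-pool

# Set the advertised capacity to the real capacity; the value below is a
# placeholder, substitute your cluster's real capacity.
ncli ctr edit name=METRO_1-2 advertised-capacity=<real-capacity>
ncli ctr edit name=METRO_2-1 advertised-capacity=<real-capacity>
```

The same containers can of course be created in Prism Element under Storage; the CLI is just quicker when you have to repeat the layout on both clusters.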
In my previous blog (Link) I showed you how to build a metro availability setup. Now I want to "upgrade" both clusters to AHV and enable data protection with the help of Leap to achieve an RPO of zero (0). This blog post is two posts combined: the first is the in-place conversion from ESXi to AHV, and the second is how to enable and configure Leap.

Conversion

Before we can change the hypervisor to AHV, there are a couple of requirements/things to remember:

- NGT (Nutanix Guest Tools) must be installed in each VM: this makes sure the guest VMs can boot after conversion.
- Each host must have an uplink NIC team. LACP-based load balancing is not supported, so make sure this is turned off on the dSwitch (and on the physical switch).
- HA and DRS should be enabled.
- DR activities will be paused during conversion.

As my setup is not changed/optimized, I need to change some settings. My vSwitch0 has the following configuration: This will give the following error: "external vswitch vSwitch0 does not have homogeneous uplinks" during
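Before kicking off the conversion, the teaming requirements above can be checked per host from the ESXi shell. A quick sketch, assuming a standard vSwitch named vSwitch0 (adjust the name to your environment; on a dvSwitch the LACP settings live in vCenter instead):

```shell
# Show the NIC teaming / failover policy for vSwitch0.
# The load-balancing policy must not be IP-hash (the vSS equivalent of
# LACP-style balancing), and it must match across hosts.
esxcli network vswitch standard policy failover get -v vSwitch0

# List all standard vSwitches with their uplinks, to confirm every host has
# the same (homogeneous) uplink team.
esxcli network vswitch standard list
```

Running both commands on each host first is cheaper than hitting the "does not have homogeneous uplinks" error halfway through the conversion wizard.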
Hi all, I'm preparing a change on the cluster (a 6-node cluster running AHV). I need to re-IP the CVMs and AHV hosts and add them to a specific VLAN (currently they are not in a VLAN). Before I do this in production I decided to try it in the test lab (old baby, a 4-node G5). I've followed this guide: https://next.nutanix.com/installation-configuration-23/physical-relocation-of-nutanix-clusters-38403 but got stuck at step 7, after booting the CVMs. The output of svmips shows the old CVM IPs, but they all booted with the new ones (and are accessible via SSH on the new IPs). The output of hostips shows the old AHV IPs, but they too booted with the new ones (and are accessible via SSH on the new IPs). I must also say that the external_ip_reconfig script hangs at a specific point, and now the command "cluster start" gives an error that the cluster is still in reconfigure mode. Anyone here who has the golden tip to get svmips and hostips to show the correct IPs? (zk_server_config_file also had the
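For anyone hitting the same mismatch, a quick way to compare what the cluster configuration believes with what is actually configured on the nodes (all standard commands available on any CVM; interface names are the usual defaults and may differ in your setup):

```shell
# What the cluster configuration reports:
svmips          # CVM IPs according to the cluster config
hostips         # hypervisor IPs according to the cluster config
cluster status  # shows whether the cluster is still mid-reconfigure

# What is actually configured on each node:
allssh  "ip addr show eth0"   # real CVM external IPs (eth0 is the usual external NIC)
hostssh "ip addr show br0"    # real AHV host IPs (br0 normally carries the mgmt IP)
```

If the two views disagree, the old addresses are still sitting in the cluster configuration (as the zk_server_config_file observation suggests), which is why svmips/hostips keep reporting them even though the nodes booted with the new IPs.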
This post was authored by Jeroen Tielen (Tielen Consultancy), Nutanix Technology Champion.

Within Nutanix we have Replication Factor (how many data copies are written in the cluster) and Redundancy Factor (how many nodes/disks can go offline). Both can have a value of 2 or 3; which is which is explained here: Blog Post. So, when we have a larger cluster, we always recommend using RF3 (Redundancy Factor 3), as the risk is higher that multiple nodes/disks go offline at the same time. During trainings and onsite customer work I often get the question: "What will happen if multiple nodes go offline with Redundancy Factor 2?" In this blog post I will explain the different scenarios and their behaviors.

My cluster is 7 nodes and configured with RF2 (Redundancy and Replication), and HA Reservation is enabled. I've got 30 Windows 11 VDIs (yes, with vTPM ;)) running in the cluster and all load is spread across the nodes. This is the current usage on the cluster: Now 1 node will go offline. Or in my ca
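As a rough intuition for these scenarios: with RF2 the cluster keeps two copies of each piece of data, so after a node fails the surviving nodes must have enough free capacity to rebuild the lost copies before the cluster is fully protected again. A back-of-the-envelope sketch (the numbers are illustrative placeholders, not taken from the lab cluster above):

```shell
#!/bin/sh
# ASSUMPTION: illustrative numbers, not the real capacity of the 7-node lab.
NODES=7            # nodes in the cluster
NODE_CAP_TIB=10    # usable capacity per node, in TiB
USED_TIB=40        # total used capacity, with both RF2 copies counted

FAILED=1
REMAINING_CAP=$(( (NODES - FAILED) * NODE_CAP_TIB ))
echo "capacity after ${FAILED} node(s) lost: ${REMAINING_CAP} TiB"

# RF2 tolerates one failure at a time; full protection returns only if the
# surviving nodes can hold all data including the re-created second copies.
if [ "$USED_TIB" -le "$REMAINING_CAP" ]; then
  echo "rebuild possible: cluster can return to full RF2 protection"
else
  echo "rebuild not possible: cluster stays at reduced resiliency"
fi
```

This is also why a second node failure is only survivable after the rebuild from the first failure has completed, which is the behavior the scenarios below walk through.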
I changed the AHV and CVM IPs, but Nutanix File Analytics is still pointing to the old cluster IP. Does anyone have a guide on how to change this so File Analytics uses the new cluster IP? Deleting File Analytics and redeploying it does not work: you first need to disable File Analytics, but that is not possible because it gives an error that it cannot access Prism (still pointing to the old IP).
Ever wondered how to set up a Nutanix cluster on OVHcloud? Here is a blog post that should tick all the boxes: https://www.jeroentielen.nl/running-nutanix-on-ovhcloud/