Failed to reach a node where Genesis is up. Retrying

Hi,

A customer moved his Nutanix cluster (running Hyper-V) from one data center to another. After powering the nodes up, the IPs of all the Hyper-V hosts and CVMs had been released, so I logged in locally to the Hyper-V hosts and configured the internal IP (192.168.5.1/28) and the same external IP each host had before the cluster was shut down.

I repeated the previous step on the CVMs: I went into /etc/sysconfig/network-scripts/, edited the interface config files, and added the external IP on eth0 and the internal one (192.168.5.2/28) on eth1.
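
For reference, after editing, the files look roughly like this (the external address, netmask, and gateway below are placeholders, not our real values; /28 corresponds to netmask 255.255.255.240):

   /etc/sysconfig/network-scripts/ifcfg-eth0 (external, placeholder values):

   DEVICE=eth0
   ONBOOT=yes
   BOOTPROTO=none
   IPADDR=10.0.0.50
   NETMASK=255.255.255.0
   GATEWAY=10.0.0.1

   /etc/sysconfig/network-scripts/ifcfg-eth1 (internal, no gateway on eth1):

   DEVICE=eth1
   ONBOOT=yes
   BOOTPROTO=none
   IPADDR=192.168.5.2
   NETMASK=255.255.255.240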

Now the Hyper-V failover cluster is working fine, but VMs cannot start because of the Nutanix cluster issue. Whenever I try to start the cluster from any CVM, I get this message: “WARNING genesis_utils.py:1211 Failed to reach a node where Genesis is up. Retrying”
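
For reference, what I am running is the standard cluster tooling from a CVM:

   cvm$ cluster start            # this is where the Genesis warning keeps repeating
   cvm$ allssh genesis status    # to check whether Genesis is up on each CVM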

Is there any way to fix this issue, or to repair the cluster configuration without disrupting the existing data?

Thanks in advance

Hi @saleh saad. This error generally means that one of the CVMs in the cluster is down, or that some services on a CVM are not running or are crashing.

This error message can appear for many reasons, network issues among them.

I would highly recommend opening a support case for this issue so that an engineer can take a closer look and fix it.

Also, can you run this command from one of the CVMs and paste the output here:

cvm$ cs | grep -v UP
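
(To my knowledge, cs is a CVM shell alias for cluster status, so this lists any services that are not in the UP state; if the alias is not available on your AOS version, cluster status | grep -v UP should be equivalent.)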

Can you also share the contents of the network configuration files for interfaces eth0 and eth1 on all the CVMs by running the commands below:

1) cvm$ allssh cat /etc/sysconfig/network-scripts/ifcfg-eth0

2) cvm$ allssh cat /etc/sysconfig/network-scripts/ifcfg-eth1
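
It may also be worth confirming that the addresses are actually live on the interfaces, and not just present in the config files:

   cvm$ allssh ip addr    # running IPs on every CVM should match the ifcfg files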

 



Hi @AnishWalia20

You’re right, it was a network issue; the backplane network was not configured properly.
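
For anyone who hits the same symptom: a quick way to verify backplane connectivity (assuming the standard 192.168.5.x internal addressing described above) is to have every CVM ping its own host's internal adapter over eth1:

   cvm$ allssh "ping -c 2 192.168.5.1"    # each CVM pings its host's internal adapter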

Thanks.



Hey @saleh saad, great to hear that you were able to resolve it!