Solved

Unable to create cluster


Santhoshkumar
Cluster creation is failing with the following message:

2015-07-17 00:49:07 WARNING genesis_utils.py:508 Failed to reach a node where Genesis is up. Retrying... (Hit Ctrl-C to abort)
2015-07-17 00:49:09 WARNING genesis_utils.py:508 Failed to reach a node where Genesis is up. Retrying... (Hit Ctrl-C to abort)
2015-07-17 00:49:10 WARNING genesis_utils.py:508 Failed to reach a node where Genesis is up. Retrying... (Hit Ctrl-C to abort)

I checked the Genesis log and found the following message:

Failed to set up key based SSH access to hypervisor, most likely because we do not have the correct password cached. Please run fix_host_ssh command manually to fix this problem.


I am unable to SSH to the host. Please check and let me know.
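For reference, fix_host_ssh is run from the CVM and re-establishes key-based SSH to the local hypervisor; exact behaviour varies by NOS version, but a minimal invocation looks like this:

# run from the CVM that is reporting the error; enter the hypervisor root password if prompted
fix_host_ssh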

Best answer by Santhoshkumar (see the reply below)


2 replies

DonnieBrasco
  • Nutanix Employee
  • 82 replies
  • July 20, 2015
Are you able to SSH to the ESX host on both its private and public IPs manually from the CVM where you are getting this error?
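For example, from the CVM (192.168.5.1 is the standard internal CVM-to-hypervisor address on Nutanix; replace the second address with your host's external management IP):

# internal (private) interface of the local hypervisor
ssh root@192.168.5.1
# external (public) management IP of the host
ssh root@<host_management_ip>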

Maybe you can try restarting Genesis and running the cluster create command once again.

To restart Genesis, run the following command:

genesis restart
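For the cluster create step, the usual form from a CVM is roughly the following (the CVM IPs are placeholders; check the setup guide for your NOS version for the exact syntax):

# create the cluster using the IPs of all CVMs
cluster -s cvm_ip_1,cvm_ip_2,cvm_ip_3 create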

Santhoshkumar
  • Author
  • Voyager
  • 1 reply
  • Answer
  • July 21, 2015
The issue was resolved with the following steps:

Restarted Genesis; the issue remained the same.

Tried to SSH to the host from the CVM and got the following error message:

#FIPS mode initialized
#Read from socket failed: Connection reset by peer.

Re-generated the SSH keys on the host and restarted the SSH service on the host.
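For anyone who hits the same error, the host-side steps were roughly the following on the ESXi host (the ssh-keygen availability/paths and the SSH init script name are assumptions that can differ between ESXi versions, so check VMware's documentation for your release):

# regenerate the RSA host key on the ESXi host (path and tool availability are assumptions)
ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key -N ""
# restart the SSH service on the ESXi host
/etc/init.d/SSH restart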

After that, the issue was resolved.