3-node cluster - 1 CVM is buggy - how to fix it?

  • 13 March 2020
  • 5 replies

Userlevel 2
Badge +12

2 of 3 nodes are fine and working. The 3rd CVM is up and I can ping it. Restarting the cluster with “allssh genesis stop cluster_health; cluster start” does not bring this cluster partner back.

After logging in to Prism I saw a “Disk degraded” warning for this 3rd node, and now that disk is missing.


How do I fix node no. 3 in a 3-node cluster?




Best answer by frsbeckum 20 March 2020, 17:59


Userlevel 4
Badge +2

Hey @frsbeckum 


Please let us know if you have found any solution for this.

What is the status of the services on the 3rd CVM, since you said it is up?

As far as I can infer from the images, the hostname is not shown for that host, and its statistics are not shown in Prism either. This needs to be looked at in detail. I would suggest you open a support case for this.



Shaurya Bhardwaj

Userlevel 2
Badge +12

I got very strange behavior. Version 20190211 was installed. I tried to update and it failed because of node 3. After logging in again, an update to version 20191122 was announced on the login screen, but it could not be completed, again because of node 3.

So I decided to wipe all disks with GParted and reinstalled the 3 nodes with version 20191122. Nodes 1 and 2 are fine. Node 3 has hardware damage: it gets very hot after 5 minutes and then shuts down.

I will replace node 3 and see whether a “cluster -s …. create” will then work again.


Userlevel 6
Badge +5

Hi @frsbeckum, please let us know how you go with the node replacement.

Userlevel 2
Badge +12

The bug is a broken fan in an Intel Skull Canyon device. Intel is NOT delivering spare parts for these devices, so I ordered a replacement device from Great Britain, which should arrive today. I will report on the next steps in my homelab…

Userlevel 2
Badge +12

SOLVED by reconfiguring the cluster and replacing the 3rd node.


  • Install the NEW node with CE
  • Connect to an existing CVM in the cluster
  • cluster stop
  • WAIT!!!
  • cluster -s old-cvm1,old-cvm2 -destroy
  • WAIT
  • cluster -s cvm1,cvm2,cvm3 -create
  • Works

Make sure UEFI is not active on the new node; disable it in the BIOS settings so it boots in legacy mode. Wipe all disks with a USB stick and GParted before installing.
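Condensed, the recovery above looks roughly like the following session, run from one of the surviving CVMs. This is a sketch only: `old-cvm1`/`old-cvm2` and `cvm1`–`cvm3` stand in for your actual CVM IPs or hostnames, and the exact create/destroy syntax can differ between CE versions, so check the `cluster` command's built-in help on your build first.

```shell
# On a CVM that is still part of the old cluster:
cluster stop                      # stop all cluster services cleanly
# WAIT until every service reports as stopped before continuing

cluster -s old-cvm1,old-cvm2 -destroy   # tear down the old cluster config
# WAIT again until the destroy has finished on both surviving nodes

# With the replacement node freshly installed from CE media:
cluster -s cvm1,cvm2,cvm3 -create       # create the new 3-node cluster
cluster status                          # verify that all services come up
```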


Works fine now.