2 of 3 nodes are fine and working. The 3rd CVM is up and I can ping it. Restarting the cluster with “allssh genesis stop cluster_health; cluster start” does not bring this cluster member back.
After logging in to Prism I saw a “Disk degraded” warning for this 3rd node, and now that disk is missing.
How do I fix node no. 3 in a 3-node cluster?
Hey @frsbeckum
Please let us know if you have found any solution for this.
What is the status of the services on the 3rd CVM, since you said it is up?
As far as I can infer from the images, the hostname is not shown for the host, and its statistics are not shown in Prism either. This needs to be looked at in detail, so I would suggest you open a support case.
Regards,
Shaurya Bhardwaj
I ran into very strange behavior. Version 20190211 was installed. I tried to update and it failed because of node 3. After logging in again, an update to version 20191122 was announced on the login screen, but it could not be completed, again because of node 3.
So I decided to purge all disks with gparted and reinstalled the 3 nodes with version 20191122. Nodes 1 and 2 are fine. Node 3 has hardware damage: it gets very hot after 5 minutes and then shuts down.
I will replace node 3 and see whether a cluster -s … create will work again.
Hi @frsbeckum, let us know how you go with the node replacement, please.
The culprit is a broken fan in an Intel Skull Canyon device. Intel is NOT delivering spare parts for these devices, so I ordered a replacement device in Great Britain, which should arrive today. I will report on the next steps in my homelab…
SOLVED with a reconfigured cluster and a replaced 3rd node.
Steps:
Install NEW Node with CE
Connect to an existing CVM in the cluster
cluster stop
WAIT!!!
cluster -s old-cvm1,old-cvm2 destroy
WAIT
cluster -s cvm1,cvm2,cvm3 create
Works
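The sequence above can be sketched as a small shell script. Note this is a dry-run sketch, not the official procedure: the run() helper and DRY_RUN guard are my additions, and the exact cluster subcommand spelling may vary between CE versions, so check it on your CVM first.

```shell
#!/bin/sh
# Sketch of the rebuild sequence from the steps above.
# Dry run by default; set DRY_RUN=0 only on a CVM where you
# really intend to destroy and recreate the cluster.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "WOULD RUN: $*"
  else
    "$@"
  fi
}

run cluster stop                          # stop services on the old cluster
run cluster -s old-cvm1,old-cvm2 destroy  # destroy the remaining 2-node cluster
run cluster -s cvm1,cvm2,cvm3 create      # create a fresh 3-node cluster
```

Running it with DRY_RUN left at 1 just prints the commands, which is a cheap way to review the plan before touching the cluster.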
Be aware that UEFI boot does not work on a new node: disable it in the BIOS settings and boot in legacy mode. Wipe all disks with a USB stick and gparted before installing.
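If you prefer a non-interactive alternative to wiping each disk in gparted, wipefs can clear all filesystem, RAID, and partition-table signatures from a device. This is a hypothetical sketch: the wipe() helper, DRY_RUN guard, and example device names are mine, and the operation is destructive, so double-check the device list before running it for real.

```shell
#!/bin/sh
# Scripted disk wipe, as an alternative to gparted from a USB stick.
# DESTRUCTIVE when DRY_RUN=0; dry run by default.
DRY_RUN=${DRY_RUN:-1}

wipe() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "WOULD WIPE: $1"
  else
    wipefs -a "$1"   # remove all signatures from the device
  fi
}

# Example device names -- replace with the disks in your node.
for disk in /dev/sda /dev/sdb; do
  wipe "$disk"
done
```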