Solved

Foundation bare-metal build on Secure Boot systems

  • 5 August 2021
  • 1 reply
  • 380 views

I have a few systems to rebuild from bare metal. They are all Secure Boot enabled, and to disable this I have to go onsite to change a jumper.

Foundation doesn't seem to be working; I am assuming the Phoenix boot just gets skipped as it's not UEFI enabled. Is there a version or a process I can use to bare metal build these systems? I can easily install ESXi manually if required.

Just trying to avoid a trip into a COVID zone to change a jumper if I can help it.


Best answer by Nupur.Sakhalkar 9 August 2021, 17:26



1 reply


@Ravager If the UEFI Secure Boot option cannot be disabled for now, which in turn is causing Foundation to fail the imaging process on these nodes, then you can try manually re-imaging these bare metal nodes (assuming IPMI access is available for them).

Considering these Nutanix nodes are not part of any cluster, follow the process below to manually image these bare metal nodes:

1) Create an AOS+Hypervisor+Phoenix ISO from any existing CVM in any other cluster (one that is on the same AOS version needed on these bare metal nodes) or via a Foundation VM, using the steps mentioned in KB-3523

2) Once the AOS+Hypervisor+Phoenix ISO is created, download it to your local machine via WinSCP (or any SFTP client; see the sketch after step 15)

3) Log in to the IPMI console of each of these bare-metal nodes

4) Select Console Redirection from the Remote Console drop-down list of the main menu, and then click the Launch Console button.

5) Open this console and then select Virtual Storage from the Virtual Media drop-down list on this remote IPMI Java console main menu.

6) Click the CDROM&ISO tab in the Virtual Storage window, select ISO File from the Logical
Drive Type field drop-down list, and click the Open Image button.

7) In the browse window, go to where the ISO image (AOS+Hypervisor+Phoenix ISO) is located, select the image, and then click the Open button.

8) Click the Plug In button and then the OK button to close the Virtual Storage window.

9) In the remote console main menu, select Set Power Reset in the Power Control drop-down list. This causes the system to reboot to the selected image. (If you prefer a CLI over the Java console, see the ipmitool sketch after step 15.)

10) Once the node boots into Phoenix, confirm the block serial, node serial, node position, AOS, and hypervisor details. Also, for installing the hypervisor and AOS on the bare metal NX node, select the Install CVM (Wipes Existing Data!) and Install, Configure Hypervisor actions in the Phoenix window. When all the fields are correct, click the Start button. Installation begins and takes ~30 minutes.

11) After installation completes, unmount the ISO. For NX nodes, in the Virtual Storage window, click CDROM&ISO > Plug Out.

12) Once the ISO is plugged out, at the reboot prompt in the IPMI Java remote console, type Y to restart the node.

13) On ESXi and AHV, the node restarts with the new image, additional configuration tasks run, and then the host restarts again. Wait until this stage completes (typically ~30 minutes
depending on the hypervisor) before accessing the node. 

14) Once both the hypervisor and AOS installations are complete, proceed with the network configuration (hypervisor + CVM) on these nodes by assigning the appropriate IP addresses and other network settings (see the interface configuration sketch after step 15).

15) You will need to follow the above process for all nodes that you plan to re-image. Considering that these nodes are not part of a cluster, you can perform the above steps on all nodes in parallel.
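
For step 2, if you would rather script the ISO download than use WinSCP, here is a minimal Python sketch using paramiko over SFTP. The CVM address, credentials, and ISO path are placeholders, not values from this thread; the actual location of the generated ISO on the CVM comes from the KB-3523 procedure.

# Minimal sketch: pull the generated AOS+Hypervisor+Phoenix ISO off the CVM over SFTP.
# Assumes paramiko is installed; CVM_IP, the credentials, and ISO_PATH are placeholders
# for your environment (the real ISO location comes from the KB-3523 procedure).
import paramiko

CVM_IP = "10.0.0.50"                                   # hypothetical CVM address
CVM_USER = "nutanix"
CVM_PASSWORD = "your-cvm-password"                     # or use key-based auth
ISO_PATH = "/home/nutanix/foundation/tmp/phoenix.iso"  # hypothetical path on the CVM
LOCAL_PATH = "phoenix.iso"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(CVM_IP, username=CVM_USER, password=CVM_PASSWORD)

sftp = client.open_sftp()
sftp.get(ISO_PATH, LOCAL_PATH)   # download the ISO to the local machine
sftp.close()
client.close()
print(f"Downloaded {ISO_PATH} to {LOCAL_PATH}")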
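
For step 9, if IPMI over LAN is enabled on these nodes' BMCs, the same power reset can also be triggered from a CLI with ipmitool instead of the Java console. A small Python wrapper is sketched below; the BMC address and credentials are placeholders, and because the nodes are not yet in a cluster, the same call can safely be repeated across all of the BMCs.

# Sketch: trigger the step 9 power reset via ipmitool instead of the Java console.
# Assumes ipmitool is installed locally and IPMI over LAN is enabled on the BMC;
# the address and credentials below are placeholders.
import subprocess

BMC_IP = "10.0.0.60"               # hypothetical BMC/IPMI address
BMC_USER = "ADMIN"
BMC_PASSWORD = "your-ipmi-password"

subprocess.run(
    ["ipmitool", "-I", "lanplus", "-H", BMC_IP,
     "-U", BMC_USER, "-P", BMC_PASSWORD,
     "chassis", "power", "reset"],
    check=True,
)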
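
For step 14, the CVM's external interface is commonly configured by editing /etc/sysconfig/network-scripts/ifcfg-eth0 on the CVM (the hypervisor's own management network is configured separately, e.g. via the ESXi DCUI or esxcli). The sketch below only renders a typical minimal ifcfg-eth0 from placeholder values; verify the exact fields against the AOS manual IP configuration documentation for your version.

# Sketch: render a typical minimal ifcfg-eth0 for the CVM's external interface.
# The addresses are placeholders; this does not touch the hypervisor's own
# management network, which is configured separately.
IFCFG_TEMPLATE = """\
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
IPADDR={ip}
NETMASK={netmask}
GATEWAY={gateway}
"""

print(IFCFG_TEMPLATE.format(ip="10.0.0.51", netmask="255.255.255.0", gateway="10.0.0.1"))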

At this point, the manual imaging process for the bare-metal nodes will be complete; however, no cluster will have been created yet. If your end goal is to create a cluster out of these newly imaged nodes, then perform the steps mentioned in the Manual Cluster Creation guide to create a Nutanix cluster from these nodes (a sketch of that step follows).
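
If you do go on to the Manual Cluster Creation guide, the cluster is typically created by SSHing into one of the newly imaged CVMs and running the cluster create command with the list of CVM IPs. The Python sketch below drives that over SSH with paramiko; the IPs and credentials are placeholders, and the exact cluster ... create syntax should be confirmed against the guide for your AOS version.

# Sketch: run manual cluster creation from one of the freshly imaged CVMs over SSH.
# The CVM IPs and credentials are placeholders; confirm the exact `cluster ... create`
# syntax against the Manual Cluster Creation guide for your AOS version.
import paramiko

CVM_IPS = ["10.0.0.51", "10.0.0.52", "10.0.0.53"]   # hypothetical CVM addresses
CVM_USER = "nutanix"
CVM_PASSWORD = "your-cvm-password"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(CVM_IPS[0], username=CVM_USER, password=CVM_PASSWORD)

cmd = "cluster -s {} create".format(",".join(CVM_IPS))
stdin, stdout, stderr = client.exec_command(cmd)
print(stdout.read().decode())
print(stderr.read().decode())
client.close()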