Foundation 3.0.1 imaging of a Dell cluster fails at 2%: no ISO mounted (moved to correct forum)

  • 19 November 2015
  • 3 replies

Badge +4
I am using Foundation 3.0.1 to image a 3-node cluster of Dell R730s; however, it fails at 2% progress with: StandardError: Mount failed: NFS path not in 'remoteimage -s'
The system can access and configure the Dell iDRAC IPMI interface without a problem, and the virtual CD is enabled on each server.
According to the logs, it cannot mount the required ISO; there is no ISO file at the path mentioned.
Below is the relevant section of the log.
Using nutanix_installer_package-danube- and an ESXi 6.0.0 ISO image.

Any help would be greatly appreciated.

20151118 072259: /opt/dell/srvadmin/sbin/racadm -r -u root -p calvin remoteimage -c -l -u nutanix -p nutanix/4u
20151118 072302: /opt/dell/srvadmin/sbin/racadm -r -u root -p calvin remoteimage -s
20151118 072305: Security Alert: Certificate is invalid - self signed certificate
Continuing execution. Use -S option for racadm to stop execution on certificate-related errors.
Remote File Share is Disabled
UserName
Password
ShareName
20151118 072305: RFS mount of /home/nutanix/foundation/tmp/phoenix_node_isos/foundation.node_1.iso failed. Try to mount this path from
20151118 072305: a different machine to make sure NFS is configured correctly.

Best answer by AndrewB 23 November 2015, 16:22


This topic has been closed for comments


Userlevel 6
Badge +29

Since this is "paid Nutanix," please open a ticket with Dell support and have them escalate to Nutanix; we can help you power through the Foundation issues you're experiencing and give you a more rapid response.

That said, technically, Foundation isn't supported for Dell clusters, as they have their own imaging process. I know for a fact it does work, as my team has done it plenty of times; just know it's not the norm, since Dell can ship ESX, Hyper-V, and AHV from the factory, all native.

I'm curious: why are you trying to Foundation this cluster?
Badge +4

As Jon mentioned, please let us know the reason for re-imaging the nodes. The XC appliances actually have a feature called Rapid System Recovery, which allows you to restore the XC appliances to their factory config: the hypervisor they shipped with plus the CVM.


First, verify this is not a hardware issue by logging in to the iDRAC and reviewing the hardware logs and any alerts. Then reboot the server to the SD card and run through the RASR Factory Reset to place the XC into a functional state:

  1. While the server is booting, press F11.
  2. When the boot menu appears, pick "Oneshot BIOS Boot Menu".
  3. Pick "Internal SD".
  4. Press 1 for "Factory Reset".
  5. Confirm that you want to erase everything on the node by typing "yes".

After ~10 minutes it will ask you to confirm a reboot. After that, the process is completely automatic. Since there is no clear "I'm done!" message, note the time of that first reboot and leave the system completely alone for 45 minutes.
Badge +4
Hi guys. Thanks for your assistance on this. I eventually got this working by setting up the nodes as bare metal and entering the IPMI MAC address manually; I had let the nodes be automatically detected on the network, which didn't work. I needed the re-image because I was switching from a Hyper-V-based cluster to an ESXi one for some performance testing I'm doing.

Thanks again for your input, and also for the node recovery process, which may be useful to me in the future.