Has anyone ever attempted a Host boot disk repair and encountered this issue before?
All cables and SFPs are plugged in as if the node were still in production. I have not yet attempted to rebuild via Foundation, as these were the steps provided by Nutanix support: https://portal.nutanix.com/page/documents/details?targetId=Hypervisor-Boot-Drive-Replacement-Platform-v510-Multinode-G3G4G5:Hypervisor-Boot-Drive-Replacement-Platform-v510-Multinode-G3G4G5
- Downloaded and burnt phoenix.iso to USB
- Booted USB into Phoenix
- Kicked off the Repair Host Boot Disk workflow and uploaded the whitelisted ESXi image
- Encountered error
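For the first step above, a common way to write the downloaded Phoenix ISO to a USB stick on a Linux workstation is a plain `dd` copy; this is a generic sketch, not a Nutanix-specific tool, and `/dev/sdX` is a placeholder you must replace with your actual USB device:

```shell
# Identify the USB device first -- dd overwrites the target completely.
lsblk                    # e.g. the stick shows up as /dev/sdb

# Write the ISO image block-for-block to the device (placeholder path).
sudo dd if=phoenix.iso of=/dev/sdX bs=4M status=progress conv=fsync

sync                     # flush buffers before removing the stick
```

After it completes, boot the node from the USB device to land in the Phoenix environment.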
I have a case open with support and have escalated it; I figured that while I'm waiting I could ask the community. I'll post the resolution here once we figure out the issue.
(Edit: I posted this in the CE forum by accident)
Best answer by j_seet
Just an update on the resolution. Nutanix support was able to work around this by manually installing ESXi and then Phoenix. We suspect the original problem was that our network requires a VLAN to be configured, but the Host Repair wizard doesn't allow a VLAN to be defined, hence the manual install route.
Don't hesitate to engage Nutanix support, as they are really responsive. They will give you the link to the full documentation for the process.
To be honest, I've never done it on AHV, so I can't really speak to that.
The difference in my case: my cluster is running AHV, not ESXi.
How did you manage to complete the re-imaging?
I'm afraid it's just me and the community on this one. This is a six-year-old node (EoS, EoL, …). ;)
Thanks for your help on that. I’ll give it a try.
I cannot find any foundation.iso, only .tar.gz, .msi, .dmg, …
OK, I managed to make it work.
The Foundation ISO has to be generated.
My faulty node is back ;)
Got exactly the same issue.
I was able to set the IP address of the CVM and restart the Repair Host Boot Device workflow. But I've lost the SATADOM image that was successfully captured in the first step.
As it stands, Nutanix has no way to push the saved SATADOM image back to the newly inserted SATADOM.
So it's the reimage process again and again and again … and again … on my 600+ nodes. A waste of time and money.
They need to improve that step so badly.