Solved

Foundation failing for Cisco UCSM manager server

  • February 17, 2025
  • 5 replies
  • 81 views

Paul Ilavarasu

Hi,

I am trying to set up a new Nutanix cluster on Cisco servers managed by UCSM, but I am getting a fatal error similar to the one below.

 

 

The servers are managed by UCSM.

Foundation can ping both the UCSM IP and the CVM/AHV management gateway (a basic reachability check is sketched below).
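For reference, that kind of reachability can be checked from the Foundation VM with a short script (a minimal Python sketch with placeholder IPs; note that a successful ping only proves basic L3 reachability and says nothing about VLAN tagging):

import subprocess

# Placeholder addresses -- substitute the real UCSM VIP and gateway.
TARGETS = {
    "UCSM VIP": "10.0.0.10",
    "CVM/AHV gateway": "10.0.1.1",
}

for name, ip in TARGETS.items():
    # "-c 2": send two ICMP echo requests (Linux ping syntax).
    result = subprocess.run(["ping", "-c", "2", ip], capture_output=True)
    status = "reachable" if result.returncode == 0 else "UNREACHABLE"
    print(f"{name} ({ip}): {status}")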

 

Exception: Server (WZ) did not shut down in a timely manner
2025-02-17 17:11:33,286Z imaging_step.py:123 DEBUG Setting state of <ImagingStepInitIPMI(<NodeConfig(10.x.x.x) @7190>) @77f0> from RUNNING to FAILED
2025-02-17 17:11:33,288Z imaging_step.py:123 DEBUG Setting state of <ImagingStepRAIDCheckPhoenix(<NodeConfig(10.x.x.x) @7190>) @78b0> from PENDING to NR
2025-02-17 17:11:33,289Z imaging_step.py:182 WARNING Skipping <ImagingStepRAIDCheckPhoenix(<NodeConfig(10.x.x.x) @7190>) @78b0> because dependencies not met, failed tasks: [<ImagingStepInitIPMI(<NodeConfig(10.x.x.x) @7190>) @77f0>]
2025-02-17 17:11:33,290Z imaging_step.py:123 DEBUG Setting state of <ImagingStepPreInstall(<NodeConfig(10.x.x.x) @7190>) @a280> from PENDING to NR
2025-02-17 17:11:33,290Z imaging_step.py:182 WARNING Skipping <ImagingStepPreInstall(<NodeConfig(10.x.x.x) @7190>) @a280> because dependencies not met
2025-02-17 17:11:33,291Z imaging_step.py:123 DEBUG Setting state of <ImagingStepPhoenix(<NodeConfig(10.x.x.x) @7190>) @a790> from PENDING to NR
2025-02-17 17:11:33,292Z imaging_step.py:182 WARNING Skipping <ImagingStepPhoenix(<NodeConfig(10.x.x.x) @7190>) @a790> because dependencies not met
2025-02-17 17:11:33,292Z imaging_step.py:123 DEBUG Setting state of <InstallHypervisorKVM(<NodeConfig(10.x.x.x) @7190>) @a9a0> from PENDING to NR
2025-02-17 17:11:33,293Z imaging_step.py:182 WARNING Skipping <InstallHypervisorKVM(<NodeConfig(10.x.x.x) @7190>) @a9a0> because dependencies not met

Best answer by Paul Ilavarasu

I was able to fix this issue by keeping the CVM/AHV subnet on a non-native VLAN.


5 replies

Kcmount
  • Vanguard
  • 367 replies
  • February 17, 2025

Hi there,

I think you may need a newer build of Foundation.

 

https://portal.nutanix.com/page/documents/details?targetId=Field-Installation-Guide-Cisco-HCI-UCM:Field-Installation-Guide-Cisco-HCI-UCM

 

It says the minimum version is 5.4.2, and you're on 5.0.3 according to the footer.

 

I'm not a fan of the portable Foundation version, especially for bare metal. You probably want to look at a proper Foundation VM and fix both things at once.
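If you want to double-check the running build, you can query the Foundation service directly (a rough sketch; it assumes Foundation's usual HTTP API on port 8000 and its /foundation/version endpoint, so verify both against your build, and the host IP is a placeholder):

import urllib.request

FOUNDATION_HOST = "10.0.0.50"  # placeholder: your Foundation VM/app IP

# GET the version string from the Foundation HTTP API (assumed endpoint).
url = f"http://{FOUNDATION_HOST}:8000/foundation/version"
with urllib.request.urlopen(url, timeout=10) as resp:
    print("Foundation version:", resp.read().decode().strip())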


Paul Ilavarasu

The screenshot is just a reference; I am running version 5.7.1. My servers are managed by UCSM, so IPMI/CIMC is not configured.


Kcmount
  • Vanguard
  • 367 replies
  • February 18, 2025

Ah :) Would have been useful to say it was a reference.

 

Anyway, have you loaded the firmware bundles etc. on the FI/UCSM?

 

https://portal.nutanix.com/page/documents/details?targetId=Field-Installation-Guide-Cisco-HCI-UCM:cis-server-imaging-c.html

 

What hardware/SKU is the server?

Can you confirm that Foundation was able to connect to your UCSM and discover the nodes? Did it create service profiles?
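If you have Cisco's ucsmsdk handy (pip install ucsmsdk), a quick query like this can confirm what UCSM sees, independently of Foundation (a rough sketch; the UCSM VIP and credentials are placeholders):

from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("10.0.0.10", "admin", "password")  # placeholder VIP/creds
handle.login()
try:
    # computeRackUnit: physical C-Series rack servers known to UCSM.
    for server in handle.query_classid("computeRackUnit"):
        print("Server:", server.dn, "model:", server.model,
              "assigned to:", server.assigned_to_dn or "<none>")

    # lsServer: service profiles (Foundation creates these during imaging).
    for sp in handle.query_classid("lsServer"):
        print("Service profile:", sp.dn, "assoc state:", sp.assoc_state)
finally:
    handle.logout()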


  • Voyager
  • 1 reply
  • February 18, 2025

Hi Paul,

We have done something similar recently, building 7 clusters with Fabric Interconnects, but found that the following had to be done:
  • Check that the correct Cisco B-Series and C-Series server images have been downloaded into UCS Manager, even though we only have C240 M7 servers. The image version is critical: it changed for us between deploying one cluster and repeating the build the following week with identically specced hardware.
  • The IPMI IPs for the hosts need to be in the same subnet as the UCS Fabric Interconnect management IPs, not in the same subnet as the hosts and CVMs as shown in your screenshot (see the sketch after this list).
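As a quick way to validate that placement rule, something like this works (a small Python sketch using only the standard library; the subnets and addresses are made-up placeholders, so substitute your own):

import ipaddress

# Placeholder ranges -- replace with your FI management and host/CVM subnets.
FI_MGMT_SUBNET = ipaddress.ip_network("10.0.0.0/24")
HOST_CVM_SUBNET = ipaddress.ip_network("10.0.1.0/24")

# Candidate IPMI/CIMC addresses for the hosts (placeholders).
ipmi_ips = ["10.0.0.21", "10.0.0.22", "10.0.1.23"]

for ip_str in ipmi_ips:
    ip = ipaddress.ip_address(ip_str)
    if ip in FI_MGMT_SUBNET:
        print(f"{ip}: OK, in the FI management subnet")
    elif ip in HOST_CVM_SUBNET:
        print(f"{ip}: wrong subnet (host/CVM range), move it to the FI management range")
    else:
        print(f"{ip}: in neither expected subnet, double-check")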


Hope this is of some assistance, as that is what we had to do to make it work.
 


Paul Ilavarasu
  • Author
  • Vanguard
  • 91 replies
  • Answer
  • February 24, 2025

I was able to fix this issue by keeping the CVM/AHV subnet on a non-native VLAN.
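For anyone hitting the same thing, one way to verify the vNIC VLAN settings on the UCSM side is with ucsmsdk, as in the earlier sketch (again a rough sketch: the VLAN name, UCSM VIP, and credentials are placeholders, and it assumes vnicEtherIf's default_net flag marks the native VLAN):

from ucsmsdk.ucshandle import UcsHandle

CVM_AHV_VLAN_NAME = "cvm-ahv"  # placeholder: the CVM/AHV VLAN name in UCSM

handle = UcsHandle("10.0.0.10", "admin", "password")  # placeholder VIP/creds
handle.login()
try:
    # vnicEtherIf: VLANs attached to each vNIC; default_net == "yes" is native.
    for vif in handle.query_classid("vnicEtherIf"):
        is_native = vif.default_net == "yes"
        print(f"{vif.dn}: VLAN '{vif.name}' ({'native' if is_native else 'tagged'})")
        if vif.name == CVM_AHV_VLAN_NAME and is_native:
            print("  -> CVM/AHV VLAN is native here; it should be tagged (non-native)")
finally:
    handle.logout()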

