Installation | Nutanix Community

Hello,
I am desperate. I have 2 disks and the USB stick with the ISO in my server.
When I try to install Nutanix, I get to the disk selection, but even after I assign the disks to the respective letters, only the Phoenix ISO is mounted at the end.
The install then aborts and I am back at the disk selection.

If I enter H, C, or D, the system says either that I need SSDs or that I must have 2 disks, which I do have.

Do I need ESXi on the server? Not really, if I use the AHV version?

Also, the LCM upgrade is stuck at 87%. How can I stop it?

I have pressed "Softly Stop", but the system simply continues without stopping.

I have already restarted, unfortunately without success.


 


How can I exit Maintenance Mode?


Hi @TrouBle

Regarding the NTP servers not being reachable: it may be a DNS resolution issue, but if you are running LCM, the prechecks have passed, so they are reachable.

Do you have AHV installed on a USB drive?

Regards!



Yes, it is on a USB stick. I would also like to change this, because now I always have to select the boot medium when starting up.


 


 

I can't exit Maintenance Mode:

nutanix@NTNX-bd0526a0-A-CVM:0.0.0.0:~$ acli host.enter_maintenance_mode_check 0.0.0.0
Failed to connect to server: [Errno 111] Connection refused

 

I also cannot execute ncli or acli commands:

 

 

 


 


2024-04-08 08:05:45.472337: Services running on this node:
  acropolis: []
  alert_manager: []
  anduril: []
  aplos: []
  aplos_engine: []
  arithmos: []
  athena: []
  cassandra: []
  catalog: []
  cerebro: []
  chronos: []
  cim_service: []
  cluster_config: []
  cluster_health: []
  curator: []
  delphi: []
  dynamic_ring_changer: []
  ergon: []
  flow: []
  foundation: []
  genesis: [22766, 29632, 29671, 29672]
  go_ergon: []
  hera: []
  ikat_control_plane: []
  ikat_proxy: []
  insights_data_transfer: []
  insights_server: []
  lazan: []
  mantle: []
  mercury: []
  minerva_cvm: []
  nutanix_guest_tools: []
  pithos: []
  placement_solver: []
  polaris: []
  prism: []
  scavenger: []
  secure_file_sync: []
  security_service: []
  ssl_terminator: []
  stargate: []
  sys_stat_collector: []
  uhura: []
  xmount: []
  xtrim: []
  zookeeper: []
 


 

nutanix@NTNX-bd0526a0-A-CVM:0.0.0.0:~$ ncli host list
Error: Cannot connect to Prism Gateway

 


Is AHV installed on a USB disk? That may be the reason your AHV hypervisor is stuck. Upgrading AHV when booting from USB is not supported due to changes in the drivers for the AHV stack, and that might be why you are stuck where you are.

I strongly recommend reading through the Getting Started Guide, as nearly all of your post-installation questions, along with warnings about doing LCM updates with AHV booting from USB, are covered there. I'm curious about the final command you used to create your cluster, as "one_node_cluster" is not supported with a single CVM drive (nor in a CE cluster). That might also matter here, as the system might expect certain configurations that don't exist on your cluster.

https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Community-Edition-Getting-Started

 

 



2 SSDs: 1 for the CVM, 1 for the hypervisor

2 HDDs for data

1 USB stick with the boot medium

And in one cluster




Hi there guys!

 

I asked about the AHV installation on a USB drive because I ran into a similar issue last week. I'm currently using an NX-1065-G5 node as a single-node cluster: 1 SSD for the CVM and 2 HDDs for data.

The boot drive used to be a SATA-DOM, and with it LCM updates worked flawlessly, but because of wear and tear it eventually died.

As a workaround I started using a USB drive for the AHV install (yes, it's not supported), but updating through LCM ended with the host booting into dracut.

To solve the issue I followed this great article from @Jeroen Tielen, which saved the day, and now my single-node cluster is updated, up and running!

 

Hope this helps

 

Regards!


Hi @TrouBle 

From the CVM you can use ncli cluster get-name-servers to check which ones are configured (if any)

You can add them with ncli cluster add-to-name-servers servers="The Server", one by one (up to three)

Hope this helps

Regards!
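For reference, the full sequence on the CVM would look roughly like this (a sketch; 10.0.0.50 is the example DNS server mentioned later in this thread, substitute your own):

```shell
# Check which DNS name servers the cluster currently knows about
ncli cluster get-name-servers

# Add one name server at a time (up to three total)
ncli cluster add-to-name-servers servers="10.0.0.50"

# Verify the server was added
ncli cluster get-name-servers
```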

thx,

but how can I fix the other problems:

 

I can't use these commands:

Error: Invalid Entity 'cluster-get-name-servers'
nutanix@NTNX-000000-A-CVM:10.0.0.5:~$ ncli cluster add-to-name-servers servers=10.0.0.50
-bash: cli: command not found

 



 

my mistake, the CLI is "ncli" ;)


Hi @TrouBle 

 

I’ve seen in a previous comment that your cluster services aren’t running. Are they?

2024-04-08 08:05:45.472337: Services running on this node: every service shows empty brackets; only genesis reports PIDs [22766, 29632, 29671, 29672]

 

If this is the case, neither ncli nor acli will work.

You can check the cluster services' status by running cluster status from the CVM; in a healthy cluster every service is listed with its PIDs.

If for some reason they are stopped, you can try to start them with cluster start.
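Put together, the check-and-recover sequence on the CVM would be roughly this (a sketch; the exact output formatting varies by AOS version):

```shell
# Show the state of all cluster services on every CVM
cluster status

# If services are down, attempt to start them
cluster start

# Re-check afterwards; every service should now list PIDs
cluster status
```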

 

Hope this helps

 

Regards!


Hello,
Thank you for all your help.
Unfortunately, none of my VMs get an IP address, and nothing shows up on the switch, although DNS, NTP etc. are configured.
Does anyone have any ideas?

 

 


 


Hi @TrouBle 

You need to create VLANs on your cluster first. Do the following:

Click on the gear icon in the top right corner of Prism Element

Then, in the left menu, go to the Network section and click on Network Configuration

You'll see a window listing the configured networks (initially without any VLANs, of course)

Then click on Create Subnet and fill in your network details

The last step is to create a vNIC on your VM, and you'll be good to go
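The same subnet can also be created from the CVM command line with acli (a sketch; "VM-Network" and VLAN 0 are example values, not something from this thread):

```shell
# Create a subnet named VM-Network on VLAN 0
acli net.create VM-Network vlan=0

# List networks to confirm it exists
acli net.list
```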

 

Hope this helps

 

Regards!


I have started the LCM software update, and it is stuck at: