Installation & Configuration
This forum is the best way to get up and running with the Nutanix platform.
- 1,073 Topics
- 2,846 Replies
As the Nutanix Life Cycle Management (LCM) platform evolves, more firmware entities are added to the module. The latest additions are the NICs associated with Nutanix hardware. Starting in LCM 2.3.4, firmware updates for Mellanox and SuperMicro NICs on AHV and ESXi are supported. Currently, Hyper-V platforms and Intel/Silicom NIC cards are not supported. For more information about this feature and its software requirements, take a look at KB 10073 on the Support Portal.
The Discoveries menu in Portal is a new feature that allows customers to view critical issues in their environment with a more holistic approach. Using Nutanix Insights, Portal provides this view based on Field Advisories, continuous health checks, end-of-support equipment, and more. The Discoveries Details View provides the background, analysis, and corrective actions that can be taken to remedy these issues. It also gives you the option to create a case based on the specific issue or topic. For more information, see the “Discoveries Menu” in the Support Portal.
If you have cloned multiple VMs from a single VM (master VM), you can enable NGT and mount the NGT installer simultaneously on multiple VMs by using the master VM image.
Before you begin, ensure the following:
- NGT is installed on the master VM.
- The required number of VMs have been cloned from the master VM.
- The cloned VMs are shut down.
Note: After you perform the following procedure, you do not need to separately install NGT on the cloned VMs.
Procedure: For every cloned VM, log on to the Controller VM and run the following command:
ncli> ngt mount vm-id=clone_vm_id
Replace clone_vm_id with the ID of the cloned VM. To find the ID of the cloned VM, run ncli> vm list name="<clone-vm-name>" and note the value of the Id field as clone_vm_id.
ncli> vm list name="<clone-vm-name>"
Id : 00058a81-64bb-2
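If you have many clones, the per-VM steps above can be wrapped in a small loop. A minimal sketch, assuming it is run from a Controller VM where ncli is on the PATH; the clone names and the awk parsing of the Id field are illustrative placeholders, not part of the documented procedure:

#!/usr/bin/env bash
# Mount the NGT installer on several cloned VMs in one pass (hedged sketch).
for vm in clone-web-01 clone-web-02 clone-web-03; do
  # Pull the Id field from "ncli vm list name=..." (format assumed as shown above).
  vm_id=$(ncli vm list name="${vm}" | awk '$1=="Id" {print $3; exit}')
  echo "Mounting NGT installer on ${vm} (${vm_id})"
  ncli ngt mount vm-id="${vm_id}"
done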
I know that this question has been asked in this forum a few years back, but I wanted to see if the answer has changed with the newer releases of AOS. In my case, I’m running AOS version 5.15.4 LTS and found that our 7-node cluster was configured for RF3, although most of our containers are RF2 (the exceptions being the “NutanixManagementShare” and “SelfServiceContainer” containers). Since most of our containers are only RF2, I’m not too concerned about the extra overhead, but I was curious whether AOS now supports switching the cluster back from RF3.
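For anyone checking their own setup before asking the same question, the current cluster and per-container redundancy settings can be read from a CVM. A read-only sketch, assuming ncli is available; exact field names can differ slightly between AOS releases:

ncli cluster get-redundancy-state                    # current and desired redundancy factor of the cluster
ncli container ls | grep -i "replication factor"     # replication factor of each storage container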
From the AHV Best Practices document, the live migration network is in the host management network (br0). In a network segmentation environment, the backplane is isolated on another network, which may have higher bandwidth. Does live migration behave any differently under this condition? Or can the live migration network be configured separately, the way the vMotion network can be on ESXi?
I have a cluster of 6 nodes: 3 nodes are 1065-G5 and the other 3 are 1065-G6, all running Hyper-V. I wanted to update from AOS 5.10.5 to 5.17.1 through LCM, but I am having issues with the processors. I therefore want to proceed node by node, using maintenance mode. Can you give me the procedure for updating AOS with this method?
After changing the IPMI IP address, I found a warning that the IPMI does not match. I see the reference https://portal.nutanix.com/page/documents/details?targetId=Hardware-Admin-Ref-AOS-v5_17:ipc-ipmi-ip-addr-change-t.html, but how do I perform the genesis restart step? Must I stop the cluster first, or can I run the genesis restart command while the cluster is running? Thanks for the support.
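For reference, a minimal sketch of the commands involved, assuming they are run from a CVM as the nutanix user. Whether the cluster must be stopped first is exactly the question being asked, so this only shows the invocations themselves, not an answer:

cluster status           # check overall cluster and service state before changing anything
genesis restart          # restart the genesis service on the local CVM only
allssh genesis restart   # restart genesis on every CVM in the cluster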
Please advise me: is Server 2019 Hyper-V supported on Nutanix? I am unable to find an article on the Nutanix Portal covering Server 2019. I am planning to migrate from a Server 2012 Hyper-V failover cluster to a Server 2019 Hyper-V failover cluster. Which AOS version supports Server 2019?
I have a couple of oddities going on; one is ongoing and one appeared after the latest PC 2021 update. I was getting warnings that I was using the CVM default password, so I changed it. It is supposed to replicate to the other CVMs, but I get a health alert saying the remaining CVMs are still using the default password. Any ideas why? And post PC update, I can log in using SSH but I can’t log in to the web UI: I get the login screen, but after I enter the username and password (not the default) I get “server not available”. I have run health checks and IDF checks and all seem to PASS. Is there a bug in the latest PC patch? Just to note, I can log into Prism Element with my name and password OK. Thanks in advance.
Hi, I have a small cluster of 4 nodes. Disk space usage for /home on all CVMs is increasing fast. After some investigation it appears that the /data/logs folder is taking the majority of the space, more specifically the health_server.log file, which is taking about 181 GB now:
18133488 health_server.log.20201224-141301
1144624 sysstats
Does the health_server.log file keep growing indefinitely? There must be a limit on this log file? Thanks, Beg
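For anyone hitting the same symptom, a quick read-only way to see what is consuming /home across CVMs (a sketch, assuming it is run from one CVM as the nutanix user where allssh is available):

allssh "df -h /home"                                    # overall /home usage on each CVM
allssh "du -sh /home/nutanix/data/logs"                 # size of the logs directory on each CVM
du -ah /home/nutanix/data/logs | sort -rh | head -20    # largest individual log files on this CVM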
Hello everybody, I would like to remove a switch from the Network view in Prism Element, which is also no longer configured under Settings => Network Switch. The cluster was moved to a different network a couple of months ago. So far I have been unable to remove this old network switch from the Network view in Prism Element, and it is also no longer visible using ‘ncli net list-switch’ on the CVM command line. Only the existing switch is configured under Settings => Network Switch. We currently use AOS 5.15.4 LTS. If I select ‘Go to Switch details’ on the old switch shown in ‘Network’, I get an error: Switch with id ‘...’ was not found! Any idea how I can remove such a switch, which is no longer available? Regards, Didi7
UPGRADING SERVER FIRMWARE
Nutanix recommends that you use the Service Pack for ProLiant® (SPP) ISO file for applying firmware updates. Perform this procedure on every host in the cluster, one host at a time.
About this task
To upgrade the firmware on a server, do the following:
Procedure
- If the server is part of a Nutanix cluster, place the server in maintenance mode. Information about placing a server in maintenance mode is available in the host management section of the Acropolis Command-Line Interface (aCLI) documentation. See the Command Reference for the supported AOS version.
- Boot the server to the SPP ISO:
  - Connect to the iLO by using the iLO IP address.
  - Log on to the iLO user interface by using the administrator credentials. The default administrator user name is Administrator on all HPE® ProLiant® servers. Passwords for the iLO administrator differ from one server to another, and are available on the service tag on the server.
  - Attach the SPP ISO to the server by usi
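As a rough illustration of the maintenance-mode step referenced above, on an AHV host it might look like the following; consult the aCLI Command Reference for your AOS version, since available options differ between releases, and the host address here is only a placeholder:

# Run from a CVM; 10.0.0.21 is a placeholder hypervisor address.
acli host.list                                # list hosts and their current states
acli host.enter_maintenance_mode 10.0.0.21    # migrate VMs off the host and mark it for maintenance
# ... apply the SPP firmware update via iLO, then bring the host back:
acli host.exit_maintenance_mode 10.0.0.21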
I have read the documents and manuals for APC PowerChute Network, but it is a scenario that doesn’t apply to me. Apparently, someone sold the customer a Vertiv Liebert GXT4 UPS. Liebert’s engineers concluded that the product couldn’t provide a graceful shutdown of the Nutanix cluster, while it could shut down the ESXi hosts on top. Is there anyone out there who has succeeded in performing a graceful shutdown without APC?
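Whatever tool triggers it, a scripted shutdown ultimately has to drive the same CVM-side steps. A minimal sketch of just that portion, assuming all user VMs have already been powered off and that it runs from a CVM; this is not a complete or supported shutdown procedure, and the full sequence for your AOS/ESXi versions should come from the official documentation:

cluster stop            # stop cluster services; prompts for confirmation
cvm_shutdown -P now     # then run on each CVM in turn to shut it down gracefully
# The ESXi hosts themselves are shut down last (for example, by the UPS agent installed on each host).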
We are planning to add new nodes to existing clusters. Before the expansion, we will install AOS on the new nodes (bare-metal image) and then add them. I want to know whether there is any problem if the existing cluster AOS and the new node AOS are different versions. Example: existing cluster AOS 5.10.9 / new expansion node AOS 5.15.x. Please give me some advice. Thank you.
vCenter 7.0U1c and vSphere Clustering Service VMs on local or shared storage. After test-upgrading vCenter from 6.5U3 to 7.0U1c (which worked just fine, with zero issues), every VMware cluster with DRS and/or HA enabled gets 3 new vCLS VMs (very tiny VMs with 2 GB storage / 0.13 GB RAM). When NCC is run, it registers 3 WARNs that there are VMs running on the SATADOMs. Even with what (little) they do (today), would this wear out SATADOMs or M.2s? Would this be a concern for G4/G5s (with SATADOMs) and not G6/G7s (M.2s)? While the vCLS VMs can be moved to shared storage, wouldn’t they be moved back to local storage during an NX cluster stop? (I’d like to point out this was done in a lab environment, not in production… no bits nor bytes were harmed during the upgrade process.)
Added 9 nodes (AHV) to an existing cluster successfully, and when I try to verify the LACP status with the command (hostssh "ovs-appctl bond/show br0-up" | egrep "===|lacp_status"), it shows the LACP status as off. Steps I performed before adding the nodes to the existing cluster:
- Configured the network on all 9 nodes
- Configured LACP on all 9 nodes
Please advise.
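A few more read-only checks can help narrow down whether the bond on the new hosts was ever configured for LACP at all, versus LACP failing to negotiate with the switch. A sketch, run from a CVM and assuming the default br0/br0-up bridge and bond names:

hostssh "ovs-appctl lacp/show br0-up"                              # negotiated LACP state and per-member details
hostssh "ovs-vsctl get port br0-up lacp"                           # what the bond port is configured for (active/passive/off)
hostssh "ovs-appctl bond/show br0-up" | egrep "===|bond_mode|lacp_status"   # bond mode alongside the LACP status already checked above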
I’m new to Nutanix, so bear with me if this has been asked. Is it possible to simulate a crossover cable between two VMs? Basically, I want any traffic leaving a load balancer to be passed at layer 1-2 to a WAF appliance so it can be inspected. The WAF is in transparent mode and won’t have an IP address. In the past, with physical machines, I would just cable them inline.
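One common approximation on AHV (not a true layer-1 crossover, just an isolated L2 segment carrying only those two VMs) is a dedicated VLAN-backed network that only the load balancer’s egress NIC and the WAF’s inside NIC attach to. A hedged aCLI sketch, where the network name, VLAN ID, and VM names are placeholders:

# Create an isolated network used only as the "inline cable" between the two VMs.
acli net.create lb-to-waf vlan=999
# Attach a NIC on that network to each VM.
acli vm.nic_create load-balancer-vm network=lb-to-waf
acli vm.nic_create waf-vm network=lb-to-waf

Note that traffic on that VLAN still leaves the host through the br0 uplinks unless the VLAN is kept off the physical switch trunks, so how closely this behaves like an inline cable depends on the switch configuration.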
Hi All, our current environment (~40 sites & HO) does not have any 10GbE switches, and we are planning 2-node clusters per site. If we use 2 x 1GbE NICs for management and witness communication, can we connect the ESXi nodes with a 10GbE direct cable to carry the workload: all VM traffic, vMotion, the vmkernel, Nutanix traffic and, most importantly, the two-node disk datastore? It should not be any issue at the Nutanix/VMware level; the switch is L2.