Installation & Configuration
This forum is the best way to get up and running with the Nutanix platform
An NCC alert reported the following issue, and I followed KB 2050. Detailed information for cvm_startup_dependency_check: Node 192.168.x.x: FAIL: Failed to open vmx file. Refer to KB 2050 (http://portal.nutanix.com/kb/2050) for details on cvm_startup_dependency_check. Based on observation, the datastore name used by the node had a "(1)" suffix appended, which caused the NCC check to fail. Changing NTNX-local-ds-17FM37420063-B (1) to NTNX-local-ds-17FM37420063-B resolved the NCC alert message.
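After renaming the datastore, you can confirm the fix by re-running the check. A hedged sketch (the exact module path of the check may differ by NCC version; `ncc health_checks run_all` always works but takes longer):

```shell
# Re-run the individual NCC check from any CVM after renaming the datastore.
# The "system_checks" path is an assumption; verify with your NCC version.
ncc health_checks system_checks cvm_startup_dependency_check
```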
Did you know that AHV hosts can be accelerated by discrete graphics hardware (similar to how home gaming systems are accelerated)? This is especially useful for environments such as Virtual Desktop Infrastructure (VDI) deployments, where it is desirable to accelerate individual desktops shared out to large user communities. The host driver for the discrete graphics hardware can be easily installed on AHV hosts via a single command executed from just one CVM of a Nutanix cluster (running the command from a single CVM installs it across all CVMs of the cluster, with no need to touch each CVM/host individually). Information regarding this single-command procedure can be found in the INSTALLING NVIDIA GRID VIRTUAL GPU MANAGER (HOST DRIVER) section of the AHV ADMINISTRATION GUIDE. Also, just to note, the host driver can be installed on hosts running the ESXi hypervisor using a different procedure, which can be found via the knowledge base article Install NVIDIA
Hello, good afternoon. I have a node with a black screen and a blinking cursor, and I want to see if there is a way to reset it or to get the normal console back. If someone can help me, I would really appreciate it. I already tried generating the Phoenix ISO, putting it on a USB drive, connecting it to the node, and booting from it, but it only shows: udhcpc sending discover. Thank you for your cooperation.
When performing maintenance on a CVM, it is important not to treat it as a regular guest VM, because crucial services run on each CVM and need to respond gracefully when a CVM goes offline. If you need to shut down a CVM and are running ESXi, you might otherwise think to simply go to vCenter, right-click the CVM, and select the “Shut Down Guest OS” option. However, that procedure will not shut down the CVM properly or allow the services to respond gracefully. Instead, use the cvm_shutdown script: it first places the necessary HA route in the hypervisor, redirecting storage requests to another CVM, before shutting down the CVM. This allows the services to respond gracefully to the CVM being offline. More information and options regarding this script can be found in KB 3270.
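A minimal example of the graceful shutdown described above, run from the CVM you intend to shut down (see KB 3270 for the full option list):

```shell
# Gracefully shut down the local CVM: the script injects the HA route first,
# then powers the CVM off. Use this instead of "Shut Down Guest OS" in vCenter.
cvm_shutdown -P now
```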
Did you know you can secure your IPMI web interface with an SSL certificate? Securing the IPMI web interface is recommended to help reduce susceptibility to attacks. You can further enhance security by installing your own customized certificate or a CA-signed certificate. Nutanix recommends strong keys and signature algorithms; the IPMI module supports SHA-2 and 2048-bit RSA SSL. Avoid long certificate chains or large certificates: if the IPMI module shows the default or previously installed certificate after you install a new one, or you are unable to log in to the IPMI web interface, the chain is too long (chain length greater than one) or the certificate is too large. As a test, create a simple self-signed certificate and install it to ensure the IPMI is working correctly before attempting to install larger certificates. You can use openssl or keytool to generate keys, certificates, and signing requests. As with any other certificate deployment, the process consists of two steps
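A quick way to generate the simple self-signed test certificate mentioned above with openssl (the subject and file names are illustrative only):

```shell
# Generate a 2048-bit RSA key and a SHA-256 self-signed certificate, valid for
# one year, suitable for testing the IPMI web interface before installing a
# CA-signed certificate.
openssl req -x509 -newkey rsa:2048 -sha256 -nodes -days 365 \
  -keyout ipmi.key -out ipmi.crt \
  -subj "/CN=ipmi.example.com"
```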
Are you someone in the following situation? Your Nutanix cluster has grown so much that it needs to be moved to a different (bigger) network segment (VLAN), and you are daunted by the amount of change that moving a cluster to a different network space could involve. Have no fear! The procedures are documented in the CLUSTER IP ADDRESS CONFIGURATION section of the ACROPOLIS ADVANCED ADMINISTRATION GUIDE. At a high level, the procedures involve some preliminary verifications along with securing some downtime for the cluster (the changes need to be made while the cluster is in a “stopped” state). From there, it is simply a matter of changing the IPMI and hypervisor IP addresses, followed by executing a script called “external_ip_reconfig” which handles changing the IP addresses of the CVMs. Follow that up with some post-change verifications, and your existing cluster should be running successfully on a new network segment!
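A hedged outline of the re-IP flow described above; the cluster must be stopped first and user VMs powered off, and the guide should be consulted before running this on production:

```shell
# Stop cluster services (this is the planned downtime window).
cluster stop
# Interactive script that re-IPs the CVMs; prompts for the new addresses.
external_ip_reconfig
# After the CVMs come back up on the new network, verify and restart:
cluster status
cluster start
```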
The Nutanix cluster in vCenter must be configured according to Nutanix best practices. A quick checklist of the recommended settings:
vSphere HA settings:
- Enable host monitoring.
- Enable admission control and use the percentage-based policy with a value based on the number of nodes in the cluster.
- Set the VM Restart Priority of all Controller VMs to Disabled.
- Set the Host Isolation Response of the cluster to Power Off.
- Set the Host Isolation Response of all Controller VMs to Disabled.
- Set the VM Monitoring for all Controller VMs to Disabled.
- Enable Datastore Heartbeating by clicking "Select only from my preferred datastores" and choosing the Nutanix NFS datastore. If the cluster has only one datastore, add an advanced option named das.ignoreInsufficientHbDatastore with a value of true.
vSphere DRS settings:
- Set the Automation Level on all Controller VMs to Disabled.
- Leave power management disabled.
A Nutanix cluster works with the vDS, and you can use the following guidelines and recommendations to configure the vmkernel and VM interfaces to be part of the vDS:
- Keep the vSwitchNutanix, the vmkernel port (vmk-iscsi-pg), and the Nutanix Controller VM's virtual machine port group (svm-iscsi-pg) configuration intact. It should remain a standard vSwitch and should not be migrated over to the vDS. Migrating the vSwitchNutanix to the vDS causes issues with upgrades and with Controller VM data path communication.
- Only migrate one host to a dvSwitch at a time. After migrating the host to the dvSwitch, confirm that the Controller VM can communicate with all other Controller VMs in the cluster. This ensures that the cluster services running on all Controller VMs continue to function during the migration. In general, one Controller VM can be off the network at a given time while the others continue to provide access to the datastore.
When you replace NIC cards in a Nutanix AHV cluster, it is possible that the new NIC card interfaces are recognised as additional interfaces and are numbered after the existing ones. For example, if the current AHV node has four interfaces (eth0, eth1, eth2 and eth3) and you replace the NIC card, you may see additional interfaces: eth0, eth1, eth2, eth3, eth4 and eth5. Also, if your host was previously connected through the replaced NIC card, you will lose host connectivity. AOS 5.10.10 and AOS 5.15 now include a script to renumber the NIC interfaces correctly. If you are running older versions or are unable to run the script for any reason, you can use the manual method. Please check the Nutanix portal page here and KB 3261 for the exact script and the manual method steps.
Let's say you just upgraded AOS to the latest version and are now receiving this alert in your environment: "Disk space usage for root on Controller VM has exceeded 80%". The Nutanix CVM runs the Nutanix software and serves all of the I/O operations for the hypervisor and all VMs running on that host. Not sure how to bring the root partition on the CVM below 80%, or which document to consult? The following knowledge base articles might help you resolve the alert: KB-7604 and KB-6637.
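To see which CVMs are affected before digging into the KBs, a quick usage check is enough:

```shell
# Check root-partition usage on the local CVM; the alert fires above 80%.
df -h /
# To check every CVM in the cluster at once, Nutanix ships an "allssh"
# helper on the CVMs:
# allssh "df -h /"
```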
What is Beam? Beam is a cost and security optimization SaaS product offering by Nutanix. Beam helps cloud-focused organizations gain visibility into cloud spend across multiple cloud environments. With the Cost Governance feature, Beam provides organizations with deep visibility and rich analytics detailing cloud consumption patterns, along with one-click cost optimization across their cloud environments. The Cost Governance feature within Beam provides the following capabilities:
- Visibility into cloud consumption
- Optimization of cloud consumption
- Control over cloud consumption
The Security Compliance feature supports the following capabilities:
- Visibility into security compliance
- Optimization of security compliance
- Control over security compliance
For more information and the full documentation, check out the Xi Beam user guide. Useful links: Set up Xi Beam, Release notes.
Nutanix Hyper-Converged Infrastructure brings together the power of compute, storage and networking in just one node. There might be situations where you get queries about mixing different components in NX hardware, for example:
- Are there any NIC restrictions?
- Are there any storage restrictions?
- Can I have a hybrid SSD/HDD node and an all-SSD node in the same cluster?
- Are there any restrictions related to the DIMM model and placement?
- Which hypervisors are supported with the different hardware models?
If the above questions are baffling you, the following product document can help clarify the doubts and give more information regarding product mixing restrictions: Product Mixing Restrictions.
In Nutanix hardware, the IPMI (BMC) keeps track of hardware-related events using the Event Log/System Management feature. If there is a hardware event that needs to be dealt with, Prism will create an alert and send you an email if Alert Email Configuration is set up. However, there are other events in IPMI that can also be useful. For example, a power button assertion, a chassis intrusion, shutdown-related events, or session audits (failed login attempts) will be logged in the IPMI event logs and can be forwarded via email using SMTP. If you are interested in this functionality, check out KB 2581 for more information on configuring SMTP in the Nutanix IPMI.
Recently we introduced a couple of changes in LCM. HTTPS is a requirement for many enterprise customers: many of them employ strict firewalls and deep packet filtering that only let certain HTTPS traffic through the external gateway. So today LCM can access the Nutanix portal over HTTPS. (The URL accessed when performing inventory is https://download.nutanix.com/lcm/2.0/.) Nutanix is also transitioning from delivering LCM modules as a payload associated with an LCM release to delivering them as release-independent repository image modules (RIM). This includes both software and firmware modules and is available from LCM 2.3.2. That's great, but how does it affect me? Only if you have blocked HTTP traffic: at the time of this post, we have identified an issue where LCM could incorrectly poll an HTTP endpoint instead of HTTPS. It has been documented in the release notes as well (ENG-310334). https://portal.nutanix.com/page/documents/details?targetId
When VMware releases a new patch for ESXi, you may want to check the Compatibility Matrix on the Portal, but the matrix shows only ESXi versions and update numbers; it doesn't show every patch build number. What to do, and how to tell whether a patch is supported? Let's break the ESXi versions down into the possible options:
- Versions. These are the numbered versions, for example 6.0, 6.5, 6.7 and 7.0. All versions need to be qualified, and qualification is a requirement: they will not be supported if they are not qualified. Check the compatibility matrix on the Nutanix portal to see if a version is supported.
- Updates. These are what you usually see after the "U" in the version name, for example 6.7U1, 6.7U2, 6.7U3. They also will not be supported until they are qualified. Check the compatibility matrix on the Nutanix portal to see if the update is supported.
- Patches. VMware names its patches in several ways, for example: ESXi 6.7 EP 15, ESXi 6.7 P01, ES
If your cluster has no direct access to the internet, you can use the Dark Site LCM bundle, which you can put on a local web server and use as a source for LCM downloads. The web server can be a virtual machine on the same Nutanix cluster that you want to upgrade, or any other machine the cluster has access to. In this topic we will create a web server on a Windows machine using Microsoft IIS (Internet Information Services), so you will not need to download and install any third-party software. Note that this guide is for absolute beginners, so you will not need any prior experience with this technology. First, we need to enable IIS. Type "Server Manager" in the Windows search and launch it. In Server Manager, click Manage in the top right corner and select "Add Roles and Features". Click Next until you get to Server Roles. In the list of server roles, select Web Server and click Next. The default options are sufficient. Click Next
If your cluster has no direct access to the internet, you can use the Dark Site LCM bundle, which you can put on a local web server and use as a source for LCM downloads. The web server can be a machine on the same Nutanix cluster that you want to upgrade, or any other machine the cluster has access to. In this topic we will create a web server on a Linux machine using the Apache web server. Note that this guide is for absolute beginners, so you will not need any prior experience. For this example I have selected CentOS 7 as the Linux distribution. If you want to use another distribution, the commands may be slightly different, so you will have to consult that distribution's documentation; if you use CentOS, you can simply follow this guide. To start, install CentOS selecting the "minimal install" option and log in as root. Alternatively, you can select the Basic Web Server option in the installer's Software Selection. Then, you can skip
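On a CentOS 7 minimal install, the Apache setup boils down to a few commands. A sketch (run as root; the `/var/www/html/release` directory name is just an illustrative choice):

```shell
# Install and start Apache, and have it start on boot.
yum -y install httpd
systemctl enable --now httpd
# Open HTTP (port 80) in the default firewalld configuration.
firewall-cmd --permanent --add-service=http
firewall-cmd --reload
# Create a directory under the web root for the extracted Dark Site bundle.
mkdir -p /var/www/html/release
# ...then extract the LCM Dark Site bundle contents into /var/www/html/release
```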
Syslog, or ‘System Logging Protocol’, is used by routers, switches, access points, servers and, of course, Nutanix. It is used to send events and logs to a remote syslog server that collects, organizes and filters them. In Nutanix, we provide different log modules for the core services that can be enabled separately, and you can configure the required logging level for each of them as well. For example, if you wish to forward just the warning-level log messages for Acropolis, this would be the command: ncli rsyslog-config add-module server-name=<server_name> module-name=ACROPOLIS level=WARNING For a more comprehensive look at the various modules and log levels, check out the syslog server documentation. Bonus: would you like to know who powered off that VM while you were sleeping? Or, in general, WHO did WHAT on WHICH OBJECT, at what TIME, from WHERE, and what was the OUTCOME? You can forward these audit logs to your syslog server as well, from Prism Central. More information on this can be found
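Before attaching a module, the remote server itself has to be registered. A hedged sketch of the full sequence (parameter names are per the ncli rsyslog-config reference; verify them against your AOS version's documentation):

```shell
# 1. Register the remote syslog server (port and protocol are examples).
ncli rsyslog-config add-server name=<server_name> ip-address=<ip> port=514 network-protocol=udp
# 2. Attach a log module to that server at the desired level.
ncli rsyslog-config add-module server-name=<server_name> module-name=ACROPOLIS level=WARNING
```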
Nutanix Move (Move) is a cross-hypervisor mobility solution to move VMs with minimal downtime. Move supports migration from the following sources to targets, the first platform being the source and the second the target. Starting with version 3.0, Move supports Hyper-V, which means that you can now migrate and consolidate your workloads from ESXi, AWS EC2 and Hyper-V onto Nutanix, and perform reverse migration in some cases. For a list of supported OS versions, Hyper-V versions, port requirements, useful commands and caveats, be sure to check KB-6667 Hyper-V: Move | Basic understanding and troubleshooting. For all things Nutanix Move there is the Move User Guide (v3.6). For a Move FAQ, try KB-8070 Nutanix Move - FAQ (Frequently Asked Questions).
In Nutanix clusters running the AHV hypervisor, we can use the LLDP protocol to find more details about the directly connected switches. LLDP needs to be enabled on the switches for the details to be visible on the AHV hosts. From the AHV host:

[root@AHV-HOST ~]# lldpctl
------------------------------------------------------------------------
LLDP neighbors:
------------------------------------------------------------------------
Interface: eth2, via: LLDP, RID: 7478, Time: 35 days, 22:55:59
  Chassis:
    ChassisID: mac 00:XX:XX:2d:b6:ba
    SysName: AHV-HOST
    SysDescr: Network Switch
    TTL: 120
    MgmtIP: xx.xx.xx.xx
    Capability: Bridge, on
    Capability: Router, off
  Port:
    PortID: ifname Ethernet9
    PortDescr: Nutanix-AHV-Node-A
    MFS: 9236
    VLAN: 100, pvid: yes
------------------------------------------------------------------------

You can get the following switch details:
- Management IP address
- MAC address
- I
Phoenix is an ISO-based installer that you can use to perform the following installation tasks on bare-metal hardware, one node at a time:
- Configuring hypervisor settings, virtual switches, and so on after you install a hypervisor on a replacement host boot disk. This option does not require you to include AOS and hypervisor installers in the Phoenix ISO image.
- Installing the Controller VM, which runs AOS. This option requires you to include the AOS installer in the Phoenix ISO image.
- Installing the hypervisor on a new or replacement node. This is an alternative to installing the hypervisor with the manufacturer's ISO, and reduces the two-step procedure of first installing the hypervisor and then installing AOS (via the Phoenix ISO image) to a single step that installs both at once using only the Phoenix ISO image. However, this option requires you to include the hypervisor ISO image and the AOS installer files in the Ph
Companies merge, hosts are inherited and repurposed, administrators come and go and do not necessarily leave things in order after themselves. Or what if you forgot the password and the record of it is not recoverable? Where there's a will, there's a way. I touched the other day on how ipmitool can be accessed from the CVM CLI, but that requires passing authentication details to the IPMI. The host, or rather the hypervisor installed on the host, does not have to do so. Hence, even when the ADMIN account password (or that of any other account with full privileges) is lost, the IPMI is still accessible from the hypervisor CLI, and you can still reset the password, set up an entirely new user, or set a new password for an existing user. The process consists of several steps:
- Log in to the hypervisor on the host (AHV or ESXi).
- List user records using ipmitool and take note of the user ID.
- Set the password for that user ID.
The task is complete. Conveniently, command syntax between AHV and ESXi differs only in the
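The steps above can be sketched with two ipmitool commands, run from the host's shell (the user ID "2" and the channel number "1" are examples; use the ID and channel your own "user list" output shows):

```shell
# List IPMI users on channel 1 and note the ID of the account to reset.
ipmitool user list 1
# Set a new password for the user with that ID (here, ID 2).
ipmitool user set password 2 'NewStrongPassword'
```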
Quick tip of the day: did you know that you can integrate IPMI with Active Directory? Note: the default timeout is set to 0, which results in an HTTP 500 - Internal Server Error message; make sure to update it to 10. Select the User and then select Add Role Group to configure the Role Group (be sure that it matches an existing AD group name). When logging into the IPMI web console, use the following format: username@domain. Should you find yourself with a task too big to handle manually, such as doing the same configuration on a myriad of nodes, there is help available. SMCIPMITool is a utility by Supermicro; you can create a script that goes through all your nodes' IPMI interfaces and joins them to AD. For a complete set of steps, see KB-2860 How to configure IPMI Active Directory Authentication for Supermicro platforms. For the command list and use cases, see the SMCIPMITool User's Guide.
http://runahv.com Check in with Mission Control to get the most out of Invisible Virtualization. Mission Control is a curated set of short videos covering topics from day-0 setup and deployment to migration from other hypervisors and advanced management. Check it out and let us know what you think. If there's content you're interested in, let us know in this thread.
The default name of a CVM has the following format: NTNX-<Block serial number>-<Node position>-CVM This is what you see as the CVM name in Prism. If you want to change the CVM name in your AHV cluster, there is a script for it:
nutanix@cvm:~$ change_cvm_display_name --cvm_ip=<IP of CVM that should be renamed> --cvm_name=<new name>
The script is available in AOS 5.10.10, 5.15 and 5.16, with the following limitations:
- The name must start with "NTNX-" and end with "-CVM"; letters, numbers and "-" are supported in the CVM name.
- The CVM must be able to get a shutdown token so it can be powered off.
- There will be no redirection of storage traffic to other CVMs, so make sure no user VMs are running on the AHV host: put the AHV host into maintenance mode.
- Run the script from a different CVM than the one being renamed, and make sure the SSH session stays intact until the renamed CVM comes back up.
For more