Installation & Configuration
This forum is the best way to get up and running with the Nutanix platform
- 1,136 Topics
- 3,092 Replies
Curious about managing or running AHV? Check out Mission Control and get answers!
http://runahv.com Check in with Mission Control to get the most out of Invisible Virtualization. Mission Control is a curated set of short videos covering topics ranging from day 0 setup and deployment to migration from other hypervisors and advanced management. Check it out and let us know what you think. If there’s content you’re interested in, let us know in this thread.
Customized Prism Central name on browser tab
Hi guys, a non-technical question about a customization option that I can’t find anywhere. I have two Prism Central instances used for synchronous replication. To identify them at first sight I have customized the title and colors of the login page. It was a bit disappointing not to find the customized name on the browser tab, or at least in the menu bar title. Even after editing the cluster parameters to give the PC cluster a name (only one VM for now), the name sticks to “Prism Central” on the tab and “Prism” in the menu bar. Considering I’m working with synchronous replication and that all the objects involved have the same names (protection policies, recovery plans, categories), I sometimes find myself working on the wrong Prism Central. It would be easier if the PC name were always visible. Any ideas? Thanks in advance. il_gianK
CVM is not working
Hello all, I managed to configure a 3-node cluster with AHV, and all of the nodes were working at first, but after 2 days one CVM went down and can’t communicate with the others. I have tried to fix it but couldn’t. Both the AHV host and the host IP reply to ping, but the CVM doesn’t. I ran NCC and got the log, but it only reports a duplicate IP; when I check for a duplicate IP, there is none in the configuration. I have attached the NCC report in case you want to see it. Any help will be greatly appreciated. Thank you in advance.
NX-8235-G7 IPMI MAC address discovery
In the process of installing NX-8235-G7 equipment, Foundation cannot perform IPMI MAC address discovery. When the node was powered on it booted into Phoenix.iso, and since the IPMI IP could be set there, installation with Foundation was still possible. Has the IPMI MAC discovery method changed starting with G7 equipment? Also, when the equipment is received and powered on, it boots straight into Phoenix.iso. Do you know why that is? And starting from G7, is the IPMI default password the one indicated on the sticker on the back of the device, rather than ADMIN/ADMIN?
Combining a mixed hypervisor cluster with mixed hardware
We have a 4-node cluster configured as follows: 2 x Lenovo HX1320 (dual 8-core Xeon Silver 4110 CPU @ 2.10 GHz / 192 GB RAM / 7.11 TB storage) with ESXi, and 2 x Lenovo HX5520-C (single 8-core Xeon Silver 4110 CPU @ 2.10 GHz / 64 GB RAM / 35.83 TB) with AHV. We would like to convert the ESXi nodes to AHV. Looking at the knowledge base and the community, it seems doing this would allow all 4 nodes to share the virtual machine load, and because the CPUs are the same we should not have to worry about compatibility issues. Is this a correct assumption? Any issues we should watch for?
How to change GPU mode on AHV Cluster
I have a Nutanix node, a Dell XC740xd with 4 Tesla T4 GPUs installed, running AOS 5.15. It seems my GPU is in passthrough mode, and I would like to switch this to vGPU mode. Does anyone know how to do this?

```
"status": "UNUSED",
"assignable": true,
"vendor": "NVIDIA",
"name": "Tesla T4 compute",
"index": 0,
"max_resolution": "None",
"num_vgpus_allocated": 0,
"pci_address": "0000:86:00.0",
"fraction": 0,
"mode": "PASSTHROUGH_COMPUTE",
"num_virtual_display_heads": 0,
"guest_driver_version": "None",
"frame_buffer_size_mib": 16384,
"device_id": 7864
```
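For what it's worth, the output above can be classified programmatically: the `mode` field tells you whether a card is in passthrough or vGPU mode. The sketch below is based only on the field names shown in that output, not on a documented schema; also note (hedged, from general AHV/NVIDIA behavior) that a GPU typically reports a PASSTHROUGH mode until the NVIDIA vGPU host (GRID) driver is installed on the hypervisor, so switching generally involves installing that host driver per the Nutanix/NVIDIA documentation.

```python
import json

# GPU record as it appears in the post's output; field names are taken
# from that output, not from a documented API schema.
gpu_json = """
{
  "status": "UNUSED",
  "assignable": true,
  "vendor": "NVIDIA",
  "name": "Tesla T4 compute",
  "index": 0,
  "num_vgpus_allocated": 0,
  "pci_address": "0000:86:00.0",
  "mode": "PASSTHROUGH_COMPUTE",
  "frame_buffer_size_mib": 16384
}
"""

def gpu_mode(record: dict) -> str:
    """Classify a GPU record as 'passthrough' or 'vGPU' from its mode field."""
    return "passthrough" if record["mode"].startswith("PASSTHROUGH") else "vGPU"

gpu = json.loads(gpu_json)
print(f"{gpu['name']} @ {gpu['pci_address']}: {gpu_mode(gpu)}")  # -> passthrough
```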
Appendix: Imaging A Node (Phoenix)
Phoenix is an ISO-based installer that you can use to perform the following installation tasks on bare-metal hardware, one node at a time:
- Configuring hypervisor settings, virtual switches, and so on after you install a hypervisor on a replacement host boot disk. This option does not require you to include AOS and hypervisor installers in the Phoenix ISO image.
- Installing the Controller VM, which runs AOS. This option requires you to include the AOS installer in the Phoenix ISO image.
- Installing the hypervisor on a new or replacement node. This is an alternative to installing the hypervisor from the hypervisor manufacturer's ISO, and it reduces the two-step procedure of first installing the hypervisor and then installing AOS (using the Phoenix ISO image) to a single step that installs both at once using only the Phoenix ISO image. However, this option requires you to include the hypervisor ISO image and the AOS installer files in the Phoenix ISO image.
Upgrade to vCenter 7.0
I need to upgrade vCenter to 7.0 in my lab, but I can't find confirmation that it will work with AOS version 18.104.22.168 LTS. Currently vCenter 6.7U3 is installed, and AOS runs on ESXi 6.7U3. I can move the hosts later, once they are published in the compatibility matrix (Dell R640), but I have to upgrade vCenter ASAP. Thanks for the support!
AOS version is different from that of additional nodes when expanding cluster
Hi, I plan to expand the cluster by adding 3 nodes; the model is XC640 and the hypervisor is ESXi 6.7 U2. The cluster’s AOS version is 5.10.6 LTS, but the additional nodes are on 5.16 STS. I cannot add the nodes... Is there anything I can do about the additional nodes? And do you know how to recreate the CVM on the new nodes?
Nutanix Move and Hyper-V
Nutanix Move (Move) is a cross-hypervisor mobility solution to move VMs with minimal downtime. Move supports migration between several source and target platform pairs, with the first platform being the source and the second being the target. Starting with version 3.0, Move supports Hyper-V, which means that you can now migrate and consolidate your workloads from ESXi, AWS EC2, and Hyper-V onto Nutanix, and perform reverse migration in some cases. For a list of supported OS versions, Hyper-V versions, port requirements, useful commands, and caveats, be sure to check KB-6667 Hyper-V: Move | Basic understanding and troubleshooting. For all things Nutanix Move there is the Move User Guide (v3.6). For a Move FAQ, try KB-8070 Nutanix Move - FAQ (Frequently Asked Questions).
How to configure a web server for LCM Dark Site on a Linux machine
If your cluster has no direct access to the internet, you can use the LCM Dark Site bundle, which you can put on a local web server and use as a source for LCM downloads. The web server can be a machine on the same Nutanix cluster that you want to upgrade, or any other machine that the cluster can access. In this topic we will create a web server on a Linux machine using Apache. Note that this guide is for absolute beginners, so you will not need any prior experience. For this example, I have selected CentOS 7 as the Linux distribution. If you want to use another distribution, the commands can be slightly different, so you will have to consult that distribution's documentation. However, if you use CentOS, you can simply follow this guide. So, to start, install CentOS selecting the “minimal install” and log in as root. Alternatively, you can select the Basic Web Server option under Software Selection in the installer. Then, you can skip
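If you just want to sanity-check the extracted bundle layout before (or instead of) setting up a full Apache install, Python's built-in `http.server` can serve a directory over HTTP in a few lines. This is only a convenience sketch for testing, not a replacement for the Apache setup described above; the directory path is a placeholder for wherever you extracted the dark-site bundle.

```python
import functools
import http.server
import threading

def serve_directory(directory: str, port: int = 8080) -> http.server.ThreadingHTTPServer:
    """Serve `directory` over HTTP in a background thread; returns the server."""
    handler = functools.partial(http.server.SimpleHTTPRequestHandler,
                                directory=directory)
    server = http.server.ThreadingHTTPServer(("0.0.0.0", port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Usage would be something like `serve_directory("/path/to/lcm-bundle", port=8080)`, after which LCM's URL source would point at `http://<server-ip>:8080/`. Remember to open the port in the machine's firewall, just as with Apache.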
How to configure a web server for LCM Dark Site on a Windows Server
If your cluster has no direct access to the internet, you can use the LCM Dark Site bundle, which you can put on a local web server and use as a source for LCM downloads. The web server can be a virtual machine on the same Nutanix cluster that you want to upgrade, or any other machine that the cluster can access. In this topic we will create a web server on a Windows machine using Microsoft IIS (Internet Information Services), so you will not need to download and install any third-party software. Note that this guide is for absolute beginners, so you will not need any prior experience with this technology. So, first, we need to enable IIS. Type “Server Manager” in the Windows search and launch it. In Server Manager, click Manage in the top right corner and select “Add Roles and Features”. Click Next until you get to Server Roles. In the list of Server Roles, select Web Server and click Next. The default options are sufficient. Click Next
Nutanix Installation with different network ports speed
Hello everybody, hope you’re all doing well! I want to install Nutanix using Foundation. I have 3 Lenovo nodes and one Cisco 2960X switch with 24 x 1 Gig RJ45 ports and 4 x 1 Gig SFP ports. Unlike Supermicro nodes, Lenovo nodes do not have shared IPMI ports. So, for each Lenovo node, I have to connect one 10 Gig port and one 1 GbE IPMI port to the same Cisco 2960X switch (the nodes' 10 Gig ports to the switch's 1 Gig SFP ports, and the 1 Gig IPMI ports to the 1 Gig RJ45 ports). Please take a look at the picture below… Since the nodes' 10 Gig ports are connected to the switch's 1 Gig ports, I wonder whether this architecture will allow me to install Nutanix and create the cluster without any issues? Thanks in advance.
LCM and HTTP proxy
Recently we have introduced a couple of changes in LCM. HTTPS is a requirement for many enterprise customers: many employ strict firewalls and deep packet inspection that only let certain HTTPS traffic through the external gateway. So today LCM accesses the Nutanix portal over HTTPS. (The URL accessed when performing inventory is https://download.nutanix.com/lcm/2.0/.) Nutanix is also transitioning from delivering LCM modules as a payload tied to an LCM release to delivering them as release-independent repository image modules (RIM). This includes both software and firmware modules and is available from LCM 2.3.2. That’s great! But how does it affect me? Only if you have blocked HTTP traffic. At the time of this post, we have identified an issue where LCM could incorrectly poll an HTTP endpoint instead of HTTPS. It has been documented in the release notes as well (ENG-310334). https://portal.nutanix.com/page/documents/details?targetId
Emailing IPMI Event Logs
In Nutanix hardware, the IPMI (BMC) keeps track of hardware-related events using the Event Log/System Management feature. If there is a hardware event that needs to be dealt with, Prism will create an alert and send you an email if Alert Email Configuration is set up. However, there are other events in IPMI that can still be useful. For example, a power button assertion, a chassis intrusion, shutdown-related events, or session audits (failed login attempts) will be logged by the IPMI event logs and can be forwarded via email using SMTP. If you are interested in this functionality, check out KB 2581 for more information on configuring SMTP in the Nutanix IPMI.
Hybrid cold/hot data ratio non disk size dependent
Hello team, I hope someone can explain to me why, looking at Sizer’s recommendations for hybrid configurations, the SSD/HDD ratio is not size dependent but form-factor dependent. Basically, if you select 2 SSDs and 3.5” HDDs then a minimum 1:1 ratio is possible (2 x SSD + 2 x HDD), but if you have 2 SSDs and 2.5” HDDs then the minimum ratio is 1:2 (2 x SSD + 4 x HDD). Is it because 3.5” SAS or SATA HDDs are usually faster than 2.5”? It’s not about size or hardware vendor; the recommendations seem to change according to disk form factor only.
Upgrade AHV on AWS Cloud instance
Hi, we use an AWS cloud instance as a remote data protection ‘site’. The production cluster has been successfully updated to AOS 22.214.171.124 (LTS), but this needed an AHV update to keep everything compatible. Is it possible / desirable to upgrade AHV on the AWS cloud instance? IIRC it’s not supported to do OS upgrades (Microsoft OS anyway) on cloud infrastructure; it’s generally a migration to a new server. Is it possible to put a new AWS remote Nutanix instance in place and connect the backup data? Any advice on how to keep the whole Nutanix environment up to date is appreciated.
Nutanix AHV cluster - Configuration steps while replacing a NIC
When you replace NIC cards in a Nutanix AHV cluster, it is possible that the new NIC card interfaces are recognised as additional interfaces and numbered after the existing ones. For example, if the current AHV node has 4 interfaces (eth0, eth1, eth2 and eth3) and you replace the NIC card, you may see additional interfaces: eth0 through eth5. Also, if your host was previously connected through the replaced NIC card, you will lose host connectivity. AOS 5.10.10 and AOS 5.15 now include a script to renumber the NIC interfaces correctly. If you are running older versions or are unable to run the script for any reason, you can use the manual method. Please check the Nutanix portal page here and KB3261 for the exact script and the manual method steps.
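For orientation, on AHV hosts (which are CentOS based) the manual method has historically come down to pinning each interface name to its NIC's MAC address in a udev rules file and rebooting the host. The fragment below uses the classic CentOS convention purely as an illustration; the MAC addresses are placeholders, and you should follow KB3261 for the exact, supported steps on your AOS version.

```
# /etc/udev/rules.d/70-persistent-net.rules  (illustrative only; MACs are placeholders)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="aa:bb:cc:dd:ee:01", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="aa:bb:cc:dd:ee:02", NAME="eth1"
```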
Configuring VMware Distributed Switch (vDS) on the Nutanix Platform
A Nutanix cluster works with the vDS, and you can use the following guidelines and recommendations to configure the vmk and VM interfaces to be part of the vDS. Nutanix recommendations for implementation:
- Keep the vSwitchNutanix, the vmkernel port (vmk-iscsi-pg), and the Nutanix Controller VM's virtual machine port group (svm-iscsi-pg) configuration intact. It should remain a standard vSwitch and should not be migrated to the vDS. Migrating vSwitchNutanix to the vDS causes issues with upgrades and with Controller VM data path communication.
- Only migrate one host to a dvSwitch at a time. After migrating the host to the dvSwitch, confirm that its Controller VM can communicate with all other Controller VMs in the cluster. This ensures that the cluster services running on all Controller VMs continue to function during the migration. In general, one Controller VM can be off the network at a given time while the others continue to provide access to the datastore.
vSphere cluster settings for Nutanix cluster
The Nutanix cluster in vCenter must be configured according to Nutanix best practices. A quick checklist of the recommended settings:
vSphere HA settings
- Enable host monitoring.
- Enable admission control and use the percentage-based policy with a value based on the number of nodes in the cluster.
- Set the VM Restart Priority of all Controller VMs to Disabled.
- Set the Host Isolation Response of the cluster to Power Off.
- Set the Host Isolation Response of all Controller VMs to Disabled.
- Set VM Monitoring for all Controller VMs to Disabled.
- Enable Datastore Heartbeating by clicking Select only from my preferred datastores and choosing the Nutanix NFS datastore. If the cluster has only one datastore, add an advanced option named das.ignoreInsufficientHbDatastore with a value of true.
vSphere DRS settings
- Set the Automation Level on all Controller VMs to Disabled.
- Leave power management disabled.
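As a worked example of the percentage-based admission control value above: reserving the equivalent of one node's resources in an N-node cluster means reserving roughly 100/N percent of CPU and memory. The small sketch below makes that arithmetic explicit; the rounding-up choice is mine, so consult the vSphere availability documentation for the exact policy behavior in your version.

```python
import math

def admission_control_percentage(num_nodes: int,
                                 host_failures_to_tolerate: int = 1) -> int:
    """CPU/memory percentage to reserve so the remaining hosts can restart
    the VMs of `host_failures_to_tolerate` failed hosts (rounded up)."""
    if num_nodes <= host_failures_to_tolerate:
        raise ValueError("cluster must have more nodes than tolerated failures")
    return math.ceil(100 * host_failures_to_tolerate / num_nodes)

# A 4-node cluster tolerating one host failure reserves 25% CPU and memory.
print(admission_control_percentage(4))   # -> 25
print(admission_control_percentage(8))   # -> 13
```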