Installation & Configuration
This forum is the best way to get up and running with the Nutanix platform
By default, Life Cycle Manager (LCM) automatically fetches updates from a pre-configured URL. If LCM cannot access that URL, you can configure LCM to fetch updates from a local source to upgrade Calm and Epsilon. Perform the following procedure to upgrade Calm and Epsilon at a dark site. Ensure that the LCM version is 2.3 or later.

Procedure: Set up a local web server that is reachable from your Calm VMs. You will use this server to host the LCM repository. Use the following steps to set up your LCM repository:
1. From a device that has public Internet access, go to the Nutanix portal and select Downloads > Calm.
2. Next to the LCM Dark Site Calm on ESX Bundle entry, click Download to download the latest LCM framework tar file, lcm_dark_site_bundle_version.tgz.
3. Transfer the framework tar file to your local web server.
4. Extract the framework tar file into the release directory. The following files are extracted into the release directory: master_manif
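Any static-file web server will do for the dark-site repository. As a minimal sketch (the directory name and port below are placeholder assumptions, not Nutanix requirements), Python's built-in http.server module can serve the extracted release directory:

```python
import functools
import http.server
import threading

def make_repo_server(directory, port=0):
    """Build a static-file HTTP server rooted at `directory`.

    port=0 lets the OS pick a free port; read the chosen port from
    server.server_address[1].
    """
    handler = functools.partial(
        http.server.SimpleHTTPRequestHandler, directory=directory
    )
    return http.server.ThreadingHTTPServer(("0.0.0.0", port), handler)

def start_repo_server(directory, port=0):
    """Start the server on a background thread and return it."""
    server = make_repo_server(directory, port)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

# Usage sketch (directory name and port are illustrative):
#   server = start_repo_server("release", port=8080)
#   ... point the LCM dark-site URL at http://<web-server-ip>:8080/ ...
#   server.shutdown()
```

This is only a convenience for labs or small sites; a hardened web server is preferable for anything long-lived.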
Calm on ESXi 3-Tier (Calm VM) is a standalone VM that you can deploy on ESXi hypervisors to leverage Calm functionality without Nutanix infrastructure. You can deploy Calm using the image available on the Nutanix Support Portal Downloads page and start managing your applications across a variety of cloud platforms. This eliminates the need to buy Nutanix infrastructure to use Calm.
- Supported ESXi versions: 6.0.0 and 6.7 (vCenter version 6.7)
- Scale-out of the Calm VM is not supported.
- Calm version is 188.8.131.52.
- Policy engine enablement is not supported.

There are two available options: deploying Calm using the vSphere web client, and deploying Calm using the vSphere CLI.

Deploying Calm using the vSphere web client:
1. Log on to the vSphere web client.
2. Click the cluster on which you want to deploy the Calm VM.
3. Click Actions > Deploy OVF Template. The Deploy OVF Template window appears.
4. In the Deploy OVF Template window, do the following: Click the Local File option to browse and uplo
Hello everybody,

I was unable to boot from a SATA CD-ROM in an AHV VM on different Nutanix clusters with the same AOS 5.15.4 LTS version. Although I explicitly selected to boot from the SATA CD-ROM, the VM says 'Could not read from cdrom (Code xxx)' and instantly jumps into the 2048 game (what the heck). As soon as I switch to an IDE CD-ROM, the system boots properly. Is this a bug in AOS 5.15.4 LTS?

Thanks
Didi7
I’ve been trying to download the latest version of Prism Central through Prism, and the download consistently fails. I then downloaded the binary and metadata file manually, but they also continue to fail when uploaded through Prism, after making varying amounts of progress. I checked that there is at least 10 GB of free space in the /home directory. Are there any logs that I can look at to troubleshoot further?
It can sometimes be confusing to determine which network port(s) a Nutanix product or service uses, and this information is often needed when configuring network security or firewall appliances. For the various Nutanix products and services, a handy list of ports, services, their respective protocols, and a short description of each can be found in the Port Reference documentation on the Portal. The list is conveniently divided into sections corresponding to each Nutanix product or service. Specifically regarding the configuration of network firewalls, recommendations for port configuration can be found in the Recommendation on Firewall Ports Config knowledge base article.
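When validating a firewall change against the port reference, a quick TCP reachability probe removes guesswork. A minimal sketch in Python (the host value is a placeholder; 9440 is the standard Prism web console port):

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and unreachable hosts.
        return False

# Example: check the Prism web console port on a cluster VIP (address assumed):
#   port_is_open("10.0.0.50", 9440)
```

Note this only proves TCP connectivity; it does not verify UDP ports or that the expected service is listening.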
We installed Nutanix AOS/AHV on DX equipment and also set up the iSCSI Data Services IP. However, the Volume Group tab and the Create Volume Group option are not visible under Prism > Storage > Table. The AOS version is 5.20.x. Is Volume Group not supported on DX equipment, or is another setting required after AOS 5.20?
Hello everybody, I noticed that application-consistent VM snapshots of, for example, Windows 10 Pro VMs (with NGT enabled, installed in the VM, and verified with 'ncli ngt list') do not work and result in the following alerts: "VSS snapshot is not supported for the VM 'TEST-VM', because VSS software is not installed." and "VSS is enabled but VSS software or pre_freeze/post_thaw scripts are not installed on the guest VM(s) TEST-VM protected by TEST-VM." Using a Windows Server 2016 VM with the respective Protection Domain generates application-consistent Nutanix VM snapshots; with Windows 10 VMs it won't work. Verified in 2 different Nutanix clusters, both running AOS 5.15.4 LTS. Nowhere could I find a hint that Windows 10 VMs are not supported. Regards, Didi7
Hi friends,

We have a 3-node Lenovo SR530 with IPMI IPs set. We are trying to install and set up the cluster remotely (VPN). The IP details are:
- XCC IPs: 172.31.199.x (all 3 nodes on the same subnet)
- AHV IPs: 172.31.199.x (all 3 AHV IPs on the same subnet as XCC)
- CVM IPs: 192.168.5.x (all CVMs with their default IPs)

Problem: I have installed Foundation Applet 5.1, and while running it on my laptop, it is not able to detect the nodes. Any idea what to check?

The other way suggested in one of the forums was to execute the cluster create command on one of the CVMs, but all the CVMs have the same IP (192.168.5.2). Should I change the IPs of all CVMs to unique IPs matching the AHV and XCC subnet and then execute the cluster create command? How can I know the CVM IP address from the AHV CLI?

Foundation Applet error message:
Java Web Start 11.311.2.11
Using JRE version 1.8.0_311-b11 Java HotSpot(TM) Client VM
JRE expiration date: 19/2/22 12:00 AM
console.user.home = C:\Users\KishoreSalipalli-------------
There could be scenarios where an LCM (Life Cycle Management) update operation was initiated and then needs to be cancelled partway through, for example because:
1) The LCM operation is taking more time than the scheduled maintenance window.
2) The LCM operation was initiated by mistake, so we want to cancel it as soon as possible.

Starting with LCM 184.108.40.206 and later versions, you can cancel or abort an ongoing LCM update operation using the "Stop Update" feature available on the LCM page in Prism. When this feature is used, LCM sets a cancel intent in the Nutanix cluster, which tells LCM to stop the update at the next safe point. Importantly, the update may not be cancelled immediately after the cancel intent is invoked, since LCM stops it only at the next safe point; you will have to wait until the update reaches that point. The safe point is determined by the LCM framework and depends on the phase at which the cancel intent was set. Kindly refer to KB-4872 to know m
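The "stop at the next safe point" behaviour is a general cooperative-cancellation pattern, not something unique to LCM. A hedged sketch of the idea (the phase mechanics below are illustrative, not LCM internals): the orchestrator never interrupts a phase in flight; it only checks the cancel intent at the boundary between phases.

```python
import threading

# Shared cancel intent, analogous to the intent "Stop Update" sets.
cancel_intent = threading.Event()

def run_update(phases):
    """Run update phases, honouring the cancel intent only between phases.

    A phase that has started always runs to completion; cancellation takes
    effect at the next safe point (the phase boundary). Returns the list of
    completed phase results and a final status string.
    """
    completed = []
    for phase in phases:
        if cancel_intent.is_set():
            # Safe point reached with a pending cancel: stop here.
            return completed, "cancelled"
        completed.append(phase())
    return completed, "finished"
```

This is why the cancel appears delayed in practice: a long-running phase (for example, a firmware flash) must finish before the intent is honoured.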
Welcome to License Manager. The Nutanix corporate web site includes up-to-date information about AOS software editions. License Manager provides Licensing as a Service (LaaS) by integrating the Nutanix Support portal Licensing page with licensing management and agent software residing on Prism Element and Prism Central clusters. Unlike previous licensing schemes and workflows that were dependent on specific Nutanix software releases, License Manager is an independent software service residing in your cluster software. You can update it independently of Nutanix software such as AOS, Prism Central, Nutanix Cluster Check, and so on.

License Manager provides these features and benefits:
- Simplified upgrade path. Upgradeable through Life Cycle Manager (LCM), which enables Nutanix to regularly introduce licensing features, functions, and fixes. Upgrades through LCM help ensure your cluster is running the latest licensing agent logic.
- Streamlined web console interface for Prism Element and Pris
This happens because the Windows operating system does not have the appropriate driver (the VirtIO driver) to read the disk that the operating system is installed on. In a nutshell:
1. Download both the VirtIO and Windows ISOs to the image store in the cluster (you can use Prism "Image Configuration" or Prism Central Explore > Images > Add Image).
2. Mount both the VirtIO drivers ISO and the Windows ISO as CD-ROMs for the affected guest VM.
3. Power on the guest VM, booting from the Windows ISO CD-ROM; you will see an option called "Troubleshoot".
4. Choose that option and go to the command prompt.
5. Get the list of mounted disk drives and navigate to the drive where the VirtIO ISO resides.
6. Load the driver: "drvload vioscsi.inf".
7. Confirm that the drives are now visible: "wmic logicaldisk get caption".
8. Exit and reboot the guest VM.
9. After you log in, go ahead and install the VirtIO MSI package.

For more details, please see: https://portal.nutanix.com/#/page/kbs/details?targetId=kA00e000000kAWeCAM
Hi! We have two Nutanix clusters running ESXi v6.7.0 with a Horizon 7 VDI environment on each. We needed to be able to sync Horizon App Volumes between the clusters via some common storage. The solution we were set up with was a FreeNAS VM on one cluster publishing an NFS share, which was then attached to the hosts in both clusters. We weren't bothered about high availability, as it's not an issue if the replication of the App Volumes goes down sometimes; it's only needed when App Volumes are being changed or their assignments to users are changing. After an NCC upgrade, Prism Element now flags this external datastore as unsupported. I'm struggling to find out from Support what the actual magnitude of risk is here to the stability of the clusters (especially when it comes to upgrading firmware, AOS, etc.). Does anyone here have any thoughts on this situation, please? Thanks for reading!
Hello friends, how are you? I am currently trying to Foundation a 3-node Nutanix environment, but after the Foundation process begins, it halts at the "waiting for the installer to boot" stage with a "fatal" error warning. I don't know what is causing the error; can anyone tell me something? I have attached screenshots of the errors and the process.
If you are unable to add hosts and storage to SCVMM by using the utility provided by Nutanix, you can add them through the SCVMM user interface.

Procedure:
1. Log in to the SCVMM user interface and click VMs and Services.
2. Right-click All Hosts, select Add Hyper-V Hosts and Clusters, and click Next.
3. Click Browse and select an existing Run As Account, or create a new one by clicking Create Run As Account. Click OK and then click Next. The Specify the search scope for virtual machine host candidates screen appears.
4. Type the failover cluster name in the Computer names text box, and click Next.
5. Select the failover cluster that you want to add, and click Next.
6. Select the Reassociate this host with this VMM environment check box, and click Next. The Confirm the settings screen appears.
7. Click Finish.
8. Register a Nutanix SMB share as a library share in SCVMM by clicking Library and then adding the Nutanix SMB share.
9. Right-click the Library Server
Nutanix clusters running Hyper-V have the following limitations. Certain limitations might be attributable to other software or hardware vendors.

Guidelines for Hyper-V 2016 clusters and support for Windows Server 2016:
- VHD Set files (.vhds), the new shared virtual disk model for guest clusters in Windows Server 2016, are not supported. You can import existing shared .vhdx disks to Windows Server 2016 clusters.
- New VHDX format sharing is supported; only fixed-size VHDX sharing is supported. Use the PowerShell Add-VMHardDiskDrive command to attach any existing or new VHDX file in shared mode to VMs. For example: Add-VMHardDiskDrive -VMName Node1 -Path \\gogo\smbcontainer\TestDisk\Shared.vhdx -SupportPersistentReservations

Upgrading Hyper-V Hypervisor Hosts: When upgrading hosts to Hyper-V 2016, 2019, and later versions, the local administrator user name and password are reset to the default administrator name Administrator and password nutanix/4u. Any previous changes to the admin
Ports and Protocols

The Ports and Protocols Reference allows you to determine port requirements for multiple Nutanix products and services in a single pane. The document is divided into several sections based on the required ports for each product or service, and covers detailed port information (such as protocol, service description, source, destination, and associated service) for the products and services listed below. Note: The port information in this document is based on the latest product release; updates to the document are made with major releases of products.
- 1-click Upgrade
- AHV
- AOS
- Calm
- Collector Portal
- Collector Tool
- Disaster Recovery - Leap
- Disaster Recovery - Metro Availability (ESXi and Hyper-V)
- Disaster Recovery - Metro Availability with Leap (AHV)
- Disaster Recovery - Metro Availability with Leap (AHV) - CCLM
- Disaster Recovery - Protection Domain
- Era
- File Analytics
- Files
- Files Manager
- Karbon Platform Service
We have a Lenovo HX (3752) installed with 3 nodes, pre-bundled with AHV and AOS. The onsite technician configured IPMI (XCC), and we are now able to log in to the XCC and also access the console (AHV). I wanted to do the following:
1. Change the AHV IP address from the console (commands)?
2. Find the AHV version and its default/current IPs from the console?
3. Connect/SSH to the CVM and change the CVM IPs?
4. Create a cluster of the 3 nodes?
With AHV-20190916.189, Nutanix supports directly attached volume groups in a guest VM cluster. On AHV clusters, you can create a guest VM cluster by directly attaching a volume group to guest VMs. After you attach a volume group to guest VMs, vDisks appear as SCSI devices to the guest operating system, and you do not need to set up any in-guest connections when creating a guest cluster. If you directly attach volume groups to guest VMs, you can seamlessly share vDisks across VMs in the guest cluster. You can directly attach a volume group to guest VMs to create the following guest clusters:
1. Microsoft Failover Cluster (MSFT)
2. Red Hat Enterprise Linux (RHEL) Cluster

For more details, refer to the following:
- Guest VM Cluster Configuration (AHV Only)
- Create Guest VM Clusters by using iSCSI
- Acropolis Release Notes