Installation & Configuration
This forum is the best way to get up and running with the Nutanix platform
Booting from SATA CDROM in AOS 5.15.4 LTS (AHV VM) not possible
Hello everybody, I was unable to boot from a SATA CDROM in an AHV VM on different Nutanix clusters running the same AOS 5.15.4 LTS version. Although I explicitly selected to boot from the SATA CDROM, the VM says 'Could not read from cdrom (Code xxx)' and instantly jumps into the 2048 game (what the heck). As soon as I switch to an IDE CDROM, the system boots properly. Is this a bug in AOS 5.15.4 LTS? Thanks, Didi7
Download Prism Central help
I’ve been trying to download the latest version of Prism Central through Prism and it consistently fails. I then downloaded the binary and metadata files, but they also fail to upload through Prism after making varying amounts of progress. I checked that there is at least 10GB of free space in the /home directory - are there any logs that I can look at to troubleshoot further?
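Not part of the original post, but a hedged starting point, assuming SSH access to the Prism Central VM as the nutanix user (the paths below are the usual defaults and may differ by release):
# Check free space where the uploaded bundle is staged
df -h /home
# Watch the Prism service log while retrying the upload; failed uploads are usually recorded here
tail -f /home/nutanix/data/logs/prism_gateway.log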
Nutanix Products and Services Network Port Usage
It can sometimes be confusing which network port(s) a Nutanix product or service uses, and this information is often needed when configuring network security or firewall appliances. For the various Nutanix products and services, a handy list of ports, services, their respective protocols, and a short description of each can be found in the Port Reference documentation on the Portal. This list is conveniently divided into sections corresponding to each Nutanix product or service. Specifically for configuring network firewalls, guidance on the recommended port configuration can be found in the Recommendation on Firewall Ports Config knowledge base article.
Can I use Nutanix Volume Group on DX series equipment?
Nutanix AOS/AHV was installed on the DX equipment, and the iSCSI Data Services IP was also set up. The Volume Group tab and the create Volume Group option are not visible in Prism - Storage - Table. The AOS version is 5.20.x. Is Volume Group not supported on DX equipment? Or do I need another setting after AOS 5.20?
Application consistent VM snapshots in Windows Client VMs not possible
Hello everybody, I noticed that application-consistent VM snapshots of e.g. Windows 10 Pro VMs, with NGT enabled, installed in the VM and verified with 'ncli ngt list', do not work and result in the following alerts: VSS snapshot is not supported for the VM 'TEST-VM', because VSS software is not installed. and: VSS is enabled but VSS software or pre_freeze/post_thaw scripts are not installed on the guest VM(s) TEST-VM protected by TEST-VM. Using a Windows Server 2016 VM with the respective Protection Domain generates application-consistent Nutanix VM snapshots; with Windows 10 VMs it won't work. Verified in 2 different Nutanix clusters, both running AOS 5.15.4 LTS. Nowhere could I find a hint that Windows 10 VMs are not supported. Regards, Didi7
Offline foundation applet not detecting nodes (Lenovo)
Hi friends, we have a 3-node Lenovo SR530 setup with IPMI IPs configured. We are trying to install and set up the cluster remotely (VPN). Here are the IP details: XCC IPs - 172.31.199.x (all 3 nodes in the same subnet); AHV IPs - 172.31.199.x (all 3 AHV IPs in the same subnet as XCC); CVM IPs - 192.168.5.x (all CVMs still on their default IPs). Problem: I have installed Foundation Applet 5.1 and, while running it on my laptop, it is not able to detect the nodes. Any idea what to check? The other way suggested in one of the forums was to execute the cluster create command on one of the CVMs, but all the CVMs have the same IP (192.168.5.2). Should I change the IPs of all CVMs to unique IPs in the same subnet as the AHV and XCC IPs and then execute the cluster create command? How can I find the CVM IP address from the AHV CLI? Foundation Applet error message: Java Web Start 11.311.2.11, Using JRE version 1.8.0_311-b11 Java HotSpot(TM) Client VM, JRE expiration date: 19/2/22 12:00 AM, console.user.home = C:\Users\KishoreSalipalli
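Not from the original thread, but a hedged sketch for the last question, assuming root console or SSH access to an AHV host; each host can normally reach its own CVM over the internal 192.168.5.0/24 link:
# On the AHV host: list VMs; the CVM name typically starts with NTNX- and ends with -CVM
virsh list --all
# Log in to the local CVM over the internal interface
ssh nutanix@192.168.5.254
# On the CVM: show its external (eth0) address
ip addr show eth0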
'Stop Update' feature in LCM
There could be scenarios where an LCM (Life Cycle Management) update operation was initiated and we need to cancel it midway, for example because: 1) the LCM operation is taking more time than the scheduled maintenance window, or 2) the LCM operation was initiated by mistake and we want to cancel it as soon as possible. Starting with LCM 126.96.36.199 and later versions, we can cancel or abort an ongoing LCM update operation using the "Stop Update" feature available on the LCM page in Prism. Whenever this feature is used, LCM sets a cancel intent in the Nutanix cluster, which tells LCM to stop the update at the next safe point. The important point is that the update may not be canceled immediately after the cancel intent is invoked, since LCM stops it only at the next safe point; you will have to wait until the update reaches that point. The safe point is determined by the LCM framework and depends on the phase at which the cancel intent was set. Kindly refer to KB-4872 to know more.
License Manager Guide
Welcome to License Manager. The Nutanix corporate web site includes up-to-date information about AOS software editions. License Manager provides Licensing as a Service (LaaS) by integrating the Nutanix Support portal Licensing page with licensing management and agent software residing on Prism Element and Prism Central clusters. Unlike previous license schemes and workflows that were dependent on specific Nutanix software releases, License Manager is an independent software service residing in your cluster software. You can update it independently of Nutanix software such as AOS, Prism Central, Nutanix Cluster Check, and so on. License Manager provides these features and benefits: Simplified upgrade path - upgradeable through Life Cycle Manager (LCM), which enables Nutanix to regularly introduce licensing features, functions, and fixes; upgrades through LCM help ensure your cluster is running the latest licensing agent logic. Streamlined web console interface for Prism Element and Prism Central.
Windows BSOD after migration to AHV from different platforms or after P2V conversion
This happens because the Windows operating system does not have the appropriate driver (the VirtIO driver) to read the disk that has the operating system installed on it. In a nutshell: 1. Download both the VirtIO and Windows ISOs to the image store in the cluster (you can use Prism "Image Configuration" or Prism Central Explore -> Images -> Add Image). 2. Mount both the VirtIO drivers ISO and the Windows ISO as CDROMs for the affected guest VM. 3. Power on the guest VM booting from the Windows ISO CDROM; you will see an option called "Troubleshoot". 4. Choose that option and go to the command prompt. 5. Get the list of mounted disk drives and navigate to the drive where the VirtIO ISO resides. 6. Load the driver: "drvload vioscsi.inf". 7. Verify the drivers are loaded and the disks are now visible: "wmic logicaldisk get caption". 8. Exit and reboot the guest VM. 9. After you log in, go ahead and install the VirtIO MSI package. For more details please see: https://portal.nutanix.com/#/page/kbs/details?targetId=kA00e000000kAWeCAM
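A condensed sketch of the recovery-console commands from the steps above; the drive letter and folder path for the VirtIO ISO are examples and depend on your VM and the ISO layout:
REM List mounted drives and identify the VirtIO CDROM
wmic logicaldisk get caption
REM Navigate to the folder containing vioscsi.inf for your Windows version (example path)
D:
cd "\Windows Server 2016\amd64"
REM Load the VirtIO SCSI driver so the system disk becomes visible
drvload vioscsi.inf
REM Confirm the disks are now visible, then exit; the VM reboots
wmic logicaldisk get caption
exit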
Hi! We have two Nutanix clusters running ESXi v6.7.0 with a Horizon 7 VDI environment on each. We needed to be able to sync Horizon App Volumes between the clusters via some common storage. The solution we were set up with was a FreeNAS VM on one cluster publishing an NFS share, which was then attached to the hosts in both clusters. We weren't bothered about high availability: it's not an issue if replication of the App Volumes goes down sometimes, as it's only needed when App Volumes are being changed or their assignments to users are changing. After an NCC upgrade, Prism Element now flags this external datastore as unsupported. I'm struggling to find out from Support what the actual magnitude of risk is here to the stability of the clusters (especially when it comes to upgrading firmware, AOS, etc.). Does anyone here have any thoughts on this situation please? Thanks for reading!
Adding Hosts and Storage to SCVMM Manually (SCVMM User Interface)
If you are unable to add hosts and storage to SCVMM by using the utility provided by Nutanix, you can add them through the SCVMM user interface. Procedure: Log in to the SCVMM user interface and click VMs and Services. Right-click All Hosts, select Add Hyper-V Hosts and Clusters, and click Next. Click Browse and select an existing Run As account, or create a new one by clicking Create Run As Account. Click OK and then click Next. The "Specify the search scope for virtual machine host candidates" screen appears. Type the failover cluster name in the Computer names text box, and click Next. Select the failover cluster that you want to add, and click Next. Select the Reassociate this host with this VMM environment check box, and click Next. The "Confirm the settings" screen appears. Click Finish. Register a Nutanix SMB share as a library share in SCVMM by clicking Library and then adding the Nutanix SMB share. Right-click the Library Server
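Not part of the original procedure, but roughly the same flow can be scripted with the SCVMM PowerShell cmdlets; the cluster, host group, Run As account, and share names below are placeholders:
# Run from the Virtual Machine Manager PowerShell module
$runAs = Get-SCRunAsAccount -Name "NutanixHostAdmin"
$hostGroup = Get-SCVMHostGroup -Name "All Hosts"
# Add the failover cluster and re-associate its hosts with this VMM server
Add-SCVMHostCluster -Name "ntnx-hv-cluster.contoso.local" -VMHostGroup $hostGroup -Credential $runAs -Reassociate $true
# Register the Nutanix SMB share to the cluster (path is an example)
$cluster = Get-SCVMHostCluster -Name "ntnx-hv-cluster.contoso.local"
Register-SCStorageFileShare -FileSharePath "\\gogo\smbcontainer" -VMHostCluster $cluster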
Limitations and Guidelines for running Hyper-V on Nutanix clusters
Nutanix clusters running Hyper-V have the following limitations; certain limitations might be attributable to other software or hardware vendors. Guidelines: Hyper-V 2016 clusters and support for Windows Server 2016: VHD Set files (.vhds) are a new shared virtual disk model for guest clusters in Microsoft Server 2016 and are not supported. You can import existing shared .vhdx disks to Windows Server 2016 clusters. New VHDX format sharing is supported; only fixed-size VHDX sharing is supported. Use the PowerShell Add-VMHardDiskDrive command to attach any existing or new VHDX file in shared mode to VMs, for example: Add-VMHardDiskDrive -VMName Node1 -Path \\gogo\smbcontainer\TestDisk\Shared.vhdx -SupportPersistentReservations. Upgrading Hyper-V hypervisor hosts: when upgrading hosts to Hyper-V 2016, 2019 and later versions, the local administrator user name and password are reset to the default administrator name Administrator and password nutanix/4u. Any previous changes to the administrator user name or password are overwritten.
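The shared-disk attach example from the text above, reformatted as a runnable PowerShell line (the VM name and SMB path are the article's example values):
# Attach an existing fixed-size VHDX in shared mode with persistent reservations enabled
Add-VMHardDiskDrive -VMName Node1 -Path \\gogo\smbcontainer\TestDisk\Shared.vhdx -SupportPersistentReservations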
Ports and Protocols Reference Chart
Ports and Protocols: The Ports and Protocols Reference allows you to determine port requirements for multiple Nutanix products and services in a single pane. The document is divided into several sections based on the required ports for each product or service. The Ports and Protocols Reference covers detailed port information (protocol, service description, source, destination, and associated service) for the following products and services. Note: The port information in this document is based on the latest product release; updates to this document are made with major product releases. 1-click Upgrade, AHV, AOS, Calm, Collector Portal, Collector Tool, Disaster Recovery - Leap, Disaster Recovery - Metro Availability (ESXi and Hyper-V), Disaster Recovery - Metro Availability with Leap (AHV), Disaster Recovery - Metro Availability with Leap (AHV) - CCLM, Disaster Recovery - Protection Domain, Era, File Analytics, Files, Files Manager, Karbon Platform Service
Configure AHV IP
We have Lenovo HX (3752) installed with 3 nodes pre-bundled with AHV and AOS. The onsite technician configured IPMI (XCC), and we are now able to log in to the XCC and also to access the AHV console. I wanted to do the following: change the AHV IP address from the console (which commands?); find the AHV version and its default/current IPs from the console; connect/SSH to the CVM and change the CVM IPs; and create a cluster of the 3 nodes.
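Not from the original post, but a hedged sketch of where to look; the commands assume console or SSH access to an AHV host as root, with each unconfigured node's CVM still on its factory-default IP:
# On the AHV host: show the installed AHV build
cat /etc/nutanix-release
# Show the host's current management IP (br0 carries the AHV management address)
ip addr show br0
# Reach the local CVM over the internal interface
ssh nutanix@192.168.5.254
# On the CVM: show its external (eth0) address
ip addr show eth0
# Once every CVM has a unique, reachable external IP, a cluster can be created from any CVM
cluster -s cvm_ip_1,cvm_ip_2,cvm_ip_3 create
Re-IPing the CVMs themselves is normally done through Foundation or the documented IP reconfiguration procedure rather than by hand.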
Microsoft Failover Cluster (MSFT) on Nutanix
With AHV-20190916.189, Nutanix supports directly attached Volume Groups in a guest VM cluster. On AHV clusters, you can create a guest VM cluster by directly attaching a volume group to guest VMs. After you attach a volume group to guest VMs, its vDisks appear as SCSI devices to the guest operating system, and you do not need to set up any in-guest connections when you are creating a guest cluster. If you directly attach volume groups to guest VMs, you can seamlessly share vDisks across VMs in the guest cluster. You can directly attach a volume group to guest VMs to create the following guest clusters: 1. Microsoft Failover Cluster (MSFT) 2. Red Hat Enterprise Linux (RHEL) Cluster. For more details, refer to the following: Guest VM Cluster Configuration (AHV Only), Create Guest VM Clusters by using iSCSI, Acropolis Release Notes.
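Not part of the original post, but a hedged aCLI sketch of direct-attaching a volume group to the nodes of a guest cluster; the VG, container, and VM names and the disk size are placeholders:
# On any CVM: create a volume group and add a vDisk to it
acli vg.create msft-quorum-vg
acli vg.disk_create msft-quorum-vg container=default create_size=50G
# Directly attach the volume group to each guest-cluster VM (the vDisks appear as SCSI devices in the guests)
acli vg.attach_to_vm msft-quorum-vg sqlnode1
acli vg.attach_to_vm msft-quorum-vg sqlnode2
Depending on the AOS version, the volume group may also need to be marked as shared before it can be attached to multiple VMs.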
Register Clustered SQL Server DB Server VM and Database Failed
I have an error at the Discover Cluster step: HTTPConnectionPool(host=192.168.53.58, port=5985): Max retries exceeded with url: /wsman (Caused by ConnectTimeoutError(<requests.packages.urllib3.connection.HTTPConnection object at 0x7fc18c829780>, Connection to 192.168.53.58 timed out. (connect timeout=90))). Everything else is OK, and port 5985 is open between the VLANs. Do you have an idea?
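Not from the original post, but since port 5985 is WinRM, a hedged first check (the IP is the one from the error; run the first command from a machine on the same side of the VLAN boundary as the component doing the discovery):
# Verify TCP reachability of the WinRM port
Test-NetConnection -ComputerName 192.168.53.58 -Port 5985
# On the DB server VM itself: confirm the WinRM service and HTTP listener are configured
winrm quickconfig
winrm enumerate winrm/config/listener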
Karbon 2.3 DVP deployment issues (centralised Prism VPN Site2Site)
Hi there, we are currently deploying (and testing) Karbon as the K8s orchestration platform for all our Nutanix platforms (worldwide), and I have failed installation attempts from Prism Central. As a side note, we are using a Site2Site VPN from the centralised Prism Central, and I can reach the remote K8s VLAN from the distant Nutanix deployment. The reachability tests were done with a small VM belonging to the K8s VLAN of the testing node and via private CIDR (bidirectional tests), using the same encrypted VPN channel. This is what I'm seeing in karbon_core.out (PCVM): 2021-10-13T21:17:12.687Z ssh.go:153: [DEBUG] [k8s_cluster=RGS-PA-K8-STAGING] On 10.20.25.130:22 executing: docker plugin inspect nutanix 2021-10-13T21:17:12.825Z ssh.go:166: [WARN] [k8s_cluster=RGS-PA-K8-STAGING] Run cmd failed: Failed to run command: on host(10.20.25.130:22) cmd(docker plugin inspect nutanix) error: "Process exited with status 1", output: "Error: No such plugin: nutanix\n\n" 2021-10-13T21:17:12.825Z sshutils.go:44: [ERROR] [k8s_clust
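Not part of the original post, but a hedged way to check the same thing the log shows, directly on the affected worker node (the IP comes from the log; the plugin name is the one Karbon queries):
# On the Karbon worker VM (10.20.25.130 in the log): list installed Docker plugins
docker plugin ls
# Inspect the Nutanix volume plugin; the log shows this returning "No such plugin: nutanix",
# i.e. the Docker Volume Plugin was never installed or its installation failed
docker plugin inspect nutanix
# If the plugin is present but disabled, enabling it may help
docker plugin enable nutanix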
Enable RDMA on new nodes
Hey everyone - we have 2 brand new nodes that need to be added to an existing cluster. We enabled RDMA when we ran Foundation on that cluster and now need to add 2 additional nodes. Do we run the script to choose which card to enable RDMA on before we add to the cluster? Just looking for guidance on the correct way to ensure it is enabled on the new nodes as well. Thanks!
Imaging with Foundation 5.0.4 and G5 node fails
Dear community, I want to install three G5 nodes with the Foundation VM 5.0.4, and it fails at the beginning with the error "Failed to connect to the following IPMIs:". Let me describe my setup and my troubleshooting. The nodes are X10DRT-P-G5-NI22. I set the IP for the IPMI on each node and also set the NTP servers on each node. I am trying to image AHV. The Foundation VM is version 5.0.4. All nodes, the DNS server, and the Foundation VM are in the same subnet. I can ping from the Foundation VM to the IPMI, and I can also send IPMI commands to the nodes and get results, so the username and password used are correct and nothing is blocking the traffic. Foundation still fails with the error "Failed to connect to the following IPMIs". The following error appears very often in the logfile: 2021-10-13 14:40:18,125Z Thread-236 detect_node_type.check_for_nutanix:236 DEBUG: Got a pexpect.TIMEOUT exception with process.before = SMC IPMI Tool V2.22.1(Build 190920) - Super Micro Computer, Inc. Press Ctrl+D or "exit" to exit Press "?" or "help" for help Pre
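Not from the original post, but a hedged way to double-check, from the Foundation VM, that IPMI reachability and credentials are good; the IP, user name, and password below are placeholders:
# Basic IPMI credential and reachability check over lanplus
ipmitool -I lanplus -H 192.0.2.10 -U ADMIN -P 'ipmi_password' chassis status
# Show the BMC LAN configuration that the node itself reports (IP, netmask, gateway)
ipmitool -I lanplus -H 192.0.2.10 -U ADMIN -P 'ipmi_password' lan print 1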