How It Works
Have questions about how the Nutanix Platform works? Looking to get started? Start here!
Below are new knowledge base articles published in the week of March 13-19, 2022.

- KB 12102 - NCC Health Check: power_supply_issues_check
- KB 12508 - Nutanix Cloud Clusters (NC2) on AWS - Hibernating slowness when a CVM is unresponsive
- KB 12733 - Nutanix cluster nodes belonging to different vSphere clusters
- KB 12812 - Failed to create an application-consistent snapshot on ESXi with VssSyncStart operation failed: iDispatch error #8723 (0x80042413)
- KB 12843 - How to view all support cases in the Nutanix portal account
- KB 12867 - Karbon / CSI: Error while trying to create a pod that uses a PVC bound to Nutanix Files
- KB 12886 - How to change Prism Central (PC) cluster name

Note: You may need to log in to the Support Portal to view some of these articles.
We keep our test cluster up to date with the latest STS versions. Unfortunately, by doing so, I cannot upgrade from 22.214.171.124 to 6.1, as that path is not supported; upgrading from 126.96.36.199 is supported. I have two questions. First, when will I be able to upgrade to 6.1 from 188.8.131.52? Second, should I lag a version or two behind on the STS track to make sure I am not blocked from taking a major upgrade? Thanks!
Hi. We have 2 sites with fast links between them and 1 Nutanix cluster on each site. All the hosts are running the ESXi 6.7 hypervisor. We use Metro Availability, so we have 1 ESXi cluster stretched across both sites. Currently, Prism Central runs on a VM located on hosts in the primary site, in a Metro Availability-enabled container/datastore. We are looking into scaling out our Prism Central installation from 1 node to 3 nodes. Ideally, I was thinking we could spread the Prism Central nodes across the two sites, perhaps by having 1 node on site 1 and 2 nodes on site 2. However, the Prism Central scale-out manual states the following: "All scale out Prism Central VMs must run on the same cluster. For example, running two VMs in cluster_1 and one VM in cluster_2 is not supported". Is this referring to one ESXi cluster or one Nutanix cluster? I just want to be sure. Thanks in advance.
Below are new knowledge base articles published in the week of March 6-12, 2022.

- KB 12603 - How to update hosting Prism Element and Prism Central details post Re-IP procedure
- KB 12651 - Login to PC UI fails with "Server is not reachable"
- KB 12717 - VIP (Virtual IP) and Prism Gateway (PGW) may become unavailable for maximum 30 minutes when Prism leader node is rebooted or removed from the cluster with 8-core nodes
- KB 12790 - ESXi - nutanix_nfs_plugin - Established VAAI session with NFS server 192.168.5.2 datastore
- KB 12800 - Era - Log Catchup activity fails after restore operations on PostgreSQL DBs
- KB 12820 - Alert - A1120 - CuratorJobRunningTooLong
- KB 12822 - OVS process on AHV can consume a lot of CPU in certain conditions
- KB 12833 - Flow Microsegmentation - Service Chain Integration
- KB 12840 - Nutanix Files - Share Creation failures may lead to existing volume group(s) deletion

Note: You may need to log in to the Support Portal to view some of these articles.
Below are new knowledge base articles published in the week of February 27-March 5, 2022.

- KB 10834 - Alert - A130348 - VmInPausedState
- KB 12394 - Exit Maintenance mode error "Client does not have maintenance authority over host"
- KB 12631 - Set vpxd.stats.maxQueryMetrics in vCenter to get all cluster metrics
- KB 12776 - NX Hardware: Node hung at DXE--BIOS PCI Bus Enumeration
- KB 12789 - Era - Linux operating system pre-requisites (/tmp and /dev/shm directories)
- KB 12793 - Foundation - Cluster create failure for new HPE DX nodes and AHV hypervisor
- KB 12795 - Windows VSS snapshot failed with VSS shadow copy Service error: The security ID structure is invalid
- KB 12809 - Support during a critical upcoming upgrade/maintenance

Note: You may need to log in to the Support Portal to view some of these articles.
If you’ve ever encountered an alert like the one referenced below, you might have wondered, “What does this alert mean, and what do I do next?”

Reference alert: 1 DIMM RAS event found for P1-DIMMA1(Serial:XXXXXXXX) by Samsung on host x.x.x.x in last 24 hours. Installed BIOS version is PB42.602

The answer to the first question is easy. RAS stands for Reliability, Availability, and Serviceability; it is an advanced feature enabled in the server’s BIOS to proactively detect and alert on failing memory regions. The second answer is a bit trickier, but don’t worry, we’ve got you covered: KB-11794 goes into detail about how to diagnose and resolve a RAS event, and KB-7503 covers DIMM error handling guidance for G6, G7, and G8 platforms.
Yes! You heard it right: hardware dispatches are now zero-touch, which helps minimize delays and improves the overall dispatch experience. To learn more, please refer to KB 12642. Please make sure you have Pulse enabled in order to use this feature. Enjoy dispatching!
Below are new knowledge base articles published in the week of February 20-26, 2022.

- KB 10560 - Alert - A130351 - VolumeGroupProtectionFailed
- KB 10571 - Alert - A130349 - ConsistencyGroupVmConflicts
- KB 10572 - Alert - A130350 - ConsistencyGroupVgConflicts
- KB 10573 - Alert - A130352 - VolumeGroupProtectionMightFailPostRecovery
- KB 10754 - Alert - A130355 - VolumeGroupRecoveryPointReplicationFailed
- KB 11308 - Alert - A130357 - VolumeGroupProtectionFailedOnPC
- KB 11413 - Alert - A130358 - ConsistencyGroupWithStaleEntities
- KB 11744 - Alert - A130361 - PartialVolumeGroupRecoveryPoint
- KB 11745 - Alert - A130362 - VolumeGroupReplicationTimeExceedsRpo
- KB 12676 - Cloud Connect: Replicating snapshots from Cloud CVM to local On-prem cluster may fail and cerebro may enter crash loop
- KB 12767 - Windows VM may fail to boot with INACCESSIBLE_BOOT_DEVICE (0x7B) error if MPIO is configured
- KB 12773 - The Waste Electrical and Electronic Equipment Directive (WEEE) handling

Note: You may need to log in to the Support Portal to view some of these articles.
Have you ever wondered whether there is a KB that contains all of the Nutanix starter commands for basic troubleshooting? Well, look no further: KB-11619 contains the basic commands to run when starting the initial triage of an issue in your Nutanix cluster. Enjoy troubleshooting!
Below are new knowledge base articles published in the week of February 13-19, 2022.

- KB 10581 - How to update email address for the Nutanix Portal access
- KB 12044 - NCC Health Check: fanout_secure_port_connection_pe_to_pc_check
- KB 12745 - Nutanix Files - Files scale out fails with "FM Files Scale Up Task Failed at: Create platform memory hot scaleup task."
- KB 12747 - Failed To Snapshot Entities Alert due to stale cached vm information
- KB 12754 - LCM: BIOS-Redfish update fails with "Failed to update BIOS. Status: 200"
- KB 12755 - Nutanix Move - Migration task is stuck at "In queue for Memory"

Note: You may need to log in to the Support Portal to view some of these articles.
AOS 5.18 introduced a feature known as the Recycle Bin. It is enabled by default and holds deleted VM files for 24 hours so they can be recovered, unless the cluster's free storage space reaches critical thresholds. This feature was introduced to simplify the recovery procedure for accidentally deleted storage entities (virtual machines, volume groups, or individual disks). You can disable and enable the Recycle Bin or clear its contents, but to restore an entity, you MUST contact Nutanix Support. To learn more about the limitations and how to manage the Recycle Bin, click here.
New to Nutanix? Want to learn more about all of the Nutanix products but don't know where to find these educational resources? Don't worry - we have got you covered! Check out the following Knowledge Base article - KB 12730, which has a list of great resources that can help you understand Nutanix so that your experience with us is a great one!
You might have noticed that AHV hypervisor versions have names starting with a year, such as 2017 and 2019 - for example, AHV-20170830.434 and AHV-20190916.231. The difference between them is that the versions starting with 2017 are built on a CentOS 6 image, while those starting with 2019 are based on CentOS 7. It is very important to note that all 2019 AHV versions are only compatible with AOS 5.16 and newer. This means that the current LTS AOS versions 5.10.x and 5.15.x do not support AHV 2019; AHV 2019 is currently supported only with STS AOS versions. The cluster will not let you upgrade AHV to a 2019 version if you are below AOS 5.16. It is, however, possible to manually image a node with the AHV 2019 ISO and install an earlier AOS on it, or to upload a non-default AHV during the Foundation procedure together with an earlier AOS. Such configurations are unsupported. If you have already done that, you will have to move to an STS AOS version to return to a supported configuration.
Below are new knowledge base articles published in the week of February 6-12, 2022.

- KB 12655 - AHV "firstboot" script fails with "could not find any uplinks to foundation" due to NIC auto-negotiation taking too long
- KB 12714 - Nutanix Files: NFS write errors due to duplicate hostnames
- KB 12724 - ERA - Unable to authorise an existing database server for clone operation in Era
- KB 12727 - Enabling RDMA fails because cpuidle states do not match. Error message "Failed to verify that states are disabled on host"
- KB 12730 - New To Nutanix - Educational Resources
- KB 12732 - Nutanix Files: How to create a TLD with a dedicated Volume Group
- KB 12737 - Nutanix Files: CVE-2021-44142 does not impact Nutanix Files

Note: You may need to log in to the Support Portal to view some of these articles.
I am trying to find configuration information for a Nutanix 3-node chassis, model # NX-3360-G5. The only documentation I have been able to obtain pertains to 2- or 4-node chassis. I presume the drive slots designated as node “D” are simply filled with blanking panels, but would like some confirmation of that. Additionally, I would like to know the replacement model number and where it is available. Thank you.
There are scenarios where a system administrator or engineer managing a Nutanix cluster wants to perform some action - shutdown, power-on, delete, take a snapshot, clone, etc. - in bulk, on multiple VMs at once, selected by some matching criteria: say, a common name prefix or suffix, some pattern in the names, or all powered-on UVMs. One way is to go and perform that action from Prism for each of those VMs one by one, which is very tedious when, say, we have 20 VMs on which we want to perform the same action, such as shutting them down. What can we do? In such scenarios we can use some basic shell scripting along with aCLI (the Acropolis command-line interface). aCLI is a command-line tool for managing hosts, networks, snapshots, UVMs, and more on a Nutanix cluster. Here is a document which explains how to use aCLI: https://portal.nutanix.com/page/documents/details?targetId=Command-Ref-AOS-v5_17%3Aman-acli-c.html. So below are some
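As a minimal sketch of that approach (the `web-` prefix is a hypothetical example, and the assumed `acli vm.list` output layout - a header line followed by one VM per line, name in the first column - should be verified on your cluster):

```shell
# Sketch: bulk-shutdown all VMs whose names start with a given prefix, via aCLI.
# Run from a CVM. Assumes `acli vm.list` prints a header line followed by
# one VM per line, with the VM name in the first column.

# Extract matching VM names from `acli vm.list` output supplied on stdin.
filter_vms() {
  local prefix="$1"
  awk -v p="$prefix" 'NR > 1 && index($1, p) == 1 { print $1 }'
}

# On a real cluster you would then run (commented out so the sketch is self-contained):
# acli vm.list | filter_vms "web-" | while read -r vm; do
#   acli vm.shutdown "$vm"   # graceful guest shutdown; `acli vm.off` forces power-off
# done
```

The same `filter_vms` helper works for any of the bulk actions mentioned above (snapshot, clone, delete) by swapping the command inside the loop.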
Expanding a VM disk is totally available in the Prism UI, but I find it impossible with the API. In the REST API, a PUT to /virtual_disks/ with “disk_capacity_in_bytes” returns 200, but the size does not change. In ncli, `vdisk edit name=<> max-capacity=<>` runs, yet likewise nothing changes. Since the VM disk of this Linux VM doesn’t belong to any volume group, acli is not an option as far as I can see. And via the VM commands, I can only create a new disk or clone a disk, not expand one, which is what I want. I really need an API for this job. Any ideas? Thanks!
I have one RedHat VM whose disk I am trying to extend using the API (v2). The API call is successful, but the VM's disk is not extended. In Recent Tasks it shows "VM Update" but nothing happens; when I do the same from the GUI, it shows "VM Disk Update" and the disk is extended. Can anyone please help me with what I am doing wrong, or whether I am using the wrong API call? The input details are already provided. API call: PUT https://192.168.1.2:9440/PrismGateway/services/rest/v2.0/virtual_disks/ (the cross mark is my API call and the check mark is the GUI change, where the disk extends, but not from the API call). Thank you.
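One commonly suggested alternative to the per-disk `/virtual_disks/` PUT is the VM-scoped v2 endpoint `PUT /vms/{vm_uuid}/disks/update`, which accepts a new size for an attached disk. The endpoint path and payload shape below are assumptions based on community examples, not verified against every AOS version; this sketch only builds the request body, without sending it:

```python
# Sketch (assumed schema): build the JSON body for
#   PUT https://<cluster>:9440/PrismGateway/services/rest/v2.0/vms/{vm_uuid}/disks/update
# to resize one attached SCSI disk. Verify field names against your AOS v2 API docs.
import json

def build_disk_resize_payload(vmdisk_uuid: str, device_index: int,
                              new_size_bytes: int) -> str:
    """Return the JSON request body that asks for the given disk's new size."""
    body = {
        "vm_disks": [
            {
                # Identify the existing disk to update.
                "disk_address": {
                    "device_bus": "scsi",
                    "device_index": device_index,
                    "vmdisk_uuid": vmdisk_uuid,
                },
                # Requested new capacity, in bytes.
                "vm_disk_create": {"size": new_size_bytes},
            }
        ]
    }
    return json.dumps(body)

# Example: grow disk scsi.0 to 200 GiB (UUID here is a placeholder).
payload = build_disk_resize_payload("9f9c6c82-0000-0000-0000-000000000000",
                                    0, 200 * 1024 ** 3)
```

You would send `payload` with your usual authenticated `requests.put(...)` call; as with the GUI, a shrink request is rejected, only growth is allowed.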
Below are new knowledge base articles published in the week of January 30-February 5, 2022.

- KB 12077 - Alert - A130039 - UnqualifiedDisk
- KB 12248 - Cluster conversion validation failing because of absence of AHV installer image
- KB 12531 - Era - Alert "Driver exceeded the timeout value of 70 minutes for the execution of the operation"
- KB 12612 - Managing storage container in Prism Central fails with "Forbidden" error
- KB 12642 - Zero-Touch Hardware Dispatches
- KB 12671 - How to enable multi NIC feature for Mongodb in ERA 2.4
- KB 12673 - How to enable and disable ERA services
- KB 12675 - Cannot modify cluster parameters via Cluster Details UI if Cluster Timezone is invalid
- KB 12677 - DR - Cerebro service may enter a crash loop due to retrieval of snapshots on a Cluster with AOS version 6.0 and higher from a cluster with AOS version below 6.0
- KB 12681 - Maximum Node count for Nutanix clusters
- KB 12682 - Alert - Nutanix Cloud Clusters - AWS - Cluster Creation Failed due to shared subnets
- KB 12
Below are the top knowledge base articles for the month of January 2022.

- KB 7503 - NX Hardware [Memory] - G6, G7 platforms - DIMM Error handling and replacement policy
- KB 1113 - HDD or SSD disk troubleshooting
- KB 1540 - [AOS Only] What to do when /home partition or /home/nutanix directory on a Controller VM (CVM) is full
- KB 3827 - Alert - A130087 - Node Degraded
- KB 4158 - Alert - A1104 - PhysicalDiskBad
- KB 6153 - NCC Health Check: default_password_check and pc_default_password_check
- KB 2473 - NCC Health Check: cvm_memory_usage_check
- KB 4272 - Alert - A6516 - Average CPU load on Controller VM is critically high
- KB 2090 - AHV host networking
- KB 4409 - LCM (Life Cycle Manager) Troubleshooting Guide
- KB 4141 - Alert - A1046 - PowerSupplyDown
- KB 4519 - NCC Health Check: check_ntp
- KB 3786 - Alert - A1081 - CuratorScanFailure
- KB 6945 - How Upgrades Work at Nutanix
- KB 10919 - Logrotate does not rotate ikat logs
- KB 5228 - NCC Health Check: pcvm_disk_usage_check
- KB 2480 - NCC Health Check: nic_li
Hi, I am confused about the roles played by Foundation and Phoenix during an LCM firmware upgrade, and I am not finding a deep-dive article explaining this end to end:

1. Once the Phoenix ISO is created and mounted to the host along with the upgrade bundle, can Phoenix run independently and complete the upgrades, or does it need help from Foundation for some tasks, like rebooting the host?
2. During an LCM upgrade, since the node undergoing the upgrade will have all VMs powered off, including the CVM, will it get help from the Foundation service on a remote CVM to orchestrate the upgrades on this host?
3. Is putting the host in maintenance mode taken care of by LCM or by the Foundation service?
4. Can you please give a high-level description of how the Phoenix mechanism works? Since the hypervisor installer itself is a separate ISO which needs to be mounted, I am wondering how Phoenix handles multiple ISOs - does it mount them simultaneously with the upgrade ISO?

Thanks, Arun
Below are new knowledge base articles published in the week of January 23-29, 2022.

- KB 11244 - NCC Health Check: stale_app_consistent_snapshot_metadata_chunks_check
- KB 12563 - Changes in application consistent snapshots for Nutanix workloads
- KB 12623 - VM Network creation on AHV indicates "VLAN identifier has to be unique within managed (or unmanaged) networks on one Virtual Switch"
- KB 12649 - Unable to discover nodes with Foundation java applet
- KB 12666 - ERA - MSSQL Database Provision Best Practices Customisation

Note: You may need to log in to the Support Portal to view some of these articles.
Did you ever wonder what a blinking green light or a solid amber light on a network card's LED indicates? Are you unsure of the difference between a blinking green light and a solid green light? Different NIC manufacturers use different LED colors and blink states, and not all NICs are supported on every Nutanix platform. To learn what each blink state means on the different manufacturers' LEDs, check out this article here.