How It Works
Have questions about how the Nutanix Platform works? Looking to get started? Start here!
Below are new knowledge base articles published during the week of March 14-20, 2021.

- KB 9365 - Alert - A802002 - AncDnsUnresolvable
- KB 10693 - Alert - A130334 - NGT CD-ROM not Unmounted on the VM
- KB 10694 - Alert - A130192 - Conflicting NGT policies
- KB 10746 - AHV Guest VM Boot Order Being Modified to Default by Prism After Any Config Save
- KB 10847 - LCM Pre-check: "test_ncc_checks"
- KB 10857 - UVMs with VLAN tag may disconnect from network when uplink bond is tagged with VLAN on AHV host
- KB 10874 - VM migration tasks stuck and libvirt in inconsistent state on clusters running AHV 20170830.x
- KB 10897 - Xi Frame - Enterprise profile disks growing on every reboot (not extending their partition)
- KB 10912 - Nutanix Files: Managing Files-At-Root on NFS distributed shares
- KB 10944 - Alert - A130103 - NGT Mount failed
- KB 10957 - AHV networking interfaces are renamed after AHV upgrade to 20201105.1082 or later if RDMA NICs are present

Note: You may need to log in to the Support Portal to view some of these articles.
It used to take considerable effort to deploy Nutanix in a QEMU/KVM lab; previously I had to modify the install script to bypass the HDD/SSD performance tests, among other things. Recently I had to test something and needed to deploy AHV. I noticed an installation ISO and thought, hey, I don't think that was there before; Nutanix is probably going in the right direction by simplifying things. And now with that ISO I get "an error occurred while trying to illuminate the chassis led" and an index-out-of-range error related to the boot disk. I guess I will have to dig into these install scripts again. Please invest time in testing virtual deployments, and please follow the KISS principle.

Off-topic: you have a category here in the forums called "Deployment Success"; please create "Deployment Failure" so that I and many others could properly place these threads.
Very frequently, just before an upgrade, we receive an alert that the cluster doesn't have enough space to download the binary files. If that is happening to you, there are ways to clean the /home partition back below the threshold (75%). Some of them include:

- Cleaning old ISOs and software binaries (for old AOS, NCC, and Foundation versions)
- Checking for and removing old logs (log files shared for support cases, if not cleared afterward, occupy space)

Cleaned up files from the approved directories but still see high usage in /home? At that point, open a support case so a Nutanix engineer can identify any other underlying issues or perform a deeper cleanup on the nodes. More details are in the Nutanix KB below: https://portal.nutanix.com/page/documents/kbs/details?targetId=kA0600000008dpDCAQ
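Before deleting anything, it helps to see what is actually consuming the space. A minimal sketch, run from a CVM; the directory paths follow common Nutanix KB guidance but should be verified against the KB for your AOS version:

```
# Check overall /home usage on every CVM in the cluster
allssh "df -h /home"

# See which downloaded-software directories are the heaviest consumers on this CVM
du -sh /home/nutanix/software_downloads/* 2>/dev/null | sort -rh | head

# Old log bundles collected for support cases often linger here
ls -lh /home/nutanix/data/log_collector 2>/dev/null
```

Only remove files from directories the KB explicitly approves; anything else should go through a support case.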
Do you already have many security policies defined within one instance of Flow and have another instance where you need the same set of policies but do not wish to recreate them? Or do you simply wish to have a backup of your existing security policies, just in case you should ever need to restore them sometime in the future?

Flow has the native ability to export (and subsequently import) security policies that have already been defined. Policies are exported into a single binary file, which can then be transferred to a different instance of Flow or stored away for backup purposes.

Please also note that, when importing a previously exported binary file, any existing policies already defined within a given Flow instance are automatically removed in favor of the newly imported policies.

You can find more information in the Exporting and Importing Security Policies section of the Flow Microsegmentation Guide.
Nutanix Volumes supports the ability to boot an operating system over iSCSI for physical servers. In this configuration, a host can start a supported operating system from a LUN instead of a local disk. This procedure describes how to configure network adapter BIOS settings to enable this feature.

Important points:

- Refer to the Intel Ethernet Adapter vendor documentation for more details about Intel iSCSI Boot configuration. According to Intel, you may be able to configure these settings through the adapter's Properties > Data Options tab in Microsoft Windows Device Manager.
- Check Volumes Requirements and Supported Clients for a list of the supported network hardware and clients.
- Refer to Enable Nutanix Volumes and read about the procedures to perform on the Nutanix cluster.
- Before performing an iSCSI target discovery of the Nutanix cluster, configure BIOS boot settings for the network adapter as described in these steps.

For a step-by-step guide, refer to Link
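As a quick sanity check before working through the adapter BIOS, you can confirm the cluster's iSCSI target is reachable at all. A hedged sketch from any Linux host with open-iscsi installed; the address 10.0.0.5 is a placeholder for your cluster's external data services IP:

```
# Discover iSCSI targets exposed by the Nutanix cluster's data services IP
iscsiadm -m discovery -t sendtargets -p 10.0.0.5:3260
```

If discovery returns no targets, revisit the Enable Nutanix Volumes prerequisites (volume group setup, client IQN whitelisting, data services IP) before troubleshooting the BIOS side.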
Below are new knowledge base articles published during the week of July 11-17, 2021.

- KB 10745 - Configurations required for DoDIN APL
- KB 11289 - How to upgrade from DoDIN APL configuration
- KB 11423 - License violation Alerts on Postpaid Prism Central
- KB 11608 - Improper network configuration with Microsoft NLB server may result in high percentage of Rx errors on host NICs
- KB 11655 - PostgreSQL Basic Commands
- KB 11693 - Acropolis leadership change on AHV clusters with 5.19.x or newer with LACP may lead to unexpected link flap
- KB 11701 - X-Ray: S3 Object Storage Microbenchmark test fails when object size is increased
- KB 11711 - 1-click ESXi Hypervisor upgrade on Single node cluster hangs when vCLS (vSphere Clustering Service) VM is not powered off
- KB 11714 - Post AOS upgrade to 188.8.131.52 bond_mode shown as "none" in manage_ovs output

Note: You may need to log in to the Support Portal to view some of these articles.
I have the same issue as this thread: Connect-NTNXCluster gives a 'Count cannot be less than zero'. I installed the latest Nutanix cmdlets from our NTX cluster (it's on 5.15.3) and I get the following error:

Connect-NTNXCluster : Count cannot be less than zero.
At line:1 char:1

How can I fix it, please? I tried with a plain-text password, but it's not working; I also tried ConvertTo-SecureString, but no success. It only works when I enter the password through the pop-up. I also tried to create the password like this:

$credential = Get-Credential
$credential | Export-CliXml -Path "C:\Scripts\securecreds\Nutanixsecurecreds.cred"
$credNutanix = Import-CliXml -Path "C:\Scripts\securecreds\Nutanixsecurecreds.cred"

How can I pass the username and password through the script without the pop-up? I need to run the script every week.
Below are new knowledge base articles published during the week of December 13-19, 2020.

- KB 8562 - NCC Health Check: robo_witness_configured_check
- KB 8563 - NCC Health Check: robo_witness_state_check
- KB 8565 - NCC Health Check: robo_cluster_witness_sync_check
- KB 9271 - NCC Health Check: ahv_fs_integrity_check
- KB 9472 - NCC Health Check: category_protected_vms_multiple_fault_domain_check
- KB 9525 - Alert - A200330 - Prism Central home partition expansion check
- KB 9713 - Alert - A130340 - MetroConnectivityUnstable
- KB 9716 - NCC Health Check: stale_synchronous_replication_parameters_check
- KB 9845 - NCC Health Check: "file_server_cvm_config_check"
- KB 9988 - Pre-Upgrade Check: test_if_expand_cluster_is_not_in_progress
- KB 10000 - NCC Health Check: objects_deployed_on_unsupported_pe
- KB 10248 - Alert - A130340 - Cross-container disk migration task is paused.
- KB 10323 - Move VMs from Protection Domain to Category for Leap
- KB 10339 - Skipping application consistent snapshot for VM with NVMe disks
- KB
You can configure a VM to boot over the network in a Preboot Execution Environment (PXE). Booting over the network is called PXE booting and does not require the use of installation media. When starting up, a PXE-enabled VM communicates with a DHCP server to obtain information about the boot file it requires. Configuring PXE boot for an AHV VM involves performing the following steps:

- Configuring the VM to boot over the network.
- Configuring the PXE environment.

The procedure for configuring a VM to boot over the network is the same for managed and unmanaged networks. A VM that is configured to use PXE boots over the network on subsequent restarts until the boot order of the VM is changed. To configure a PXE environment for a VM on a managed network on an AHV host, do the following: log on to the Prism web console, click the gear icon, and then click Network Configuration in the menu. The Network dialog box is displayed. On the Virtual Networks tab, click the pencil icon shown for
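The first step, pointing the VM's boot device at a NIC, can also be scripted. A hedged sketch using aCLI from a CVM; the VM name pxe-vm and the MAC address are placeholders, and the exact parameter names of vm.update_boot_device can vary by AOS release, so confirm against aCLI's built-in help first:

```
# Point the VM's boot device at one of its NICs (network/PXE boot)
acli vm.update_boot_device pxe-vm mac_addr=50:6b:8d:xx:xx:xx

# Revert to booting from the first SCSI disk later
acli vm.update_boot_device pxe-vm disk_addr=scsi.0
```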
Manual license selection allows the customer to select specific licenses available in the customer account and apply the selected licenses to the cluster. This feature can be used when:

- Mixing license types (e.g., Life-of-Device and Capacity-Based Licensing) in a single cluster
- Selecting term licenses with specific expiration dates
- Selecting licenses tagged for specific use cases (e.g., owned by a particular organization)

Manual license selection is available as an option to license products on Prism Element and Prism Central.

Where is the manual license selection option? You see it on the portal's 'Manage licenses' page, displayed after the .csf file from Prism (PC or PE, as appropriate) is uploaded. When 'Select License' is selected on each card, the pop-up has a 'manually manage licenses' option.

How to access the 'manually manage licenses' option in the licensing workflow on the portal: in the Support Portal, navigate to Products > Licenses > Manage licenses. Manage licenses page: Here 'Choo
I have a 20+ node cluster, and 2 nodes had multiple hard drives go out between them (1 SSD and 6 SATA). The tasks are all showing stuck. I do not see any replication happening when using this command:

curator_cli get_under_replication_info summary=true

All nodes show up fine when using this command:

nodetool -h 0 ring | grep Normal

Would it be a good idea to kill the tasks with this command?

progress_monitor_cli --entity_id="<Entity_ID>" --entity_type=<Package_Name> --operation=<Operation> --delete
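If you do go that route, it's worth listing the stuck entries first so you pass the exact IDs. A hedged sketch; the flag spellings follow common Nutanix KB guidance on stuck tasks, so verify them on your AOS version, and note that deleting a progress entry only removes the task display, it does not cancel the underlying operation:

```
# List all tracked progress entities with their IDs, types, and operations
progress_monitor_cli -fetchall

# Then delete one specific stale entry, using values copied from the output above
progress_monitor_cli --entity_id="<Entity_ID>" --entity_type="<Entity_Type>" --operation="<Operation>" --delete
```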
I’m seeing:

Detailed information for garbage_egroups_size_check:
Node 10.1.11.32:
FAIL: 67% of disk 415104829 occupied by garbage egroups
Refer to KB 1574 (http://portal.nutanix.com/kb/1574) for details on garbage_egroups_size_check
or Recheck with: ncc health_checks stargate_checks garbage_egroups_size_check

This is on 2019.02.11 LTS. Disk 415104829 (SSD) was added yesterday. I also added one SSD to 2 nodes tonight. The new SSDs are very busy right now.
For a node to join a Nutanix cluster, it must have a hypervisor and AOS combination that Nutanix supports. AOS is the operating system of the Nutanix Controller VM, a VM that must be running on the hypervisor to provide Nutanix-specific functionality. Find the complete list of supported hypervisor/AOS combinations at https://portal.nutanix.com/page/documents/compatibility-matrix

Foundation is the official deployment software of Nutanix. Foundation allows you to configure a pre-imaged node, or image a node with a hypervisor and an AOS version of your choice. Foundation also allows you to form a cluster out of nodes whose hypervisor and AOS versions are the same, with or without re-imaging. Foundation is available for download at https://portal.nutanix.com/#/page/Foundation.

If you already have a running cluster and want to add nodes to it, you must use the Expand Cluster option in Prism instead of using Foundation.

Network requirements

When configuring a Nutanix block, a set of IP addresses
Hi, I have three identical nodes (basic Lenovo servers), but one performs really slowly compared to the others. The disks aren't the same across all three, though. I wonder if anyone has any ideas which difference might be causing the performance issues:

- Firstly, the HDDs are 4 TB in the two 'good' nodes and 2 TB in the 'slower' one. I wouldn't expect that to cause performance issues; just overall capacity will be limited? To be fair, the 2 TB is a WD 'green' and the 4 TB drives are WD 'red' drives.
- Secondly, the 'slower' host has a Samsung 840 Pro 256 GB SSD, which Prism reports as often having higher latency than the HDDs! See the image below. Are they known for being poor in this kind of environment? I'm not really stressing them.

Any good logs to see performance issues? I can see the HDD activity light is permanently on on the slower host.

Cheers,
Steve
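One way to compare the disks directly, offered as a hedged sketch rather than official guidance: run iostat on each node and compare per-device latency between the slow node and the good ones:

```
# Extended device stats, refreshed every 2 seconds, 5 samples;
# compare the await (latency) columns for the SSD vs the HDDs across nodes
iostat -x 2 5
```

For what it's worth, WD Green drives park their heads aggressively and are not rated for continuous server workloads, so the HDD model difference alone could plausibly explain part of the gap.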
Do you want to import third-party appliances on AHV and are wondering about the following questions?

- Is my third-party appliance compatible with AHV?
- What is supported and unsupported?
- How will I migrate my appliance to AHV?

We have multiple options that can be used to deploy third-party appliances on AHV. Third-party application vendors provide different applications or appliances that are certified to run on AHV. Here is the full current list of such applications: Compatibility Matrix

Note: Software vendors can use this link to request an official software validation on AHV. For more details, refer to KB 9849
Application monitoring provides visibility into integrated applications by collecting application metrics using Nutanix and third-party collectors, providing a single pane of glass for both application and infrastructure data, correlating application instances with virtual infrastructure, and providing deep insights into application performance metrics. The monitoring integrations dashboard allows you to view information about select applications, such as SQL Server instances, running in the cluster. Before you decide to enable this, there are some prerequisites that you need to meet. To learn more about the prerequisites, click here, and for more information about application monitoring, check the Application Monitoring Guide.
Hardware Terms

- BMC: Baseboard management controller, the microcontroller that manages the motherboard.
- BIOS: Basic input/output system, the firmware that initializes the motherboard and provides runtime services on startup.
- HBA: Host bus adapter, a device that manages communication between storage media and other system components.
- SATA DOM: SATA disk on module, the hypervisor boot drive for Nutanix platforms up to and including G5.
- M.2: A small-form-factor drive standard (the successor to mSATA), the hypervisor boot drive for Nutanix platforms G6 and later.

Dell-Specific Terms

- iDRAC: Integrated Dell remote access controller, a software tool that lets you administer a server without needing physical access.
- iSM: iDRAC service module, a module that integrates iDRAC with an operating system.
- PTAgent: Power Tools agent, the software entity responsible for configuring the iSM.
- Firmware Entities: A collective term for all Dell firmware not related to iDRAC.

Lenovo-Specific Terms

- IMM2: Integrated management module 2, a service processor that manages the
I’m executing this command:

/usr/local/nutanix/bin/acli vm.list

The output is:

002Nutanix-Production-Mgmt   fa4aa572-a757-4e9f-a151-760a290195bd
2016-delete- 0               0cc10b90-108f-404f-b48d-ac8a1d1e5bad
72222092-crrs-os-VDA717      1034b811-75ea-4b13-9688-892de9bc9b67
87777823                     be6afd54-e56f-4d9b-8101-c0dd946581c3
632224333                    348430c2-7567-4579-b04a-9b23ba6890b1
Rddu-Case01                  66b8ca97-bafc-46f9-9748-6ac9b2e580cd

I also need to see the IP address of each VM. I found this link that shows how to do it with nCLI but NOT aCLI: Listing VMs via CLI for importing into Excel
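vm.list itself doesn't print IPs, but you can pull them per VM. A rough sketch, assuming VM names contain no spaces and that acli vm.get prints ip_address fields for NICs on managed networks (behavior can differ for unmanaged networks or without NGT, so check the vm.get output format on your AOS version first):

```
# For each VM name in the first column of vm.list, print any learned IPs
for vm in $(/usr/local/nutanix/bin/acli vm.list | awk 'NR>1 {print $1}'); do
  echo "$vm: $(/usr/local/nutanix/bin/acli vm.get "$vm" | grep -oP 'ip_address: "\K[^"]+' | paste -sd, -)"
done
```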
Below are new knowledge base articles published during the week of August 9-15, 2020.

- KB 9258 - Alert - A650000 - Data provider collector is in crashloop.
- KB 9637 - Pre-Upgrade Checks: test_if_rolling_restart_is_not_in_progress
- KB 9698 - Alert - Clusters on AWS - Hibernate/Resume process taking long time
- KB 9699 - Alert - Nutanix Clusters on AWS - Capacity not met
- KB 9701 - Alert - Clusters on AWS - Cluster Node Condemned Timeout
- KB 9703 - Alert - Clusters on AWS - Cloud Provider Connection Issues
- KB 9704 - Alert - Clusters on AWS - Handling AWS Health and Scheduled Events notifications
- KB 9723 - Clusters on AWS - Limitations of Rack Awareness feature
- KB 9733 - Clusters on AWS - Important Considerations when deploying Nutanix Clusters on AWS
- KB 9768 - Alert - Clusters on AWS - Cannot Provision Node
- KB 9770 - Alert - Clusters on AWS - Cluster Key Pair Deleted alert in Nutanix Clusters Console
- KB 9771 - Alert - Clusters on AWS - Host Agent Ping Timeout
- KB 9772 - Alert - Clusters on AWS - Node
We have numerous clusters, and each has its own hardware platform (in a couple of cases, we've even mixed hardware models within the same cluster). Our current Nutanix footprint:

VENDOR       MODEL        NODE COUNT
Super Micro  NX-8155-G7   12
Super Micro  NX-1175S-G6  7
Super Micro  NX-3060-G7   6
Dell         XC630-10     14
Super Micro  NX-8035-G7   12
Super Micro  NX-8035-G6   36

Is there a preferred or recommended solution for hardware monitoring and alerting? For instance, Dell OpenManage (DOM) or Supermicro Server Manager (SSM)? I don't readily know if either would support the other's hardware (like DOM supporting IPMI or SSM supporting iDRACs). Or would Prism Central (and Prism Element) be sufficient for hardware monitoring and alerting?

We currently have and are using DOM, but not for the SuperMicro hardware, so I'm wondering if I'm potentially missing out on a better solution or not.
Hello! I need help understanding and fixing a problem. We have 4 nodes, and today I saw that one node has a problem with its power supply. I used this command:

ncc health_checks hardware_checks ipmi_checks power_supply_check

Result:

FAIL: Power supply 1 is down on block <name>
FAIL: Power supply 1 is down on block <name>

Physically, everything looks fine with the PSU. I found this in KB 7386: "Restore the power supply as soon as possible... If PSU info is unavailable, upgrade to the latest BMC firmware, and apply factory defaults after the BMC upgrade..."

If I update the BMC, how long does it take, and is downtime needed? I need help fixing this problem: what steps should I take next?
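If you want to cross-check what the BMC itself reports before replacing anything, ipmitool from the affected node's hypervisor host is one option. A hedged sketch; sensor naming varies by platform and BMC firmware:

```
# List power-supply sensor readings known to the BMC
ipmitool sdr type 'Power Supply'

# Recent hardware events, which often show when the PSU state changed
ipmitool sel list | tail -20
```

A BMC firmware update generally resets only the BMC, not the host itself, but follow the official KB procedure and Nutanix Support's guidance rather than this sketch.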