How It Works
Have questions about how the Nutanix Platform works? Looking to get started? Start here!
Nutanix customers using AHV as their hypervisor often need to evaluate the resources being used or consumed on their infrastructure. One of those factors is calculating how much space has been consumed by one or more virtual machines. It is possible to get granular in this scenario and calculate the amount of space taken up by a single VM. Depending on the environment, it should be possible to print out the space usage in terms of bytes, kilobytes, megabytes, or gigabytes. I have developed the following string manipulation to extract those numbers. Here is a sample from one of the checks I performed:

nutanix@NTNX-20FM6L56789-A-CVM:172.31.1.11:~$ acli vm.list | grep 120-Windows-Invent
120-Windows-Invent 99021293-4ea5-440a-a894-846ed07de132
nutanix@NTNX-20FM6L56789-A-CVM:172.31.1.11:~$ acli vm.get 120-Windows-Invent | grep vmdisk_size | awk '
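The awk step above is cut off, so as a sketch (not the original poster's exact one-liner): a small helper like the following can convert the raw vmdisk_size byte count into human-readable units. The function name and the sample byte value are illustrative assumptions.

```shell
#!/bin/sh
# Sketch: convert a vmdisk_size byte count (as printed by `acli vm.get <vm>`)
# into KiB, MiB, and GiB. Helper name and sample value are illustrative.
disk_usage() {
  awk -v b="$1" 'BEGIN {
    printf "%.0f Bytes = %.2f KiB = %.2f MiB = %.2f GiB\n",
      b, b / 1024, b / (1024 * 1024), b / (1024 * 1024 * 1024)
  }'
}

disk_usage 107374182400
# prints: 107374182400 Bytes = 104857600.00 KiB = 102400.00 MiB = 100.00 GiB
```

On a CVM this could be fed directly from the vmdisk_size value grepped out of `acli vm.get`.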
Did you know you can expand Prism Central from one VM to three to support more workloads? If Prism Central is currently running on a single VM, you can expand it to three VMs. This increases both the capacity and the resiliency of Prism Central (at the cost of maintaining two additional VMs). To scale out a Prism Central instance across multiple VMs, follow the steps below:
1. Click the gear icon and then select Prism Central Management from the Settings menu.
2. The Manage Prism Central page appears, which provides information about the Prism Central instance.
3. To expand the Prism Central instance from one to three VMs, click the Scale Out PC button to display the Scale Out PC page and do the following:
Here are the requirements which must be met before you can expand Prism Central or add a Prism Central VM.
Hi everyone, I need some help. I'd like to see the output of several Prism Element API calls in a Nutanix/ESXi hypervisor environment.

curl -ks -u 'user:pass' -X GET 'https://<prism_element_ip>:9440/PrismGateway/services/rest/v2.0/cluster'
curl -ks -u 'user:pass' -X GET 'https://<prism_element_ip>:9440/PrismGateway/services/rest/v2.0/hosts'
curl -ks -u 'user:pass' -X GET 'https://<prism_element_ip>:9440/PrismGateway/services/rest/v2.0/vms?include_vm_disk_config=true&include_vm_nic_config=true'

We are developing a monitoring/reporting tool for multiple platforms and have recently added support for Nutanix. It works fine with AHV, but we don't have access to a Nutanix/ESXi configuration. Thanks and regards, P.
Dear Team, we purchased an NX-1265-G6-4114-CM (3 node), but the customer refused to take it, and the node is registered to that customer. We want to sell this node to another customer. I want to know how we can change the customer name; basically, how to change ownership from one customer to another. We would really appreciate it if someone could help us.
Below are new knowledge base articles published in the week of August 16-22, 2020.
KB 9765 - Application Consistent (VSS) Snapshots fail if guest has in-guest mounted VHDX files
KB 9863 - Async DR replication failure in 5.17 AOS
KB 9867 - ncc default_password_check may generate "Session Audit" assertion in IPMI system event log
KB 9870 - Cannot enable Karbon Airgap with error "Failed to pass Airgap enable prechecks: Failed to get file via given server url"
KB 9877 - Citrix AHV Director Plugin: PSRemotingTransportException - Access is denied
KB 9885 - AHV - acpi_pad process consumes too much CPU
KB 9895 - HPE: iLO Virtual NIC
Note: You may need to log in to the Support Portal to view some of these articles.
When hardening ESXi security, some settings may impact operations of a Nutanix cluster. Here are some recommended settings and their possible effect on a Nutanix cluster. The summary of hardening requirements can be implemented in /etc/ssh/sshd_config as below:

HostbasedAuthentication : no
PermitTunnel : no
AcceptEnv
GatewayPorts : no
Compression : no
StrictModes : yes
KerberosAuthentication : no
GSSAPIAuthentication : no
PermitUserEnvironment : no
PermitEmptyPasswords : no
PermitRootLogin : no
Match Address : x.x.x.11,x.x.x.12,x.x.x.13,x.x.x.14,192.168.5.0/24
PermitRootLogin : yes
PasswordAuthentication : yes

For more details,
What is Fault Tolerance (FT)? FT is how the system ensures that both user VM data and cluster infrastructure data are protected from failure.

What are fault domains? Failure scenarios can be thought of in terms of fault domains. There are four fault domains in a Nutanix cluster: disk, node, block, and rack. This article focuses on rack awareness.

Rack failure can occur in the following situations: all power supplies fail within a rack; a top-of-rack (TOR) switch fails; or a network partition occurs, where one of the racks becomes inaccessible from the other racks.

When rack fault tolerance is enabled, the cluster has rack awareness, and guest VMs can continue to run after the failure of one rack (RF2) or two racks (RF3). Redundant copies of guest VM data and metadata exist on other racks when one rack fails.

Note: Rack fault tolerance has to be configured manually. The requirements and configuration of rack fault tolerance can be found here. To learn more about rack fault tolerance on Nutanix clusters, click here. Now, Rack
Below are new knowledge base articles published in the week of August 9-15, 2020.
KB 9258 - Alert - A650000 - Data provider collector is in crashloop
KB 9637 - Pre-Upgrade Checks: test_if_rolling_restart_is_not_in_progress
KB 9698 - Alert - Clusters on AWS - Hibernate/Resume process taking long time
KB 9699 - Alert - Nutanix Clusters on AWS - Capacity not met
KB 9701 - Alert - Clusters on AWS - Cluster Node Condemned Timeout
KB 9703 - Alert - Clusters on AWS - Cloud Provider Connection Issues
KB 9704 - Alert - Clusters on AWS - Handling AWS Health and Scheduled Events notifications
KB 9723 - Clusters on AWS - Limitations of Rack Awareness feature
KB 9733 - Clusters on AWS - Important Considerations when deploying Nutanix Clusters on AWS
KB 9768 - Alert - Clusters on AWS - Cannot Provision Node
KB 9770 - Alert - Clusters on AWS - Cluster Key Pair Deleted alert in Nutanix Clusters Console
KB 9771 - Alert - Clusters on AWS - Host Agent Ping Timeout
KB 9772 - Alert - Clusters on AWS - Node
Do you want to import third-party appliances on AHV and are wondering about the following questions? Is my third-party appliance compatible with AHV? What is supported and unsupported? How will I migrate my appliance to AHV? We have multiple options that can be used to deploy third-party appliances on AHV. Third-party application vendors provide different applications or appliances that are certified to run on AHV. Here is the full current list of such applications: Compatibility Matrix. Note: Software vendors can use this link to request an official software validation on AHV. For more details, refer to KB 9849.
Hello! We have a 5-node RF3 cluster. I tried shutting down 2 of the 5 nodes at a time. The cluster stays online, but a rebuild never happens. Is this normal? Should a 5-node RF3 cluster rebuild after losing 2 nodes? Can it survive losing 1 more node after already losing 2 (with only 2 of 5 staying online)?
Hi all, wondering if anyone has scripted the addition and configuration of NICs for VMs that have failed over in a PD? I noticed a few posts from a few years ago, but I'm checking whether the approach has changed at all. My approach (no script yet):
1. Check which PDs have become active at the remote site to get a list of VMs.
2. Grab MAC and IP info from a maintained CSV (I plan to use DHCP reservations on the DHCP server).
3. Add NICs with the info grabbed from the CSV.
4. Perform some DNS updates.
5. Power on.
Interested to know what others are doing, and any feedback on my approach would be appreciated. Thanks! -PeteP
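The CSV-driven NIC step in the approach above can be sketched as follows. This is only an illustration: the CSV columns, the network name, and the assumption that `acli vm.nic_create` accepts network= and mac= arguments should all be verified against your AOS version before running anything on a CVM.

```shell
#!/bin/sh
# Sketch of the "grab MAC/IP from CSV, add NICs" steps: read a maintained CSV
# of vm,mac,network rows and emit one `acli vm.nic_create` command per VM.
# Columns and the acli argument names are illustrative assumptions.
emit_nic_cmds() {
  # $1: path to a CSV whose header line is "vm,mac,network"
  awk -F, 'NR > 1 {
    printf "acli vm.nic_create %s network=%s mac=%s\n", $1, $3, $2
  }' "$1"
}

# Example: a tiny CSV (hypothetical VM names and MACs) and the commands
# it would generate. Review the output before piping it anywhere.
cat > /tmp/failover_nics.csv <<'EOF'
vm,mac,network
app01,50:6b:8d:aa:bb:01,VLAN110
app02,50:6b:8d:aa:bb:02,VLAN110
EOF
emit_nic_cmds /tmp/failover_nics.csv
```

Generating the commands instead of executing them directly leaves a review step before anything touches the cluster.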
By default, the Prism Central login page includes background animation, and users are logged out automatically after being idle for 15 minutes. You can disable the background animation, change the session timeout for users, and override the session timeout completely.Some points to note:1. This setting is not persistent. In other words, if the Prism service restarts, this setting is lost and must be disabled again.2. Disabling or enabling this setting in Prism Web Console does not propagate to Prism Central or vice versa. The setting must be disabled in Prism Web Console and Prism Central UI separately.3. The timeout interval for an administrator cannot be set for longer than 1 hour. For more information on the details and steps, click here.
Below are new knowledge base articles published in the week of August 2-8, 2020.
KB 9473 - Alert - A650001 - CollectorSizingViolation
KB 9686 - VMware vSphere ESXi hardening for Nutanix clusters
KB 9764 - LCM | Failed to get CVM managed object on ESXi clusters
KB 9767 - Alert - A130120 - System has down-migrated the data of flash-mode-enabled vDisks
KB 9800 - Cloud CVM: aplos is down on the Controller VM after upgrading AOS to 5.17.x
KB 9816 - SLES 15 gets stuck for 90 seconds after a restore from a snapshot on AHV
Note: You may need to log in to the Support Portal to view some of these articles.
While maintaining your Nutanix environment, you need to apply patches to keep everything running smoothly. Some of these patches require a reboot of the CVMs or hosts to take effect. For example, rebooting all hosts in a cluster means manually putting each host in maintenance mode, evacuating the VMs, turning off the CVM, rebooting the host, waiting for the CVM and host to boot up, and confirming that cluster data resiliency is OK before proceeding to the next host. As you can see, there is a lot of manual work involved. However, there is a simpler solution: leveraging the rolling reboot script. The script uses our shutdown token feature to ensure that each node is up and running before proceeding to the next node. If you are using AHV, the script is available in Prism. For ESXi hypervisors, and for more robust options (even for AHV), there is a command-line option described in KB 4723. More information about the prerequisites (please read before using) and execution can be found
Hi, I am trying to monitor the CPU ready time (hypervisor.cpu_ready_time_ppm) for the VMs on AHV, but the metrics available via the API do not include this particular metric. The host metrics available via the API are:
Host_hypervisor_avg_io_latency_usecs
Host_hypervisor_avg_read_io_latency_usecs
Host_hypervisor_avg_write_io_latency_usecs
Host_hypervisor_cpu_usage_ppm
Host_hypervisor_io_bandwidth_kbps
Host_hypervisor_memory_usage_ppm
Host_hypervisor_num_io
Host_hypervisor_num_iops
Host_hypervisor_num_read_io
Host_hypervisor_num_read_iops
Host_hypervisor_num_received_bytes
Host_hypervisor_num_transmitted_bytes
Host_hypervisor_num_write_io
Host_hypervisor_num_write_iops
Host_hypervisor_read_io_bandwidth_kbps
Host_hypervisor_timespan_usecs
Host_hypervisor_total_io_size_kbytes
Host_hypervisor_total_io_time_usecs
Host_hypervisor_total_read_io_size_kbytes
Host_hypervisor_total_read_io_time_usecs
Host_hypervisor_write_io_bandwidth_kbps
How can I report on this metric? Thanks.
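For the metrics that are listed above, note that values with the _ppm suffix (such as hypervisor_cpu_usage_ppm) are parts-per-million, so they convert to a percentage by dividing by 10,000. A small sketch, with an illustrative sample value:

```shell
#!/bin/sh
# Sketch: convert a parts-per-million (_ppm) metric value, e.g.
# hypervisor_cpu_usage_ppm, into a percentage. Sample value is illustrative.
ppm_to_percent() {
  awk -v p="$1" 'BEGIN { printf "%.2f", p / 10000 }'
}

ppm_to_percent 250000   # prints 25.00
```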
Key-based SSH access to a cluster is supported. Adding a key through the Prism web console provides key-based access to the cluster, Controller VM, and hypervisor host. Each node employs a public/private key pair, and the cluster is made secure by distributing and using these keys. Users can create a key pair (or multiple key pairs) and add the public keys to enable key-based SSH access. However, when site security requirements do not allow such access, you can remove all public keys to prevent SSH access. To control key-based SSH access to the cluster, do the following:
1. Click the gear icon in the main menu and then select Cluster Lockdown in the Settings page. The Cluster Lockdown dialog box appears. Enabled public keys (if any) are listed in this window.
2. To disable (or enable) remote login access, uncheck (or check) the Enable Remote Login with Password box. Remote login access is enabled by default.
3. To add a new public key, click the New Public Key button and then do the following
With the introduction of download pages on the Nutanix Support Portal, download bundle links now come with an expiry time set on them. Using the wget command-line utility to download a bundle directly onto your CVM can chop off the signature portion of the download link, because the shell breaks the long, unquoted URL at special characters in the query string. There is a method to wget a release bundle from the Nutanix Support Portal directly to a CVM: quote the URL. To wget a release bundle onto a CVM/host, run the below command on the CVM where you wish to download the bundle:

nutanix@cvm$ wget "url" -O <filename>

Note: The filename can be obtained from the download link as shown in the highlighted text above.

wget "http://download.nutanix.com/downloads/ncc/v18.104.22.168/nutanix-ncc-el7.3-release-ncc-22.214.171.124-x86_64-latest-installer.sh?Expires=1595478032&Signature=Rb3R02qRIZZWVaHwAr1N7aj7AXQ3NOYPySn1AyF4H77HzhdxXEcVp7YBEGbd7h-oA~CVgkJWwla-iqGWSw4CAJmrugIhM0iXAUCIW-n0aNAvMbEOxxquRRQcMCwpNuHtSchwkOqZ~wR
The Nutanix platform and all Nutanix products leverage the Security Configuration Management Automation (SCMA) framework to ensure that services are constantly inspected for variance from the security policy. SCMA checks multiple security entities for both Nutanix storage and AHV, continuously assessing and healing Nutanix clusters to ensure that they meet or exceed all regulatory requirements. In this process, over 1,700 security entities are analyzed and self-corrected across the storage and hypervisor (AHV only) layers. Nutanix automatically reports inconsistencies in the logs and reverts them to the baseline. With SCMA, you can schedule the Security Technical Implementation Guide (STIG) checks to run hourly, daily, weekly, or monthly. The STIG checks have the lowest system priority within the virtual storage controller, ensuring that security checks do not interfere with platform performance. To learn more about this feature, click here.
Networking on a Hyper-V host is very similar to networking on an AHV host or an ESXi host. The naming of objects is again different, but the functionality is the same. Just like with AHV and ESXi, the hypervisor has a connection to both the InternalSwitch and the ExternalSwitch. The name of the hypervisor on a Hyper-V host is, confusingly, also Hyper-V, and this hypervisor attaches to virtual switches by way of a vEthernet port. The virtual network adapters are where the VLANs are defined, and also where the network names are defined. Hyper-V doesn't have port groups; it has network names, which don't work at all the same way that they do in AHV.
Verification commands from PowerShell:
Get-NetAdapterVmq
Get-NetAdapterVmqQueue
Please refer to this link for a detailed explanation of networking in Hyper-V: https://portal.nutanix.com/page/documents/kbs/details?targetId=kA0600000008fOSCAY