How It Works
Have questions about how the Nutanix Platform works? Looking to get started? Start here!
Nutanix recommends that you use a single storage container in an AHV cluster to simplify VM and image management. But what if you still need more than one container and want to split existing VMs between them? For example, you created a container named Production and want to move all production VMs into it. Since there is currently no Storage vMotion equivalent for moving VMs between containers, the workaround is to create images from the VM's vdisks and then use those images to deploy new VMs on the desired container. A general overview of the process looks like this:
1. Find the VM disk files.
2. Power off the VM.
3. Create images from the files found in step 1 using acli.
4. Create a new VM using acli or the Prism UI.
5. Use the image created in step 3 as the source for the disk.
6. Power on the VM and check that everything is working fine.
7. Delete the old VM or the image if required.
To have a detailed look at the steps and the commands…
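A minimal sketch of those commands, run from any CVM, assuming a hypothetical source VM named prod-vm01 and a target container named Production; exact acli parameter names can vary between AOS versions, so verify against the acli help output before running anything.

# Step 1: list the VM's disks and note the vmdisk UUID(s)
nutanix@cvm$ acli vm.get prod-vm01 include_vmdisk_paths=1
# Step 2: power off the VM
nutanix@cvm$ acli vm.off prod-vm01
# Step 3: create an image from the vdisk, placing it on the target container
nutanix@cvm$ acli image.create prod-vm01-disk0 clone_from_vmdisk=<vmdisk_uuid> container=Production image_type=kDiskImage
# Steps 4-5: create a new VM and attach a disk cloned from that image
nutanix@cvm$ acli vm.create prod-vm01-new num_vcpus=2 memory=4G
nutanix@cvm$ acli vm.disk_create prod-vm01-new clone_from_image=prod-vm01-disk0
# Step 6: power on and verify before deleting the old VM or the image
nutanix@cvm$ acli vm.on prod-vm01-new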
There are various scenarios in which you may need to unregister a cluster from PC, and it is important to do it correctly. Perhaps you are decommissioning a Prism Element (PE) cluster that was registered to Prism Central (PC), or you have already decommissioned a cluster but it is still linked to a PC, or you have a cluster registered with one PC instance that you would like to re-register with a different PC instance for the benefit of localized management or to configure availability zones using Leap. In all of these cases, what does "correctly" look like? Done properly, unregistration of a cluster from PC involves a remove-from-multicluster step followed by clean-up of the associated metadata. This metadata clean-up must be allowed to complete before attempting to re-register the cluster to a PC; otherwise, registration could be blocked. How do you do it? There is no GUI method to unregister a cluster from Prism Central, so the process requires SSH access to the PC VM as well as to a CVM of the cluster being unregistered.
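A hedged sketch of what that looks like on the command line; the flag names and clean-up script path below follow the Nutanix unregistration KB as best I recall and may differ between AOS/PC versions, and all IPs, credentials, and UUIDs are placeholders.

# On a CVM of the PE cluster: remove the cluster from the PC multicluster
nutanix@cvm$ ncli multicluster remove-from-multicluster external-ip-address-or-svm-ips=<pc_vm_ip> username=<pc_admin_user> password=<pc_admin_password> force=true
# Confirm the cluster no longer reports a registered PC
nutanix@cvm$ ncli multicluster get-cluster-state
# On the PC VM: clean up the metadata for the removed cluster before any re-registration
nutanix@pcvm$ python /home/nutanix/bin/unregistration_cleanup.py <cluster_uuid>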
Our hosts' network ports are currently configured as active-backup bonds. I can determine which Ethernet port is currently active and which is on standby, and I can issue a command to fail over the port. I would like to know whether there is a command that can be run from the AHV host which outputs the time and date stamps of the last host Ethernet port failover.
Are there ANY plans to expand on the PowerShell cmdlets at all, specifically for getting host info? They're pretty bare bones, and I don't even know the last time the cmdlets were updated or expanded. It seems to me Nutanix regrets making them and hopes they will quietly die. I have relied heavily on PowerShell to keep tabs on a very large Nutanix environment running Nutanix on both AHV and ESXi. I'm talking 700+ nodes. The ESXi clusters are no problem to get info from, because I use VMware's PowerCLI to supplement the lack of data I can get with the Nutanix cmdlets, but the AHV clusters are another story since I don't have PowerCLI to fall back on. Can we at least get a cmdlet that allows us to use PowerShell to run ncli or acli commands, like VMware does with the "Get-EsxCLI" cmdlet? That would be very helpful.
What is LCM? The Life Cycle Manager (LCM) tracks software and firmware versions of all entities in the cluster and is integrated into both Prism Element and Prism Central.
LCM Structure: LCM consists of a framework and a set of modules for inventory and update. LCM supports software updates for all platforms that use Nutanix software, and firmware updates for specific platforms. From Prism Element you can use LCM to update AHV, NCC, Foundation, BIOS, BMC, data drives, HBA controllers, SATA DOMs and M.2 drives (G6 and later). From Prism Central, you can update Calm, Epsilon, Karbon, and Objects.
When you run a firmware upgrade on multiple nodes, LCM updates one node at a time to prevent any downtime in your cluster. Before the upgrade starts, all the VMs on that node are migrated to another host and the node enters maintenance mode. Always make sure that your cluster can tolerate a node failure by confirming the data resiliency status is "OK" in Prism Element. For more information…
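One way to sanity-check that resiliency from the CLI before kicking off an LCM firmware run is shown below; a minimal sketch, assuming the standard ncli fault-tolerance query (the output format varies by AOS version).

# On any CVM: verify the cluster can tolerate the loss of one node
nutanix@cvm$ ncli cluster get-domain-fault-tolerance-status type=node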
What is Nutanix Guest Tools (NGT)? Nutanix Guest Tools (NGT) is a software-based in-guest agent framework which enables advanced VM management functionality through the Nutanix Platform. The solution is composed of the NGT installer, which is installed in the VMs, and the Guest Tools Framework, which is used for coordination between the agent and the Nutanix platform. The NGT installer contains the following components:
- Guest Agent Service
- Self-Service Restore (SSR), aka File-Level Restore (FLR), CLI
- VM Mobility Drivers (VirtIO drivers for AHV)
- VSS Agent and Hardware Provider for Windows VMs
- Application-consistent snapshot support for Linux VMs (via scripts to quiesce)
The framework is composed of a few high-level components:
- Guest Tools Service
- Guest Agent
The figure shows the high-level mapping of the components. Important notes: NGT uses TCP/IP network connectivity secured with SSL. The installation includes identifiers unique to the VM and the cluster, but you can pre-install NGT on a clone ba…
Hi, I have a customer with multiple clusters running Hyper-V 2016. We are noticing dropped RX packets in the clusters, to the extent that an RX dropped packets alert was generated in one of the clusters. We proceeded to upgrade the Intel NIC firmware as recommended by support, and after this the alert no longer occurs, but we still see dropped packets. Is it normal or okay for a cluster/NIC to have RX dropped packets?
Below are the top knowledge base articles for the month of March 2020.
- KB 4116 - Alert - A1187, A1188 - ECCErrorsLast1Day, ECCErrorsLast10Days
- KB 7503 - G6, G7 platforms with BIOS 41.002 and higher - DIMM Error handling and replacement policy
- KB 4141 - Alert - A1046 - PowerSupplyDown
- KB 1540 - What to do when /home partition or /home/nutanix directory is full
- KB 1113 - HDD/SSD Troubleshooting
- KB 4158 - Alert - A1104 - PhysicalDiskBad
- KB 4188 - Alert - A1050, A1008 - IPMIError
- KB 2090 - AHV | Host and Guest Networking
- KB 4519 - NCC Health Check: check_ntp
- KB 4409 - LCM (LifeCycle Manager) Troubleshooting Guide
- KB 8792 - NCC checks: same_hypervisor_version_check, duplicate_cvm_ip_check, same_timezone_check, esx_sioc_status_check, power_supply_check, orphan_vm_snapshot_check giving ERR
- KB 4541 - Alert - A101055 - MetadataDiskMountedCheck
- KB 2486 - NCC Health Check: cvm_mtu_check
- KB 2473 - NCC Health Check: cvm_memory_usage_check
- KB 3357 - NCC Health Check: ipmi_sel_cecc_check
- KB 4494 - …
I have an AHV VM for which I would like to disconnect the NIC and then reconnect it from my script. I need it disconnected while sysprep runs and sets a static IP address, and then I'm good to connect the NIC again. (Otherwise it pulls a DHCP address and replaces the hostname with the DHCP address in DNS.) I can use Get-NTNXVMNIC and see the current status of the NIC, but I don't see a way to change the status (i.e. I don't find a Set-NTNXVMNIC command). Thanks! -TimG
Below are new knowledge base articles published on the week of March 22-28, 2020.
- KB 8864 - LCM Pre-check test_hyperv_2019_support
- KB 8940 - LCM on HPE - SPP update compatibility matrix
- KB 8999 - NCC Health Check: copyupblockissue_check
- KB 9007 - LCM Darksite: Inventory success but fails to list the available KB versions
- KB 9014 - How to Create a Mapped or Network Drive from your Sandbox to a Utility Server
- KB 9122 - Download for AOS Bundles or any large files from Prism and Portal fails for specific customers
- KB 9132 - Alert - A1305 - Node is in degraded state
- KB 9137 - Overview of Memory related enhancements introduced in BIOS: 42.300 and BMC: 7.07 for Nutanix NX-G6 and NX-G7 systems
- KB 9138 - LCM on INSPUR - BIOS-BMC Compatibility matrix
Note: You may need to log in to the Support Portal to view some of these articles.
Dear all, I need to provide some reporting for my management. I'm using the REST API and Python to extract values. I managed to get host information (CPU, RAM) and physical storage without any problem. Now I need to get the logical storage summary, and I can't find the values via the REST API (the same values shown in the Storage Summary section on the Prism home page). It's probably a calculation, but I can't find the right one. May I ask you to provide the values and calculations? Thank you for your help. Best regards, Cedric
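As a hedged starting point, the cluster-wide usage counters behind that widget can be pulled over the v2 REST API; the endpoint and stat names below are what I would expect to see, so verify them against your cluster's actual JSON response, and the IP and credentials are placeholders.

# Pull cluster-level stats and inspect the usage_stats block
curl -sk -u admin:<password> "https://<cluster_ip>:9440/PrismGateway/services/rest/v2.0/cluster/" | python -m json.tool
# Counters such as usage_stats["storage.usage_bytes"] and usage_stats["storage.capacity_bytes"]
# are the raw byte values to convert (e.g. to TiB) for a summary-style report.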
Hello, we build our VM images using Packer and Ansible and upload them to different endpoints. The first time, I uploaded the images to Prism Central using API v3 and batch processing. Prism Central deploys the image to all the clusters that are registered. That works fine… BUT… HOW can I define the destination storage container on each cluster? I can define placement policies, etc., but I cannot assign a category or policy to a storage container. Prism Central always deploys the images to "SelfServiceContainer", but we don't want that.
SAS is the leader in analytics and Nutanix is the leader in invisible infrastructure. Nutanix has thousands of customers, and many of them already have SAS software running in their organization. They have experienced the benefits of invisible infrastructure and are moving more of their applications (including SAS) to Nutanix. It's easy to deploy and manage SAS 9.4 and SAS Viya on Nutanix. Today, SAS 9.4 helps discover insights, manage data and make analytics approachable. SAS 9.4 has been tested on Nutanix NX models, both as a hyper-converged infrastructure (HCI) and using Nutanix as back-end storage only for external hosts. Nutanix AHV clusters perform well in both scenarios. SAS software makes great demands of IT infrastructure, so you must get the design right to ensure a successful deployment. Evaluating the SAS I/O requirements accurately is pivotal. Nutanix encourages the involvement of Nutanix engineers to determine the back-end service requirements as well as the optimal imple…
Scheduled power outage? Relocating cluster hardware? If you need to shut down all the nodes in your AHV cluster, here's how.
For most maintenance tasks and upgrades we can keep the cluster up and the VMs running, but in some cases the whole cluster will need to be shut down. If you just need to power off a single node, a cluster of three or more nodes won't need to stop. To stop a single-node cluster, please see the section "Shutting Down a Single-node Cluster" in the NX and SX series hardware administration guide. To stop a single node in a larger cluster, see the section "Shutting Down a Node in a Cluster (AHV)". If there is going to be a site power outage, a full network outage, or physical relocation of the whole cluster, you will want to gracefully shut down the whole cluster. The full procedure is covered in the article Shutting Down an AHV Cluster for Maintenance or Relocation. In summary, the procedure is as follows:
1. Update NCC and perform a health check, then address any items of concern.
2. Shut down all the user VMs.
3. Stop any Nutanix Files cluster, if applicable.
4. At this point, no VMs other than the Controller VMs should be running…
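The command-line side of those steps looks roughly like this; a condensed sketch only, so follow the full article for your AOS version and note which prompt each command runs from.

# On any CVM: run the full health check and resolve any failures before proceeding
nutanix@cvm$ ncc health_checks run_all
# After all user VMs (and Nutanix Files, if present) are shut down: stop cluster services
nutanix@cvm$ cluster stop
# On each CVM: shut the CVM down gracefully
nutanix@cvm$ cvm_shutdown -P now
# On each AHV host: power off the host
root@ahv# shutdown -h now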
Let's say you ran the health checks on your cluster and received a failure for the component "cvm_name_check". What does it mean and how do you fix it? The NCC health check cvm_name_check ensures that any renamed CVMs (Controller VMs) conform to the correct naming convention, to avoid issues with certain operations that depend on identifying the CVM among the UVMs on the same host. The default Controller VM name is NTNX-<block_serial>-<position-in-block>-CVM. The display name of the Controller VM must always start with "NTNX-" and end with "-CVM". For more information, check out: https://portal.nutanix.com/#/page/kbs/details?targetId=kA00e000000XfCMCA0 To see how to modify the hostname of the Controller VM, check out: https://portal.nutanix.com/#/page/kbs/details?targetId=kA032000000TUjkCAG You followed the naming convention and the check is still showing a failure? This might be a false positive depending on your AOS version; contact Nutanix Support for verification.
You may have noticed that when adding disks to a node or replacing disks with larger-capacity ones, utilisation distribution between the disks does not occur immediately. You check on the cluster some time later and notice that the newly added disks still show minimal usage, much lower than expected. By default, the aim is to bring disk utilisation within a +/-7.5% spread of the tier utilisation. There are some things to consider when expecting a certain outcome:
- Disk balancing is not triggered unless the tier usage is at least 35%.
- Only 1 GB of data is moved during a Curator scan per node.
- Even if the tier usage is below 35%, should any disk usage across the cluster reach 70%, disk balancing takes place.
Disk balancing: disk balancing ensures data is evenly distributed across all disks in a cluster. In disk balancing, data is moved within the same tier to balance out the disk utilisation. This is different from ILM (Information Lifecycle Management), where data is moved between different tiers.
I would like to put together a report that breaks down in detail the storage used by VM snapshots, as well as the storage used by Protection Domains. I have been digging around and found a couple of scripts that display some useful information, but not quite what I'm looking for. Does anyone have an idea of how best to proceed?
When a Nutanix / vSphere cluster is deployed by Foundation, the recommended drivers are installed, but after some time you may want to check if there is a newer driver recommended. From the Nutanix perspective, we have covered this with an NCC health check, esx_driver_compatibility_check, so if you update NCC and run a health check, this check should tell you whether there is a later driver version qualified by Nutanix. To run the check from the CLI, use "ncc health_checks hypervisor_checks esx_driver_compatibility_check" from any CVM in the cluster. You may see a newer driver listed for your NIC hardware and ESXi version. A newer driver may not have been qualified yet by Nutanix and in some cases could cause issues for the cluster, so generally we recommend staying with the recommended drivers as identified by NCC.
Q. Does Nutanix support inline encryption?
Inline encryption is not currently supported on the Nutanix platform. However, Data-at-Rest Encryption (DARE) of two kinds is supported:
- Self-Encrypting Drives (SEDs): see Security Guide v5.16: Preparing for Data-at-Rest Encryption (SEDs) and Security Guide v5.16: Configuring Data-at-Rest Encryption (SEDs)
- Software-only data encryption: see Security Guide: Data-at-Rest Encryption (Software Only)
Q. How do I know that the data is encrypted?
More details about the encryption status and logs can be viewed via nCLI, REST APIs or PowerShell cmdlets. See KB-7846: How to verify that data is encrypted with Nutanix data-at-rest encryption.
Q. Is it possible to monitor the encryption?
Monitoring of the encryption state is done via Nutanix Cluster Checks (NCC), which generate an alert on any issue detected within the cluster. Please keep in mind that enabling encryption is a cluster-scope setting.
Q. What is the recommended sizing of the CV…
Below are new knowledge base articles published on the week of March 15-21, 2020.
- KB 8885 - Alert - A15039 - IPMI SEL UECC Check
- KB 9009 - AHV | No Intel Turbo Boost frequencies shown in the output of "cpupower frequency-info" command
- KB 9070 - [CSI] PVC Volumes Stuck in Pending State | Error: Secret value is not encoded using '<prism-ip>:<prism-port>:<user>:<password>' format
- KB 9071 - Configuring hypervisor after satadom replacement fails with phoenix 4.5.2
- KB 9085 - [Objects 2.0] Error creating the object store at the deploy step
- KB 9095 - Nutanix Files - FSVM expansion may fail with error "IP already in use"
- KB 9102 - How to identify plugged-in SFP module hardware details
- KB 9103 - WARN: Could not use proxy. URL Error <urlopen error [SSL: TLSV1_ALERT_INTERNAL_ERROR]
Note: You may need to log in to the Support Portal to view some of these articles.
In a Nutanix AHV cluster, the image service is used to index and manage ISO and virtual disk images for cloning to new VM disks or mounting to the virtual CD-ROM. With the addition of Prism Central 5.5 or later, this adds a global image service to manage these files across multiple clusters. When managing images from Prism Central, we will sometimes see an image show up on a cluster as "inactive". This means the metadata for the image exists but the file does not exist locally on that cluster. The article "Prism Central: Adding Images to Prism Central" gives a few options to remediate this condition when an image is needed on a certain cluster but is inactive. These methods are useful with Prism Central 5.5 and 5.10 versions. In Prism Central 5.11 we have added image placement methods to control where your images will be available. During image upload you can choose to select the individual clusters where the image should reside, or you can apply a category to the image to utilize an image placement policy.
What is Erasure Coding? Erasure coding increases the usable capacity of a cluster. Instead of replicating data, erasure coding uses parity information to rebuild data in the event of a disk failure. The capacity savings of erasure coding are in addition to deduplication and compression savings. If you have configured redundancy factor 2, two data copies are maintained. For example, consider a 6-node cluster with 4 data blocks (a b c d) configured with redundancy factor 2. [Figures: data copies before erasure coding; computing parity; data copies after computation of parity. The white text represents the data blocks and the green text represents the copies.]
Erasure Coding Best Practices and Requirements:
- A cluster must have at least four nodes, with each storage tier (SSD/HDD) represented, to enable erasure coding.
- Avoid strips greater than (4, 1), because capacity savings provide diminishing returns and…
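To put illustrative numbers on the savings (a rough sketch, not a sizing claim): with redundancy factor 2, those 4 data blocks consume 8 blocks of raw capacity, a 100% overhead. With a (4,1) erasure-coded strip, the same 4 data blocks plus 1 parity block consume 5 blocks of raw capacity, roughly a 25% overhead, so the same data occupies 5 blocks instead of 8 once the strip is encoded.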