How It Works
Have questions about how the Nutanix Platform works? Looking to get started? Start here!
Most of us are aware of what maintenance mode is. But how does its execution differ between ESXi and AHV on Nutanix? Well… it’s just a small tweak! When you try to put an ESXi host into maintenance mode, the operation will get stuck because the CVM cannot be migrated to another host; the CVM needs to be powered off before the action can complete. In the case of AHV, this is not necessary: once you instruct the host to enter maintenance mode, the user VMs are migrated to other hosts and the host enters maintenance mode regardless of whether the CVM is powered off. In short: “The CVM need not be powered off to put an AHV host into maintenance mode, unlike ESXi, where it must be.” Have any questions? Leave them below and let’s start a discussion! Check out the community post https://next.nutanix.com/how-it-works-22/cluster-maintenance-or-relocation-33391 for more on cluster maintenance or relocation.
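For reference, here is a rough sketch of the two flows from the command line; exact syntax can vary by AOS/AHV version, and <host-IP> is a placeholder. On AHV, from any CVM:
  acli host.enter_maintenance_mode <host-IP>   # user VMs are live-migrated off automatically
  acli host.exit_maintenance_mode <host-IP>    # when maintenance is done
On ESXi, you first shut the CVM down gracefully from within that CVM, then put the host into maintenance mode:
  cvm_shutdown -P now                                 # graceful CVM shutdown, run on the CVM itself
  esxcli system maintenanceMode set --enable true     # run on the ESXi host (or use vCenter)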
Whenever we hear the word cluster, the picture that comes to mind is multiple nodes. So doesn’t a single-node or two-node cluster sound weird? But it’s possible! In a traditional Nutanix cluster, at least 3 nodes are expected to form a cluster. There is, however, an option to form a single-node or two-node cluster for ROBO (Remote Office/Branch Office) implementations or as a backup site. Confused, or have questions regarding the implementation or the requirements/dependencies? Check out the guide for single-node and two-node clusters for clarification. To know more about single-node clusters: Data Protection and Recovery with Prism Element: Single-Node Replication Target Clusters; Prism Web Console Guide: Single-Node Clusters. For more information on two-node clusters: Prism Web Console Guide v5.16: Two-Node Clusters.
Below are new knowledge base articles published during the week of February 16-22, 2020.
- KB 8880 - Understanding 3rd Party Backup Integration
- KB 8996 - Self Service Restore Failing for Microsoft Storage Spaces Disk
Note: You may need to log in to the Support Portal to view some of these articles.
Logbay can auto-upload logs, but what is the IP address or DNS name so I can allow it through the firewall?
Your CVMs can upload log bundles directly to Nutanix for an open case, and this can speed case resolution, especially if the alternative is the two-step process of pulling multiple GiB over WiFi to download and THEN upload. Given common security restrictions on FTP or SFTP traffic leaving the datacenter, this may require explicit firewall rules. The logbay command to collect and upload logs looks like this: logbay collect --dst=sftp://nutanix -c <case_number> That destination “nutanix” isn’t helping us with the firewall rule, so where is the upload going? The software is really just automating the old manual methods described in this KB: Uploading Files for Nutanix Support Using FTP, SFTP or the Customer Portal. Referencing that article, we can see the DNS name for FTP uploads is ftp.nutanix.com. The IP address varies by region and might change, so I’d suggest using the URL. To allow log upload, you’ll need to allow port 22 for SFTP. You could also allow port 21 for FTP, but if you only want to open one port, SFTP on 22 is the safer choice.
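If you just want to confirm the firewall rule works before kicking off a collection, a quick reachability test from a machine on the same network segment (assuming nc/ncat is available) looks like this:
  nc -zv ftp.nutanix.com 22   # SFTP
  nc -zv ftp.nutanix.com 21   # FTP, only if you chose to allow it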
I run a 6-node Nutanix cluster and am planning to run some containers (LXC) inside a VM using a ZFS pool, to leverage easy cloning, migration, and snapshotting of the containers, mainly to transport those clones to another system (DR). I read that ZFS needs to deal directly with the disk hardware to be sure about what is or is not actually written to disk. Is ZFS suitable or not to run on top of the Nutanix platform, concerning the risk of data corruption or losing the pool, given that it can’t deal directly with the disk hardware? The hardware is Supermicro bundled with nutanix-enterprise.
Let’s say that a VM migration is required for data protection or as part of a disaster recovery. What are the scenarios and the recommended practices? Check out our documentation about data protection and disaster recovery; we wrote this guide for IT administrators and architects who want more information about the data protection and disaster recovery features built into the Nutanix Enterprise Cloud. Data protection is a huge subject with a lot of branches, so let’s focus on one small component: migrating VMs from one location to another. The migration could be done between Nutanix hosts, between Nutanix clusters, or from non-Nutanix storage to Nutanix storage. The simplest scenario is migrating VMs from one host to another in the same cluster: the VMs will migrate automatically after you place the host into maintenance mode following the recommended practice for the host’s hypervisor. The only VM that would stay on that host is the Controller VM (CVM), which is responsible for serving that node’s storage I/O.
Let's say you have modified the license assignments, removed a node from the cluster and you got an alert in Prism stating that there is no current license for the cluster. You can reclaim and optionally re-apply licenses for nodes in your clusters. This procedure describes how to reclaim licenses where the cluster is not configured with Portal Connection or the cluster is not connected to the internet (also known as dark-site clusters). After you remove a node, you can also move the node to another cluster. More info. You must un-license your cluster when you plan to destroy a cluster. More info Return licenses to your inventory when you remove one or more nodes from a cluster. You can reclaim licenses for nodes in your clusters in cases where you want to make modifications or downgrade licenses. More info You do not need to reclaim Starter licenses for Nutanix NX Series platforms. These licenses are automatically applied whenever you create a cluster. For more information about rec
Customers can observe cluster issues when they use more than 90 percent of the total available storage on the cluster. For node resiliency and redundancy, disk usage must stay below the Max Usable value. One of the most common alerts is "Cluster can not tolerate # node failure(s)". This alert is fired by the NCC health check "sufficient_disk_space_check", which checks whether there is sufficient storage space on the cluster to provide node resiliency. See KB 1863 for more details. Share your comments in this forum!
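To run just this check yourself from a CVM (check path as referenced in KB 1863; it may differ slightly between NCC versions):
  ncc health_checks system_checks sufficient_disk_space_check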
Everyone who has used, maintained, or supported an NX solution has come across the term “Foundation”. So what exactly is Foundation? Let’s take a scenario where you bought a new NX solution (YAYY!!), but what now? Does it have AOS or the hypervisor installed? How can you create a cluster using the new nodes? Do you install anything on the node before you try to add it to an existing cluster? The answer to all these questions lies in the tool called Foundation, which is used to do a field installation: installing a hypervisor and the Nutanix Controller VM on a node before creating a new cluster or adding the node to an existing cluster. Field installation can be performed on either factory-prepared nodes or bare metal. So are there any guidelines for using Foundation? Of course; go through the link below to learn more: Field Installation Guide. Want to know more about Foundation? The following links might help you to understand it better.
Every architecture stays stable thanks to scheduled maintenance, during which we might need to shut down a node or CVM to add memory, replace network adapters, or perform some other planned maintenance activity. In Nutanix, we often hear the term “CVM maintenance mode”. So what does it mean? Why do we need to put the CVM in maintenance mode if we are planning to bring it down? All the CVMs in the cluster interact with each other using microservices that handle cluster stability, so we need to inform the cluster when a CVM is going down and shut down all the services in the CVM gracefully before powering it off. To take a CVM out of any active storage and metadata path, we enable maintenance mode on the CVM. So what is host maintenance mode? Host maintenance mode is used to safely migrate all the user VMs off the host and make sure NO VMs are running on the node. If a VM can’t be migrated to another host, you need to shut down the VM for the host to enter maintenance mode.
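As a rough sketch of the CVM side (the exact workflow for your AOS version is in the docs, and <host-id> is a placeholder taken from the host list):
  ncli host list                                             # note the Id of the target host
  ncli host edit id=<host-id> enable-maintenance-mode=true   # put the CVM into maintenance mode
  ncli host edit id=<host-id> enable-maintenance-mode=false  # bring it back afterwards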
Let's say that you have purchased a new node and you wish to make it part of your existing cluster. How would you do that? A cluster is a collection of nodes. You can add new nodes to a cluster at any time after physically installing them and connecting them to the network on the same subnet as the cluster. The cluster expansion process compares the AOS version on the existing and new nodes and performs any upgrades necessary for all nodes to have the same AOS version. The process for adding a node varies depending on the AOS version, hypervisor type (AHV, ESXi, Hyper-V, or Citrix Hypervisor), data-at-rest encryption status, and certain hardware configuration factors. Here is the guide to expanding a cluster, which explains the expansion process and the pre- and post-checks: expanding a cluster. Before you start, it's better to read that doc before attempting to add a node, since the process varies depending on several factors and considerations based on your AOS version.
Let’s say you run the NCC health check in your cluster from a CVM and it comes back clean, other than this one INFO check regarding default_password_check:

Running : health_checks system_checks default_password_check [==================================================] 100%
/health_checks/system_checks/default_password_check [ INFO ]
------------------------------------------------------------------------
Detailed information for default_password_check:
Node x.x.x.x:
INFO: One or more IPMI devices are still using the default password
Refer to KB 6153 (http://portal.nutanix.com/kb/6153) for details on default_password_check or Recheck with: ncc health_checks system_checks default_password_check
+---------------+-------+
| State         | Count |
+---------------+-------+
| Info          | 1     |
| Total Plugins | 1     |
+---------------+-------+

Essentially, this check verifies whether any CVMs (Controller VMs), hosts, IPMIs, or Prism Central (PC) instances are still using the default credentials. You know it's not something to worry about immediately, but default credentials are worth changing.
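If you do want to clear the INFO, KB 6153 walks through changing the IPMI password per platform; on many platforms something along these lines from the host works (a sketch, not platform-specific guidance; <user-id> and <new-password> are placeholders):
  ipmitool user list 1                                  # find the user id of the default account
  ipmitool user set password <user-id> <new-password>   # set a non-default password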
Let’s take a moment here to accept that we have all had questions regarding the AOS or hypervisor versions on the cluster and on the node to be added during cluster expansion. These questions arise if the new node is imaged before being added to the cluster. As we like to say: “With great expansions come not-so-great dependencies.” There are 3 common scenarios:-
- Same AOS and hypervisor version: all good, no action required... The node is just a click away from the cluster.
- Same AOS, different hypervisor: re-image the node before adding it to the cluster.
- Different AOS, same hypervisor: two sub-cases here:
  - Cluster AOS version < node AOS version: upgrade AOS on the cluster and it becomes scenario 1.
  - Cluster AOS version > node AOS version: you can upgrade AOS on the node using the CLI command mentioned in the doc at the end.
To get a good idea of the above scenarios and things to consider while expanding the cluster, take a look at this document.
Let's say that you decided to adopt the container deployment approach by deploying a Kubernetes cluster. Here is a brief explanation of Kubernetes and its integration with Nutanix. Karbon orchestrates Kubernetes clusters to simplify the provisioning and management of containerized applications. Kubernetes packages applications in their own dedicated containers together with all of the required operational components for running the application. Containers, which run inside pods on top of nodes, are the core building block of the Kubernetes architecture. Containerized applications are simple to manage, easy to deploy, and portable, as they are abstracted from the OS of the host. Since different containers share an operating kernel, they do not require as much compute capacity as a VM, making them "lightweight". Using Karbon to manage Kubernetes operations requires a basic familiarity with key Kubernetes concepts. I made a diagram to illustrate a Kubernetes-Nutanix flow; see the original post for the image.
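As a loose sketch of getting from Karbon to kubectl (flag names may vary by karbonctl version; <pc-ip>, <user>, and <cluster> are placeholders):
  karbonctl login --pc-ip <pc-ip> --pc-username <user>
  karbonctl cluster kubeconfig --cluster-name <cluster> > ~/.kube/config
  kubectl get nodes   # verify the cluster nodes are Ready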
Let’s say you need to increase the CVM memory, depending on the workload or to enable certain AOS features. You can increase the memory reserved for each Controller VM in your cluster by using the 1-click Controller VM Memory Upgrade available from the Prism web console. To increase the memory, follow these steps:-
- Log in to the Prism console.
- Click on the gear icon.
- Select “Configure CVM”.
- Select the target CVM memory allocation > Apply.
Be aware that resizing memory for a Controller VM requires a reboot as part of the process. But there’s nothing to worry about, as the reboot happens in a rolling fashion, meaning only one CVM reboots at a time, ensuring no production impact. If a Controller VM was already allocated more memory than your choice, it remains at the same amount, while CVMs with less memory than the choice will be increased. Example:- Let’s say you have 4 CVMs, 2 with 20 GB of memory and 2 with 32 GB, and you want to increase the 20 GB ones to 28 GB. After you apply 28 GB, the two 20 GB CVMs are increased to 28 GB, while the two 32 GB CVMs stay at 32 GB.
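A quick way to verify the result across the cluster afterwards (allssh is the helper available on CVMs):
  allssh "free -h"   # shows total memory on every CVM in the cluster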
What if one day you wake up and decide to give AHV a shot? But wait… you are running ESXi. Hmm… if only there were a way to convert ESXi to AHV. Well… there is: a feature called “in-place hypervisor conversion” to save the day. But what about the VMs? All the VMs running in the ESXi cluster are converted so that they can run on the AHV cluster. The enhancements to this feature are:-
- Decreased VM downtime. The state of the VM is preserved and the VM is brought back into the same state post-conversion.
- Preservation of the MAC addresses of the VM NICs after conversion.
- The Prism console is responsive during the conversion process. However, Prism goes into a read-only state.
You can check out the requirements and limitations for in-place hypervisor conversion here. Finally ready to give AHV a run after going through the requirements? To start the conversion, follow the document here. Have any questions? Drop a comment and let us start a discussion.
Receiving emails about EOL but have no clue what it means? Once a product is EOL (end of life), no further upgrades/updates will be released for it and it will no longer be supported by Nutanix. This may include AOS versions, Prism Central, Nutanix Files, or supported hardware platforms. To view the EOL information, navigate as follows:-
- Log in to the Nutanix Support Portal.
- Menu > Documentation.
- EOL Information.
- Select the entity you want to see the information for.
Don’t want to follow the above steps and like shortcuts?… Click here then. Have any questions? Drop a comment and let us start a discussion.
So you are planning to configure a VM-VM anti-affinity policy in your AHV environment to ensure critical VMs run on different hosts. That’s thoughtful thinking right there, and it makes sense. Now, if you create this policy while the VMs are off, then when you power them on you’ll see the policy kick in and the VMs will be running on different hosts. But… what if the VMs were powered on when you applied the policy? Do you have to restart the VMs? The answer is no. After the policy is created, Acropolis Dynamic Scheduling (ADS) will migrate the VMs running on the same host to different ones. This will not happen immediately, though; it takes a while for ADS to start migrating these VMs. So relax: you created the policy, and ADS will take care of it. Want to know how to configure a VM-VM anti-affinity policy, or how to delete it? Check it out here. Have any questions? Drop a comment and let's start a discussion.
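The acli flow looks roughly like this (group and VM names are placeholders; see the linked doc for the authoritative steps):
  acli vm_group.create <group-name>
  acli vm_group.add_vms <group-name> vm_list=<vm1>,<vm2>
  acli vm_group.antiaffinity_set <group-name>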
Below are new knowledge base articles published during the week of February 9-15, 2020.
- KB 5512 - REST API: Alerts data placeholders are not populated
- KB 8803 - How to change ESXi Host TLS version running on Nutanix cluster
- KB 8812 - Moving Nutanix nodes to chassis of different generation
- KB 8855 - How to change Virtual Machine video card memory in AHV
- KB 8897 - Nutanix Collector "We couldn't find this IP address or host name"
- KB 8916 - LCM operation may timeout when performing upgrades in ESXi environments
- KB 8924 - How to map Nutanix Volumes disk from Linux VM to vdisk
- KB 8942 - karbonctl commands with --pc-username and --pc-password flags fail with 'no consumer: "text/html; charset=iso-8859-1"' error
- KB 8951 - Alert - A160068 - AFSDuplicateIPDetected
- KB 8955 - RMA: Return Instructions (APAC)
- KB 8962 - Unable to browse containers using WinSCP when it contains files name with non-English characters
- KB 8963 - PRISM displays the wrong speed for the NICs faster than 10G
Note: You may need to log in to the Support Portal to view some of these articles.
Every modern architecture solution requires data-saving and optimization techniques to manage workloads efficiently and optimally. Nutanix HCI provides various techniques for storage optimization based on your workloads. Let us take a look at these techniques to understand how they work and their best practices.
1. Compression. There are two types of compression available: post-process compression and inline compression. Want to know how compression works and how to enable it in your cluster? Compression Guide
2. Deduplication. Deduplication reduces space usage by consolidating duplicate data blocks on Nutanix storage. Want to know more about the different types of deduplication and the best practices for using it? Deduplication
3. Erasure Coding. Erasure coding increases the usable capacity on a cluster. Instead of replicating data, erasure coding uses parity information to rebuild data in the event of a disk failure. The capacity savings of erasure coding come in addition to deduplication and compression savings.
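To see which of these savings are enabled per storage container, one simple check from a CVM (output fields vary by AOS version):
  ncli ctr ls   # lists containers with their compression, dedup, and erasure-coding settings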
Let's say that you upgraded your cluster and noticed that the timezone of your AHV host has changed. Why, and what is the recommended practice? When running a health check, you will see a warning stating that "The AHV host's time zone was changed/set to something other than UTC." Nutanix recommends against making any configuration changes on the AHV hosts, which includes changing the timezone. From AHV version 20170830.58, the default timezone on AHV hosts is set to UTC upon upgrade or installation; for more information, check out KB-6834. In this case, no further action is required from your side. To edit the timezone of a user VM, check out KB-3134. To edit the timezone of a CVM, check out KB-1050.
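To confirm what a given AHV host is actually set to, run this on the host (timedatectl is standard on the CentOS-based AHV):
  timedatectl | grep "Time zone"   # expect UTC on current AHV versions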
In our environment we are currently hosting 2 HP Gen9 ESXi servers. Our plan is to purchase a single HP Gen10 and build a 3-node cluster out of them. To start with, I want to create a single node (Gen10) and slowly start moving VMs from the ESXi hosts to that Nutanix host without interrupting the 2 ESXi servers. So my question here is: how is it possible for me to start with a single host and keep adding hosts to make up a cluster of 3 nodes (1 x Gen10, 2 x Gen9)?
Ever thought a lot about the advantages of UEFI over legacy BIOS? No? Who does that anyway? To start with, UEFI firmware is the successor to legacy BIOS firmware; it supports larger hard drives, faster boot times, and more security features. Creating and starting VMs with UEFI firmware provides the following advantages:-
- Faster boot time
- Avoids legacy option ROM address constraints
- Includes robust reliability and fault management
- Uses UEFI drivers
From AOS 5.11 onwards, you can see the boot configuration from the Prism UI by following the steps below:-
- Navigate to the VM page -> select your VM.
- Click on Update.
- Under Boot Configuration, you can see whether it's Legacy BIOS or UEFI.
Want to know more about UEFI support for VMs in Nutanix? Click here. Have any questions? Drop a comment and let us start discussing.
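On the CLI side, UEFI can also be toggled per VM with something like the following (a sketch; flag casing may vary by AOS version, and <vm-name> is a placeholder):
  acli vm.update <vm-name> uefi_boot=true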