
Episode 1: Acropolis Hypervisor: VM HA Part 2

14 October 2015
This post was authored by Manish Lohani, Director of Product Management

VM-HA Functionality:




A node failure in a hypervisor cluster takes down the virtual machines running on it, causing a service outage. The VM-HA feature reduces this unplanned downtime by automatically restarting critical VMs on other healthy nodes within the cluster.

Failure modes:
VM-HA declares a host failure when any of the following failure conditions occurs:

  • Node power loss
  • Node hardware failure
  • Physical network card failure
  • Network cable failure
  • Node hypervisor crash
  • Top of rack switch failure

During Nutanix (storage) cluster creation, you specify the number of simultaneous node failures the (storage) cluster should be able to tolerate without any impact to your data. The distributed storage layer uses this information to determine the replication factor (RF), i.e., the number of data copies it needs to keep.
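To make that mapping concrete, the usual relationship is one more data copy than the number of failures to survive. The snippet below is only an illustrative sketch, not Nutanix source code:

```python
def replication_factor(tolerated_node_failures: int) -> int:
    """Number of data copies the storage layer keeps so that the cluster
    survives the given number of simultaneous node failures
    (illustrative sketch, not Nutanix source code)."""
    return tolerated_node_failures + 1

print(replication_factor(1))  # 2 -> RF2: survives one node failure
print(replication_factor(2))  # 3 -> RF3: survives two node failures
```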

In the hyperconverged architecture on AHV, the storage cluster and the hypervisor cluster consist of the same nodes. Hence, this same storage cluster configuration is also used by VM-HA to determine the amount of spare capacity to reserve when VM-HA with reservation is enabled.

Admission Control:

When VM-HA is explicitly enabled, the software reserves failover capacity and also enables admission control to ensure that HA is available to all powered-on VMs at all times. Admission control fails new VM power-on requests if the cluster does not have sufficient capacity to provide HA for the new VMs.
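As a rough illustration (a simplified sketch, not the actual implementation, with all names and numbers made up), the admission decision boils down to checking whether enough unreserved capacity remains for the new VM:

```python
def admit_power_on(total_cluster_memory_gb: float,
                   reserved_failover_gb: float,
                   powered_on_vm_memory_gb: list[float],
                   new_vm_memory_gb: float) -> bool:
    """Simplified, memory-only admission check: allow the power-on only if
    the HA reservation can still be honored afterwards (illustrative)."""
    in_use = sum(powered_on_vm_memory_gb)
    headroom = total_cluster_memory_gb - reserved_failover_gb - in_use
    return new_vm_memory_gb <= headroom

# 512 GB cluster, 128 GB reserved for failover, 300 GB already powered on:
print(admit_power_on(512, 128, [100, 100, 100], 64))  # True  (84 GB headroom)
print(admit_power_on(512, 128, [100, 100, 100], 96))  # False (only 84 GB left)
```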

Admission control is based only on the available memory in a cluster, and the following two policies are supported:

1. Reserved Host:

  • A host within the cluster is selected as a “spare host” for failovers. The software automatically picks the spare host and, if needed, evacuates it by live-migrating VMs off it. If a host fails, the VMs running on the failed host are restarted on the spare host.

2. Reserved Segments:

  • There is no single dedicated “spare host” for failovers. Instead, the reserved capacity is distributed across the nodes within the cluster. The software decides how to distribute the reservation across nodes using some fairly sophisticated algorithms (a detailed follow-up post will cover this topic).

The “Reserved Host” policy is simple to implement, but it has the drawback that the CPU resources of the spare node go unused. The “Reserved Segments” policy distributes the reservation across all nodes, so every node is utilized; however, distributing the reservation leads to some capacity wastage due to fragmentation.
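To make the trade-off concrete, here is a deliberately simplified model of the memory overhead of each policy. The function names, the rounding rule, and the idea of a fixed segment size are assumptions for illustration; the actual reservation algorithm is more sophisticated:

```python
import math

def reserved_host_overhead_gb(host_memory_gb: list[float],
                              worst_case_failover_gb: float) -> float:
    """Overhead of the "Reserved Host" policy: one whole host is kept empty
    as the spare, so all of its memory (and its CPU) is unavailable for
    running VMs. The spare must be large enough to absorb the worst-case
    failover demand."""
    candidates = [m for m in host_memory_gb if m >= worst_case_failover_gb]
    return min(candidates)  # smallest host that can serve as the spare

def reserved_segments_overhead_gb(num_hosts: int,
                                  worst_case_failover_gb: float,
                                  segment_gb: float) -> float:
    """Overhead of the "Reserved Segments" policy: the worst-case failover
    demand is spread across the surviving hosts, and each host's share is
    rounded up to whole segments, which models the fragmentation waste."""
    survivors = num_hosts - 1
    per_host_share = worst_case_failover_gb / survivors
    rounded_share = math.ceil(per_host_share / segment_gb) * segment_gb
    return rounded_share * survivors
```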

Selecting an admission control policy for VM-HA to minimize failover overhead while maximizing availability can quickly become a complex decision as new VMs of various sizes and additional nodes of different sizes are added to a cluster over time.

Nutanix software simplifies this decision for you by automatically picking the optimal admission control policy for a given set of nodes and VMs at any time. It does so by performing a cost-benefit analysis and picking the best option based on various factors. The heuristics behind this analysis can be summarized as follows:

When all the nodes in a Nutanix cluster have the same amount of memory, the “Reserved Host” policy leads to the least overhead for failover capacity in most cases and is preferred. When some nodes in a cluster have more memory than others, the “Reserved Segments” policy may lead to lower overhead, depending on the sizes of the VMs protected by VM-HA, and is then preferred.
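A back-of-the-envelope example of that heuristic, using the same simplified overhead model as above (the host sizes, the ~240 GB and ~128 GB failover demands, and the 32 GB segment size are all made-up numbers):

```python
import math

# Uniform, busy cluster: four 256 GB hosts, worst-case failover demand ~240 GB,
# reservation fragmented into 32 GB segments (roughly the largest HA VM size).
host_policy_uniform = 256                                   # one whole host sits idle
segments_policy_uniform = 3 * math.ceil(240 / 3 / 32) * 32  # 3 survivors x 96 GB = 288
# -> "Reserved Host" (256 GB) has the lower overhead here.

# Mixed cluster: one 512 GB host plus three 256 GB hosts, but only ~128 GB of
# HA-protected VM memory on the busiest host (small VMs, light load).
host_policy_mixed = 256                                     # smallest host that fits the demand
segments_policy_mixed = 3 * math.ceil(128 / 3 / 32) * 32    # 3 survivors x 64 GB = 192
# -> "Reserved Segments" (192 GB) has the lower overhead here.

print(host_policy_uniform, segments_policy_uniform)  # 256 288
print(host_policy_mixed, segments_policy_mixed)      # 256 192
```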

Under the hood, VM-HA gracefully handles all the usual failure scenarios, such as network partitions, host isolation, and network flapping, to provide a robust high-availability solution for your VMs.
