Can I migrate VMs to another host when cluster memory usage is at 90%?

Currently, the Nutanix cluster consists of three nodes.

Prism Element shows 90% cluster memory usage.

The Data Resilience status shows "OK".

 

I'm trying to add more memory to the node in this environment.

If I put a node into maintenance mode for the memory DIMM expansion, the VMs on that node will be migrated to the other nodes.
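For reference, I understand the node would be put into maintenance mode from a CVM with something like the following on AHV (the exact syntax may differ by AOS version), and taken out of maintenance mode again after the DIMM upgrade:

nutanix@cvm$ acli host.enter_maintenance_mode <host IP>
nutanix@cvm$ acli host.exit_maintenance_mode <host IP>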

 

With average memory usage over 90%, will the migration succeed?

The Fault Tolerance values under Data Resilience are all "1", but I'm not sure what that means for this situation.
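For reference, the cluster's fault tolerance level can also be checked from a CVM with something like this (command quoted from memory, so please verify it for your AOS version):

nutanix@cvm$ ncli cluster get-domain-fault-tolerance-status type=node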

 

I wonder if live migration is possible in this environment, and if not, do I need more nodes or downtime?

 

Thank you to everyone who answered.

The customer environment has an average memory usage of 90% for all hosts.

Therefore, we plan to either schedule downtime or add nodes, as you advised.

Thank you again!


Hi, just to confirm: does your cluster have 90% RAM usage across all hosts, or just one of them? If it is across all hosts, I would definitely prepare for some VM downtime to accomplish any maintenance on this cluster.

 

If it is just one host, what is the utilisation on the other nodes, and based on that, is there sufficient available resource on them to accept the workloads from the hot node?

If you don’t need another node (i.e. it is just RAM that is constrained), then you should arrange a maintenance window to do the upgrades and power off some or all of your VMs first. You may be able to identify non-critical workloads across all three hosts that, when powered off, give you sufficient headroom to keep everything else up during the maintenance operations, but you will need to be sure this is the case, otherwise some of the VMs may not migrate.
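For reference, per-host memory utilisation can be checked in Prism under Hardware > Table, or pulled from the REST API with something like the following (endpoint quoted from memory, so verify it against the API explorer for your AOS version):

curl -k -u admin https://<prism-ip>:9440/PrismGateway/services/rest/v2.0/hosts

The response includes each host's memory capacity and usage statistics, which you can use to judge whether the remaining nodes can absorb the hot node's VMs.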


Sounds to me like it is your Prism Element leader CVM that is at 90% memory utilization.
If the Resilience status is OK, then either you do not actually have 90% usage, or you have enabled HA reservation, in which case the cluster reserves RAM so it can tolerate one host failure (N+1).
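For reference, the HA configuration can be checked from any CVM with something like this (exact syntax may vary by AOS version):

nutanix@cvm$ acli ha.get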
 

What is the alert number given by Prism? Give support a call; they can help you stop the Prism service on the elected leader so that one of the other two CVMs is elected as the new leader, which can help offload the RAM utilization of the affected CVM.
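For reference, the procedure support will typically walk you through looks roughly like this, run on the current Prism leader CVM (please only do this with support on the line):

nutanix@cvm$ genesis stop prism
nutanix@cvm$ cluster start

Stopping Prism on the leader forces a new leader election on another CVM; "cluster start" then brings the stopped service back up on the original CVM.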

But if you really are at 90% cluster RAM usage, you will need to shut down VMs for maintenance, or in the event of a hardware failure. In that case you will need to invest in additional nodes or decrease the RAM allocation of your guest VMs.
 

