CVM Resources
We have a 4-Host Dell Bundle.



Per Host:


  • 2 x Xeon CPU E5-2620 v3 @ 2.40GHz
  • 256GB RAM
  • 4.4 TB of storage, 400 GB of which is SSD
We got this error after running the NCC checks:



Node 10.0.0.58: FAIL: CVM memory 16433468 kB is less than the threshold 24 GB
Node 10.0.0.55: FAIL: CVM memory 16433468 kB is less than the threshold 24 GB
Node 10.0.0.57: FAIL: CVM memory 16433468 kB is less than the threshold 24 GB
Node 10.0.0.56: FAIL: CVM memory 16433468 kB is less than the threshold 24 GB
Refer to KB 1513 for details on cvm_memory_check

Detailed information for ldap_config_check:
Node 10.0.0.56: INFO: No ldap config is specified.
Refer to KB 2997 for details on ldap_config_check

Detailed information for cvm_numa_configuration_check:
Node 10.0.0.58: INFO: Number of vCPUs (8) on the CVM is more than the max cores (6) per NUMA node.
Node 10.0.0.55: INFO: Number of vCPUs (8) on the CVM is more than the max cores (6) per NUMA node.
Node 10.0.0.57: INFO: Number of vCPUs (8) on the CVM is more than the max cores (6) per NUMA node.
Node 10.0.0.56: INFO: Number of vCPUs (8) on the CVM is more than the max cores (6) per NUMA node.
Refer to KB 3034 for details on cvm_numa_configuration_check

Detailed information for vm_checks:

I left everything as default from Dell. I thought about calling Dell about this config, but I thought I might prefer to hear it from the horse's mouth.
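For reference, this came from a full NCC run. If it helps, I believe the individual checks can also be re-run on their own along these lines (exact module paths may vary by NCC version):

    nutanix@cvm$ ncc health_checks run_all
    nutanix@cvm$ ncc health_checks system_checks cvm_memory_check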



1. These are 6-core processors, so wouldn't the CVMs be better configured with 6 vCPUs, so that an instruction doesn't have to wait on 6 cores from one processor plus 2 from the next?



2. What might be the optimum memory settings for this cluster? Here are the current settings:




  • Replication Factor 2
  • Compression On
  • Compression Delay 0 Min.
  • Compression Space Saved 1.3 TiB
  • Compression Ratio 1.53 : 1
  • Performance Tier Deduplication On
  • On Disk Deduplication Off
  • On Disk Dedup Space Saved -
  • On Disk Dedup Ratio 1 : 1
The default CVM config is 8 vCPUs and 16 GB of memory.



NCC runs per Nutanix best practices to report on the cluster's condition.

Since compression and dedupe are enabled, you're better off assigning 24 GB to each CVM.
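On ESXi that's a quick edit in the vSphere client, or via PowerCLI along these lines (the CVM name below is just an example; with RF2 the cluster tolerates one CVM down, so power off one CVM at a time and wait for the cluster to report healthy before moving to the next):

    PowerCLI> Get-VM -Name "NTNX-*-CVM" | Select-Object Name, NumCpu, MemoryGB
    PowerCLI> Set-VM -VM "NTNX-Block1-A-CVM" -MemoryGB 24 -Confirm:$false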
I'm curious what Nutanix might do to optimize for a hexa-core processor. I understand that the default is 8 vCPUs; the question is about system optimization. Is there any insight into this?
The hypervisor's virtual processor scheduling algorithms will optimize CPU utilization for the CVM.

The CVM is just like any other VM running on the hypervisor.

CPU core utilization changes dynamically all the time.
Hi there, thanks for reaching out.



I'll add a bit of color here.



RE: Memory

NCC is flagging the memory because you've got perf-tier dedupe on, for which the recommendation is 24 GB of CVM memory per host.



Inline compression is fantastic and only requires 16 GB of memory, so no worries there.



RE: CPUs

Super, super, super long story; the elevator version is that for the longest time our "default" has been 8 vCPU CVMs, but we don't RESERVE all 8 vCPUs, so the hypervisor scheduler will prioritize them alongside "user" VMs (servers, etc.).
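If you're curious how much of the CVM's CPU is actually reserved on your own hosts, PowerCLI will show it; something like this (the CVM name is just an example):

    PowerCLI> Get-VM -Name "NTNX-Block1-A-CVM" | Get-VMResourceConfiguration |
              Select-Object VM, CpuReservationMhz, CpuSharesLevel, MemReservationMB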



That NCC check was actually partially driven by me, and to be frank, it went out the door sooner than we wanted. A future NCC release (likely 2.2.2, coming out shortly) will disable this check to prevent confusion in the field.



That said, going forward, a future, future NCC/NOS release will do the following:

  • Turn that check back on
  • Give proper guidance on CVM sizing based on the hardware platform
  • Have defaults for "new" imaging take this guidance into account, so it will be good from the jump on new systems; existing systems would need a bit of tweaking, of course.







Bottom line: either turn off perf-tier dedupe (or increase the CVM to 24 GB) to get rid of the first warning, and ignore that CVM NUMA check for now, until the new guidance comes out.
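If you go the turn-it-off route, perf-tier dedupe (fingerprint-on-write) is a per-container setting in ncli, roughly like this (container name is just an example, and exact flag syntax may vary by NOS version):

    nutanix@cvm$ ncli container ls
    nutanix@cvm$ ncli container edit name=default-ctr fingerprint-on-write=false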
I have plenty of memory, so I will take it to 24 GB.



However, before ever seeing the NUMA check, I did think it odd to have the CVM require 8 vCPUs on a system where the processors are 6-core. My understanding of VMware is that in order for an instruction to pass to the CPU, 8 cores would need to be available, and that until they are, the VM has to wait. Am I incorrect in this understanding?
You're thinking of co-scheduling and co-stop, which really isn't nearly as big of a problem in 5.5 and 6.0 as it was in the ESX 3.x days.
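Easy enough to verify on your own hosts, by the way: esxtop's CPU view has a co-stop column, and anything persistently above a few percent is worth a look.

    ~ # esxtop
    # press 'c' for the CPU view, then watch the %CSTP column per VM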





The issue here is "NUMA width" rather than co-stop, which is less of a CPU scheduling issue and more of a memory scheduling issue: you could, in theory, have CPU threads addressing memory across the QPI link between processors, which isn't a death sentence, it's just less than ideal.
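You can actually watch that locality in esxtop too; the memory view has NUMA stats showing each VM's home node and what fraction of its memory is node-local:

    ~ # esxtop
    # press 'm' for the memory view, 'f' to toggle fields and enable NUMA STATS,
    # then check NHN (home node) and N%L (percent of the VM's memory that is local)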



All of this NUMA stuff is about optimizing as much as possible. You'll be fine with your config now, and you'll likely see a change in guidance after a future release, once we finalize some engineering and QA efforts on making the footprint smaller, which should address NUMA width on 6-core systems.
Thanks, that was very useful info!
One more question regarding this:



I was about to start setting the CVMs' memory to 24 GB; however, when I looked at Active RAM, the CVMs are all in the 4-5 GB range. Is 24 GB really the best?
Active memory will depend on how hard you are driving your workloads. It might not be very high now, but during times of higher utilization, as you add more workload and demand, it will go up.



Past that, keep in mind that we'll keep that memory as full as possible, to promote cache hits by keeping as much data in there as we can.
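You can see this from inside a CVM, for what it's worth; vSphere's Active stat is only a sampled estimate, and most of the "missing" memory shows up as cache rather than free:

    nutanix@cvm$ free -m
    # expect a low 'free' number and a large 'cached' column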