The default CVM config uses 8 vCPU and 16GB of memory.
NCC runs per Nutanix best practice to report on cluster condition.
Since compression and dedupe are on, it's better to assign 24GB to the CVM.
I'm curious what Nutanix might do to optimize for a hexa-core processor. I understand that the default is 8 vCPU; however, the question is about system optimization. Is there any insight into this?
The hypervisor's virtual processor scheduling algorithms will optimize CPU utilization for the CVM.
The CVM is just like any other VM running on the hypervisor.
CPU core utilization changes dynamically all the time.
Hi there, thanks for reaching out.
I'll add a bit of color here.
RE: Memory
NCC is flagging the memory because you've got performance-tier dedupe on, which calls for 24GB of CVM memory per host.
Inline compression is fantastic and only requires 16GB of memory, so no worries there.
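To make the sizing rule concrete, here's a minimal sketch of the logic (the function name and structure are my own illustration, not a Nutanix tool; the 16GB/24GB figures are the ones above):

```python
# Hypothetical helper encoding the CVM memory guidance from this thread.
# The function itself is illustrative only, not part of any Nutanix product.

def recommended_cvm_memory_gb(perf_tier_dedupe: bool) -> int:
    """Return the recommended CVM memory per host, in GB."""
    base_gb = 16  # default CVM memory; inline compression fits within this
    if perf_tier_dedupe:
        return max(base_gb, 24)  # perf-tier dedupe calls for 24GB per host
    return base_gb

print(recommended_cvm_memory_gb(perf_tier_dedupe=True))   # 24
print(recommended_cvm_memory_gb(perf_tier_dedupe=False))  # 16
```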
RE: CPUs
Super, super, super long story; the elevator version is that for the longest time, our "default" has been 8 vCPU CVMs, but we don't RESERVE all 8 vCPUs, so the hypervisor scheduler prioritizes the CVM alongside "user" VMs (servers, etc.).
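If you want to see this for yourself, a quick pyVmomi sketch along these lines will show the CVM's vCPU count next to its (partial) CPU reservation. The vCenter hostname and credentials are placeholders for your environment:

```python
# Sketch: list vCPU count and CPU reservation for CVMs via the vSphere API (pyVmomi).
# Hostname/credentials are placeholders; CVMs are typically named NTNX-*.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; validate certs in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
for vm in view.view:
    if vm.name.startswith("NTNX-"):
        alloc = vm.config.cpuAllocation
        print(f"{vm.name}: {vm.config.hardware.numCPU} vCPU, "
              f"CPU reservation = {alloc.reservation} MHz")  # less than full-width
view.Destroy()
Disconnect(si)
```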
That NCC check was actually partially driven by me, and to be frank, it went out the door sooner than we wanted. A future NCC release (likely 2.2.2, coming out shortly) will disable this check to prevent confusion in the field.
That said, going forward, a future, future NCC/NOS release will do the following:
- Turn that check back on
- Give proper guidance on CVM sizing based on the hardware platform
- Make defaults for "new" imaging take this guidance into account, so it will be good from the jump on new systems; existing systems would need a bit of tweaking, of course
Bottom line: either turn off perf-tier dedupe to get rid of the first warning (or increase CVM memory to 24GB), and ignore that CVM NUMA check for now, until the new guidance comes out.
I have plenty of memory, so I will take it to 24GB.
However, before ever seeing the NUMA check, I did think it odd to have the CVM require 8 vCPU on a system where the processors are 6-core. My understanding of VMware is that in order for an instruction to pass to the CPU, all 8 cores would need to be available, and that until they are, the VM has to wait. Am I incorrect in this understanding?
You're thinking of co-scheduling and co-stop, which really isn't nearly as big of a problem in 5.5 and 6.0 as it was in the ESX 3.x days.
The issue here is "NUMA width" rather than co-stop, which is less of a CPU scheduling issue and more of a memory scheduling issue: you could, in theory, have CPU threads addressing memory across the QPI between processors, which isn't a death sentence, it's just less than ideal.
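To put some arithmetic behind "NUMA width" (the host numbers below are just an example dual six-core box, not your exact config):

```python
# Illustration of "NUMA width": does a VM fit inside one NUMA node?
# Host values are an example dual-socket, six-core machine with 64GB per node.

def fits_in_one_numa_node(vcpus: int, vm_mem_gb: int,
                          cores_per_socket: int, mem_per_node_gb: int) -> bool:
    """True if both the vCPUs and the memory fit within a single NUMA node."""
    return vcpus <= cores_per_socket and vm_mem_gb <= mem_per_node_gb

# An 8 vCPU / 24GB CVM on a dual six-core host:
print(fits_in_one_numa_node(8, 24, cores_per_socket=6, mem_per_node_gb=64))
# False -> 8 vCPUs span two 6-core sockets, so some threads end up addressing
# memory across the QPI; not fatal, just less than ideal.
```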
All of this NUMA stuff is about optimizing as much as possible. You'll be fine with your config now, and you'll likely see a change in guidance after a future release, once we finalize some engineering and QA efforts on making the footprint smaller, which should address NUMA width on 6-core systems.
Thanks, that was very useful info!
One more question regarding this:
I was about to start applying 24GB to the memory size of the CVMs; however, when I looked at Active RAM, the CVMs are all in the 4-5GB range. Is 24GB really the best?
Active memory will be dependent on how hard you are driving your workloads. Might not be very high now, but during times of higher utilization, as you add more workload and demand, it will go up.
Past that, keep in mind that we'll keep that memory as full as possible, promoting cache hits by holding as much data in there as we can.
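If you want to watch active versus configured memory over time, here's a tiny sketch using the same vSphere API fields (my own illustration; it assumes a `vm` object fetched as in the earlier pyVmomi snippet):

```python
# Sketch: compare active vs. configured memory for a CVM via vSphere quickStats.
# Assumes `vm` is a pyVmomi vim.VirtualMachine from the earlier container view.

def memory_report(vm):
    qs = vm.summary.quickStats
    configured_mb = vm.config.hardware.memoryMB
    print(f"{vm.name}: configured={configured_mb} MB, "
          f"active={qs.guestMemoryUsage} MB, "   # recently-touched guest memory
          f"consumed={qs.hostMemoryUsage} MB")   # host memory backing the VM
```

A low "active" number just means the guest hasn't touched much memory recently; it says nothing about how much of the 24GB the CVM is holding as cache.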