Do you still wonder whether to use Number of vCPUs or Number of Cores per Socket when configuring a VM? You read something about it, it makes sense, and you think you will remember it, only to stumble upon the same dilemma sometime later. Shall we clear this up once and for all?
vCPUs and cores are inherently bound to a NUMA node, which, in essence, is a processor with directly attached memory. Under some (but not all) conditions, accessing memory or devices across NUMA boundaries results in decreased performance. Hence the goal is to configure a VM with vCPU and memory values that remain within the boundaries of a single NUMA node. The exception would be a NUMA-aware application.
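The "fit inside one NUMA node" idea boils down to a simple comparison. Here is a minimal sketch; the function name and the per-node numbers are made up for illustration and are not a Nutanix API (real per-node values come from the host's hardware topology):

```python
# Illustrative sketch only: the function and numbers below are hypothetical,
# not a Nutanix API. A VM whose vCPU and memory request both fit inside one
# NUMA node avoids remote-node memory access.

def fits_single_numa_node(vm_vcpus, vm_mem_gb, node_cores, node_mem_gb):
    """Return True if the VM's CPU and memory request fit one NUMA node."""
    return vm_vcpus <= node_cores and vm_mem_gb <= node_mem_gb

# Example: a host with 16 physical cores and 192 GB per NUMA node.
print(fits_single_numa_node(8, 128, node_cores=16, node_mem_gb=192))   # True
print(fits_single_numa_node(20, 128, node_cores=16, node_mem_gb=192))  # False: spills across nodes
```

A VM that fails this check will have some of its vCPUs or memory scheduled on a remote node, which is exactly the situation the guidance above tries to avoid.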
Another thing to remember is that hot add of cores is not supported on AHV, meaning that adjusting the number of cores for a VM requires the VM to be powered off.
To ensure optimal performance of the VMs in the cluster, follow these simple rules:
- Use vCPUs instead of cores to increase the number of vCPUs available for a VM. Hot add of CPU cores is not supported.
- Use only as many vCPUs as the VM requires to limit resource waste. If the application performance is not affected, it is preferable from an AHV resource scheduling and usage perspective to have two vCPUs running at 50% utilization each, rather than four vCPUs running at 25% utilization each. Two vCPUs are easier to schedule than four.
- Use the physical core count, not the hyperthreaded count, when sizing your largest single VM. Do not configure a single VM with more vCPU cores than there are physical CPU cores on the AHV host. This configuration can cause significant performance problems for the VM.
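The third rule, physical cores versus hyperthreads, can be sketched with a quick calculation. The topology numbers below are hypothetical; on a real AHV host you would read the socket, core, and thread counts from a tool such as `lscpu`:

```python
# Hypothetical host topology for illustration; on a real AHV host, read
# these values from `lscpu` (sockets, cores per socket, threads per core).
sockets = 2
cores_per_socket = 16
threads_per_core = 2  # hyperthreading enabled

physical_cores = sockets * cores_per_socket        # 32 physical cores
logical_cpus = physical_cores * threads_per_core   # 64 logical CPUs (what the OS reports)

# Cap a single VM at the physical core count, not the hyperthreaded count.
max_single_vm_vcpus = physical_cores
print(max_single_vm_vcpus)  # 32
```

The OS-reported CPU count (64 here) is the tempting number, but sizing a single VM to it doubles up vCPUs on shared physical cores, which is the performance problem the rule warns about.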
For the same reasons, on Nutanix clusters the Controller VM (CVM) should be prevented from accessing remote NUMA memory. That is why the CVM is pinned to a given NUMA node. When installing Nutanix software, the Foundation service decides which NUMA node hosts the CVM.
To verify the size of the NUMA node on an AHV cluster, follow AHV Best Practices: Memory.
To confirm the CVM is pinned to a NUMA node, see KB-8715, "[Performance] CVM based NUMA settings might not pin CVM to correct CPU socket on VMware ESXi based systems".
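The idea behind that pinning check can be illustrated with a small sketch. The topology map and pinned CPU sets below are invented for the example; on a real host, the node-to-CPU mapping and the pin list would come from tools such as `numactl --hardware` and libvirt's `virsh vcpupin`:

```python
# Hypothetical topology for illustration: NUMA node -> set of physical CPU IDs.
numa_topology = {
    0: set(range(0, 16)),
    1: set(range(16, 32)),
}

def pinned_to_single_node(pinned_cpus, topology):
    """Return the NUMA node ID if all pinned CPUs sit on one node, else None."""
    for node, cpus in topology.items():
        if set(pinned_cpus) <= cpus:
            return node
    return None

print(pinned_to_single_node({0, 1, 2, 3}, numa_topology))  # 0: correctly pinned
print(pinned_to_single_node({2, 18}, numa_topology))       # None: spans both nodes
```

A `None` result corresponds to the misconfiguration the KB describes: the CVM's vCPUs straddle sockets, so some of its memory accesses cross the NUMA boundary.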
Piling on to this thread, I addressed this relatively recently when it came up on Reddit, here: https://www.reddit.com/r/nutanix/comments/nvviq1/ahv_vcpu_best_practice_2021_when_to_configure/h21uj35?utm_source=share&utm_medium=web2x&context=3
Copying in my list of key "generalisms" here; feel free to take these as company-level canon, I'll put this in stone right now:
1. Give your apps what they need, nothing more, nothing less.
2. If that changes over time, bump it up as needed based on observed utilization and your performance requirements.
3. If you need to make a VM big, don't be wary of making it big (see rule 1).
4. If you can fit things more sanely from a "t-shirt size" perspective that aligns with a single pNUMA node or smaller, do it. Do not just make your t-shirt sizes align to a NUMA node for the sake of alignment; see rule 1 :)
5. Do not arbitrarily pick VM sizes (like 12 vCores because my pSocket is 12) just because the hardware you have today looks a certain way. That hardware could and will be upgraded over time, and do you really want to go reverse engineer VM sizing X years down the road? (I think not.)
6. If you care about CPU sensitivity, care about oversubscription first and make sure the underlying hardware is up for the job. That's like buying the right tool for the job at a hardware store.
7. If you still care about CPU sensitivity, follow the KISS principle first. Start with sane defaults, measure, then tune from there. Micro-optimizations and premature optimizations are the devil no matter what tech you use :)
8. If you really, really care about CPU sensitivity (think HANA), start thinking about the implementation specifics of how we offer vNUMA optimizations; we've got several knobs there, so let's dive in!
See the rest of the thread in the Reddit link above.