VCPU(S) vs. Number Of Cores Per vCPU | Nutanix Community
Hello everybody,



let's assume we have one Nutanix block with 3 nodes, and each node has only 1 CPU socket with 10 cores.



A VM can only run on one node. So, in this case, wouldn't it make more sense to configure a VM that needs 4 CPUs with 1 vCPU and 4 cores instead of 4 vCPUs with 1 core each, as each node has only 1 CPU socket?



Or does Nutanix always recommend using only 1 core per vCPU, no matter how many CPU sockets are available in a node?



Best regards,

Didi7
VMware, AHV, KVM, Hyper-V, it doesn't matter: you should never assign a VM more sockets than are available on a given host. That would introduce a CPU scheduling issue, and you'll likely see CPU ready times increase as a result. If there isn't heavy load on the node you probably wouldn't notice, but you should use 1 socket and 4 cores if you have a single-socket node. If you had 2 physical sockets on the host, you could do 2 sockets with 2 cores each, but still not 4 sockets with 1 core each.
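On AHV specifically, that socket/core split maps to the num_vcpus (sockets) and num_cores_per_vcpu parameters in aCLI. A minimal sketch from a CVM shell, assuming a hypothetical VM named app01 (the VM should be powered off before changing its CPU topology):

    # Hypothetical example: present 1 socket with 4 cores to the guest
    acli vm.shutdown app01
    acli vm.update app01 num_vcpus=1 num_cores_per_vcpu=4
    acli vm.on app01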



Hope that makes sense.
Hello ddubuque,



just saw this ...



https://next.nutanix.com/server-virtualization-27/ahv-vcpu-vs-vcores-18216



... and wondered how this applies to Nutanix nodes that only have 1 CPU socket.



Before we migrated to Nutanix, we had a lot of VMware ESXi hosts with no more than 2 CPU sockets, and there it only made sense to me to use no more than 2 vCPUs per VM. That meant a 4-CPU machine was configured with either 2 vCPUs and 2 cores or 1 vCPU with 4 cores, but in no case more than 2 vCPUs, as the ESXi host only had 2 CPU sockets.



Then I saw the above-mentioned Nutanix forum thread and thought maybe Nutanix sees it differently, but the question of whether it makes sense to use more than 1 vCPU (socket) when a node only has 1 physical CPU socket wasn't answered completely.



Nevertheless, I agree with you.



Best regards,

Didi7
@Jon I saw you liked the response in the mentioned article.



https://next.nutanix.com/server-virtualization-27/ahv-vcpu-vs-vcores-18216



I've always understood that you should never allocate vCPUs in excess of your physical socket count. I'm genuinely curious how AHV handles that: is it really not a concern from a CPU scheduling perspective?
The only time you really, really need to worry about sockets and cores per socket is when you have very big VMs, like SAP HANA, Exchange, SQL Server, etc.



Otherwise, just provisioning the right number of vCPUs for your workload is almost always the right idea.



That, and never giving a single VM more vCPUs (regardless of how you split them) than there are physical cores in any given system.
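If you want to sanity-check that rule from the AHV host itself, here is a minimal shell sketch (nothing Nutanix-specific, just standard lscpu; pairing CORE with SOCKET avoids double-counting hyperthread siblings):

    # Count unique physical cores on the host
    lscpu -p=CORE,SOCKET | grep -v '^#' | sort -u | wc -l
    # A single VM's total vCPUs (sockets x cores per socket)
    # should not exceed the number printed above.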
That, and never giving a single VM more vCPUs (regardless of how you split them) than there are physical cores in any given system.



Didn't you want to say …



That, and never giving a single VM more vCPUs (regardless of how you split them) than there are physical CPU sockets in any given system.



???



Nutanix should make this clearer in their documentation, because I only just found the above ...



https://next.nutanix.com/server-virtualization-27/ahv-vcpu-vs-vcores-18216



... thread regarding the use of vCPUs and Cores in a Nutanix VM.



IMO, you should point out more clearly that it's not recommended to give a Nutanix VM more vCPUs (sockets) instead of cores when there are only one or two physical CPU sockets.



Regards,

Didi7
Does it make any difference to AHV at all whether we assign 2 vCPUs with 2 cores each or 1 vCPU with 4 cores? Maybe it doesn't, and AHV simply schedules 4 threads either way, while the VM configuration mainly defines how many sockets and cores are presented to the guest OS?
We're using some terms interchangeably, so let me clarify what I'm talking about with a practical example using a hypothetical Linux VM running on AHV.



Let's say that the AHV host has 40 total cores, which would show up as "40" in the "CPU(s)" line of lscpu on AHV (or via hostssh lscpu from within a CVM, or as total cores in the Prism UI under Hardware).
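For reference, a hypothetical lscpu from such a host (2 sockets, 20 cores per socket, hyperthreading disabled so cores and "CPU(s)" line up) might show:

    CPU(s):              40
    Thread(s) per core:  1
    Core(s) per socket:  20
    Socket(s):           2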



Within a single virtual machine, the number of "CPU(s)" that lscpu reports in Linux should never exceed the number of "CPU(s)" on a single AHV host.



Let's say you assigned 40 "CPUs" to this hypothetical single Linux VM.



If that VM drove its CPUs to 100%, you'd have no more CPU to run ... anything, as each physical core would be at 100%.



This is a universal rule of thumb for any virtualization platform. There are situations where it makes sense to oversubscribe even from a single VM, but if you're looking for a 90/10 recommendation, this is it.



To be clear, for the 90% case it makes no difference whether you do 2x20 or 1x40 in this made-up example, as we'll process it just the same.
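The only visible difference is the topology reported to the guest. Hypothetical lscpu output from inside the Linux VM under each configuration:

    # VM configured as 2 vCPUs x 20 cores each
    CPU(s):              40
    Core(s) per socket:  20
    Socket(s):           2

    # VM configured as 1 vCPU x 40 cores
    CPU(s):              40
    Core(s) per socket:  40
    Socket(s):           1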



For the 10% (micro-optimization) situations, let's use SAP HANA as a practical example.

For HANA, you would want to very purposefully match your "sockets x cores" layout to the underlying quad-socket server, as defined in our SAP HANA BPG. This comes along with additional special settings to align the HANA VM very precisely with the underlying hardware.
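As a sketch of what that alignment can look like on AHV (assuming a hypothetical VM named hana01 on a quad-socket host, and an AOS/AHV release that exposes vNUMA through aCLI; the authoritative settings live in the SAP HANA BPG):

    # Hypothetical example: present 4 vNUMA nodes to match the 4 physical sockets
    acli vm.update hana01 num_vnuma_nodes=4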



This works much the same on AHV as it does on ESXi.



I use this very narrow example (not everyone runs HANA, of course) to highlight that when you need to micro-optimize, it's to grab the last few percent of performance out of the hardware and the hypervisor.



That's almost entirely the case when consolidation isn't the top priority of your deployment, but rather things like hardware abstraction, business continuity, rapid HA, and life cycle management, and NOT how much you can pack into a single server.



TL;DR

When you want to make things easy to manage:


  1. Don't micro-optimize, as that will actually make managing your environment harder. This is universally true for any platform: on-prem, AWS, Azure, etc.
  2. Size for the requirements. If you need 4 CPUs, provision whatever math gives you 4: 1x4, 2x2, go nuts.

Hi Jon,



thank you very much for your comprehensive post!



If you need 4 CPUs, provision whatever math gives you 4: 1x4, 2x2, go nuts.




Just for the final bit of clarification, as this has come up somewhere in this or a related thread: Could we do 4x1 as well on a 2-socket host?



CU,

Peter
🙂 yes, that works too
@Jon thanks for the hypothetical example; it helped with clarification, as the current AHV Best Practices Guide still clearly states that you should start by increasing vCPUs before increasing cores per vCPU.