Solved

1GbE NIC or 10GbE NIC for Nutanix

  • 23 February 2017
  • 4 replies
  • 2417 views

Hi,

Our current environment does not have any 10GbE switches. If we use 2 x 1GbE NICs, can they support the workload for all VM client network, vMotion, VMkernel, management, and backup traffic? Will we encounter any performance or I/O bottleneck issues when running all of the network traffic over these 2 x 1GbE NICs?

What is the best practice for choosing between 1GbE and 10GbE NICs when purchasing a Nutanix appliance (3 nodes)?

Regards,
mek

Best answer by bbbburns 23 February 2017, 17:00

mek

The short answer is that it will work, but I wouldn't recommend it unless you don't have any other options. I always recommend that people use low-latency, non-blocking, enterprise-grade 10GbE top-of-rack switches for a Nutanix cluster.

The reason for this is that you eliminate the network as a potential bottleneck between servers. The performance of your network traffic between Nutanix nodes directly relates to your VM disk write performance. 10GbE networking has lower latency and higher throughput than 1GbE and will give your VM storage writes lower latency and higher throughput.

The non-blocking portion means that you're not oversubscribing bandwidth between two connected nodes. This eliminates a scenario where network congestion could add latency to your storage traffic (or worse, drop packets).
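
To put rough numbers on both points, here's a quick back-of-envelope sketch in Python. These are theoretical line rates only (real throughput will be lower once you account for protocol overhead and the replication factor), and the 48-port switch in the oversubscription example is hypothetical:

```python
# Illustrative line-rate math only; ignores protocol overhead,
# replication factor, and switch latency.

GBE_1 = 1e9    # 1GbE line rate in bits/s
GBE_10 = 1e10  # 10GbE line rate in bits/s

burst_bytes = 1 * 1024**3  # a hypothetical 1 GiB burst of VM writes

for name, rate in [("1GbE", GBE_1), ("10GbE", GBE_10)]:
    seconds = burst_bytes * 8 / rate
    print(f"{name}: ~{seconds:.1f} s to move 1 GiB")  # ~8.6 s vs ~0.9 s

# Oversubscription: a hypothetical ToR switch with 48 x 1GbE access
# ports but only 4 x 10GbE of uplink can see 48 Gb/s of ingress chasing
# 40 Gb/s of capacity, a 1.2:1 ratio, so it can block under load.
print(f"Oversubscription ratio: {(48 * 1) / (4 * 10):.1f}:1")
```

A true non-blocking switch keeps that ratio at 1:1 or better, so storage traffic never queues behind uplink congestion.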

Having said ALL of that - for a smaller Nutanix cluster like your 3-node example, the performance of 1GbE can be acceptable. If your workload is not very network heavy and not very storage-write heavy, you may not notice any difference. In that case I would recommend putting all of the User VM traffic on one 1GbE adapter and splitting your CVM traffic off to the second 1GbE adapter. This can easily be done with PortGroup preferences in vSphere (a scripted sketch follows below).
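
For what it's worth, here's a minimal sketch of how that active/standby split could be scripted with pyVmomi. Everything here is an assumption to adapt: the host address, credentials, the port group names "VM Network" and "CVM Network", and the uplink names vmnic0/vmnic1. The same preferences can also be set by hand in the vSphere client under each port group's NIC teaming policy:

```python
# Minimal pyVmomi sketch of the active/standby NIC split described above.
# Host address, credentials, port group names, and uplink names are all
# hypothetical; adjust them to your environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def set_uplink_order(host, pg_name, active, standby):
    """Pin a standard-vSwitch port group to preferred (active) and
    failover (standby) uplinks."""
    net_sys = host.configManager.networkSystem
    for pg in net_sys.networkInfo.portgroup:
        if pg.spec.name == pg_name:
            spec = pg.spec
            spec.policy.nicTeaming = vim.host.NetworkPolicy.NicTeamingPolicy(
                nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
                    activeNic=active, standbyNic=standby))
            net_sys.UpdatePortGroup(pgName=pg_name, portgrp=spec)
            return

si = SmartConnect(host="esxi01.example.com", user="root", pwd="secret",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        # User VM traffic prefers vmnic0; CVM traffic prefers vmnic1.
        set_uplink_order(host, "VM Network", ["vmnic0"], ["vmnic1"])
        set_uplink_order(host, "CVM Network", ["vmnic1"], ["vmnic0"])
finally:
    Disconnect(si)
```

With that split, each port group prefers a different 1GbE uplink during normal operation but fails over to the other if a link drops, so CVM replication traffic and User VM traffic aren't competing for the same link.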

If you plan to grow your cluster beyond a small number of nodes, if the VMs are network intensive, or if the VMs are going to be writing a lot of data to disk, I would stay away from the 1GbE network and investigate 10GbE top of rack switching.

Prices for reliable, high-performing 10GbE top-of-rack switches might surprise you in a good way.

This topic has been closed for comments

4 replies

Hi bbbburns,

Thanks for your reply and the detailed explanation. I will take your advice into consideration when doing the planning.

Regards

@bbbburns Thanks for the info.

Could you share a document on the network configuration needed to separate CVM traffic from VM traffic?

Appreciate your response.


@aAqeel can you start a new thread with your question as the main topic? In this new thread can you explain more about what sort of separation you want between CVM and VM traffic?