Our current environment does not have any 10GbE switches. If we use 2 x 1GbE NICs, can they support the workload for all VM client network, vMotion, VMkernel, management, and backup traffic? Will we encounter any performance or I/O bottleneck issues when running all of the network traffic over these 2 x 1GbE NICs?
Also, what is the best practice for choosing between 1GbE and 10GbE NICs when purchasing a Nutanix appliance (3 nodes)?
Best answer by bbbburns
The short answer is that it will work, but I wouldn't recommend it unless you don't have any other options. I always recommend that people use low latency, non-blocking, enterprise grade, 10GbE top of rack switches for a Nutanix cluster.
The reason for this is that you eliminate the network as a potential bottleneck between servers. The performance of your network traffic between Nutanix nodes directly relates to your VM disk write performance. 10GbE networking has lower latency and higher throughput than 1GbE and will give your VM storage writes lower latency and higher throughput.
The non-blocking portion means that you're not oversubscribing bandwidth between two connected nodes. This eliminates a scenario where network congestion could add latency to your storage traffic (or worse, drop packets).
Having said ALL of that - for a smaller Nutanix cluster like your 3 node example, the performance of 1GbE can be acceptable. If your workload is not very network heavy and not very storage write heavy, you may not notice any difference. In that case I would recommend putting all of the User VM traffic on one 1GbE adapter and splitting the CVM traffic onto the second 1GbE adapter. This can easily be done with PortGroup preferences in vSphere.
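As a rough sketch of what that PortGroup split can look like on a standard vSwitch, the esxcli commands below set opposite active/standby uplink orders on two portgroups. The portgroup names ("VM Network", "CVM Network") and uplink names (vmnic0, vmnic1) are assumptions - substitute your own, and note that Nutanix/vSphere setups often use a Distributed Switch instead, where this is done in the vSphere Client rather than esxcli.

```shell
# Assumed vSwitch0 has both 1GbE uplinks, vmnic0 and vmnic1.
# User VM traffic: vmnic0 active, vmnic1 standby (failover only).
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="VM Network" \
    --active-uplinks=vmnic0 \
    --standby-uplinks=vmnic1

# CVM (Nutanix storage) traffic: vmnic1 active, vmnic0 standby.
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="CVM Network" \
    --active-uplinks=vmnic1 \
    --standby-uplinks=vmnic0
```

With this layout each NIC carries one traffic class in normal operation, but either portgroup can fail over to the other uplink if a link goes down, so you keep redundancy without the two traffic types contending on the same 1GbE link.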
If you plan to grow your cluster beyond a small number of nodes, if the VMs are network intensive, or if the VMs are going to be writing a lot of data to disk, I would stay away from the 1GbE network and investigate 10GbE top of rack switching.
Prices for reliable and high performing 10GbE top of rack might surprise you in a good way.