Nutanix AOS / AHV - SFP+ 10G Direct Attach Cable for Nodes

  • 20 January 2021
  • 1 reply

Hi All,

Our current environment (~40 sites plus HO) has no 10GbE switches, and we are planning a 2-node cluster per site. If we use 2 x 1GbE NICs for management traffic such as Witness communication, can we connect the ESXi nodes with a 10Gbit direct-attach cable to carry all VM traffic, vMotion, VMkernel, Nutanix traffic and, most importantly, the two-node disk datastore? There should be no issue at the Nutanix/VMware level; the switch is L2 only.


Best answer by Alona 28 January 2021, 04:29


This topic has been closed for comments

Hi Dominik,

If I understand your question correctly, you're asking whether it is OK for the Witness VM to connect to the cluster over a 1GbE link.

I would suggest looking not at the throughput but at the latency. The link itself may be fine, yet the switch can be busy, for example.

Network latency between a two-node cluster and the Witness VM should not exceed 500 ms (RPC timeouts are triggered if the latency is higher). During a failure scenario, nodes keep trying to reach (ping) the Witness VM until successful. Nodes ping the Witness VM every 60 seconds, and each Witness request has a two-second timeout, so the link can tolerate up to about one second of one-way latency.
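To make the arithmetic concrete, here is a minimal sketch (the function name is illustrative, not a Nutanix API) of why a two-second request timeout translates into roughly a one-second one-way latency budget:

```python
# Values taken from the guide cited below; witness_reachable is a
# hypothetical helper, not part of any Nutanix product.
PING_INTERVAL_S = 60   # nodes ping the Witness VM every 60 seconds
PING_TIMEOUT_S = 2     # each Witness request times out after 2 seconds

def witness_reachable(one_way_latency_s: float) -> bool:
    """A request succeeds only if the round trip fits within the timeout."""
    return 2 * one_way_latency_s < PING_TIMEOUT_S

# The documented 500 ms limit leaves comfortable headroom:
assert witness_reachable(0.5)       # 500 ms one-way -> 1 s round trip: OK
assert not witness_reachable(1.2)   # beyond ~1 s one-way, requests time out
```

So a 1GbE management link is fine in principle; what matters is that the end-to-end path to the Witness VM stays well under the 500 ms mark even when intermediate switches are loaded.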

Source: Prism Web Console Guide: Two-Node Clusters