Solved

CVM cluster and ESXi mgmt & production network considerations

  • 22 July 2016
  • 1 reply
  • 2943 views

Hi guys,
two basic questions:
1) Can CVM cluster traffic and ESXi data traffic share the same 10G switches with no performance penalty?
(I have only one pair of 10G switches, and the NX-1065 has a single dual-port card. As we know, the CVM cluster must use the 10G network, but I'd also like to use 10G for the ESXi data network.)

2) Generally, ESXi mgmt (usually the vmkernel port for NFS access) sits on the same 10G link as the CVM cluster (and CVM Prism mgmt). If a user would like a separate uplink or subnet for ESXi mgmt, is that OK? I'm worried about performance problems.

Thanks in advance!

Best answer by Jon 22 July 2016, 18:17

Almost all of our customers (of which we have thousands) use 2x 10G ports per server, and run all traffic over them.

The only time that I know of that we have run into any sort of network performance problem was when those ports were connected to really low-end switches, like Cisco FEXes (which are NOT switches, just density-increasing line cards).


Think about it: even in a three-node cluster, you have 2 x 10 x 3 Gbps of bandwidth, so 60 Gbps in total. That's a TON, and you'd likely be limited by the uplinks FROM the 10G switches to the rest of the world (router, firewall, load balancer: all probably well under 60 Gbps).

Now, expand that example out. In a 20-node cluster, that's 2 x 10 x 20 Gbps, or 400 Gbps of bandwidth in total.
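The back-of-the-envelope math above can be sketched in a few lines (a hypothetical helper; the port count and speed are just the 2x 10G example from this thread, not a fixed platform limit):

```python
def cluster_bandwidth_gbps(nodes, ports_per_node=2, port_speed_gbps=10):
    """Aggregate access-layer bandwidth for a cluster:
    each node contributes ports_per_node x port_speed_gbps."""
    return nodes * ports_per_node * port_speed_gbps

print(cluster_bandwidth_gbps(3))   # three-node cluster: 60 Gbps
print(cluster_bandwidth_gbps(20))  # 20-node cluster: 400 Gbps
```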

Combined with data locality, which keeps almost all reads and half of the writes OFF the network, that makes us VERY efficient with the bandwidth that is available.
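To illustrate why data locality matters, here is a hedged traffic sketch, assuming replication factor 2 (one write copy stays local, one crosses the network) and fully local reads by default; the workload numbers are made up for illustration:

```python
def network_traffic_gbps(read_gbps, write_gbps,
                         local_read_fraction=1.0, remote_replicas=1):
    """Estimate how much of a workload actually crosses the network:
    only non-local reads and remote write replicas hit the wire."""
    reads_on_wire = read_gbps * (1.0 - local_read_fraction)
    writes_on_wire = write_gbps * remote_replicas
    return reads_on_wire + writes_on_wire

# Hypothetical workload: 8 Gbps of reads, 2 Gbps of writes.
# With fully local reads and RF2, only ~2 Gbps crosses the network.
print(network_traffic_gbps(8, 2))
```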

Bandwidth is rarely, if ever, a problem in our architecture.
