Is there any special guide on how to set up Cisco 10 Gigabit switches for Nutanix environments?

I'm using vSphere 5.0 U3 with Nutanix.

Should jumbo frames be enabled?

Is it possible with iperf inside the CVM to get 9 Gbit/s with one process?

I get 1.57 Gbit/s.

Good hints from Intel. The Intel NICs are onboard. Which PCI Express width are they connected at: x4, x8, or x16?

http://www.intel.com/support/network/sb/CS-025829.htm

This graph is intended to show (not guarantee) the performance benefit of using multiple TCP streams.
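As a quick way to see how much multiple streams help in this environment, iperf can be run with several parallel client threads via -P. This is only a sketch; the server side is started exactly as in the test procedure further down, and 172.16.1.10 is the example server address used there:

  # single stream, as in the original test
  nutanix@CVM-B$ iperf -c 172.16.1.10 -f MB -t 60
  # four parallel streams; the [SUM] line at the end shows the combined throughput
  nutanix@CVM-B$ iperf -c 172.16.1.10 -f MB -t 60 -P 4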

PCI Express Implementation | Encoded Data Rate | Unencoded Data Rate
x1   |  5 Gb/sec | 4 Gb/sec (0.5 GB/sec)
x4   | 20 Gb/sec | 16 Gb/sec (2 GB/sec)
x8   | 40 Gb/sec | 32 Gb/sec (4 GB/sec)
x16  | 80 Gb/sec | 64 Gb/sec (8 GB/sec)
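To answer the PCI Express width question, the negotiated link of the 82599EB can be read with lspci on a box where the physical NIC is visible (the CVM only sees virtual NICs). This is a sketch using the standard Linux pciutils syntax; the PCI address 0000:04:00.0 is just a placeholder, and the ESXi shell's own lspci accepts different options:

  # find the 10GbE controller
  lspci | grep -i 82599
  # LnkCap = maximum the card supports, LnkSta = currently negotiated width and speed
  lspci -vvv -s 0000:04:00.0 | grep -iE 'LnkCap|LnkSta'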

http://dak1n1.com/blog/7-performance-tuning-intel-10gbe

Maybe a useful script:

For me, I had 10 machines to test, so I scripted it instead of running any commands by hand. This is the script I used: https://github.com/dak1n1/cluster-netbench/blob/m...

http://www.vmware.com/pdf/10GigE_performance.pdf

A single one‐vCPU virtual machine can drive 8Gbps of traffic on the send path and 4Gbps traffic on the receive path when using standard MTU (1500 byte) frames.

Using jumbo frames, a single virtual machine can saturate a 10Gbps link on the transmit path and can receive network traffic at rates up to 5.7Gbps.
What about MTU?
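If jumbo frames are used, the MTU has to match end to end: the Cisco switch ports (9216 is mentioned further down), the vSwitch, and every vmkernel interface or VM that carries the traffic. A minimal sketch for a standard vSwitch on ESXi 5.0, assuming the uplink hangs off vSwitch0 and vmk1 is the vmkernel interface in question (both names are examples):

  # raise the standard vSwitch MTU to 9000 (must stay below the switch-side 9216)
  esxcfg-vswitch -m 9000 vSwitch0
  # raise the vmkernel interface MTU to 9000
  esxcli network ip interface set --interface-name=vmk1 --mtu=9000
  # verify end to end: 8972 bytes payload + 28 bytes IP/ICMP header = 9000, -d forbids fragmentation
  vmkping -d -s 8972 172.16.1.10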

Good Infos :

http://kendrickcoleman.com/index.php/Tech-Blog/vsphere-and-vcloud-host-10gb-nic-design-with-ucs-a-more.html



2 x 10 Gigabit Configuration

http://www.kendrickcoleman.com/images/stories/onetime/vcd-hostnic/vCD-Host-2x10GbE.png
Flow control on? 10000 full duplex or auto/auto on the Cisco switch? MTU = 9216?
If you do iperf tests, open two PuTTY sessions on the ESXi host and run ethtool -S vmnic0 in one of them.

I see that the rx_missed counters go up. Flow control is not enabled. I used the standard vSwitch with ESXi 5.0 U3.

After I enabled flow control on the Cisco switch, the rx_missed counter stays at 0.
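For anyone who wants to reproduce this, a sketch of the commands in the ESXi shell (vmnic0 is the example 10GbE uplink; this assumes the ethtool build on ESXi 5.x supports the usual options):

  # show the current pause/flow-control settings of the uplink
  ethtool -a vmnic0
  # run this repeatedly while the iperf test runs in the other PuTTY session;
  # a rising rx_missed counter means the NIC is dropping frames
  ethtool -S vmnic0 | grep -i missed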
You most likely will need to use supported Cisco transceivers.



Are you going to run a vSwitch or use a vDS? If you're concerned about flooding the port, I would recommend using a vDS.



Iperf is on the CVMs for troubleshooting. Usually it's run when the performance numbers after setup are off, to confirm the network settings.



Procedure for a 60 second iperf test:


  1. Log into the first CVM in the cluster as the nutanix user.
  2. Start the iperf server so it will listen for transfer requests from clients.
     nutanix@CVM-A$ iperf -s -f MB
     ------------------------------------------------------------
     Server listening on TCP port 5001
     TCP window size: 0.08 MByte (default)
     ------------------------------------------------------------
  3. Log into the next CVM in the cluster as the nutanix user.
  4. Start the iperf client and run the first test. The following example connects to the first CVM at IP address 172.16.1.10 and runs the test for 60 seconds.
     nutanix@CVM-B$ iperf -c 172.16.1.10 -f MB -t 60
     ------------------------------------------------------------
     Client connecting to 172.16.1.10, TCP port 5001
     TCP window size: 0.02 MByte (default)
     ------------------------------------------------------------
     [  3] local 172.16.1.11 port 41650 connected with 172.16.1.10 port 5001
     [ ID] Interval       Transfer      Bandwidth
     [  3]  0.0-60.0 sec  17139 MBytes  286 MBytes/sec
  5. Repeat steps 3 and 4 for each CVM in the cluster. Please ensure that you do not run more than one iperf test at the same time.
  6. Once all CVMs have been tested with the first CVM running as the server, repeat steps 1-5 with each of the other CVMs acting as the server. For example, a 4 node cluster would have the "iperf -c" test run 12 times to validate all possible network paths (a scripted sketch of this loop follows below).
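A minimal sketch of how the whole matrix could be scripted instead of run by hand. It assumes passwordless SSH between the CVMs and a hand-maintained list of CVM IPs (the addresses are just the examples from above); it runs one 60 second test at a time, never two in parallel:

  #!/bin/bash
  # example CVM IPs - replace with the real cluster addresses
  CVMS="172.16.1.10 172.16.1.11 172.16.1.12 172.16.1.13"

  for server in $CVMS; do
    # start an iperf server on the target CVM in the background
    ssh nutanix@$server "nohup iperf -s -f MB > /tmp/iperf_server.log 2>&1 < /dev/null &"
    for client in $CVMS; do
      [ "$client" = "$server" ] && continue
      echo "### $client -> $server"
      # one 60 second test at a time, exactly as in step 4 above
      ssh nutanix@$client "iperf -c $server -f MB -t 60"
    done
    # stop the server before moving on to the next target
    ssh nutanix@$server "pkill iperf"
  done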
Procedure for a 10 GB data transfer iperf test:


  1. Log into the first CVM in the cluster as the nutanix user.
  2. Start the iperf server so it will listen for transfer requests from clients.
     nutanix@CVM-A$ iperf -s -f MB
     ------------------------------------------------------------
     Server listening on TCP port 5001
     TCP window size: 0.08 MByte (default)
     ------------------------------------------------------------
  3. Log into the next CVM in the cluster as the nutanix user.
  4. Start the iperf client and run the first test. The following example connects to the first CVM at IP address 172.16.1.10 and transfers 10 GB of data.
     nutanix@CVM-B$ iperf -c 172.16.1.10 -f MB -n 10GB
     ------------------------------------------------------------
     Client connecting to 172.16.1.10, TCP port 5001
     TCP window size: 0.02 MByte (default)
     ------------------------------------------------------------
     [  3] local 172.16.1.11 port 41650 connected with 172.16.1.10 port 5001
     [ ID] Interval       Transfer      Bandwidth
     [  3]  0.0-32.8 sec  10240 MBytes  312 MBytes/sec
  5. Repeat steps 3 and 4 for each CVM in the cluster. Please ensure that you do not run more than one iperf test at the same time.
  6. Once all CVMs have been tested with the first CVM running as the server, repeat steps 1-5 with each of the other CVMs acting as the server.

For the Cisco Nexus 5010 there is no specific configuration required.

Here is a sample config from a customer:

interface Ethernet1/3
  description nutanix
  switchport mode trunk
  switchport trunk allowed vlan 515,600,602,1666,1755,1794,1837,1841
  spanning-tree port type edge



Depending on the transceiver, you may have to add this:



service unsupported-transceiver
On this blog someone shows 18 Gigabits transferred in 10 seconds, so only 1.8 Gbit/s?



http://blog.cyberexplorer.me/2013/03/improving-vm-to-vm-network-throughput.html
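For comparison, the iperf results from the procedure above convert the same way (1 MByte/sec = 8 Mbit/sec, slightly more if iperf's MBytes are 2^20 bytes):

  286 MBytes/sec x 8 ≈ 2.3 Gbit/s
  312 MBytes/sec x 8 ≈ 2.5 Gbit/s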
The Nutanix 3451 server uses two Intel 82599EB network cards.

This card only offers the 10000 Mb/s full duplex setting in the GUI and on the command line.



Does this mean the Cisco switch also has to be configured for 10000 Mb/s full duplex instead of autonegotiate?

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1004089



On the other hand, when autonegotiation is disabled, options are lost.



http://etherealmind.com/ethernet-autonegotiation-works-why-how-standard-should-be-set/



Disabling autonegotiation can result in physical link issues going undetected. The Fast Link Pulse process does some testing of the physical link properties as well as negotiation of several Ethernet properties.


  • Unable to detect bad cables
  • Unable to detect link failures
  • Unable to check the link partner's capabilities
  • Unable to move systems from one port to another or to another switch or router
  • Unable to determine performance issues on higher layer applications
  • Unable to implement Pause Frames (Flow Control)(4)
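Before changing anything on either side, it is worth checking what the uplink has actually negotiated. A sketch for the ESXi shell (vmnic0 is the example uplink):

  # overview of all uplinks with link state, speed and duplex
  esxcli network nic list
  # detailed view of one uplink, including advertised modes and the autonegotiation state
  ethtool vmnic0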