Network Considerations

  • 2 March 2016
  • 3 replies
  • 1942 views

Badge +2
Hi Folks,

I'm in the middle of a deployment and have a question.

We have a pair of Nexus 3548s acting as 10 GbE edge switches for our Nutanix clusters; the 3548s are trunked up to a pair of Cisco 4500s.

We have 4 nodes in the cluster with a total of 10 NICs, all 10 GbE, assigned to the VDS in vSphere. On the VDS we set the MTU to 9000.
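
For reference, this is roughly how we pushed the MTU change to the VDS; a minimal pyvmomi sketch, where the vCenter hostname, credentials, and switch name are all placeholders:

```python
# Rough pyvmomi sketch of the VDS MTU change (placeholders throughout).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip cert validation
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

# Locate the VDS by name (placeholder name).
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "dvSwitch0")
view.DestroyView()

# Reconfigure the switch-wide MTU; configVersion is required as an
# optimistic lock against concurrent changes.
spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
spec.configVersion = dvs.config.configVersion
spec.maxMtu = 9000
dvs.ReconfigureDvs_Task(spec)

Disconnect(si)
```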

Is there a way to separate the CVM traffic from the management traffic?

I would like to keep the CVM traffic from going up to the 4500s; however, we needed to create an SVI on the 4500s so we could manage them.

I'm getting jumbo frame errors on both the Nexus switches and the 4500s, and when I set the MTU on the VDS back to 1500 the errors went away. My guess is that the CVM traffic is what's causing the issues, and I would prefer not to mess with the MTU settings on the 4500s.
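
While troubleshooting, I probed the path with don't-fragment pings to find the largest frame that gets through; a rough sketch of what I ran, assuming Linux ping's -M do flag, with a placeholder target IP:

```python
# Binary-search the largest ICMP payload that passes with DF set.
# Payload + 28 bytes of IP/ICMP headers = effective path MTU.
import subprocess

TARGET = "10.0.0.10"  # placeholder: e.g. the SVI on the 4500s

def ping_ok(payload: int) -> bool:
    """One ping with fragmentation prohibited; True if it got through."""
    result = subprocess.run(
        ["ping", "-M", "do", "-c", "1", "-W", "1",
         "-s", str(payload), TARGET],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

lo, hi = 1472, 8972  # payloads for 1500- and 9000-byte MTUs
best = 0
while lo <= hi:
    mid = (lo + hi) // 2
    if ping_ok(mid):
        best, lo = mid, mid + 1
    else:
        hi = mid - 1
print(f"Largest passing payload: {best} bytes (path MTU ~{best + 28})")
```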

What is best practice here? What are others doing?

Thanks


3 replies

Userlevel 3
Badge +14
Mike,

Yeah, the default setting of 1500 is what I use in my own lab and test environments for vSwitch0 MTU.

I know a few of the other engineers at Nutanix (and some customers) use and test jumbo frames, but they have their own lab networks set up for end-to-end jumbo. I don't control all of my own lab network, so I stick with 1500. All of the performance testing I do (with the exception of NSX testing) also happens with the default MTU.

You can get performance gains with end-to-end jumbo frames, but for me the extra complexity isn't worth the percentage gains. Check out Michael Webster's post here for an alternative to my opinion 😉 You can decide for yourself, based on the performance charts and config complexity, which you'd like to use.

http://longwhiteclouds.com/2013/09/10/the-great-jumbo-frames-debate/

This pull quote is something I strongly agree with:
"Why 10G plus equipment doesn’t come out of the factory configured to accept Jumbo Frames I don’t know. In any case the necessary configuration is a trivial exercise when setting up new infrastructure, but to retrofit existing network switches, routers and the virtual environment on a large scale if it wasn’t done originally can be a little harder and more complex."
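
To give a sense of what that retrofit looks like when scripted, here's a rough sketch using the netmiko library (my choice of tool, nothing official); the device details are placeholders, and the exact commands vary by platform and software version. Many Nexus 3000/3500 models, for example, take jumbo MTU through a network-qos policy rather than per interface:

```python
# Rough netmiko sketch: push a jumbo-frame network-qos policy to a
# Nexus switch. Host, credentials, and config lines are placeholders
# and platform-dependent; verify against your own platform docs first.
from netmiko import ConnectHandler

nexus = {
    "device_type": "cisco_nxos",
    "host": "nexus-3548-a.example.com",
    "username": "admin",
    "password": "password",
}

jumbo_config = [
    "policy-map type network-qos jumbo",
    "class type network-qos class-default",
    "mtu 9216",
    "system qos",
    "service-policy type network-qos jumbo",
]

with ConnectHandler(**nexus) as conn:
    print(conn.send_config_set(jumbo_config))
```
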
Badge +2
The VMware doc for Nutanix recommends using jumbo frames, which is why we had it set.

So on the VDS you just have the MTU set to 1500?
Userlevel 3
Badge +14
Hi,

Is there a specific need for jumbo frames? We find that performance is still excellent with a standard 1500-byte MTU. You may find this is the simplest answer to your problem, and it's what I would recommend unless there is a hard customer requirement you need to satisfy.


If you do need jumbo frames for performance or feature (VXLAN / NSX) requirements, we recommend taking a look at the following guides:

VMware vSphere Networking on Nutanix
http://go.nutanix.com/rs/nutanix/images/Nutanix_TechNote-VMware_vSphere_Networking_with_Nutanix.pdf

VMware NSX for vSphere
http://www.nutanix.com/go/vmware-nsx-for-vsphere.html

If you want to separate the management traffic of the CVM from the storage traffic of the CVM, we call this a "multi-homed" or "three-legged" CVM. I wouldn't recommend it unless separation is required for security reasons, because you'll have to go in and change some things in the CVM. We have the procedure documented here:

Connecting CVMs to Multiple Networks
https://portal.nutanix.com/#/page/kbs/details?targetId=kA0600000008i7qCAA

Multiple networks for the CVM would allow you to keep the jumbo-frame-enabled storage traffic down on the leaf switches (if they're interconnected), while putting the 1500-byte MTU management interface on a routed network that can reach up to the 4500s.

Ultimately, I think that configuration is too complex to recommend.

If you do go down the path of enabling jumbo frames, I'd recommend making sure the end-to-end network path for any possible traffic is jumbo-frame capable and configured as such.
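
For example, one quick sanity check is a max-size don't-fragment ping against every endpoint that might carry jumbo traffic; a rough sketch, assuming Linux ping and placeholder addresses:

```python
# Fire a max-size, don't-fragment ping at each endpoint and report
# which ones are jumbo-clean. Addresses are placeholders.
import subprocess

endpoints = ["10.10.10.11", "10.10.10.12", "10.0.0.1"]  # CVMs, SVI, etc.
PAYLOAD = 8972  # 9000-byte MTU minus 28 bytes of IP/ICMP headers

for ip in endpoints:
    result = subprocess.run(
        ["ping", "-M", "do", "-c", "3", "-s", str(PAYLOAD), ip],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    status = "OK" if result.returncode == 0 else "FAILED"
    print(f"{ip}: jumbo path {status}")
```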