The Nutanix CVMs must be configured for Jumbo Frames on both the internal and external interfaces. The converged network also needs to be configured for Jumbo Frames. Most importantly, the configuration needs to be validated to ensure Jumbo Frames are properly implemented end-to-end. (page 17) http://go.nutanix.com/rs/nutanix/images/Nutanix_TechNote-VMware_vSphere_Networking_with_Nutanix.pdf
Does this mean we need to log on to the CVM and change the network configuration, i.e. /etc/sysconfig/network-scripts/ifcfg-eth0? We are pre-deployment; I've set the MTU on these interfaces and tested with ping, and it works properly, however the internal interface seems to reset to 1500 each time the CVM is rebooted. I understand Jumbo Frames are recommended as a best practice, but it's not fully explained which interfaces should be set. I have the same question regarding vSwitchNutanix: it appears it's best to set it to 9000, but since there is no physical layer, does it make a difference?
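For reference, this is roughly what I've been setting and how I've been testing it in the lab; the interface name and IP below are placeholders from my environment, not anything Nutanix-specific:

    # On the CVM: added an MTU line to the external interface config
    # /etc/sysconfig/network-scripts/ifcfg-eth0
    MTU=9000

    # Confirm the running MTU after bringing the interface back up
    ip link show eth0 | grep mtu

    # End-to-end jumbo test to another CVM's external IP (placeholder address):
    # 8972 bytes of payload + 28 bytes of IP/ICMP headers = 9000, and -M do forbids fragmentation
    ping -M do -s 8972 10.0.0.12

    # Same idea from an ESXi host (-d = don't fragment, -s = payload size)
    vmkping -d -s 8972 10.0.0.12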
Thanks!
Best answer by Jon
Bladerunner - No doubt that Jumbo frames have a place, especially in very high-throughput situations, like high-end databases. I don't disagree with you there.
At our current code levels, it really only makes a difference in corner and edge cases. For most customers, the default configuration will have more than enough performance. We've even got high-end DB customers using the defaults with great success. Generally we only go down the jumbo route when we're tuning for those last few percentage points of performance on ultra-high-end workloads.
Keep in mind, the days when storage traffic == jumbo traffic are gone with Nutanix, as Nutanix's data locality greatly reduces the amount of chatty traffic chewing up bandwidth on the wire.
My best analogy here is to ask what the read/write ratio is on your current storage system. Many people will say something like 60% reads or 70% reads, or something along those lines.
Shift that workload over to Nutanix and, given Nutanix's extremely high focus on data locality, those 60% or 70% (or whatever the percentage) of reads *don't even hit the network*. Therefore there are no switch-side CPU cycles to optimize and no switch ASIC buffers to manage; the data is read over the local PCI bus, sometimes right from DRAM. It doesn't even leave the server.
Obviously writes are always going to hit the network, although one copy is always going to be local; the replication traffic with RF2 == one copy remote and with RF3 == two copies remote. In this example, that means only 30-40% of your traffic is ever going to hit the network, and it's basically got a clear highway to do so, since the read traffic is all local and not on the wire.
That means those switch-side CPUs have a heck of a lot of free time and the switch-side buffers are relatively empty, so Jumbo frames just become "something extra" that you need to configure and manage, and they don't add a whole lot of business value IMHO.
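To put rough, hypothetical numbers on the example above (purely a back-of-envelope sketch, not a sizing exercise): assume 1 GB/s of front-end I/O at 70% reads, with the reads served locally.

    # hypothetical mix: 1 GB/s front-end I/O, 70% reads (served locally), 30% writes
    # RF2 sends one remote copy per write, so roughly:
    echo "1.0 * 0.30 * 1" | bc -l   # ~0.3 GB/s of replication traffic on the wire
    # RF3 sends two remote copies per write:
    echo "1.0 * 0.30 * 2" | bc -l   # ~0.6 GB/s on the wire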
RE Best Practices
There's a reason I personally call them recommended practices instead, because "best practices" implies an immutable edict for all time. We wrote the networking guide for vSphere back in 2014, when Jumbo frames made a bit more sense for our product. We've made more enhancements than I can count since then, and default non-jumbo performance is pretty darn good now.
We've actually updated our vSphere networking guide (it's waiting to go through publishing right now), and you'll see the verbiage around jumbo frames change.