We are in the process of evaluating HCI platforms. We plan to run VMware ESXi on Nutanix HCI for a VDI solution; we are currently running our VDI solution on Citrix with Cisco UCS. The end solution will support roughly 3,000-5,000 users.
I would like to get your thoughts and recommendations on the number of 10 GbE NICs to have on the Nutanix HCI platform, per node. Most HCI platforms are sold with two physical NICs, which makes it hard to use multiple NICs in VMware to protect important traffic - specifically, the management and IP storage traffic. Where a UCS ESXi host might have 6-8 NICs - 2 x for Mgmt, 2 x for vMotion, 2 x for IP Storage, and 2 x for Guest VMs - all of that gets consolidated onto two NICs in a Nutanix node. I understand this will work, but is it better practice to use multiple, separate physical NICs in Nutanix to isolate the VMware traffic types and ensure each function has bandwidth? In short, for VMware on Nutanix, is it a good idea or recommended to have a minimum of 4 x NICs per host...if not more?
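Just to make the consolidation concrete, here is a rough sketch of how I picture the UCS-style separation collapsing onto two 10 GbE uplinks, with each traffic type pinned active on one uplink and standby on the other. The port group names and the active/standby split are my own assumptions for discussion, not a vendor reference design:

```python
# Illustrative only: one common way to collapse UCS-style NIC separation onto
# two 10 GbE uplinks using per-port-group active/standby uplink ordering.
# Port group names and the split below are assumptions, not a reference design.

port_groups = {
    # traffic type        (active uplink, standby uplink)
    "Management":         ("uplink1", "uplink2"),
    "IP-Storage":         ("uplink1", "uplink2"),
    "vMotion":            ("uplink2", "uplink1"),
    "VDI-Guest-VMs":      ("uplink2", "uplink1"),
}

# Show which traffic types land on each uplink in steady state (before any failover).
for uplink in ("uplink1", "uplink2"):
    active = [pg for pg, (act, _) in port_groups.items() if act == uplink]
    print(f"{uplink}: {', '.join(active)}")
```

Even with that split, the moment one uplink fails everything lands on a single NIC, which is the scenario I keep coming back to.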
Are there any Nutanix users who ran into big issues after implementing VMware ESXi for VDI and ended up having to add a bunch more nodes to distribute load, and/or add NICs to nodes to separate traffic such as IP storage? I know it's more cabling, but it is a lot easier to add NICs than to set up granular QoS to make sure something like a vMotion event doesn't step on the HCI IP storage traffic or management traffic - precisely because, with more NICs, management and IP storage each get their own physical NICs.
For 98% of the time, QoS is probably not necessary for HCI NICs. However, if those NICs are shared for Management, IP Storage, vMotion, and Guest VMs, there is a lot of contention for bandwidth and a lot more potential for important traffic to get stepped on.
I understand there are ways to handle this besides just using different NICs, such as NIOC (Network I/O Control) on the DVS, traffic shaping on the VMware DVS, and QoS on the network. Both of those approaches are significantly harder to manage when compared to just using separate NICs.
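For what it's worth, here is my back-of-the-envelope understanding of how share-based allocation (the NIOC approach) plays out on a single 10 GbE uplink during contention. The share values are made-up numbers, just to illustrate the math:

```python
# Rough arithmetic sketch of share-based allocation (NIOC-style) on a single
# 10 GbE uplink. The share values below are assumptions for illustration, not
# recommended settings. Shares only matter when the link is saturated; an idle
# traffic class's share is effectively redistributed to the classes that are busy.

LINK_GBPS = 10.0

shares = {
    "management": 20,
    "ip_storage": 100,
    "vmotion": 50,
    "guest_vms": 100,
}

def allocation(active_classes):
    """Bandwidth each *active* class is entitled to during contention, in Gbps."""
    total = sum(shares[c] for c in active_classes)
    return {c: round(LINK_GBPS * shares[c] / total, 2) for c in active_classes}

# Worst case: everything is busy at once on the same uplink.
print(allocation(["management", "ip_storage", "vmotion", "guest_vms"]))
# Typical case: only storage and guest traffic are active, so they split the link.
print(allocation(["ip_storage", "guest_vms"]))
```

The point being, shares only bite when the link is actually saturated - which is exactly the scenario (vMotion storms, boot storms) I'm worried about.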
What are your thoughts and your implementations?
Actually, I think you answered your own question. NIOC v3 does what you're looking for (broad traffic control), and it's much simpler to configure and deploy than it appears at first. I use NIOC in my environment for general control, along with NSX for micro-segmentation, and it works great - but to be fair, I'm not running a VDI workload, just a bunch of standard 3-tier apps.
NIOC is separate from Traffic Shaping, and while that makes it flexible and easy to deploy, it also brings up a potential concern with NIOC - it only affects egress traffic. In my experience this hasn't been an issue, but I can see some obvious corner cases where this could be a problem, especially with Nutanix. Some additional details: http://blog.jgriffiths.org/deep-dive-vsphere-traffic-shaping/
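If it helps, the configuration side really is small. Here is a rough pyVmomi sketch of what enabling NIOC and adjusting shares on an existing DVS looks like - I'm writing this from memory, so verify the exact property and class names against your pyVmomi/vSphere version, and treat the traffic-class keys and share values as examples only:

```python
# Rough pyVmomi sketch (not tested against any particular vCenter build) of
# enabling NIOC and bumping per-traffic-class shares on an existing DVS.
# Assumes "dvs" is the vim.dvs.VmwareDistributedVirtualSwitch object for the
# switch, obtained from an already-authenticated ServiceInstance.
from pyVmomi import vim

def set_nioc_shares(dvs, share_map):
    # Make sure Network I/O Control is enabled on the switch.
    dvs.EnableNetworkResourceManagement(enable=True)

    spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
    spec.configVersion = dvs.config.configVersion
    # Only needed if the switch is still on NIOC v2; check the property name
    # for your API version.
    spec.networkResourceControlVersion = "version3"

    # Reuse the existing per-traffic-class resource objects and just adjust the
    # share values for the classes we care about (keys such as "management",
    # "vmotion", "nfs", "virtualMachine").
    resources = dvs.config.infrastructureTrafficResourceConfig
    for res in resources:
        if res.key in share_map:
            res.allocationInfo.shares.level = "custom"
            res.allocationInfo.shares.shares = share_map[res.key]
    spec.infrastructureTrafficResourceConfig = resources

    return dvs.ReconfigureDvs_Task(spec)

# Example: weight storage and guest traffic above vMotion and management.
# task = set_nioc_shares(dvs, {"management": 20, "vmotion": 50,
#                              "nfs": 100, "virtualMachine": 100})
```

That's essentially all of it - per-class shares set once on the switch, which then apply to the egress of each uplink on every host.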
I manage over 100 hosts that each have only 2 x 10Gb NICs. I have had ZERO issues with this architecture.