Good Day Folks:
I am in the process of turning up some Windows Server/SQL failover clusters using the in-guest iSCSI method. It works as intended with one default NIC on each Windows Failover Cluster VM (when they are in the same subnet as the external data services IP/segment). However, I started looking at it more and wanted to tune this a bit, e.g., add a secondary NIC on these VMs on, say, a /24 that is not routed, and use it for the iSCSI traffic, which this KB recommends on page 23: "Use a single subnet (broadcast domain) for iSCSI traffic. Avoid routing between the client initiator(s) and CVM target(s)."
That does not quite read correctly to me; in that same PDF it states, "With the 4.7 AOS release, the data services IP address must be in the same subnet as the eth0 (management) network of the CVMs."
So on the guest OS I use the external data services IP for discovery, which is in the same segment as Prism, AHV hosts, etc. How would I create a non-routable segment for iSCSI traffic if the data services IP must be in the same management network? Are we not just targeting the external data services IP for target discovery? Any help would be appreciated; I am just trying to understand this better.
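To illustrate the mismatch I am seeing, here is a minimal sketch using Python's `ipaddress` module (all addresses are made up for the example) of the "single subnet" check the KB describes. Discovery only stays unrouted if the initiator's NIC and the DSIP share a subnet, which seems impossible if the DSIP must live in the eth0/management subnet:

```python
import ipaddress

# Hypothetical addresses for illustration only -- substitute your own.
dsip = ipaddress.ip_address("10.10.10.50")           # data services IP (same subnet as CVM eth0)
mgmt_net = ipaddress.ip_network("10.10.10.0/24")     # CVM eth0 / management subnet
iscsi_net = ipaddress.ip_network("192.168.50.0/24")  # proposed non-routed iSCSI /24

# The KB's "single broadcast domain" guidance amounts to this check:
# the initiator's NIC and the discovery target must sit in the same subnet.
print(dsip in mgmt_net)   # True: discovery against the DSIP works from the mgmt subnet
print(dsip in iscsi_net)  # False: a NIC on the non-routed /24 has no path to the DSIP
```

A NIC on the non-routed /24 could never reach a target portal in the management subnet without routing, which is exactly what the KB says to avoid.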
The intent of not routing the client traffic is for performance reasons. While the DSIP network may be routable, the client should not be in a subnet which requires routing to the DSIP and CVMs. Let me know if you need more details.
Thanks as always, Mike
The intent of not routing the client traffic is for performance reasons. (Understood and agreed, assuming we mean iSCSI traffic.) While the DSIP network may be routable (it is in my case), the client should not be in a subnet which requires routing to the DSIP and CVMs. (How would the client/client segment communicate with the DSIP if we had a segment that was non-routable? Selecting a target portal IP of the DSIP in another segment would go nowhere.) Let me know if you need more details.
Thanks for chiming in; I must be missing something here.
Pretty sure what you're asking for is basically a "backend" network: having the DS VIP and associated interfaces on, say, "vlan 200" and the regular CVM traffic/management on "vlan 100" (obviously making up numbers for this crude example). This would model a typical "block" network for things like Fibre Channel and some iSCSI deployments.
Is that correct?
If so, we're working on a network segmentation feature that would do exactly this, but in the meantime it does not work this way out of the box.
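To make the crude example above concrete, here is a quick sketch (Python `ipaddress`, with made-up subnets) of what that backend split would look like: the DS VIP would sit only in the storage VLAN's subnet, isolated from the management broadcast domain.

```python
import ipaddress

# Crude model of the proposed "backend" split; all numbers are made up.
vlan100_mgmt = ipaddress.ip_network("10.100.0.0/24")     # regular CVM/management traffic
vlan200_storage = ipaddress.ip_network("10.200.0.0/24")  # DS VIP + iSCSI interfaces

ds_vip = ipaddress.ip_address("10.200.0.50")

# The two VLANs are distinct broadcast domains...
print(vlan100_mgmt.overlaps(vlan200_storage))  # False: fully isolated subnets

# ...and the DS VIP lives only on the backend VLAN, keeping
# block traffic off the management network.
print(ds_vip in vlan200_storage)  # True
print(ds_vip in vlan100_mgmt)     # False
```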
Thanks for your response, Jon.
That is exactly what I am referring to; as it stands, all that traffic has to ride on the same segment. Not a big deal; I was just trying to explore my options for how to build it out.
Thank you both for your input.
As Jon mentioned, we're working on formal network segmentation to make this easier. But one thing you could do, if isolation is a requirement, is to isolate the network being used by eth0. This would isolate both the iSCSI and the inter-CVM traffic. You would then have to add another interface/network for "management" if you wanted that to be on a routed network. Example for ESXi: https://portal.nutanix.com/#/page/kbs/details?targetId=kA0600000008i7qCAA
Heyo - quick update on this. The changes for this are still in progress but have not been committed to either 5.5 or 5.5.1 (the impending release).
For those who have access to our support portal, portal.nutanix.com, it would help if you submit a support ticket with priority “RFE” and reference FEAT-3906. This is the internal ticket number that tracks this work.
Submitting a request this way helps us track volume in engineering.