Solved

MS SQL Clusters - Multiple SCSI Controllers

  • 12 October 2016
  • 7 replies
  • 1796 views

Badge +4
Hi All - I am in the process of creating some MSCS SQL clusters on Server 2012 R2. I have read the applicable documentation on using iSCSI-based VGs with Nutanix, and also Mike Webster's information about hot expanding the disks associated with those VGs in an upcoming software release, really nice! My question is about how I split the I/O across multiple VMware P/V SCSI controllers. If I followed the instructions in

https://portal.nutanix.com/#/page/solutions/details?targetId=BP-2049_Acropolis_Block_Services:BP-2049_Acropolis_Block_Services

Wouldn't all the I/O go over the default (and only) SCSI controller for that VM? I am trying to distribute the I/O for logs, tempDB, etc. across multiple SCSI controllers. That approach has worked well in our environment, where we leverage RDMs, and I would like to repeat it on the Nutanix side. Can you let me know if this is feasible and what the best way to achieve it would be?

Best answer by mmcghee 12 October 2016, 21:32


This topic has been closed for comments

7 replies

Userlevel 3
Badge +17
Hi Adam,

When using iSCSI, storage traffic will go through the vNICs you have configured for the VM. From a performance perspective there are some cases where having multiple vNICs can help, for example when CPU utilization is high due to a large number of IOPS. Multiple vNICs can help balance the network operations across multiple cores. Enabling RSS helps with this, but multiple vNICs can provide an additional benefit. If you have concerns, you could add another vNIC and then balance the iSCSI sessions across the two.
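Something like the following in-guest PowerShell sketch, just to illustrate the idea; the adapter names, IP addresses, and IQN are placeholders, not values from your environment:

```powershell
# Enable Receive Side Scaling on both guest vNICs (placeholder adapter names)
Enable-NetAdapterRss -Name "Ethernet0"
Enable-NetAdapterRss -Name "Ethernet1"

# Balance the iSCSI sessions by sourcing one from each vNIC's IP address.
# Assumes the target portal has already been registered with New-IscsiTargetPortal.
$iqn = "iqn.2010-06.com.nutanix:example-vg"   # placeholder target IQN
Connect-IscsiTarget -NodeAddress $iqn -InitiatorPortalAddress "10.10.10.11" `
    -IsMultipathEnabled $true -IsPersistent $true
Connect-IscsiTarget -NodeAddress $iqn -InitiatorPortalAddress "10.10.10.12" `
    -IsMultipathEnabled $true -IsPersistent $true
```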

HTH,
Mike
Userlevel 3
Badge +17
I may be misunderstanding, but I thought the goal was to provide storage for a shared-disk SQL/WSFC cluster running within the VMs. Our supported method for this is using iSCSI directly to the VMs with ESXi. You wouldn't be able to create a shared-disk cluster using a datastore (where you'd be better off using our standard NFS method), and we do not support RDMs today.
Userlevel 1
Badge +9
Thanks for reeling us back in... working in a supported configuration is also a key consideration 🙂
Badge +4
Yup, you are right, Mike - sorry, I caught a little drift there. We actually do both WSFC and AGs, so I will be exploring both options where applicable. Thank you both for your input - much appreciated.
Userlevel 1
Badge +9
Are you presenting the Volume Group to the whole VMware cluster and then creating a datastore out of it, or are you presenting the ABS Volume Group directly to the virtual SQL nodes? To replicate the I/O separation you used with RDMs, you'll want to go with the option of creating a datastore on the VMware cluster. This will let you allocate multiple PVSCSI controllers and assign virtual disks to the VMs the way you're used to for DBs, logs, tempDB, etc. Be sure to check VMware KB 2038869 to see whether your network configuration requires iSCSI port binding; if your iSCSI traffic is on a separate VLAN from your other networks, you'll need port binding. For redundancy at the host level, you'll want two vmkernel ports added to each host for iSCSI, and then bind each of those ports to a physical NIC. VMware KB 2045040 explains the process.
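If it helps, here's a rough PowerCLI sketch of the disk layout piece once the datastore exists. The vCenter address, VM name, datastore name, and size are all made-up placeholders:

```powershell
# Connect to vCenter (placeholder address)
Connect-VIServer -Server "vcenter.example.local"

$vm = Get-VM -Name "SQL-NODE-01"    # placeholder VM name

# Add a dedicated virtual disk for the SQL log files on the ABS-backed datastore
$logDisk = New-HardDisk -VM $vm -CapacityGB 200 -Datastore (Get-Datastore "ABS-SQL-DS")

# Put the new disk on its own ParaVirtual SCSI controller so log I/O
# doesn't share a controller with the data or tempDB disks
New-ScsiController -HardDisk $logDisk -Type ParaVirtual
```

Repeat the New-HardDisk/New-ScsiController pair for the data and tempDB disks to keep each workload on its own controller, like you did with RDMs.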

If you're mapping iSCSI directly from the VMs, all of the iSCSI configuration is in the guest. Like Mike says, all the I/O will be going through the NIC interface. Depending on your IP/VLAN setup you might use NIC teaming or MPIO; see this Microsoft blog post for more info. Most likely you'll be using MPIO.
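For the MPIO route, the in-guest setup on Server 2012 R2 is basically a couple of PowerShell steps. This is only a generic sketch of those steps, not a Nutanix-specific procedure:

```powershell
# Install the Multipath I/O feature (a reboot is typically required afterwards)
Install-WindowsFeature -Name Multipath-IO

# Let the Microsoft DSM automatically claim iSCSI-attached devices
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Use round robin as the default load-balance policy for claimed devices
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
```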

In either case, Nutanix has made multipathing incredibly easy with AOS 4.7 and the external data services IP address. Now you have just one target IP to use, and Nutanix treats the external data services IP as a virtual IP to automagically manage the iSCSI connections and backend MPIO across all CVMs.
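From the guest side that boils down to something like this (the portal address below is a placeholder for your own external data services IP):

```powershell
# Register the single external data services IP as the target portal (placeholder IP)
New-IscsiTargetPortal -TargetPortalAddress "10.10.10.50"

# Discover the Volume Group target and connect, letting MPIO handle the paths
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true

# Confirm the session is up
Get-IscsiSession | Select-Object TargetNodeAddress, IsConnected
```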

Hopefully I wasn't too far off topic from your question and this info helps get you closer to what you're looking for.
Badge +4
Mike(s)-
Thanks so much for your responses. Basically, at this point on our POC cluster I am turning up both configurations to see which gives the best performance and which is easiest from a manageability perspective. If testing goes well (which I anticipate it will), I'll move the SQL workload over to some new nodes. Can either of you comment on what Nutanix finds to be the sweet spot: mapping iSCSI to the VMs, or presenting the volume groups and creating a datastore? I see pros and cons with both, so I'm curious if anyone could chime in on the good/bad/ugly of each approach.
Userlevel 1
Badge +9
My personal preference is to use a datastore to maintain visibility of storage allocation at the virtualization layer, for monitoring tools, image-based backups, and replication.
Earlier this year I did some comparisons between in-guest (single vNIC) and datastore-based configurations and found the datastore config was always faster. My testing wasn't extremely scientific, but it was consistent.