Solved

Can external legacy storage be accessed via Nutanix AHV or ESXi?

  • 6 February 2020
  • 5 replies
  • 10030 views

There can be legitimate cases where access to external storage, such as massive tape libraries or massive disk archives, is needed by some, but not all, VMs running in a Nutanix cluster.  These VMs would ideally run under the AHV hypervisor, but ESXi is an acceptable alternative.  These external storage systems would likely support 10GbE (or faster) iSCSI and/or 8 Gbit (or faster) Fibre Channel.

I understand that supporting such external access would make the VM “special” and would restrict VM mobility to other specific nodes that were configured for similar external access. The mobility issues are a later discussion, if external storage access is possible.

If the external storage were a NAS device running NFS and/or SMB, I believe it would be straightforward … just define the external IP address in the VM and access the data from the VM.  The same technique could be used for external Ethernet-based iSCSI devices.
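To make that concrete, here is a minimal sketch of what in-guest access looks like from a Linux VM. This is a generic illustration, not a Nutanix procedure; the nfs-utils and open-iscsi packages are assumed to be installed in the guest, and the addresses, export path, and IQN are placeholders.

```python
#!/usr/bin/env python3
"""Minimal sketch: attach external NFS and iSCSI storage from inside a
Linux guest VM. Generic illustration only; assumes nfs-utils and
open-iscsi are installed in the guest. All addresses, the export path,
and the IQN below are placeholders."""
import subprocess

NAS_IP = "192.0.2.10"                            # hypothetical external NAS
ISCSI_PORTAL = "192.0.2.20"                      # hypothetical iSCSI portal
ISCSI_IQN = "iqn.2000-01.com.example:archive01"  # placeholder target IQN


def run(cmd):
    """Run a command inside the guest, echoing it first."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


# NFS/SMB case: the hypervisor is not involved at all; the guest simply
# mounts the export over its virtual NIC.
run(["mount", "-t", "nfs", f"{NAS_IP}:/archive", "/mnt/archive"])

# iSCSI case: discover the target and log in; the LUN then shows up as a
# block device inside the guest, managed entirely by the guest's initiator.
run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", ISCSI_PORTAL])
run(["iscsiadm", "-m", "node", "-T", ISCSI_IQN, "-p", ISCSI_PORTAL, "--login"])
```

In both cases the hypervisor only sees ordinary network traffic from the VM, which is why this path works today without any special Nutanix support.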

 

In many cases, if the external storage needed very high-performance access, I would assume that a dedicated NIC or Fibre Channel controller would be added to the node, used exclusively by the VMs accessing the external storage.

 

I understand that such an external storage connection violates the precepts of HCI computing.  However … in support of customer “choice” … not all customers may want to migrate their external storage to native Nutanix … or in some cases, it might not be financially attractive … for a multi-thousand-disk or tape cold storage archive.

Assuming external storage access is possible, where in the AHV documentation can I find the details?

 

Thank you for your help.

 

Dave B

 


Best answer by vkumarjr 10 February 2020, 01:41


This topic has been closed for comments

5 replies


@db808

 

Understood the requirement; however, Nutanix does not recommend connecting 3rd-party storage devices, as this can have a negative effect on system stability and may result in unexpected downtime.

Nutanix installations are meant to manage only local storage resources and use storage-optimization settings which may be incompatible with 3rd-party storage devices.

 

 

Note: 3rd-party storage should not be connected to a Nutanix system, as it can cause outages because APD (All Paths Down) handling is disabled.

 

I would like to share the following KB, which has the above details:

3rd party storage might cause issues on a Nutanix system

https://portal.nutanix.com/#/page/kbs/details?targetId=kA00e000000CtelCAC

 

Because of the above recommendation, there is little to no AHV documentation available on this.

 

HTH

 

Hello vkumarjr,

Thank you for your reply, but let me clarify my question.

 

When I used the term “access” to an external device, I meant some type of pass-through of the I/O to a specific VM (through a hypervisor). This VM would need to have the proper device drivers installed within the VM’s operating system to support and manage the external device.

 

I was not asking for Nutanix to “manage” the external device, or even know anything about it.

 

The next sub-question would be whether the I/O to this external device could go through a host controller that is also used for other purposes. This would mean that there could be co-mingled devices attached to this controller, some managed by Nutanix, some by the “special” VM.  This co-mingling, especially without some SR-IOV-like support, opens up significant risk, as the controller is being shared in some form between Nutanix and the special VM.  If one software stack issues some form of “device reset”, it could cause errors and corruption of I/O in progress from the “other” software stack.

 

OK.  So for robustness … the “special” host I/O controller for this external storage device would need to be exclusively reserved and assigned to this “special” VM (through its hypervisor), and conversely, Nutanix software may need to be configured to ignore this “special” controller for its own use … i.e., not try to “discover” devices on this controller.

 

So with an “exclusive” controller, this question about allowing connection to an external device degenerates into a form of generic pass-through capability, possibly with an exclusion list so Nutanix ignores the controller.
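To illustrate the kind of mechanism I mean, here is what exclusively reserving a controller for one guest looks like on a plain Linux/KVM host using the kernel’s vfio-pci driver. This is a generic sketch only, run as root on the host; it is not an AHV feature or a Nutanix-supported procedure, and the PCI address is made up.

```python
#!/usr/bin/env python3
"""Generic Linux/KVM sketch: reserve a PCI controller (e.g. an FC HBA)
for exclusive guest use via vfio-pci. Must run as root on the host.
This is NOT an AHV feature or a Nutanix-supported procedure; the PCI
address below is a placeholder."""
from pathlib import Path

PCI_ADDR = "0000:3b:00.0"  # hypothetical address of the dedicated HBA
dev = Path("/sys/bus/pci/devices") / PCI_ADDR

# 1. Detach the controller from whatever host driver currently owns it,
#    so the host storage stack no longer sees devices behind it.
unbind = dev / "driver" / "unbind"
if unbind.exists():
    unbind.write_text(PCI_ADDR)

# 2. Ask the kernel to bind this one device to vfio-pci instead.
(dev / "driver_override").write_text("vfio-pci")
Path("/sys/bus/pci/drivers_probe").write_text(PCI_ADDR)

# 3. The device can now be handed to a single VM as a host device; on
#    stock libvirt/KVM that is a <hostdev type='pci'> entry in the
#    domain definition.
print(f"{PCI_ADDR} is bound to vfio-pci; assign it to the guest as a hostdev")
```

The open question in this thread is whether AHV exposes an equivalent, supported mechanism, together with the “exclusion list” so the Nutanix stack ignores that controller.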

 

This pass-through capability can be a very useful mechanism to allow for use of various “accelerator” or “special” devices for specific interested customers, before these “special” devices are broadly needed by the general Nutanix customer population. If these “special” devices become popular, Nutanix can make the business decision to offer some higher level of support or functionality. 

There are many examples of such “special” PCIe devices: GPUs from Nvidia, AMD, and now Intel; various AI-related compute engines for machine learning; and custom FPGA and ASIC accelerators, many of which have been used in the high-performance financial trading industry.  As the industry moves forward to 100 Gbit and faster network connections, there is an increased need for additional “offloads” of encryption, checksumming, and other functions, because general-purpose Intel/AMD CPUs can no longer process the data at line rate.

 

So the main question is NOT for Nutanix to support some low-volume controller (which opens up many risks), but to allow for a form of pass-through to a designated hypervisor and specially assigned VM.  This also assumes that the hypervisor knows how to pass through the host controller management to the specific VM. This may or may not mean that AHV can be used for this hypervisor.

 

The benefit for the customer, as stated as a goal of Nutanix, is offering customer “choice”.  In this case, the choice of using some special device not managed by Nutanix.

 

Perhaps this “special” Nutanix node is one of the new compute-only nodes, with no Nutanix-supported local storage … that has a “special” host controller being passed through to a “special” VM.

A concrete example for large legacy customers would be to allow access to some form of cold archive system (often robotic tape libraries) that contains hundreds of petabytes of information.  Asking a customer to migrate this cold archive data to Nutanix converged storage is unrealistic and cost-prohibitive.

 

Again … let me stress … at this initial phase … this is a host controller pass-through issue. I am not asking that the Nutanix software “manage” the host controller, other than passing it through to the hypervisor, and ultimately the VM.

 

Thank you for your help and feedback.

 

Regards,

 

db808



@db808

Appreciate your time/explanation

 

Coming to the access part (having a pass-through feature), I am afraid there is currently no such feature for any special VMs; I did check with a different pair of eyes to confirm this.

 

I shall update this thread if I get any details on this.

 

Regards

Vignesh

Thank you Vignesh,

Unfortunately, this is not the answer that I was looking for.  Please relay this information to the product roadmap people.  I can summarize it as two major points.

 

Customers are already using and have investments in expensive “devices” today that they would like to access from the Nutanix environment.  These devices fall into two broad classifications:

  1. External devices using a readily available controller with support already in Linux and Windows.  Examples would be Fibre Channel controllers, SAS controllers, and USB3 controllers. The types of external devices using these controllers are typically forms of disk and tape storage. Customers already have systems, sometimes even virtualized systems, that are connected to and using these external devices. It would be valuable to support some form of “raw” connection to these external devices, with the management done by the drivers in the VM.  As an analogy, think of an external iSCSI device, which can be directly accessed by a VM today.  Nutanix is not involved in managing the external iSCSI device; it is handled as a socket connection to a remote endpoint.  It would be useful to access such external devices also via Fibre Channel, and perhaps SAS.
  2. A “special” interface and/or accelerator controller, such as a GPU, ASIC, FPGA, or other PCIe controller.  This would need some form of controller pass-through to the hypervisor and VM. 

Without a workaround for item #1 … the external disk or tape archive over Fibre Channel or SAS … Nutanix is forcing the customer to build/buy/create some form of gateway system that can wrap the external device’s access into something that can be reached via Ethernet/TCP.  This can be a non-trivial undertaking … and presents a significant impediment to the use of Nutanix HCI systems.  Effectively, Nutanix becomes its own “silo” with limited access to important (and expensive) external devices using Fibre Channel and/or SAS connections.

Thank you for your feedback,

 

Dave B

db808

 

It is unfortunate that many of these HCI systems just flat out will not support FC. It really puts a strain on the SysAdmins: in my case, we have 1.5 PB of FC storage, in addition to FC LTO-8 tape libraries (yes, tape; that’s what is called “firm storage” when all the arrays’ SSDs and whatnot that store your backups go south :-) ), using 8 and 16 Gb (and, for the newest, 32 Gb) connections. That storage would only be accessible if I did some sort of iSCSI-type “front-end” kludge so that the Nutanix host/VMs can see it. I get that the HCI has its own storage, but the lack of even an HBA that can be installed and seen for “legacy” storage (although I take issue with that label for FC) is a real gap.

We are evaluating a Nutanix box for a VDI initiative (well, my boss said to evaluate it...), but the thought of having to learn another hypervisor (we’re 100% vSphere) and its management interface (no, it’s not “one click” like the brochure says), or having some sort of administrative-layer “shim” between the hardware and ESXi, isn’t a thing I’m looking forward to. It is a very slick box and the SD storage is neat, but again, the lack of FC is a real problem for enterprises--unless you have funding to do forklift replacements of all your FC storage and hosts...we don’t have that kind of money.