
Fast Cars - Comparing and Contrasting Your Hyperconverged Infrastructure

  • 17 September 2018
This post was authored by Marc Trouard-Riolle, Senior Product Marketing Manager

Approximately ten years ago, while I was working as a Systems Engineer on a server virtualization proof-of-concept, a prospective customer complained that the infrastructure analysis tool I had been using recommended fewer VMs per host than he was happy with, and fewer than a friend working nearby had achieved with the same tool. The friend worked at a different company, with different server vendors, different infrastructure, and likely different application usage profiles, so it was no surprise to me that the results differed; yet the comparison was made anyway, and the results were contrasted against what someone else had achieved.

When it comes to compute, storage and network infrastructure, it is imperative that we test and analyze performance, such as the min-max values from four-corner scalability and performance testing. Doing so helps us understand what the infrastructure can achieve and aids in justifying the project investment. Basing day-to-day operational expectations solely on these maximums, however, is unrealistic for the majority of use cases and can lead to disappointing results and failed projects.

One analogy is the vehicle specifications published by many auto manufacturers. “You got a fast car, but is it fast enough so you can fly away?” Even though most models quote top speeds well in excess of 100 miles per hour, they are rarely driven that fast continuously. While it is interesting to know the top speed my car can achieve, it does not mean I expect to drive at that speed all the time. From an IT perspective, we need assurance that a proposed solution stack can not only hit the required peak performance in the short term, but, more importantly, that it can maintain the expected level of service throughout the infrastructure’s lifecycle, tolerating the many commonly occurring datacenter operations that can affect performance.

This is where Nutanix X-Ray helps, enabling customers to test, analyze and understand how their hyperconverged infrastructure (software and associated hardware platform) will perform in real-world scenarios, and not just how fast applications or storage can run under the perfect conditions of day 0 or day 1.

Nutanix is proud to announce the general availability of X-Ray 3.1, the most significant X-Ray release yet for hyperconverged infrastructures, with product enhancements to platform support, usability, and test validation.

Platforms


New for this release, the integration of Microsoft Hyper-V and Storage Spaces Direct, the core components of an on-premises Microsoft Azure Stack, complements X-Ray’s broad support for hyperconverged infrastructure and hypervisor platforms.


This enables organizations to test and analyze their applications for consistent performance during common datacenter scenarios on a complete Microsoft HCI stack, using all existing X-Ray test scenarios.

Either Microsoft Storage Spaces Direct or Nutanix Enterprise Cloud can be tested in conjunction with Microsoft Hyper-V, providing flexibility in which combination of platform and hypervisor to test based on your organization’s particular needs.

To add a Microsoft Hyper-V target cluster, simply select ‘SCVMM’ from the ‘manager type’ list and provide the target IP address or DNS name, together with credentials for that infrastructure.
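As a rough sketch (the field names below are illustrative and do not reflect X-Ray’s actual API), the SCVMM target definition comes down to a handful of values that the UI collects:

```python
# Illustrative sketch only: these field names are hypothetical and simply
# mirror the values the X-Ray UI asks for when adding an SCVMM target.
hyperv_target = {
    "manager_type": "SCVMM",                     # chosen from the 'manager type' list
    "manager_address": "scvmm01.example.local",  # target IP address or DNS name
    "username": "EXAMPLE\\xray-service",         # credentials for that infrastructure
    "password": "********",
}
```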


Usability


Early releases of X-Ray did a good job of showing detailed information about test scenario configurations and the graphical results following a test; however, end-user feedback suggested that more detail could be provided on what a test scenario was going to do. Additionally, a level of interpretation was required to understand whether the results were actually good and performance was as expected, as in the OLTP IOPS test results below. For example, is an average of 4200 IOPS good for this infrastructure and for the test in question?


To tackle the end-user feedback, X-Ray’s user interface has been updated:

Greater test scenario detail, together with an image that represents the scenario action. In the example below, we can see that one of the nodes (#1) is designated as the fail node, to understand the impact, if any, on the OLTP database VMs running in the cluster.


Test results now show a “What’s a good result?” tooltip for quick visibility of the expected performance, together with an ‘expected performance’ line on the graph indicating 95% of the expected performance.

In the example below, a good result for this particular test is for the OLTP database to consistently achieve 4200 IOPS. In the first graph, however, we see large variance outside the expected range, whereas the second graph shows a good result that stays continuously within a very small margin of the 4200 IOPS mark.
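To make the threshold concrete, here is a minimal sketch (not X-Ray’s implementation) of the check implied by the ‘expected performance’ line: every IOPS sample should stay at or above 95% of the 4200 IOPS target.

```python
# Minimal sketch, not X-Ray's implementation: flag runs whose IOPS samples
# fall below 95% of the expected target, the threshold drawn on the graphs.
EXPECTED_IOPS = 4200
THRESHOLD = 0.95 * EXPECTED_IOPS  # the 'expected performance' line (3990 IOPS)

def good_result(samples):
    """Return True if every sample stays at or above the threshold."""
    return all(s >= THRESHOLD for s in samples)

steady_run = [4180, 4210, 4195, 4205]    # hovers near 4200 IOPS -> good result
variable_run = [4200, 2600, 3100, 4150]  # large dips below the line -> poor result

print(good_result(steady_run))    # True
print(good_result(variable_run))  # False
```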


Validation


Initially launched in X-Ray 3.0, pre-checks have been enhanced to enable comprehensive pre-test validation, covering infrastructure authentication, connectivity and VM deployment. Many scenario tests are extremely thorough and can take many hours to complete, so there is nothing more frustrating than discovering that a test set started the previous day failed because of a simple password typo or a network connectivity (IP addressing) issue.


The optional validation process can now confirm the required scenario test settings, including deploying a VM to verify end-to-end VM communication. If the test configuration needs edits, they can be made before executing the test, streamlining the testing process and giving organizations greater assurance that the tests will run successfully.
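To illustrate the idea (the function names here are hypothetical, not X-Ray’s code), a pre-test validation pass boils down to confirming reachability and credentials, and ideally deploying a throwaway VM, before committing to a multi-hour run:

```python
import socket

# Hypothetical sketch of a pre-test validation flow; X-Ray's real pre-checks
# are built into the product and are not exposed as a Python API like this.

def check_connectivity(host, port=443, timeout=5.0):
    """Confirm the management endpoint is reachable before anything else."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def validate(target):
    """Return a list of problems found; an empty list means it is safe to start."""
    problems = []
    if not check_connectivity(target["manager_address"]):
        problems.append("management endpoint unreachable (check IP address or DNS name)")
    if not target.get("username") or not target.get("password"):
        problems.append("missing credentials (the classic password typo)")
    # A full pre-check would also deploy a test VM and confirm it responds,
    # proving end-to-end VM communication before the long-running scenario.
    return problems
```

Catching these problems up front is what keeps an overnight test run from failing quietly in its first hour.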


If you’re investigating HCI or Enterprise Clouds for your organization and want the freedom to compare and contrast how your applications will operate, take a look at Nutanix X-Ray. It is designed to show you the real-world, day-to-day performance achievable beyond day 0, for your choice of HCI platform, hypervisor and hardware.

Visit www.nutanix.com/xray for further information, and a one-time registration. For documentation and file downloads, together with older blog URLs and open-source software information, please head to the X-Ray community site at: https://next.nutanix.com/nutanix-x-ray-18.

Happy testing and safe driving! :-)

Disclaimer: This blog may contain links to external websites that are not part of Nutanix.com. Nutanix does not control these sites and disclaims all responsibility for the content or accuracy of any external site. Our decision to link to an external site should not be considered an endorsement of any content on such site.

© 2018 Nutanix, Inc. All rights reserved. Nutanix and the Nutanix logo are registered trademarks or trademarks of Nutanix, Inc. in the United States and other countries. All other brand names mentioned herein are for identification purposes only and may be the trademarks of their respective holder(s).
