The widespread movement towards virtualized desktop and app delivery is accelerating due to a number of market transitions:
- “Anywhere access” to virtual apps and desktops to increase employee productivity
- Centralized security to protect sensitive information
- Reduced cost and complexity of app and desktop management across physical devices and hybrid-cloud infrastructures
- Reduced false alarms to service desks while enabling BYOD freedom of choice
- Comprehensive desktop virtualization
Some of the trending use cases are obvious, such as VDI for healthcare, education, and public sector, while others are not:
- The desire to mobilize and cloud-enable Windows apps via any device
- The need for instant, connected communications, such as Skype for Business
- The ability to run 3D and graphics-intensive apps virtually with performance equivalent to physical workstations
But if the use cases for VDI are so great, why do so many VDI initiatives still end up stalling? A better question might be: How do I eliminate the risks of performance degradation and capacity saturation?
Bringing advanced server virtualization into a traditional datacenter can dramatically increase the demand on storage performance and storage capacity. This is because virtual servers increase workload density in the datacenter, resulting in much greater demands on the back-end infrastructure, which puts a spotlight on the conventional storage network (SAN and NAS), as well as the centralized storage infrastructure.
SAN/NAS storage networks are notoriously complex to manage and expensive to scale when supporting growth that is driven by virtualization.
The centralized storage array itself was architected prior to the advent of virtualization. Consequently, most arrays manage storage using LUNs and RAID groups with varying vendor-dependent standards. This makes it very difficult to optimally provision storage for virtual environments and introduces challenges in monitoring storage demands and performance per VM.
Additionally, because most storage arrays include a fixed number of storage controllers, with limited or no ability to scale out, these legacy technologies introduce potential I/O bottlenecks.
With a focus on making IT “invisible”, Nutanix offers a completely new approach that leverages web-scale engineering, similar to companies such as Google, Facebook, and Amazon, to natively converge compute, virtualization, and storage. Nutanix has purpose-built its architecture for virtualization, while incorporating major enterprise storage features, making its solution easily consumable by enterprises.
Xangati’s performance analytics support for Nutanix includes a storm-tracker utility that provides real-time contention analysis across all silos, virtual and physical, including storage storms. For example, Xangati analyzes contention among multiple virtual machines sharing a datastore that is exhibiting high latency and possibly high usage rates. Xangati also reports metrics on storms contributing to degrading conditions, in which a VM's access to resources such as CPU, memory, and storage is excessively hampered.
Once Xangati has detected a storm, users can request a recommendation on possible actions to prevent a recurrence or mitigate the effects of a future storm. For example, for a storage contention storm on a particular datastore, the Xangati recommendation presents a trend graph of past storage contention storms on the same datastore, showing how intense and frequent that storm has been historically. The remedial action will also check the capacity of the datastore, to see if utilization could be a cause of the storm, and if so, recommend that additional capacity be added to that datastore.
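The contention-storm logic described above can be illustrated with a minimal sketch. The thresholds, class, and function names here are hypothetical placeholders for illustration only; Xangati's actual storm tracker uses dynamically learned baselines rather than fixed cutoffs.

```python
from dataclasses import dataclass

# Hypothetical fixed thresholds for illustration; the real product
# derives baselines dynamically from observed behavior.
LATENCY_MS_THRESHOLD = 25.0    # datastore latency considered "high"
CAPACITY_PCT_THRESHOLD = 85.0  # utilization level worth flagging

@dataclass
class DatastoreSample:
    vm_latencies_ms: dict      # per-VM observed datastore latency (ms)
    capacity_used_pct: float   # datastore space utilization (%)

def detect_contention_storm(sample: DatastoreSample):
    """Flag a storm when several VMs on one datastore all see high latency."""
    hot_vms = [vm for vm, lat in sample.vm_latencies_ms.items()
               if lat > LATENCY_MS_THRESHOLD]
    if len(hot_vms) < 2:
        return None  # contention implies multiple affected VMs
    recommendations = ["Review the historical storm trend for this datastore."]
    if sample.capacity_used_pct > CAPACITY_PCT_THRESHOLD:
        # High utilization may be contributing to the storm.
        recommendations.append("Add capacity to this datastore.")
    return {"affected_vms": sorted(hot_vms), "recommendations": recommendations}

storm = detect_contention_storm(DatastoreSample(
    vm_latencies_ms={"vm-a": 40.2, "vm-b": 31.7, "vm-c": 8.1},
    capacity_used_pct=91.0,
))
# Two VMs exceed the latency threshold and utilization is high, so the
# result flags both VMs and includes the capacity recommendation.
```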
Xangati also generates an absolute capacity alert whenever a datastore exceeds the specified dynamic threshold. Users can also generate a Xangati Capacity Trend report for VM, Host, Datastore, Server, PortGroup/vSwitch, Router Interface, and Storage Volume objects based on available data. The report contains collected statistics about resource usage and capacity for the selected object, plus predictive analysis of its future usage, including consumptive metrics for Datastores and Storage Volumes (Storage Usage).
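To give a feel for the idea behind a capacity trend report, here is a minimal sketch that fits a linear trend to historical usage samples and projects when a datastore would reach saturation. The function name and least-squares approach are illustrative assumptions; Xangati's actual predictive analytics are richer than a straight-line fit.

```python
def project_days_to_full(daily_usage_pct, threshold_pct=100.0):
    """Fit usage = slope*day + intercept and project days until threshold.

    daily_usage_pct: one utilization sample (%) per day, oldest first.
    Returns days from the last sample until the threshold is crossed,
    or None if usage is flat or shrinking.
    """
    n = len(daily_usage_pct)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(daily_usage_pct) / n
    # Ordinary least-squares slope and intercept.
    slope = (sum((x - x_mean) * (y - y_mean)
                 for x, y in zip(xs, daily_usage_pct))
             / sum((x - x_mean) ** 2 for x in xs))
    if slope <= 0:
        return None  # no projected saturation
    intercept = y_mean - slope * x_mean
    # Day index where the trend line crosses the threshold, relative to today.
    return (threshold_pct - intercept) / slope - (n - 1)

# Usage grows 2% per day from 70%: the trend reaches 100% in 15 days
# from day 0, i.e. 11 days after the last (fifth) sample.
days = project_days_to_full([70.0, 72.0, 74.0, 76.0, 78.0])
```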
As a Nutanix Ready partner for Management & Operations, Xangati uniquely provides a live, continuous, high-fidelity stream of fine-grained performance monitoring metrics for Nutanix environments, analyzed across all datacenter functional components and siloed operations. Xangati’s patented in-memory architecture allows for high speed and scalability across hundreds of thousands of interactions, with end-to-end visibility ranging from the virtual desktop to the converged infrastructure to hybrid clouds.
Xangati performance analytics tie together visibility from the infrastructure to the end user, including quality-of-experience metrics for Citrix XenApp and XenDesktop that show connect, login, and reconnect transactional response rates for many thousands of users. Xangati’s analytics and control capabilities enable IT Operations to assure performance, optimize costs through better capacity utilization, respond quickly to customer issues, and deliver a better user experience.
Want to learn more? Check out the links below for more content: