Designing, Optimizing and Scaling Microsoft SQL Server

  • 2 November 2020

Executive Summary

This document makes recommendations for designing, optimizing, and scaling Microsoft SQL Server deployments on the Nutanix enterprise cloud. Historically, it has been a challenge to virtualize SQL Server because of the high cost of traditional virtualization stacks and the impact that a SAN-based architecture can have on performance. Businesses and their IT departments have constantly fought to balance cost, operational simplicity, and consistent predictable performance.

Nutanix removes many of these challenges and makes virtualizing a business-critical application such as SQL Server much easier. The Nutanix distributed storage fabric is a software-defined solution that provides all the features one typically expects in an enterprise SAN, without a SAN’s physical limitations and bottlenecks. SQL Server particularly benefits from the following storage features:

  • Localized I/O and the use of flash for index and key database files to lower operation latency.

  • A highly distributed approach that can handle both random and sequential workloads.

  • The ability to add new nodes and scale the infrastructure without system downtime or performance impact.

  • Nutanix data protection and disaster recovery workflows that simplify backup operations and business continuity processes.

Nutanix lets you run both Microsoft SQL Server and other VM workloads simultaneously on the same platform. Density for SQL Server deployments is driven by the database’s CPU and storage requirements. To take full advantage of the system’s performance and capabilities, validated testing shows that it is better to scale out and increase the number of SQL Server VMs on the Nutanix platform than to scale up individual SQL Server instances. The Nutanix platform handles SQL Server’s demanding throughput and transaction requirements with localized I/O, server-attached flash, and distributed data protection capabilities.



This best practice document is part of the Nutanix Solutions Library. We wrote it for those architecting, designing, managing, and supporting Nutanix infrastructures. Readers should already be familiar with a hypervisor (VMware vSphere, Microsoft Hyper-V, or the native Nutanix hypervisor, AHV), Microsoft SQL Server, and Nutanix.

The document addresses key items for each role, enabling a successful design, implementation, and transition to operation. Most of the recommendations apply equally to all currently supported versions of Microsoft SQL Server. We call out differences between versions as needed.


This document covers the following subject areas:

  • Overview of the Nutanix solution.

  • The benefits of running Microsoft SQL Server on Nutanix.

  • Overview of high-level SQL Server best practices for Nutanix.

  • Design and configuration considerations when architecting a SQL Server solution on Nutanix.

  • Virtualization optimizations for VMware ESXi, Microsoft Hyper-V, and Nutanix AHV.

Nutanix Enterprise Cloud Overview

Nutanix delivers a web-scale, hyperconverged infrastructure solution purpose-built for virtualization, containerized workloads, and private cloud environments. This solution brings the scale, resilience, and economic benefits of web-scale architecture to the enterprise through the Nutanix enterprise cloud platform, which combines the core HCI product families (Nutanix AOS and Nutanix Prism management) with other software products that automate, secure, and back up cost-optimized infrastructure.

Available attributes of the Nutanix enterprise cloud OS stack include:

  • Optimized for storage and compute resources.

  • Machine learning to plan for and adapt to changing conditions automatically.

  • Intrinsic security features and functions for data protection and cyberthreat defense.

  • Self-healing to tolerate and adjust to component failures.

  • API-based automation and rich analytics.

  • Simplified one-click upgrades and software life cycle management.

  • Native file services for user and application data.

  • Native backup and disaster recovery solutions.

  • Powerful and feature-rich virtualization.

  • Flexible virtual networking for visualization, automation, and security.

  • Cloud automation and life cycle management.

The Nutanix platform can be broken down into three main components: an HCI-based distributed storage fabric, management and operational intelligence from Prism, and AHV virtualization. Nutanix Prism furnishes one-click infrastructure management for virtual environments running on AOS. AOS is hypervisor agnostic, supporting two third-party hypervisors (VMware ESXi and Microsoft Hyper-V) in addition to the native Nutanix hypervisor, AHV.

Understanding Microsoft SQL Server

SQL Server Overview

Microsoft SQL Server is a relational database management system (RDBMS) that supports a wide variety of business intelligence, analytics applications, and transaction processing in corporate IT environments. SQL Server is one of the most widely used database platforms in the world, with organizations typically deploying from dozens to hundreds of instances across the enterprise.


SQL Server Workload Types

When designing a new SQL Server database solution, it’s important to understand the different SQL Server workload types. Each workload type has its own distinct characteristics and requirements that feed into how the overall solution is delivered.

Online Transaction Processing (OLTP)

OLTP is a data-processing approach used to facilitate and manage typical business applications, and most applications are OLTP-based. Pure OLTP applications tend to issue large numbers of small queries, usually with a roughly 70:30 split between reads and writes. Sustained CPU usage during work hours and high sensitivity to latency are common characteristics of OLTP workloads; processors with faster base and turbo clock speeds tend to perform better with OLTP queries. These workloads generate more I/O operations per second than an equivalent data warehouse. Within a single OLTP database, administrators see mostly sequential write activity to the transaction log file and more random write activity to the data files.

Online Analytic Processing (OLAP)

OLAP is an approach to answering multidimensional queries that was developed for Management Information Systems (MIS) and Decision Support Systems (DSS). OLAP workloads have several distinct components; the common workflow reads data from multiple sources, aggregates and processes that data, then returns it to consumers, often in the shape of a report or dashboard. The aggregation phase is characterized by a large sequential I/O profile, which means these workloads are geared toward throughput rather than latency. Processors with higher physical core counts are very useful for OLAP workloads, as is a large amount of memory for processing bigger datasets. Access to cube files, however, tends to be random, so flash-based storage for the cube files can be very beneficial.

Decision Support Systems (DSS)

Decision Support Systems are often used in conjunction with OLAP to access and manipulate data and support information-driven decision making in an organization. DSS workloads tend to have a few longer-running queries that are resource-intensive (CPU, memory, I/O) and generally occur during month-, quarter-, or year-end. DSS queries favor reads over writes, so it is important for the Nutanix platform hosting this workload type to have nodes sized so that 100 percent of the dataset can reside in local flash. With Nutanix data locality, this sizing ensures that the system serves the data as quickly as possible. The resource-intensive nature of these queries also places a premium on processors with higher physical core counts.
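The flash-sizing guidance above can be sketched as simple arithmetic. The function and numbers below are purely illustrative assumptions (not Nutanix sizing guidance): given a dataset size, the flash capacity per node, and a usable-capacity fraction reserved for headroom, it estimates the minimum node count whose combined local flash holds the full dataset.

```python
# Illustrative sizing sketch (assumed numbers, not Nutanix guidance):
# estimate how many nodes are needed for a DSS dataset to fit entirely
# in the cluster's local flash tier.
import math

def nodes_for_flash_fit(dataset_gib, flash_per_node_gib, usable_fraction=0.7):
    """Return the minimum node count whose combined usable flash holds
    the full dataset; usable_fraction reserves capacity headroom."""
    usable_per_node = flash_per_node_gib * usable_fraction
    return math.ceil(dataset_gib / usable_per_node)

# Example: a 4 TiB dataset on nodes with 1966 GiB of flash each.
print(nodes_for_flash_fit(4096, 1966))  # → 3
```

In practice you would also account for replication factor and per-node overhead, which only increases the node count this sketch produces.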

Batch or ETL

These workload types tend to be write-intensive, run during off-peak hours, tolerate contention better than OLTP workloads, and can consume significant bandwidth. Organizations often run batch workloads at the end of the business day to generate reports about that day's transactions. We can break batch or ETL workloads down into three distinct phases:

  1. Extract: The system contacts multiple sources and obtains data from these sources.

  2. Transform: The system performs an action (or actions) on the obtained data to prepare it for loading into a target system.

  3. Load: The data loads into the target system (often a data warehouse).
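The three phases above can be sketched as a minimal pipeline. Everything here is illustrative: the source records, the transformation rules, and the list standing in for a warehouse table are assumptions for the example, not part of any SQL Server API.

```python
# Minimal ETL sketch (illustrative only): extract raw rows from two
# hypothetical sources, transform them, and load them into a target list
# standing in for a warehouse table.

def extract(sources):
    """Extract: gather raw records from multiple sources."""
    for source in sources:
        yield from source

def transform(record):
    """Transform: normalize a raw record for the target schema."""
    return {"customer": record["name"].strip().title(),
            "amount": round(float(record["amount"]), 2)}

def load(records, target):
    """Load: append transformed records to the target system."""
    target.extend(records)

source_a = [{"name": " alice ", "amount": "10.5"}]
source_b = [{"name": "BOB", "amount": "3.456"}]
warehouse = []

load((transform(r) for r in extract([source_a, source_b])), warehouse)
print(warehouse)
# → [{'customer': 'Alice', 'amount': 10.5}, {'customer': 'Bob', 'amount': 3.46}]
```

Real ETL jobs replace each function with connectors, cleansing logic, and bulk loads, but the three-phase shape stays the same.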

For complete documentation, please refer to Microsoft's SQL Server best practices.
