Nutanix Volumes - Recommendations And Best Practices | Nutanix Community
  • Use the Data Services IP method for external host connectivity to VGs.

  • For backward compatibility, you can upgrade existing environments without disruption and continue to use MPIO for load balancing and path resiliency.

  • For security, use at least one-way CHAP. (A Linux initiator example follows this list.)

  • Leave Acropolis Dynamic Scheduling (ADS) enabled. (Enabled is the default setting.)

  • Use multiple disks rather than a single large disk for an application. Consider using a minimum of one disk per Nutanix node to distribute the workload across all nodes in a cluster. Multiple disks per Nutanix node may also improve an application’s performance. (An acli sketch follows this list.)

  • For performance-intensive environments, we recommend using between four and eight disks per CVM for a given workload.

  • Use dedicated network interfaces for iSCSI traffic in your hosts.

  • Place hosts that use Nutanix Volumes on the same subnet as the iSCSI data services IP.

  • Use a single subnet (broadcast domain) for iSCSI traffic. Avoid routing between the client initiators and CVM targets.

  • Receive-side scaling (RSS) allows the system to use multiple CPUs for network activity. With RSS enabled, multiple CPU cores process network traffic, preventing a single CPU core from becoming a bottleneck. Enabling RSS within hosts can be beneficial for heavy iSCSI workloads. For VMs running in ESXi environments, RSS requires VMXNET3 VNICs. For Hyper-V environments, enable VMQ to take full advantage of Virtual RSS. (A PowerShell sketch for RSS and VMQ follows this list.)

  • The Nutanix CVM uses the standard Ethernet MTU (maximum transmission unit) of 1,500 bytes for all the network interfaces by default. The standard 1,500-byte MTU delivers excellent performance and stability. Nutanix does not support configuring the MTU on a CVM's network interfaces to higher values. You can enable jumbo frames (MTU of 9,000 bytes) on the physical network interfaces of AHV, ESXi, or Hyper-V hosts and user VMs if the applications on your user VMs require them. If you choose to use jumbo frames on hypervisor hosts, be sure to enable them end to end in the desired network and consider both the physical and virtual network infrastructure impacted by the change. (A quick end-to-end MTU check follows this list.)

  • For Linux environments, ensure that the SCSI device timeout is 60 seconds. See Red Hat’s documentation for an example of checking and modifying this setting. (A sketch also follows this list.)

  • For Linux environments, use persistent file system or device naming identifiers to ensure that applications reference storage devices correctly across system reboots. See Red Hat’s documentation on persistent naming attributes for more details. (An example follows this list.)

  • For Windows environments, set the TcpAckFrequency value to 1 for the NIC connecting to the Volumes iSCSI targets, so that every packet is acknowledged immediately. See Microsoft Support’s documentation for more details. (A registry sketch follows this list.)

  • When using the iSCSI Data Services IP:

    • Tests with the default iSCSI timeout and timer settings have shown path failover to take 15 to 20 seconds. These results are well within the Windows default disk timeout of 60 seconds. In general, Nutanix recommends using the default iSCSI client timer settings, with one exception for MPIO, noted below.

    • In physical server environments that require NIC redundancy, you can use either NIC teaming (also called bonding) or MPIO.

    • When using MPIO for NIC redundancy, use an active-active load balance policy such as round robin.

    • When using MPIO, set the Windows iSCSI LinkDownTime setting to 60 seconds. (A registry sketch follows this list.)
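
For the one-way CHAP recommendation, a minimal Linux initiator sketch in the same iscsid.conf format used later in this article. The username and secret are placeholders, and the matching CHAP secret must also be configured on the volume group's target side; many initiators require a secret of 12 to 16 characters.

# /etc/iscsi/iscsid.conf -- one-way CHAP: the initiator authenticates itself to the target
node.session.auth.authmethod = CHAP
node.session.auth.username = iqn.1994-05.com.redhat:client01
node.session.auth.password = Secret12345678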
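
As a sketch of the multiple-disk recommendation, the following acli commands (run from a CVM on an AHV cluster) create a volume group with four disks and allow an external initiator to connect. The VG name, container name, disk sizes, and IQN are all hypothetical.

# Create a volume group with four disks so the workload can spread across CVMs
acli vg.create app01-vg
acli vg.disk_create app01-vg container=default create_size=500G
acli vg.disk_create app01-vg container=default create_size=500G
acli vg.disk_create app01-vg container=default create_size=500G
acli vg.disk_create app01-vg container=default create_size=500G
# Permit the external iSCSI initiator to attach (IQN is a placeholder)
acli vg.attach_external app01-vg iqn.1994-05.com.redhat:client01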
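
For the RSS and VMQ item, a PowerShell sketch; the adapter and VM names are placeholders.

# Inside a Windows guest: check and enable RSS on the iSCSI-facing NIC
Get-NetAdapterRss -Name "Ethernet 2"
Enable-NetAdapterRss -Name "Ethernet 2"
# On a Hyper-V host: give the VM's vNIC a VMQ weight so Virtual RSS can engage
Set-VMNetworkAdapter -VMName "sql-vm01" -VmqWeight 100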
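
If you do enable jumbo frames, a quick end-to-end check from a Linux initiator; 10.0.0.50 stands in for your data services IP.

# 8972 = 9,000-byte MTU minus 20 (IP header) and 8 (ICMP header); -M do forbids fragmentation
ping -M do -s 8972 10.0.0.50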
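
For the 60-second SCSI device timeout, one common Linux pattern; the device name is a placeholder and the udev rule is a sketch to adapt per Red Hat's documentation for your distribution.

# Check and set the timeout for one device at runtime
cat /sys/block/sdb/device/timeout
echo 60 > /sys/block/sdb/device/timeout
# Persist it with a udev rule, e.g. /etc/udev/rules.d/99-iscsi-timeout.rules (SCSI type 0 = disk)
ACTION=="add", SUBSYSTEM=="scsi", ATTR{type}=="0", ATTR{timeout}="60"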
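
For persistent device naming, an illustrative sketch; the UUID and mount point are placeholders.

# Prefer stable identifiers over /dev/sdX names, which can change across reboots
ls -l /dev/disk/by-id/
blkid /dev/sdb
# /etc/fstab entry by filesystem UUID; _netdev delays mounting until the network (and iSCSI) is up
UUID=0a1b2c3d-placeholder  /data  xfs  _netdev  0 0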
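
For the TcpAckFrequency change, a PowerShell sketch; the interface GUID is a placeholder for the iSCSI-facing NIC's interface key, which you would look up as described in Microsoft's documentation.

# Acknowledge every packet immediately on the iSCSI NIC (interface GUID is a placeholder)
$nicKey = 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{11111111-2222-3333-4444-555555555555}'
Set-ItemProperty -Path $nicKey -Name TcpAckFrequency -Value 1 -Type DWord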
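
For the MPIO LinkDownTime setting, a PowerShell sketch; the class GUID below is the standard SCSI adapter class where the Microsoft iSCSI initiator keeps its driver parameters, but the instance number (0000) varies per system, so verify the path before applying.

# iSCSI initiator driver parameters live under the SCSI class key; 0000 is a placeholder instance
$classKey = 'HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\0000\Parameters'
Set-ItemProperty -Path $classKey -Name LinkDownTime -Value 60 -Type DWord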

Client Tuning Recommendations

Not all environments require tuning, but there are additional iSCSI settings that can benefit performance in some environments.

  • For large block sequential workloads with I/O sizes of 1 MB or larger, it’s beneficial to increase the iSCSI MaxTransferLength from 256 KB to 1 MB. (A Windows registry sketch follows this list.)

  • For workloads with large storage queue depth requirements, it can be beneficial to increase the initiator and device iSCSI client queue depths (on Linux, see node.session.cmds_max and node.session.queue_depth in the tuning example below).
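
On Windows, MaxTransferLength is a driver parameter under the same iSCSI initiator class key used for LinkDownTime above; this sketch raises it to 1 MB (0x100000 bytes), with the instance number (0000) again a placeholder to verify per system. A re-login or reboot may be needed for the driver to pick up the change.

# Raise the initiator's maximum transfer length from the 256 KB default to 1 MB
$classKey = 'HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\0000\Parameters'
Set-ItemProperty -Path $classKey -Name MaxTransferLength -Value 0x100000 -Type DWord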

The default Nutanix iSCSI target values are as follows:

  • iscsi_max_recv_data_segment_length

    • Maximum number of bytes allowed in a single PDU data segment.

    • Default: 1048576

  • iscsi_desired_first_burst_length

    • Maximum amount in bytes of unsolicited data an iSCSI initiator may send to the target for a single SCSI command.

    • Default: 16777216

  • iscsi_desired_max_burst_length

    • Desired value for MaxBurstLength if negotiated.

    • Default: 16777216

  • iscsi_session_queue_size

    • Maximum number of outstanding requests an initiator can have on a given iSCSI session.

    • Default: 512
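
To see what an initiator actually negotiated against these target defaults, you can dump the session details on a Linux client; the negotiated values (for example, MaxRecvDataSegmentLength and MaxBurstLength) appear in the output.

# Print connection state, negotiated iSCSI parameters, and attached devices
iscsiadm -m session -P 3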


Linux Client Tuning Example

Configure the following iSCSI settings in the /etc/iscsi/iscsid.conf file on the guest OS, then restart the iscsid process and log back in to the targets (a sample sequence follows the settings).

node.session.timeo.replacement_timeout = 120
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 10
node.session.cmds_max = 2048
node.session.queue_depth = 1024
node.session.iscsi.ImmediateData = Yes
node.session.iscsi.FirstBurstLength = 1048576
node.session.iscsi.MaxBurstLength = 16776192
node.conn[0].iscsi.MaxRecvDataSegmentLength = 1048576
discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 1048576
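
The iscsid.conf values are defaults applied when targets are discovered, so after editing the file, restart the daemon and log the sessions out and back in (or re-run discovery); the target IQN and portal below are placeholders.

systemctl restart iscsid
# Re-login so new sessions pick up the settings (IQN and portal are placeholders)
iscsiadm -m node -T iqn.2010-06.com.nutanix:example-vg -p 10.0.0.50:3260 --logout
iscsiadm -m node -T iqn.2010-06.com.nutanix:example-vg -p 10.0.0.50:3260 --login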


For more information, see Nutanix Volumes.

If I configure the iSCSI Data Services IP on a different subnet from the AHV hosts and CVMs, will it work?