Installation & Configuration
Nutanix VirtIO includes device drivers used by Windows VMs hosted in a Nutanix environment to enhance their stability and performance. The concept is very similar to VMware Tools in ESXi environments. The VirtIO package bundles several drivers, including:
- Balloon driver
- Ethernet adapter
- RNG device
- SCSI pass-through controller
- Serial driver
- SCSI controller
The VirtIO package is found on the Support Portal under AHV (select "VirtIO" from the corresponding drop-down menu). Note that the device driver versions contained within the various available VirtIO packages may be identical if there have been no driver updates between package releases. To correlate the driver versions associated with each VirtIO package release, please reference KB 5491 on the Support Portal. Further, beginning with VirtIO package release 1.1.6, all driver versions match the VirtIO package version.
Many users are unaware that network traffic can be segmented (or separated) within a Nutanix cluster for various functions or purposes. For example, backplane traffic can be separated from management-plane traffic to allow even greater available bandwidth for the backplane traffic. As another example, DMZ-related traffic can be isolated to specific host uplinks. The four primary means of network segmentation are the following:
- Isolating backplane traffic by using VLANs (logical segmentation)
- Isolating backplane traffic physically (physical segmentation)
- Isolating service-specific traffic
- Isolating Stargate-to-Stargate traffic over RDMA
Note that certain means of segmentation are limited to certain hypervisors. For example, segmentation of management and backplane traffic is supported on the AHV, ESXi, and Hyper-V hypervisors (Hyper-V offers logical segmentation only), while service-specific segmentation is supported only on AHV and ESXi.
Executive Summary
This document makes recommendations for designing, optimizing, and scaling Microsoft SQL Server deployments on the Nutanix enterprise cloud. Historically, it has been a challenge to virtualize SQL Server because of the high cost of traditional virtualization stacks and the impact that a SAN-based architecture can have on performance. Businesses and their IT departments have constantly fought to balance cost, operational simplicity, and consistent, predictable performance.
Nutanix removes many of these challenges and makes virtualizing a business-critical application such as SQL Server much easier. The Nutanix distributed storage fabric is a software-defined solution that provides all the features one typically expects in an enterprise SAN, without a SAN's physical limitations and bottlenecks. SQL Server particularly benefits from the following storage features:
- Localized I/O and the use of flash for index and key database files to lower operation latency.
- A highly distributed …
This article contains IPMI commands for checking and setting interfaces to dedicated or shared mode. For example, after a BMC upgrade the IPMI might not be accessible, so you need to verify and, if necessary, change the interface to dedicated or shared mode.
Note: To run ipmitool commands on an ESXi host, prefix all commands with a forward slash (/).
Note: To run ipmitool commands from a remote system such as the CVM (Controller VM), add the "-I lanplus", "-H <IPMI IP>", "-U <username>", and "-P <password>" parameters to the ipmitool command. For example:
nutanix@cvm$ ipmitool -I lanplus -H x.x.x.x -U ADMIN -P <password> <command>
Quanta Platform
Use these commands for an NX-3400 (Quanta) platform. All commands take effect dynamically; a restart is not required.
Check the status:
[root@host]# ipmitool raw 0x0c 0x02 0x01 0xff 0 0
An output similar to the following is displayed:
1100 :00 - Shared port
1101 :01 - Dedicated …
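Putting the two notes together, a remote status check from a CVM might look like the following sketch (the IP address and credentials are placeholders; the raw bytes are the Quanta/NX-3400 query shown above and differ on other platforms):

```shell
# Query the BMC LAN interface mode of an NX-3400 from a CVM.
# -I lanplus selects the RMCP+ interface; replace the IP and credentials.
ipmitool -I lanplus -H 10.0.0.50 -U ADMIN -P 'password' raw 0x0c 0x02 0x01 0xff 0 0
```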
There are times when one of the Linux guest servers in your environment goes unresponsive. Nutanix Support may ask for crash dump files from the Linux VM to further analyze the cause. One such method is to use "sysrq". This utility, found in the Linux guest OS, provides access to several essential kernel commands and allows Nutanix Support to take a more holistic approach to finding a root cause.
NOTE: If the VM is hung or inaccessible and "sysrq" has not already been activated, you may not be able to generate a core dump with this method. Configure "sysrq" while the VM is up and running so it is available for a future occurrence.
For more information on how to activate this feature, see KB 9066 on the Support Portal.
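As a generic sketch of the underlying Linux mechanism (this is the standard sysrq interface, not a Nutanix-specific procedure; follow KB 9066 for the supported steps):

```shell
# Enable all sysrq functions at runtime (persist across reboots by
# placing "kernel.sysrq = 1" in a file under /etc/sysctl.d/).
sysctl -w kernel.sysrq=1

# Later, when Support requests a crash dump, deliberately crash the
# kernel so kdump (if configured) captures a vmcore. Destructive!
echo c > /proc/sysrq-trigger
```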
From the AHV Best Practices document, the live migration network is on the host management network (br0). In a network segmentation environment, backplane traffic is isolated on another network that may have higher bandwidth. Does live migration behave any differently under this condition? Or can the live migration network be configured separately, like an ESXi vMotion network?
Flash Mode is a great feature for ensuring that VM workloads remain within the flash (SSD) tier of storage. Once Flash Mode is enabled for a virtual machine, all of the disks associated with that VM (including any disks created in the future) are automatically added to the flash tier.
However, keeping that many disks in the flash tier can sometimes degrade performance for other VMs that are not configured for Flash Mode (but could benefit from using the flash tier), or can consume the available flash tier space too quickly. Further, it is sometimes not desirable to have every disk associated with a virtual machine contained within the flash tier.
Accordingly, though not available as a modifiable option in the Prism web user interface (UI), individual VM disks can be configured not to use the flash tier even while the VM itself is configured for Flash Mode. The procedure for removing individual VM disks from the flash tier involves using the Acropolis Command-Line Interface (aCLI) …
If you have cloned multiple VMs from a single VM (master VM), you can enable NGT and mount the NGT installer simultaneously on multiple VMs by using the master VM image.
Before you begin
Ensure the following before you perform this task:
- Install NGT on the master VM.
- Clone the required number of VMs from the master VM.
- Shut down the cloned VMs.
Perform the following procedure to enable NGT and mount the NGT installer simultaneously on multiple VMs by using the master VM image.
Note: After you perform the following procedure, you do not need to separately install NGT on the cloned VMs.
Procedure
For every cloned VM, log on to the Controller VM and run the following command:
ncli> ngt mount vm-id=clone_vm_id
Replace clone_vm_id with the ID of the cloned VM. To find the ID of the cloned VM, run the ncli> vm list name="<clone-vm-name>" command and note the value of the Id field as clone_vm_id.
ncli> vm list name="<clone-vm-name>"
Id : 00058a81-64bb-2…
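Since the mount command must be run once per clone, it can be wrapped in a small loop on the CVM; a sketch with placeholder VM IDs (collect the real IDs with the vm list command above):

```shell
# Mount the NGT installer on each cloned VM (the IDs are placeholders).
for vmid in 00058a81-0001 00058a81-0002 00058a81-0003; do
    ncli ngt mount vm-id="$vmid"
done
```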
Many users periodically execute the NCC health checks as a proactive measure against any issues that might appear within their cluster, which is a great idea! However, rather than writing a reminder somewhere or simply trying to remember, many do not realize that this task can be scheduled right from within Prism.
The task can be configured to execute on the following schedules:
- Every 4 hours
- Every day
- Every week
When choosing the daily or weekly options, you are also presented with options to configure a specific time of day and, for the weekly schedule, specific days of the week.
What happens with the results when the scheduled NCC checks are executed? An email is sent to the recipients configured within the Alert Email Configuration settings of the cluster.
For more information, including the specific procedures for configuring this feature, please refer to the Scheduling and Automatically Emailing NCC Results section of the …
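For reference, the same suite that the scheduler runs can also be invoked on demand from any CVM:

```shell
# Run the complete set of NCC health checks; a PASS/FAIL/WARN summary
# is printed at the end of the run (this can take a while on large clusters).
ncc health_checks run_all
```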
As companies become more security aware, third-party security tools are being used more heavily than ever before. One such tool is a security scanner, which can review open network ports within an environment and report back on certain vulnerabilities (CVEs). This includes the open ports of Nutanix-specific components such as the IPMI.
While it is important to keep the IPMI/BMC upgraded to the latest version so as to integrate the latest security patches, there are CVEs that scanners will still report as failed based on the default IPMI configuration. This is due to the virtual media port (623) and the iKVM port (5900) being open by default. The virtual media port allows the user to open a remote session to the host console, and the iKVM port allows the hosts to query information from the BMC.
The specific CVEs affected by these two open ports can be found in KB 2555.
NOTE: If these features are disabled, you will be unable to query any BMC information or open a remote console session to the host.
Security Policies
Traditional data centers use firewalls to implement security checks at the perimeter: the points at which traffic enters and leaves the data center network. Such perimeter firewalls are effective at protecting the network from external threats. However, they offer no protection against threats that originate from within the data center and spread laterally, from one compromised machine to another.
The problem is compounded by virtualized workloads changing their network configurations and hosts as they start, stop, and migrate frequently. For example, IP addresses and MAC addresses can change as applications are shut down on one host and started on another. Manual enforcement of security policies through traditional firewalls, which rely on network configurations to inspect traffic, cannot keep up with these frequent changes and is error-prone.
Network-centric security policies also require the involvement of network security teams that have intimate knowledge of network …
Ubuntu Cloud Images are pre-installed disk images customized by Ubuntu engineering to provide Ubuntu Certified Images, OpenStack, LXD, and other features in public cloud environments. Because of this pre-built customization, the OS requires specific virtual hardware, such as a serial port, that AHV does not include by default. When trying to boot a VM on AHV from any of the Ubuntu Cloud images, the boot process stalls after loading the Btrfs module because the hardware is not there.
This can be addressed by customizing the VM manually using a combination of Prism and aCLI commands, or by leveraging API calls.
1. Download the Ubuntu cloud image in .img format.
2. Upload it as a Disk to Prism Image Services.
3. Create a new VM and attach the disk created in step 2 with Type: "Disk", Operation: "Clone from Image Service", Bus: "SCSI"; for Image, select the name of the uploaded image, then click "Add".
4. Select "Custom Script" at the bottom of the window (AOS 5.10.6 and later) and in "Type Or Paste Script" paste t…
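The serial port that the cloud image expects is typically added from a CVM with aCLI; a sketch, where the VM name is a placeholder and the subcommand should be verified against the aCLI reference for your AOS version:

```shell
# Add a virtual serial port to the VM so the Ubuntu cloud image can
# complete its boot ("ubuntu-cloud" is a placeholder VM name).
acli vm.serial_port_create ubuntu-cloud type=kServer index=0
```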
I feel it is time to address a seemingly minor question: should you re-image your nodes when re-using them from an existing cluster? To give a better idea of the setup, think of a cluster with eight nodes, for example. You would like to scale down the cluster and re-use four of those nodes to form another cluster. You have evicted the nodes, and you could create a new cluster at this stage, but let's take a look at the pros and cons of rushing forward.
Cluster creation does not wipe the system. All the files on the nodes remain as they are when you trigger cluster creation. That can sometimes mean issues with LCM upgrades, or the cluster creation may error out. To resolve those issues you would need to find those files, and chances are you would need a Nutanix Support Engineer to deal with that.
Some features introduced in software releases are only available if the version is a fresh installation rather than an upgrade.
Any networking changes you might need are easy to apply during the foundation …
Cluster creation or initialization is the process of bootstrapping the cluster: configuring the unconfigured nodes, loading node information into the Zeus configuration file, and starting the services. Let's break it down.
What is an unconfigured node?
- A node that is factory shipped.
- A node that was removed from an existing cluster.
These nodes are typically pre-installed with a CVM/hypervisor. No IPv4 address is configured, but an IPv6 link-local address is configured on eth0; this will always remain on a host.
Before you begin to create a cluster with your brand-new nodes, you must have:
- IPv4 address configuration: IPMI IP address, hypervisor IP address, CVM IP address, and DNS/NTP IP address (required while creating a cluster via Foundation)
- Hypervisor and CVM installed
What are the methods of cluster initialization?
- Foundation: a one-click process for cluster creation that re-images multiple nodes and assigns IP addresses on each node.
- Manual: manual hypervisor inst…
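For the manual method, once the CVMs are up and reachable, cluster creation itself is a single command run from any one CVM; a sketch with placeholder CVM IPs:

```shell
# Create a cluster from three unconfigured nodes (placeholder CVM IPs);
# all listed CVMs must be reachable and not part of an existing cluster.
cluster -s 10.0.0.31,10.0.0.32,10.0.0.33 create
```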
Many users will create a Linux VM on their Nutanix AHV cluster using default installation options, configure and install the appropriate applications or services within the VM, and then move on to other tasks. While this approach is certainly acceptable, many users are unaware that there are additional modifications that can be made to the Linux VM OS to enhance the performance or overall functionality of the VM.
For example, there are several Linux kernel parameters that can be configured, including "vm.overcommit_memory" and "vm.swappiness". If leveraging iSCSI connectivity, there are several parameters in the iscsid.conf file that can be modified to increase performance. Regarding disk usage, volume group striping can be employed using LVM to further increase throughput. There are additional parameters that can be applied when mounting disks, and disk access can be tuned via the max_sectors_kb parameter.
You can find more information regarding these modifications …
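As a sketch of the kernel-parameter piece only (the values are illustrative assumptions to show the mechanism, not Nutanix recommendations; test against your own workload):

```shell
# /etc/sysctl.d/90-guest-tuning.conf -- example values only.
# Swap less aggressively:
vm.swappiness = 10
# Always allow memory overcommit:
vm.overcommit_memory = 1
```

Apply the file without a reboot with `sysctl --system`.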
Typically, while deploying a new Windows VM on AHV, the VirtIO driver ISO needs to be mounted to allow the Windows Setup to discover the associated vDisk. Therefore, for each VM that needs to be created, two ISOs need to be mounted (the Windows installation ISO and the VirtIO ISO). To simplify and expedite the deployment process, the VirtIO drivers can be injected into the Windows installation ISO to create a single customized ISO.
The prerequisites for this process are the following tools/files:
- PowerShell
- Windows ADK (Deployment and Imaging Tools Environment)
- Windows installation ISO
- Nutanix VirtIO driver package (the latest version is recommended; it can be downloaded from the Support Portal)
- Administrative privileges on your Windows workstation
For detailed steps and screenshots regarding this process, please review KB 10290 in the Nutanix Support Portal.
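The core of the injection step uses DISM from the Deployment and Imaging Tools Environment; a sketch with placeholder paths (the driver folder layout inside the VirtIO package is an assumption here; KB 10290 has the authoritative steps):

```shell
rem Mount the install.wim extracted from the Windows ISO (image index 1).
dism /Mount-Image /ImageFile:C:\work\install.wim /Index:1 /MountDir:C:\work\mount

rem Recursively inject the Nutanix VirtIO drivers for the target OS.
dism /Image:C:\work\mount /Add-Driver /Driver:"C:\work\virtio\amd64" /Recurse

rem Commit the changes and unmount the image.
dism /Unmount-Image /MountDir:C:\work\mount /Commit
```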
Nutanix Objects Overview
Nutanix Objects™ (Objects) is a software-defined object store service. The service is designed with an Amazon Web Services Simple Storage Service (AWS S3)-compatible REST API interface capable of handling petabytes of unstructured and machine-generated data. Objects addresses storage-related use cases for backup, long-term retention, and data storage for your cloud-native applications by using standard S3 APIs. You no longer have to introduce an external, separately managed storage solution. Objects is deployed and managed as part of the Nutanix Enterprise Cloud OS.
You can manage objects by using Prism Central or the S3-compatible REST APIs after an administrator has authorized the applications and users to access buckets accordingly. For more information on the Objects architecture, refer to the Nutanix Bible.
Usage of Objects
The following are examples of solutions you can implement by using Objects:
- Backup – You can integrate Objects with backup applications such as …
Many users are unaware that there are additional configurable security-related options (beyond what is displayed through the Prism web user interface) which can be used to increase the security settings of the Controller VMs (CVMs) themselves. These options are modified using the Nutanix Command-Line Interface (nCLI) on the CVMs and include the following:
- Enablement of an Advanced Intrusion Detection Environment (AIDE)
- Enforcement of a strong password policy
- Enablement of a defense knowledge consent banner
- Restriction to allow only SNMP version 3
You can find more information regarding these options, including the procedures to enable or disable them, within the Hardening Controller VM section of the AOS Security Guide. Also note that similar options are available for Acropolis Hypervisor (AHV) hosts, configured using the same procedures; you can find more information regarding those options within the Hardening AHV section of the same guide.
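As an illustrative sketch only (the exact subcommand and flag names here are from memory and should be verified against the Hardening Controller VM section for your AOS version), enabling AIDE from nCLI looks roughly like:

```shell
# Enable the Advanced Intrusion Detection Environment on the CVMs.
# Verify the subcommand and flag names in your AOS Security Guide first.
ncli cluster edit-cvm-security-params enable-aide=true
```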
Use the Data Services IP method for external host connectivity to VGs. For backward compatibility, you can upgrade existing environments nondisruptively and continue to use MPIO for load balancing and path resiliency.
- For security, use at least one-way CHAP.
- Leave ADS enabled (enabled is the default setting).
- Use multiple disks rather than a single large disk for an application. Consider using a minimum of one disk per Nutanix node to distribute the workload across all nodes in a cluster. Multiple disks per Nutanix node may also improve an application's performance. For performance-intensive environments, we recommend using between four and eight disks per CVM for a given workload.
- Use dedicated network interfaces for iSCSI traffic in your hosts.
- Place hosts that use Nutanix Volumes on the same subnet as the iSCSI data services IP. Use a single subnet (broadcast domain) for iSCSI traffic. Avoid routing between the client initiators and CVM targets.
- Receive-side scaling …
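From a Linux client, connecting through the data services IP is the usual two-step iscsiadm flow; a sketch with a placeholder IP (add CHAP settings per your security policy):

```shell
# Discover targets exposed via the cluster's iSCSI data services IP.
iscsiadm -m discovery -t sendtargets -p 10.0.0.38:3260

# Log in to the discovered target(s).
iscsiadm -m node --login
```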
Most modern environments consist of multiple layers, each of which contains multiple components. There are switches and routers, firewalls, physical servers, application servers, the applications themselves and, of course, users. Each of the components has logs of more than one kind, location, and severity, and all the components interact with each other directly or indirectly. I am certain you have found yourself in a situation where, to establish a root cause, you had to inspect logs from more than one entity. Establishing a timeline of events is always easier when the clocks of the event sources are synchronised and the events are collected in one central location. While the clocks are handled by NTP, the centralised log location is a syslog server, in this case a remote syslog server, implying that it is separate from the origin of the logs. In addition to the benefits already mentioned, a remote syslog server allows access to logs for systems that are already dead, decommissioned, or replaced. Nutanix …
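On the receiving side, a minimal remote syslog listener is only a few rsyslog directives (a generic rsyslog example, not Nutanix-specific; the port and file path are illustrative):

```shell
# /etc/rsyslog.d/10-remote.conf -- accept UDP syslog on 514 and keep
# one file per sending host under /var/log/remote/.
module(load="imudp")
template(name="PerHost" type="string" string="/var/log/remote/%HOSTNAME%.log")
ruleset(name="remote") {
    action(type="omfile" dynaFile="PerHost")
}
input(type="imudp" port="514" ruleset="remote")
```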
SAP helps customers migrate from traditional relational databases to their in-memory SAP HANA database to gain more agility in their business processes. Many SAP customers are searching for ways to deploy SAP HANA in an efficient, simple way that minimizes risk while preserving the benefits of an agile platform. Nutanix provides such an option: the native Nutanix hypervisor, AHV, and the Nutanix enterprise cloud OS software are certified for production SAP HANA deployments.
HCI for SAP HANA Certification
The certification has two primary segments.
1. As the first step, a platform vendor (Nutanix, in this case) must validate their platform, which consists of a hypervisor and an HCI component.
2. As the second step, the hardware OEM must certify a suggested configuration through some additional HCI-related tests.
When both parts of the validation are complete, the solution is certified and listed in the HCI for SAP HANA category on the SAP website. The hardware OEM is then responsible for selling …
Many users are unaware that there are additional security parameters (beyond what is presented via the Prism user interface) that can be employed on AHV hosts to increase their overall security. These security parameters are configured via the Nutanix Command-Line Interface (nCLI) and include the following:
- Advanced Intrusion Detection Environment (AIDE) - a file and directory integrity checker
- High-strength password enforcement - configure the maximum and minimum number of characters a password must contain, along with the number of passwords retained in history to prevent repeated use
- Core dumps - the recorded state of a process's working memory is dumped to a file if the process ever crashes
- Login banner - display a customized message when a user logs in to a node
More information regarding these parameters, including the procedures to enable or disable them, can be found within the Hardening AHV section of the Nutanix Security Guide. Also note that there are similar parameters …
UPGRADING SERVER FIRMWARE
Nutanix recommends that you use the Service Pack for ProLiant® (SPP) ISO file for applying firmware updates. Perform this procedure on every host in the cluster, one host at a time.
About this task
To upgrade the firmware on a server, do the following:
Procedure
1. If the server is part of a Nutanix cluster, place the server in maintenance mode. Information about placing a server in maintenance mode is available in the host management section of the Acropolis Command-Line Interface (aCLI) documentation. See the Command Reference for the supported AOS version.
2. Turn on the server to boot the SPP ISO.
3. Connect to the iLO by using the iLO IP address.
4. Log on to the iLO user interface by using the administrator credentials. The default administrator user name is Administrator on all HPE® ProLiant® servers. Passwords for the iLO administrator differ from one server to another and are available on the service tag on the server.
5. Attach the SPP ISO to the server by usi…
Nutanix AHV uses Open vSwitch (OVS) to connect the CVM, the hypervisor, and the user VMs to each other and to the physical network on each node. The CVM manages the OVS inside the AHV host. Since OVS is an open-source software switch that behaves like a layer-2 learning switch, it maintains a MAC address table. Each AHV server maintains an OVS instance, and the instances are managed as a single logical switch through Prism.
Bonds: When multiple uplinks are used, they are added to a bond acting as a single logical interface to which the bridge is connected. Open vSwitch (OVS) does not support bonds with a single uplink; as a workaround, the bridge is connected directly to the single uplink.
WARNING: Avoid the use of a single uplink configuration, and do not attempt to modify a single uplink configuration using manage_ovs if the AOS version is 5.10.x prior to 5.10.4.
WARNING: Updating uplinks can cause a short network disconnect. It is strongly recommended to perform network changes on a single node at a time, after ma…
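Uplink changes of the kind warned about above are typically made with manage_ovs from the local CVM; a sketch using the default bridge and bond names (interface classes vary by node hardware):

```shell
# Inspect the current bridge/bond/uplink layout on this node.
manage_ovs show_uplinks

# Example change: keep only the 10 GbE NICs in the br0 bond.
# Run on one node at a time; expect a brief network interruption.
manage_ovs --bridge_name br0 --bond_name br0-up --interfaces 10g update_uplinks
```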