Installation & Configuration
This forum is the best way to get up and running with the Nutanix platform
AHV hosts support load balancing of vDisks in a volume group for guest VMs. Load balancing of vDisks in a volume group enables I/O-intensive VMs to use the storage capabilities of multiple CVMs. If you enable load balancing on a volume group, the guest VM communicates directly with each CVM hosting a vDisk. Each vDisk is served by a single CVM. Therefore, to use the storage capabilities of multiple CVMs, create more than one vDisk for a file system and use OS-level striped volumes to spread the workload. This configuration improves performance and prevents storage bottlenecks. vDisk load balancing is disabled by default for volume groups that are directly attached to VMs. You can attach a maximum of 10 load-balanced volume groups per guest VM. For Linux VMs, ensure that the SCSI device timeout is 60 seconds.
Procedure:
1. SSH into the CVM as the nutanix user.
2. Do one of the following:
   - Enable vDisk load balancing if you are creating a volume group: acli vg.create vg_name load_balance_vm_attachm
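A minimal sketch of the enablement commands, assuming the truncated flag above is `load_balance_vm_attachments` and a volume group named `vg1` (both are placeholders here; verify the exact parameter with `acli vg.create help` on your cluster):

```shell
# Run on a CVM as the nutanix user.
# Create a new volume group with vDisk load balancing enabled
# (flag name assumed; confirm with `acli vg.create help`).
acli vg.create vg1 load_balance_vm_attachments=true

# Or enable it on an existing volume group (update form assumed).
acli vg.update vg1 load_balance_vm_attachments=true
```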
We recently performed a cluster shutdown with hardware power off of an AHV cluster running AOS 184.108.40.206. We powered on all the nodes and waited 10 minutes for the AHV hypervisors to boot and the CVMs to boot and get ready. Even after 30 minutes, the CVMs did not accept the confirmed-correct password for the “nutanix” user during SSH login attempts to any CVM. Fortunately, one SSH key had previously been registered in Prism Element, which allowed SSH via this key. The cluster was NOT configured to be locked down. The key owner successfully connected to a CVM via SSH and ran sudo passwd to set a confirmed password for the “nutanix” user. Despite setting this password, the same CVM still refused to accept the “nutanix” user and confirmed-correct password during SSH password login. I suspect that with the cluster services stopped, but with one SSH key present, the CVMs operate as if lockdown were enabled. Can someone please confirm this? This prevented the password hol
Hey guys, I’m getting an error regarding email alerts. This is what I can observe in send-email.log:

2021-06-17 07:37:03,407Z INFO send-email:242 Not sending emails for first 1 hours of cluster creation. Cluster Age = -16009102.3605 secs
2021-06-17 07:38:03,611Z INFO send-email:242 Not sending emails for first 1 hours of cluster creation. Cluster Age = -16009042.1568 secs

The cluster has been online for 2 months already. I already tried to stop and start the cluster, but that seconds value corresponds to around 6 months.
Volumes acts as one or more targets for client Windows or Linux operating systems running on a bare-metal server or as guest VMs using iSCSI initiators. Do not use Volumes to create an iSCSI datastore for Hyper-V or ESXi hosts; this configuration is not supported. The iSCSI data services IP acts as an iSCSI target discovery portal and initial connection point. The client needs only this single IP address, which helps load balance storage requests and provides path optimization in the cluster, preventing bottlenecks.

To enable Nutanix Volumes:
1. Get the client IP address that you will add to a volume group client whitelist.
2. Create an iSCSI data services IP address for the Nutanix cluster. This address cannot be the same as the cluster virtual IP address.
3. Provision storage on the Nutanix cluster by creating a volume group consisting of one or more vDisks.
4. Perform an iSCSI target discovery of the Nutanix cluster from the clients.
5. [Optional] Configure CHAP or Mutual CHAP authentication on the init
I had an LCM failure while upgrading firmware on a node. The node entered maintenance mode and I was never able to get it out. I subsequently removed the node from the cluster so that I could finish the upgrade process, and I have successfully completed the firmware upgrade on the other nodes. Now I am trying to upgrade AHV. LCM is failing prechecks with the message: Operation failed: Failed to find node <uuid>. The <uuid> is the one of the node that is no longer in the cluster. Somehow I need to get past this so I can complete the AHV upgrade. I currently have two nodes at the most recent version and one at an old version. Is there some way to bypass the precheck, or to modify the LCM data about the removed node, so I can press on?
This article contains IPMI commands for checking and setting interfaces to dedicated or shared mode. For example, after a BMC upgrade, the IPMI might not be accessible, so you need to verify and change the interfaces to dedicated or shared mode.

Note: To run ipmitool commands on an ESXi host, prefix all commands with a forward slash (/).
Note: To run ipmitool commands from a remote system such as the CVM (Controller VM), add the "-I lanplus", "-H <IPMI IP>", "-U <username>" and "-P <password>" parameters to the ipmitool command. For example:
nutanix@cvm$ ipmitool -I lanplus -H x.x.x.x -U ADMIN -P <password> <command>

Quanta Platform
Use these commands for an NX-3400 (Quanta) platform. All commands take effect dynamically; a restart is not required.

Check the status:
[root@host]# ipmitool raw 0x0c 0x02 0x01 0xff 0 0

An output similar to the following is displayed:
1100 :00 - Shared port
1101 :01 - D
Hi! I’ve been getting some reports of users unable to use the numpad part of their 10-key keyboard in Frame… seems to work outside of Frame just fine, and no, numlock isn’t on :) I’ve seen it, though I haven’t been able to reproduce it myself, or isolate a trigger for when it happens to people. Just curious if anyone else has come across something like this?
From the AHV Best Practices document, the live migration network is on the host management network (br0). In a network segmentation environment, the backplane is isolated on another network that may have higher bandwidth. Does live migration behave any differently under this condition? Or can the live migration network be configured separately, like the ESXi vMotion network?
Flash Mode is a great feature ensuring that VM workloads remain within the flash (SSD) tier of storage. Once Flash Mode is enabled for a virtual machine, all of the disks associated with that VM (including any disks created in the future) are automatically added to the flash tier. However, having many disks pinned to the flash tier can degrade performance for other VMs that are not configured for Flash Mode (but could benefit from using the flash tier of storage) or can consume the available flash tier space too quickly. Further, it is sometimes not desirable to have all of the disks associated with a virtual machine contained within the flash tier. Accordingly, though not available as a Prism web user-interface (UI) modifiable option, individual VM disks can be configured not to use the flash tier even while the VM itself is configured for Flash Mode. The procedure for removing individual VM disks from the flash tier involves using the Acropolis Command-Lin
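As a hedged sketch only: the per-disk change is typically made from a CVM with aCLI. The subcommand and parameter names below (`vm.disk_update`, `disk_addr`, `flash_mode`) and the VM name `myvm` are assumptions, not confirmed by this article; check `acli vm.disk_update help` on your cluster before use.

```shell
# Run on a CVM as the nutanix user. Names below are assumptions.
# List the VM's disks to find the disk address (e.g. scsi.1):
acli vm.get myvm

# Remove one disk from the flash tier while the VM keeps Flash Mode
# (parameter name assumed; verify with `acli vm.disk_update help`):
acli vm.disk_update myvm disk_addr=scsi.1 flash_mode=false
```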
If you have cloned multiple VMs from a single VM (master VM), you can enable NGT and mount the NGT installer simultaneously on multiple VMs by using the master VM image.

Before you begin
Ensure the following before you perform this task:
- Install NGT on the master VM.
- Clone the required number of VMs from the master VM.
- Shut down the cloned VMs.

Perform the following procedure to enable NGT and mount the NGT installer simultaneously on multiple VMs by using the master VM image.
Note: After you perform the following procedure, you do not need to separately install NGT on the cloned VMs.

Procedure
1. For every cloned VM, log on to the Controller VM and run the following command:
   ncli> ngt mount vm-id=clone_vm_id
   Replace clone_vm_id with the ID of the cloned VM. To find the ID of the cloned VM, run the ncli> vm list name="<clone-vm-name>" command. Note the value of the Id field as clone_vm_id.
   <ncli> vm list name="<clone-vm-name>"
   Id : 00058a81-64bb-2
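The per-clone step above can be scripted from a CVM shell. This is a sketch under the assumption that `ncli` accepts the same subcommands non-interactively and that the Id field can be parsed as shown; the clone names `clone1 clone2 clone3` are placeholders:

```shell
# Run from a CVM. For each cloned VM, look up its Id with
# `ncli vm list name=...` and pass it to `ncli ngt mount`.
for vm in clone1 clone2 clone3; do
  # Extract the Id field from the vm list output (parsing assumed).
  vm_id=$(ncli vm list name="${vm}" | awk -F': ' '/Id/ {print $2; exit}')
  echo "Mounting NGT on ${vm} (${vm_id})"
  ncli ngt mount vm-id="${vm_id}"
done
```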
The Discoveries menu in the Support Portal is a new feature that allows customers to view critical issues in their environment with a more holistic approach. Using Nutanix Insights, the Portal provides this view based on Field Advisories, constant health checks, end-of-support equipment, and more. The Discoveries Details view provides the background, analysis, and corrective actions that can be taken to remedy these issues. It also gives you the option to create a case based on the specific issue or topic. For more information, take a look at the “Discoveries Menu” in the Support Portal.
As the Nutanix Life Cycle Management (LCM) platform evolves, more firmware entities are added to the module. The latest additions are the NICs associated with Nutanix hardware. Starting in LCM 2.3.4, firmware updates for Mellanox and Supermicro NICs on AHV and ESXi are supported. Currently, Hyper-V platforms and Intel/Silicom NIC cards are not supported. For more information about this feature and its software requirements, take a look at KB 10073 on the Support Portal.
Cluster Role-Based Access Control (RBAC), or Enhanced Prism Central RBAC, provides “Prism Admin” and “Prism Viewer” role-based access to Prism Central, with access restricted to one or more AOS clusters registered to Prism Central. With cluster RBAC, the Prism Central admin or viewer user can access Prism Central and view and act on entities such as VMs, hosts, and containers from the allowed AOS clusters. Users can also perform the “Launch Prism Element” action on the allowed AOS clusters and manage the cluster with the respective Prism Admin or Prism Viewer access.

Enabling cluster RBAC

Pre-checks:
- Verify the supported Prism Central version and the AHV cluster version.
- The AOS cluster where Prism Central is deployed must be registered to Prism Central.
- CMSP must be enabled. Identity and Access Management (IAM) is automatically enabled as part of CMSP enablement. The prerequisites for CMSP and IAM also apply to cluster RBAC.

Procedure
Connect to the Prism
Security PoliciesTraditional data centers use firewalls to implement security checks at the perimeter—the points at which traffic enters and leaves the data center network. Such perimeter firewalls are effective at protecting the network from external threats. However, they offer no protection against threats that originate from within the data center and spread laterally, from one compromised machine to another.The problem is compounded by virtualized workloads changing their network configurations and hosts as they start, stop, and migrate frequently. For example, IP addresses and MAC addresses can change as applications are shut down on one host and started on another. Manual enforcement of security policies through traditional firewalls, which rely on network configurations to inspect traffic, cannot keep up with these frequent changes and are error-prone.Network-centric security policies also require the involvement of network security teams that have intimate knowledge of network
I feel it is time to address a seemingly minor question: should you re-image your nodes when re-using them from an existing cluster? To give a better idea of the setup, think of a cluster with eight nodes, for example. You would like to scale down the cluster and re-use four of those nodes to form another cluster. You have evicted the nodes, and you could create a new cluster at this stage, but let's take a look at the pros and cons of rushing forward. Cluster creation does not wipe the system. All the files on the nodes remain as they are when you trigger cluster creation. That can sometimes mean issues with LCM upgrades, or the cluster creation may error out. To resolve those issues you would need to find those files, and chances are you would need a Nutanix Support engineer to deal with that. Some features introduced in software releases are only available if the version is a fresh installation rather than an upgrade. Any networking changes you might need are easy to apply during the foundation
Using the Nutanix Cmdlets, it turns out that of the 7 protection domains in my cluster, one has only 1 consistency group assigned to it, which protects over 1,200 files (almost 4,000). How can I create a few CGs within a given PD so I don’t get the threshold alert in the future?
Nutanix Objects Overview
Nutanix Objects™ (Objects) is a software-defined object store service. The service is designed with an Amazon Web Services Simple Storage Service (AWS S3)-compatible REST API interface capable of handling petabytes of unstructured and machine-generated data. Objects addresses storage-related use cases such as backup, long-term retention, and data storage for your cloud-native applications by using standard S3 APIs. You no longer have to introduce an external, separately managed storage solution. Objects is deployed and managed as part of the Nutanix Enterprise Cloud OS. You can manage objects by using Prism Central or the S3-compatible REST APIs after an administrator has authorized the applications and users to access buckets accordingly. For more information on the Objects architecture, refer to the Nutanix Bible.

Usage of Objects
The following are examples of solutions you can implement by using Objects:
- Backup – You can integrate Objects with backup applications such as
Application monitoring provides visibility into integrated applications by collecting application metrics using Nutanix and third-party collectors, providing a single pane of glass for both application and infrastructure data, correlating application instances with virtual infrastructure, and providing deep insights into application performance metrics.

Application monitoring provides visibility into the following applications:
- Microsoft SQL Server
- VMware vCenter Server

The monitoring integrations dashboard allows you to view information about select applications, such as SQL Server or vCenter instances, running in the cluster. To access the monitoring integrations dashboard, select Operations -> Integrations from the entities menu. The dashboard allows you to view summary information about application instances and access detailed information about each instance. You can filter the list by opening the filter pane to select one or more of the filter options
Hello, all. We’re running vCenter 7 with AOS 5.15.x, and I’m learning about how VMware has now decoupled DRS/HA cluster availability from the vCenter appliance and moved it into a three-VM cluster (the vCLS VMs). In the interest of updating our graceful startup/shutdown documentation and code snippets/scripts, I’m trying to figure out how to handle these vCLS VMs. They reside on the Nutanix shared storage, so I obviously would like to shut them down before gracefully shutting down the Nutanix CVMs/ADSF cluster, as well as ensure the CVMs are up and the cluster is good before allowing them to power back on using that storage. Evidently, these vCLS VMs are very aggressive about powering back on or recreating themselves once deleted, so I’m a little unsure what to expect. So with regard to powering the ESXi hosts back on, I assume when I take them back out of maintenance mode the CVMs will be powered back on (or maybe I have to do that manually?), and after waiting a few minutes, I woul
- Use the Data Services IP method for external host connectivity to VGs. For backward compatibility, you can upgrade existing environments non-disruptively and continue to use MPIO for load balancing and path resiliency.
- For security, use at least one-way CHAP.
- Leave ADS enabled. (Enabled is the default setting.)
- Use multiple disks rather than a single large disk for an application. Consider using a minimum of one disk per Nutanix node to distribute the workload across all nodes in a cluster. Multiple disks per Nutanix node may also improve an application’s performance. For performance-intensive environments, we recommend using between four and eight disks per CVM for a given workload.
- Use dedicated network interfaces for iSCSI traffic in your hosts.
- Place hosts that use Nutanix Volumes on the same subnet as the iSCSI data services IP. Use a single subnet (broadcast domain) for iSCSI traffic. Avoid routing between the client initiators and CVM targets.
- Receive-side sca
Modern environments consist of multiple layers, each of which contains multiple components. There are switches and routers, firewalls, physical servers, application servers, the applications themselves and, of course, users. Each of the components has logs of more than one kind, location, and severity. All the components interact with each other directly or indirectly. I am certain you have found yourself in a situation where, to establish a root cause, you had to inspect the logs of more than one entity. Establishing a timeline of events is always easier when the clocks of the event sources are synchronised and the logs are located in one central location. While the clocks are handled by NTP, the centralised log location is a syslog server, in this case a remote syslog server, meaning it is separate from the origin of the logs. In addition to the benefits already mentioned, a remote syslog server allows access to logs for systems that are already dead, decommissioned, or replaced. Nuta
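As a hedged sketch only: on Nutanix, remote syslog forwarding is typically configured from a CVM with NCLI. The subcommand and parameter names below (`rsyslog-config add-server`, `set-status`) are assumptions based on common usage, not taken from this post, and the server name and IP are placeholders; confirm the exact syntax with the `ncli rsyslog-config` help output for your AOS version.

```shell
# Run on a CVM. Server name, IP, and parameter names are
# placeholders/assumptions; verify with `ncli rsyslog-config` help.
# Register the remote syslog server (UDP/514 assumed):
ncli rsyslog-config add-server name=central-syslog ip-address=10.10.10.50 \
    port=514 network-protocol=udp

# Enable forwarding once the server is registered (subcommand assumed):
ncli rsyslog-config set-status enable=true
```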
SAP helps customers migrate from traditional relational databases to its in-memory SAP HANA database to gain more agility in their business processes. Many SAP customers are searching for ways to deploy SAP HANA in an efficient, simple way that minimizes risk while preserving the benefits of an agile platform. Nutanix provides such an option: the native Nutanix hypervisor, AHV, and the Nutanix enterprise cloud OS software are certified for production SAP HANA deployments.

HCI for SAP HANA Certification
The certification has two primary segments:
1. As the first step, a platform vendor (Nutanix, in this case) must validate their platform, which consists of a hypervisor and an HCI component.
2. As a second step, the hardware OEM must certify a suggested configuration through some additional HCI-related tests.

When both parts of the validation are complete, the solution is certified and listed in the HCI for SAP HANA category on the SAP website. The hardware OEM is then responsible for selli
Many users are unaware that there are additional security parameters (beyond what is presented via the Prism user interface) that can be employed on AHV hosts to increase their overall security. These security parameters are configured via the Nutanix Command-Line Interface (NCLI) and include the following:
- Advanced Intrusion Detection Environment (AIDE) - a file and directory integrity checker
- High-Strength Password Enforcement - configure the maximum and minimum number of characters a password must contain, along with the number of passwords retained in history to prevent repeated use
- Core Dumps - the recorded state of the working memory for a process is dumped to a file if the process ever crashes
- Login Banner - display a customized message when users log in to a node

More information regarding these parameters, including the procedures to enable/disable them, can be found within the Hardening AHV section of the Nutanix Security Guide. Also to note, there are similar parameters
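The parameters above can be toggled from a CVM with NCLI. A minimal sketch, assuming a `cluster edit-hypervisor-security-params` subcommand and the flag names shown (assumptions here; verify the exact names against the Nutanix Security Guide for your release):

```shell
# Run on a CVM as the nutanix user. Flag names are assumptions;
# confirm them in the Nutanix Security Guide for your AOS version.
ncli cluster edit-hypervisor-security-params enable-aide=true
ncli cluster edit-hypervisor-security-params enable-high-strength-password=true
ncli cluster edit-hypervisor-security-params enable-core=false
ncli cluster edit-hypervisor-security-params enable-banner=true
```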
UPGRADING SERVER FIRMWARE
Nutanix recommends that you use the Service Pack for ProLiant® (SPP) ISO file for applying firmware updates. Perform this procedure on every host in the cluster, one host at a time.

About this task
To upgrade the firmware on a server, do the following:

Procedure
1. If the server is part of a Nutanix cluster, place the server in maintenance mode. Information about placing a server in maintenance mode is available in the host management section of the Acropolis Command-Line Interface (aCLI) documentation. See the Command Reference for the supported AOS version.
2. Turn on the server to the SPP ISO.
3. Connect to the iLO by using the iLO IP address.
4. Log on to the iLO user interface by using the administrator credentials. The default administrator user name is Administrator on all HPE® ProLiant® servers. Passwords for the iLO administrator differ from one server to another and are available on the service tag on the server.
5. Attach the SPP ISO to the server by usi