Installation & Configuration
This forum is the best way to get up and running with the Nutanix platform
- 1,184 Topics
- 3,243 Replies
AHV supports GPU-accelerated computing for guest VMs. You can configure either GPU pass-through or a virtual GPU. Let's say you have an AHV host with GPU-compatible hardware and you are looking for a simple way to install the required drivers. Nutanix recommends a specific method for installing the NVIDIA GPU host driver on AHV hosts: a script that handles installation or upgrade across all the hosts in the cluster. Go through the following document to understand the process in depth: Installing AHV GPU Drivers. Have questions regarding the usage of the script?
- What will happen if one of the nodes doesn't have a GPU?
- What will happen if the driver version is different on one node than on the rest of the cluster?
- How can I install the driver onto the new nodes only, without affecting the currently running nodes?
- Can I install different versions of the driver onto different nodes of the cluster?
The following knowledge base article can help you to answer these questions.
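Before running the script, it can help to confirm which hosts actually have a GPU. A minimal sketch, run from any CVM (hostssh and lspci are standard; the install_host_package argument shown is a hypothetical bundle path, so check the KB for the exact invocation):

    # Which AHV hosts have an NVIDIA GPU at all?
    hostssh "lspci | grep -i nvidia"
    # Then run the recommended installer script from a CVM
    # (bundle path below is a placeholder -- see the KB for real arguments):
    install_host_package -r /home/nutanix/nvidia_host_driver.tar.gz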
Hi all, I'm sorry if I'm asking questions in the group. I have a new deployment with HPE DX360 via Foundation Portable 5.0.1 and a flat switch, but somehow it runs into an error during installation via Foundation, and I'm not sure why. I'm using Foundation 5.0.1 with an AOS 5.20 upload. The Foundation error log gets stuck here after some time:
StandardError: Failed to connect to Phoenix at 192.168.100.3
2021-06-28 06:10:50,703Z ERROR Exception in running <ImagingStepInitIPMI(<NodeConfig(192.168.100.3) @2190>) @8fb0>
Traceback (most recent call last):
  File "foundation\imaging_step.py", line 161, in _run
  File "foundation\decorators.py", line 77, in wrap_method
  File "foundation\imaging_step_init_ipmi.py", line 273, in run
  File "foundation\imaging_step_init_ipmi.py", line 184, in boot_phoenix
StandardError: Failed to connect to Phoenix at 192.168.100.3
2021-06-28 06:10:50,706Z DEBUG Setting state of <ImagingStepInitIPMI(<NodeConfig(192.168.100.3) @2190>) @8fb0>
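"Failed to connect to Phoenix" usually means the Foundation workstation cannot reach the node on the Phoenix IP it assigned. A hedged first check, run from the Foundation machine (IP taken from the log above):

    # Is the node reachable on the Phoenix IP at all?
    ping -c 3 192.168.100.3
    # Is the Foundation machine on the same subnet as the Phoenix IP?
    ip addr show
    # Did the node ever answer on that IP (a stale/absent ARP entry is a hint)?
    arp -n | grep 192.168.100.3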
Hi, I'm new to Nutanix with ESXi. I recently deployed a 4-node Nutanix cluster with ESXi. I've created containers and added the hosts to vCenter, but I have not added the vCenter to Prism. When I check in Prism there's a warning saying the hosts are not connected to vCenter. Has anyone encountered this issue before?
Just a quick question about sizing SSD capacity relative to HDD capacity in a node (in a new homogeneous cluster). When specifying drive space in a node, what is the minimum percentage of SSD to HDD capacity per node? Is there a best practice guide that someone can point me towards? I'm not asking about the number of HDDs vs. the number of SSDs; the question is about capacity, not drive count. E.g., I configure a node with 4 TB of SSD (2 x 1.92 TB) and 60 TB of HDD (10 x 6 TB) and Nutanix Sizer says this is fine. This is roughly 6.25% SSD. But what should the minimum SSD capacity of a hybrid node be? Is 6% okay, or should it be more like 10% or 20% (or more)?
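For reference, the percentage in the example works out like this (a quick arithmetic check, not a sizing recommendation):

    # 2 x 1.92 TB SSD ~= 4 TB; 10 x 6 TB HDD = 60 TB; total raw = 64 TB
    echo "scale=4; 4 / (4 + 60) * 100" | bc    # prints 6.2500 (% of raw capacity on SSD)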
We have some questions from a customer. The customer has several Nutanix units across several models, including:
- NX-1365-G4
- NX-1365-G5
- NX-8235-G4
- NX-6235-G4
- NX-8135-G4
The question is whether the HDDs can be replaced with SSDs. What is the procedure, and what happens to the data on the HDDs? Thanks
We are in the process of expanding our 4-node cluster to 6 nodes. The installation of the hosts was successful using the Foundation VM. However, the servers have NVIDIA cards installed, so we need to install the driver as well. When the cluster is expanded with the 2 extra nodes, my understanding of the command "install_host_package" is that it will install the drivers on all hosts that have NVIDIA cards, even those that already have the driver. To do so it will migrate the VMs and put the hosts into maintenance mode. This is not an option for us, since the reason we are expanding the cluster is that the current cluster is already overloaded. I am looking for a way to install the drivers without interfering with the up-and-running environment. For example, is there a way to install the drivers before expanding the cluster?
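One hedged way to see where the driver already is before deciding how to scope the install (run from any CVM; hostssh executes the command on every AHV host):

    # Report the installed driver version per host; hosts without a GPU or
    # driver will error out, which is itself useful information.
    hostssh "nvidia-smi --query-gpu=driver_version --format=csv,noheader"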
I have the latest version of Nutanix Era installed in my lab and already have a registered Oracle server. From that server the template was created without any problems and I can create new servers. But when I want to clone the existing server it always fails with: Driver execution failed. Error: 'NoneType' object is not iterable. Any ideas?
Objects version upgrades use the LCM feature. You can perform LCM upgrades through Prism Central (PC); Objects is part of the PC upgrades module in LCM. LCM upgrades the following components of Objects: Objects Manager and Objects Service.
Objects Manager: Objects Manager is a containerized service running on a PC VM. It is primarily responsible for taking user input for deploying the object store, validating that input, managing certificates, deploying the Objects Service, and serving as an interface between PC and the backing object store. A single Objects Manager can manage one or more object stores. In the case of a scale-out PC, the Objects Manager service runs on each of the PC nodes and provides high availability. During an upgrade of the Objects Manager, there is no disruption to Objects I/O; however, the user interface will be unavailable for a short period for statistics and management.
Objects Service: Objects Service provides the object store interface…
Era is a database-as-a-service (DBaaS) solution that automates and simplifies database administration, bringing one-click simplicity and invisible operations to database provisioning and life cycle management. Era enables database administrators to perform operations such as database registration, provisioning, cloning, patching, restore, and much more. It allows administrators to define standards for their database provisioning needs with end-state-driven functionality that includes High Availability (HA) database deployments. Era allows multi-cluster database management: database administration for different databases across multiple Nutanix clusters can be performed from a single Era instance. With the extension of support for Nutanix clusters, you can now use all the capabilities of Era on Nutanix clusters (both cloud and on-prem).
Procedure:
- In the dropdown list of the main menu, select Databases.
- Go to Sources, select all the databases, and click Remove to remove all the databases from Era.
- …
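If you prefer to script this rather than click through the UI, Era also exposes a REST API. A minimal sketch for listing the registered databases first, assuming the v0.9 endpoint path; verify it against your Era version's API reference:

    # Endpoint path is an assumption -- confirm in the Era API docs.
    curl -k -u admin:'<password>' \
        "https://<era-server>/era/v0.9/databases" | python -m json.tool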
Hi all, I'm preparing a change on the cluster (a 6-node cluster running AHV). I need to re-IP the CVMs and AHV hosts and add them to a specific VLAN (currently they are not in a VLAN). Before I do this in production I decided to try it in the test lab (an old baby, a 4-node G5). I've followed this guide: https://next.nutanix.com/installation-configuration-23/physical-relocation-of-nutanix-clusters-38403 but got stuck at step 7 after booting the CVMs. The output of svmips shows the old CVM IPs, but they all booted with the new ones (and are accessible via SSH on the new IPs). The output of hostips shows the old AHV IPs, but they all booted with the new ones (and are accessible via SSH on the new IPs). I must also say that the external_ip_reconfig script hangs at a specific point, and now the command "cluster start" gives an error that the cluster is still in reconfigure mode. Does anyone here have the golden tip to get svmips and hostips to show the correct IPs? (zk_server_config_file also had the old IPs.)
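A quick way to see the mismatch the post describes, run from any CVM (svmips and hostips read the cluster configuration, while ip addr shows what each CVM is actually using):

    # What the cluster config still reports:
    svmips
    hostips
    # What each CVM's external interface actually holds right now:
    allssh "ip addr show eth0 | grep 'inet '"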
Hello, I am trying to deploy Nutanix CE on a Dell T7600. It works fine when using AHV as the hypervisor, but when using ESXi I am not able to mount containers. I receive the error message: There was an error while mounting the datastore for storage container 'default-container-11071639353873'. In vmkernel.log I can see:
2021-06-22T14:36:46.848Z cpu17:2099496 opID=7bd90377)World: 12556: VC opID 738348c2 maps to vmkernel opID 7bd90377
2021-06-22T14:36:46.848Z cpu17:2099496 opID=7bd90377)NFS: 162: Command: (mount) Server: (192.168.5.2) IP: (192.168.5.2) Path: (/default-container-11071639353873) Label: (default-container-11071639353873) Options: (None)
2021-06-22T14:36:46.848Z cpu17:2099496 opID=7bd90377)StorageApdHandler: 960: APD Handle 732ec6b0-3763e7bf Created with lock[StorageApd-0x4316fd802900]
2021-06-22T14:36:46.849Z cpu17:2099496 opID=7bd90377)SunRPC: 1095: Destroying world 0x201ac2
2021-06-22T14:36:46.849Z cpu17:2099496 opID=7bd90377)SunRPC: 1095: Destroying world 0x201ac3
…
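Two hedged first checks for this kind of mount failure: confirm the ESXi host is on the container's filesystem whitelist, and try the NFS mount by hand to surface the raw error (names and subnet below are taken from the log; adjust to your environment):

    # On a CVM: whitelist the ESXi host's subnet for the container.
    ncli container add-to-nfs-whitelist name=default-container-11071639353873 \
        nfs-whitelist=192.168.5.0/255.255.255.0
    # On the ESXi host: attempt the mount manually.
    esxcli storage nfs add -H 192.168.5.2 \
        -s /default-container-11071639353873 -v default-container-11071639353873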
Hi, I have a problem after a fresh install of Nutanix CE: I don't have access to the CVM console or Prism from outside. SSH from the hypervisor works fine, but from other machines I get a time-out error. This is installed on VMware ESXi 7 on a Dell server, and the cluster itself works fine. EDIT: I am using version ce-2020.09.16.iso. On a PC in Workstation Pro it works fine, but on ESXi I have the same problem.
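When CE runs nested on ESXi, the classic cause of "works from the hypervisor, times out from everywhere else" is the vSwitch security policy: the nested CVM's MAC addresses sit behind the CE VM's vNIC, so promiscuous mode and forged transmits usually need to be allowed. A hedged sketch (vSwitch0 is an assumption; use whichever vSwitch carries the CE VM):

    esxcli network vswitch standard policy security set \
        --vswitch-name=vSwitch0 \
        --allow-promiscuous=true --allow-forged-transmits=true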
Hey guys, I'm getting an error regarding email alerts. This is what I can observe in send-email.log:
2021-06-17 07:37:03,407Z INFO send-email:242 Not sending emails for first 1 hours of cluster creation. Cluster Age = -16009102.3605 secs
2021-06-17 07:38:03,611Z INFO send-email:242 Not sending emails for first 1 hours of cluster creation. Cluster Age = -16009042.1568 secs
The cluster has been online for 2 months already. I already tried to stop and start the cluster, but that seconds counter sits at around minus 6 months.
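A negative Cluster Age means the cluster-creation timestamp is in the future relative to the CVM clocks, which points at time skew rather than the alert gate itself. A hedged first check from any CVM:

    # Compare the clocks across all CVMs and review the NTP configuration.
    allssh date
    ncli cluster get-ntp-servers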
We recently performed a cluster shutdown with hardware power-off of an AHV cluster running AOS 126.96.36.199. We powered on all the nodes and waited 10 minutes for the AHV hypervisors and CVMs to boot and get ready. Even after 30 minutes, the CVMs did not accept the confirmed-correct password for the "nutanix" user ID during SSH login attempts to any CVM. Fortunately, one SSH key had previously been registered in Prism Element, which allowed SSH via this key. The cluster was NOT configured to be locked down. The key owner successfully connected to a CVM via SSH and performed a sudo passwd reset of the "nutanix" user ID to a confirmed password. Despite setting this password, the same CVM still refused to accept the "nutanix" user ID and confirmed-correct password during SSH password login. I suspect that with the cluster services stopped, but with one SSH key present, the CVMs operate as if lockdown were enabled. Can someone please confirm this? This prevented the password holder from logging in via SSH.
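Since one key-based session still works, one way to test the lockdown theory is to inspect the effective sshd settings on the affected CVM (standard OpenSSH commands; if lockdown-like behavior were in force, PasswordAuthentication would show as no):

    # Dump the effective sshd configuration and the on-disk setting.
    sudo sshd -T | grep -i passwordauthentication
    sudo grep -i '^PasswordAuthentication' /etc/ssh/sshd_config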
Nutanix, AHV, MS Server 2016 installed with IIS and MediaWiki. Out of the blue, the server performs very slowly. The MediaWiki site takes forever to load (at times a Server 500 error appears when going to the MediaWiki site from a desktop). Opening the console shows the server logon window, but very slowly; everything is very slow. Once logged on, it takes almost 20 minutes for the Server Manager window to open. Currently: vCPU=6, cores per vCPU=4, RAM=16 GB. CPU usage is around 10% and memory usage around 17%. It's the only server that acts like this. I ran DISM /Online /Cleanup-Image /ScanHealth and so on, and everything is OK, no problems. I rebooted all hosts. Not sure what caused this all of a sudden…
Hello, all. We're running vCenter 7 with AOS 5.15.x, and I'm learning about how VMware has now decoupled DRS/HA cluster availability from the vCenter appliance and moved that into a three-VM cluster (the vCLS VMs). In the interest of updating our graceful startup/shutdown documentation and code snippets/scripts, I'm trying to figure out how to handle these vCLS VMs. They reside on the Nutanix shared storage, so I obviously would like to shut them down before gracefully shutting down the Nutanix CVMs/ADSF cluster, as well as ensure the CVMs are up and the cluster is healthy before allowing them to power back on using that storage. Evidently, these vCLS VMs are very aggressive about powering back on or recreating themselves once deleted, so I'm a little unsure what to expect. With regard to powering the ESXi hosts back on, I assume that when I take them back out of maintenance mode the CVMs will be powered back on (or maybe I have to do that manually?), and after waiting a few minutes, I would…
While it is not as straightforward a process as we would like it to be, there is an option to add a NIC to your Move VM:
1. Log in to Prism Element.
2. Add a new NIC to the Nutanix-Move appliance and select the network.
3. Launch the console of the Nutanix Move appliance.
4. Switch to the root user.
5. Use vi (or any other editor of your choice) to open the file /etc/network/interfaces.
6. Add the second interface (eth1) configuration in the format below, based on DHCP/static IP addressing, and restart the networking service.
7. Overwrite the existing script named "start-xtract" under "/opt/xtract/bin" with the one provided in the KB (see link below).
8. Change the permissions for the script.
9. Stop iptables and restart the Move services.
10. Verify the new eth1 interface configuration using "ifconfig eth1".
Please note: if you are using Move 3.0.3 or above you can skip Step-7 and Step-8; that will be taken care of automatically. See KB7399 - Procedure to add a second NIC interface on Move v3.0.2 for detailed instructions.
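The "format below" referenced in step 6 did not survive this excerpt. For reference, the standard /etc/network/interfaces syntax used by the Alpine-based Move appliance looks like the following (addresses are placeholders):

    # Static addressing (example values -- replace with your own):
    auto eth1
    iface eth1 inet static
        address 10.10.10.50
        netmask 255.255.255.0

    # Or, for DHCP, use this stanza instead:
    # auto eth1
    # iface eth1 inet dhcp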
Hello all. Is there a list somewhere, with model numbers, of the NICs Nutanix supports? I am looking for the cheapest Nutanix-supported 1 Gb dual NIC that I can get from Amazon or eBay, or the cheapest 10 Gb one. I was reading that a Supermicro NIC is supported, but I don't see a model number, so I was wondering if the Supermicro AOC-SG-12 is supported, since that card costs about $35. My understanding is that CE and the commercial edition support the same NICs now. My use case is CE for a homelab. Thanks
Hi all, we have scheduled a Nutanix cluster upgrade.
Existing setup: 3 x Dell XC730xd-12 running AHV 20170830.453 / AOS 5.15.3. This cluster hosts around 50 VMs.
Upgrade to: 4 x NX-8235-G7-4215R-CM.
Is it possible to add the 4 NX nodes to the existing cluster and then remove the 3 Dell XC servers? We are trying to achieve minimum downtime without a major outage. Per the Nutanix documentation https://portal.nutanix.com/page/documents/details/?targetId=Hardware-Admin-Ref-AOS-v5_17%3Ahar-product-mixing-restrictions-r.html it seems mixing hardware isn't feasible. Could you please confirm if there is any workaround possible? Regards,
release-api.nutanix.com is not reachable from my Prism Central or my Prism Element. I have valid name servers configured in both PC and PE, and the network team verified that the traffic is passing through the firewall. Can anyone let me know exactly what I need to check in my name servers so that this URL can be reached from PC and PE?
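Beyond the firewall rule itself, the usual checks from the PC/PE command line are name resolution and the outbound HTTPS path (standard tools, run from a PCVM or CVM):

    # Does the configured name server resolve the host?
    nslookup release-api.nutanix.com
    # Is TCP 443 actually reachable from this VM?
    nc -zv release-api.nutanix.com 443
    # Does an HTTPS request get anywhere (proxy/TLS problems show up here)?
    curl -v https://release-api.nutanix.com 2>&1 | head -20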
Hello Community, I have some doubts about running Nutanix on VMware ESXi with a Cisco ACI fabric. To start, we have already married ourselves to a Cisco ACI network fabric (two sites connected with 12 x 10 Gbit fiber, 120 Gbit total). We are using an IPN/spine/leaf topology, with APIC clusters in each site. In each DC there will be 24 Nutanix nodes. All of my questions have to do with best practices for integrating these technologies:
1) Is it necessary to use VMware NSX to reap the benefits of Cisco ACI?
2) Can we just use a simple VMware installation (without NSX) and allow Nutanix full access to the ACI fabric?
3) What are the best practices for these three technologies coexisting?
I have found documents on Nutanix/ACI and Nutanix/VMware, but I can't find anything on using all three together in terms of hierarchy and how to stitch it all together. Any guidance from those with experience would be greatly appreciated. Thanks, Michael