Installation & Configuration
This forum is the best way to get up and running with the Nutanix platform
- 1,169 Topics
- 3,202 Replies
Hey guys, I’m getting an error regarding email alerts. This is what I can observe in send-email.log:

2021-06-17 07:37:03,407Z INFO send-email:242 Not sending emails for first 1 hours of cluster creation. Cluster Age = -16009102.3605 secs
2021-06-17 07:38:03,611Z INFO send-email:242 Not sending emails for first 1 hours of cluster creation. Cluster Age = -16009042.1568 secs

The cluster has been online for 2 months already. I already tried stopping and starting the cluster, but that seconds counter stays at around -6 months.
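A negative Cluster Age like this usually means the CVM wall clock is behind the recorded cluster-creation timestamp, so the "first 1 hours" guard never expires. Assuming the standard allssh helper and ncli are available (as on any CVM), a quick time-sync sanity check is:

  allssh date                     # clocks should agree across all CVMs
  ncli cluster get-ntp-servers    # confirm reachable NTP servers are configured

If the CVM clocks disagree with real time by roughly those 6 months, fixing NTP sync is the first step before chasing the alert emails themselves.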
We recently performed a cluster shutdown, with hardware power off, of an AHV cluster. We powered on all the nodes and waited 10 minutes for the AHV hypervisor to boot and for the CVMs to boot and get ready. Even after 30 minutes, the CVMs did not accept the confirmed-correct password for the “nutanix” user ID during SSH login attempts to any CVM.

Fortunately, one SSH key had previously been registered in Prism Element, which allowed SSH via this key (the cluster was NOT configured to be locked down). The key owner successfully connected to a CVM via SSH and ran sudo passwd to set the “nutanix” user ID to a confirmed password. Despite setting this password, the same CVM still refused to accept the “nutanix” user ID and confirmed-correct password during SSH password login.

I suspect that with the cluster services stopped, but with one SSH key present, the CVMs operate as if lockdown were enabled. Can someone please confirm this? This prevented the password hol…
Nutanix, AHV, MS Server 2016 installed with IIS and MediaWiki. Out of the blue, the server performs very slowly. The MediaWiki site takes forever to load (at times a Server 500 error appears when going to the MediaWiki site from a desktop). Opening the console shows the server logon window, but very slowly; everything is very slow. Once logged on, it takes almost 20 minutes for the Server Manager window to open.

Currently: vCPU = 6, cores per CPU = 4, RAM = 16 GB. CPU usage is around 10% and memory usage around 17%. This is the only server that acts like this. I ran DISM /Online /Cleanup-Image /ScanHealth and so on, and everything is OK, no problems found. I rebooted all hosts. Not sure what caused this all of a sudden…
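For reference, the “and so on” here is the stock Windows servicing sequence, run from an elevated prompt inside the guest; nothing below is Nutanix-specific:

  DISM /Online /Cleanup-Image /CheckHealth
  DISM /Online /Cleanup-Image /ScanHealth
  DISM /Online /Cleanup-Image /RestoreHealth
  sfc /scannow

If all four come back clean, the guest image is probably not the culprit, which points the investigation at the virtual hardware or storage layer instead.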
Hello, all. We’re running vCenter 7 with AOS 5.15.x, and I’m learning how VMware has now decoupled DRS/HA cluster availability from the vCenter appliance and moved it into a three-VM cluster (the vCLS VMs).

In the interest of updating our graceful startup/shutdown documentation and code snippets/scripts, I’m trying to figure out how to handle these vCLS VMs. They reside on the Nutanix shared storage, so I obviously would like to shut them down before gracefully shutting down the Nutanix CVMs/ADSF cluster, as well as ensure the CVMs are up and the cluster is healthy before allowing them to power back on using that storage. Evidently, these vCLS VMs are very aggressive about powering back on or recreating themselves once deleted, so I’m a little unsure what to expect.

With regard to powering the ESX hosts back on, I assume that when I take them back out of maintenance mode the CVMs will be powered back on (or maybe I have to do that manually?), and after waiting a few minutes, I woul…
While it is not as straightforward a process as we would like, there is an option to add a NIC to your Move VM:
1. Log in to Prism Element.
2. Add a new NIC to the Nutanix-Move appliance and select the network.
3. Launch the console of the Nutanix Move appliance and switch to the root user.
4. Use vi, or any other editor of your choice, to open the file /etc/network/interfaces.
5. Add the second interface (eth1) configuration based on DHCP/static IP addressing, in the format shown in the sketch after this list.
6. Restart the networking service.
7. There will be an existing script named "start-xtract" under /opt/xtract/bin. Overwrite that script with the one provided in the KB (see link below).
8. Change the permissions for the script.
9. Stop iptables and restart the Move services.
10. Verify the new eth1 interface configuration using "ifconfig eth1".
Please note: if you are using Move 3.0.3 or above, you can skip Step-7 and Step-8; that will be taken care of automatically.
See KB7399 - Procedure to add a second NIC interface on Move v3.0.2 for detailed instructions.
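The configuration format referenced in step 5 did not survive this excerpt. A typical Debian-style /etc/network/interfaces stanza for eth1 looks like the sketch below; the address and netmask are placeholders, not values from the original post:

  auto eth1
  iface eth1 inet static
      address 192.168.10.50
      netmask 255.255.255.0

  # or, for DHCP addressing:
  auto eth1
  iface eth1 inet dhcp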
Hello all. Is there a list, with model numbers, somewhere of the Nutanix-supported NICs? I am looking for the cheapest Nutanix-supported 1Gb dual NICs that I can get from Amazon or eBay, or the cheapest 10Gb. I was reading that a Supermicro NIC is supported, but I don’t see a model number, so I was wondering if the Supermicro AOC-SG-12 is supported, since that card costs about $35. My understanding is that CE and the commercial edition support the same NICs now. My use case is CE for a homelab. Thanks.
Hi all, we have scheduled a Nutanix cluster upgrade.

Existing setup: 3x Dell XC730xd-12 running AHV 20170830.453 / AOS 5.15.3. This cluster is hosting around 50 VMs.

Upgrade to: 4x NX-8235-G7-4215R-CM.

Is it possible to add the 4x NX nodes to the existing cluster and then remove the 3x Dell XC servers? We are trying to achieve minimum downtime here without a major outage. Per the Nutanix documentation https://portal.nutanix.com/page/documents/details/?targetId=Hardware-Admin-Ref-AOS-v5_17%3Ahar-product-mixing-restrictions-r.html , it seems mixing hardware isn't feasible. Could you please confirm whether any workaround is possible? Regards,
release-api.nutanix.com is not reachable from my Prism Central or my Prism Element. I have valid name servers configured in both PC and PE, and the network team verified that the traffic is passing through the firewall. Can anyone tell me what exactly I need to check in my name servers so that this URL can be reached from PC and PE?
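Two generic checks from the PCVM/CVM shell can separate a DNS problem from a connectivity problem (standard Linux tools assumed to be present on the VM):

  nslookup release-api.nutanix.com                          # does the configured name server resolve it?
  curl -kv --max-time 10 https://release-api.nutanix.com    # can we complete a TCP/TLS handshake on 443?

If resolution fails here but works from another machine, review the name servers the cluster actually uses (ncli cluster get-name-servers) rather than the firewall.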
Hello Community, I have some doubts about running Nutanix on VMware ESXi with a Cisco ACI fabric. To start, we have already married ourselves to a Cisco ACI network fabric (two sites connected with 12x 10Gbit fiber, 120Gbit total). We are using an IPN / spine / leaf topology, with APIC clusters in each site. In each DC there will be 24 Nutanix nodes. All of my questions have to do with best practices for integrating these technologies:
1) Is it necessary to use VMware NSX to reap the benefits of Cisco ACI?
2) Can we just use a simple VMware installation (without NSX) and allow Nutanix full access to the ACI fabric?
3) What are the best practices for these three technologies coexisting together?
I have found documents on Nutanix/ACI and Nutanix/VMware, but I can't find anything on using all three together in terms of hierarchy and how to stitch it all together. Any guidance from those who have experience would be greatly appreciated. Thanks, Michael
The reason we deploy SQL Always-On as an enterprise is to have multi-site resiliency for our SQL clusters, for operational and business-continuity reasons. The issue is that ERA will not register SQL Always-On clusters if they are spread between two sites/clusters. From an “API first” solution I would expect differently! I cannot introduce risk and pull back DR planning because of the lack of support from ERA. How is the NTX community solving this issue?
I have a server that no longer needs one of its 2TB disks. I know how to remove the disk from the VM, but how do I remove that VMDK from the Nutanix storage container? Is there a way to browse it and then delete it? 2TB is a lot of space that I would like to reclaim.
I have set up Nutanix CE in a lab environment, but I cannot get to the Prism page when browsing to the CVM IP address. I'm getting the message: "Prism services have not started yet. Please try again later." However, I have run cluster status and the Prism service is up and running (screenshot: https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/ccea7730-811f-4020-ace9-ce68004a0597.png). It is a single-node setup on VMware. I can ping the CVM IP and SSH onto the CVM. Any assistance would be gratefully received! Thanks.
Hi, I just installed a nested CE 5.18 (2020.9.16), single node, on VMware ESXi 6.7, and it is OK. The host IP is pingable, but the CVM IP is unpingable from an outside PC, so I cannot open the web console. When I log in to the host, I can ping the CVM IP. How do I fix this issue? Thank you so much.
The Cluster Role-Based Access Control (RBAC), or Enhanced Prism Central RBAC, feature provides “Prism Admin” and “Prism Viewer” role-based access to Prism Central, with access restricted to one or more AOS clusters registered to Prism Central. With Cluster RBAC, the Prism Central admin or viewer user is able to access Prism Central and view and act on entities like VMs, hosts and containers from the allowed AOS clusters. The users will also be able to perform the “Launch Prism Element” action on the allowed AOS clusters and manage the cluster with the respective Prism Admin or Prism Viewer access.

Enabling cluster RBAC

Pre-checks:
- Verify the supported Prism Central version and the AHV cluster version.
- The AOS cluster where Prism Central is deployed must be registered to that Prism Central.
- CMSP must be enabled. Identity and Access Management (IAM) is automatically enabled as part of CMSP enablement. The prerequisites for CMSP and IAM also apply to cluster RBAC.

Procedure:
Connect to the Prism…
Hi. I’m trying to assign users to the built-in roles in Prism Central, for example Super Admin, but I’m not allowed to. This is the exact same issue as @skeeter reported 5 months ago in this thread. Is this still considered a bug that requires me to open a support case?
We have already mentioned some helpful resources relevant to SNMP configuration. This time we will take a look at examples, log locations and connectivity testing.

Important: Nutanix appliances support only SNMP v2c and v3 (support for v2c in AOS 5.5 and higher). Starting from AOS version 4.1, the MD5 auth type is no longer supported. With SNMP v2c, Nutanix supports SNMP TRAPs instead of SNMP GETs.

Follow through to KB-1333 Configuring and Troubleshooting SNMP monitoring for commands and examples. The article also references two more KB articles as well as SNMP-related PowerShell commands:
- KB-2448 How to import MIB in Zenoss
- KB-2028 Integrating Nutanix with Solarwinds
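As a connectivity test, an SNMP v3 walk from any machine with net-snmp installed should return data once a v3 user is configured in Prism. The user name and passphrases below are placeholders, and SHA/AES are used since MD5 is no longer accepted:

  snmpwalk -v 3 -u snmpuser -l authPriv \
      -a SHA -A 'authpassphrase' -x AES -X 'privpassphrase' \
      <cluster_virtual_ip> .1.3.6.1.4.1.41263

The trailing OID is the Nutanix enterprise subtree (per the NUTANIX-MIB); walking it confirms the agent answers beyond the generic system group.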
Hi! I’ve been getting some reports of users unable to use the numpad part of their 10-key keyboards in Frame… it seems to work outside of Frame just fine, and no, Num Lock isn’t on :) I’ve seen it, though I haven’t been able to reproduce it myself or isolate a trigger for when it happens to people. Just curious if anyone else has come across something like this?
We are running an older AOS version and planning to upgrade it. When I checked the upgrade path, the maximum version I can upgrade to is 5.6.2. I need to know: if I want to upgrade to the latest version, do I have to follow a path like current version >> 5.6.2 >> 5.9.2 >> latest, basically a 3-step procedure, or can I only upgrade to 5.6.2? Please share your views.
I had an LCM failure while upgrading firmware on a node. The node entered maintenance mode and I was never able to get it out. I have subsequently removed the node from the cluster so that I can finish the upgrade process, and I have successfully completed the firmware upgrade on the other nodes.

Now I am trying to upgrade AHV. LCM is failing prechecks with the message: Operation failed: Failed to find node <uuid>. The <uuid> is the one of the node that is no longer in the cluster. Somehow I need to get past this so I can complete the AHV upgrade. I currently have two nodes at the most recent version and one at an older version. Is there some way to bypass the precheck, or to modify the LCM data about the removed node, so I can press on?
Volumes acts as one or more targets for client Windows or Linux operating systems running on bare-metal servers, or as guest VMs, using iSCSI initiators. Do not use Volumes to create an iSCSI datastore for Hyper-V or ESXi hosts; this configuration is not supported.

The iSCSI data services IP acts as an iSCSI target discovery portal and initial connection point. The client only needs this single IP address, which helps load-balance storage requests and provides path optimization in the cluster, preventing bottlenecks.

Enabling Nutanix Volumes:
1. Get the client IP address that you will add to a volume group client whitelist.
2. Create an iSCSI data services IP address for the Nutanix cluster. This address cannot be the same as the cluster virtual IP address.
3. Provision storage on the Nutanix cluster by creating a volume group consisting of one or more vDisks.
4. Perform an iSCSI target discovery of the Nutanix cluster from the clients (see the sketch below).
5. [Optional] Configure CHAP or Mutual CHAP authentication on the init…
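On a Linux client with open-iscsi installed, step 4 and the subsequent login look roughly like this; the data services IP and target IQN are placeholders, not values from the original:

  # discover targets via the single iSCSI data services IP
  iscsiadm -m discovery -t sendtargets -p 10.0.0.100:3260

  # log in to a discovered target
  iscsiadm -m node -T iqn.2010-06.com.nutanix:examplevg -p 10.0.0.100:3260 --login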