Installation & Configuration
This forum is the best way to get up and running with the Nutanix platform
- 1,113 Topics
- 2,987 Replies
Hey everyone, I need to upgrade from ESXi 6.7 U1 to U2 to resolve a bug that prevents me from going back more than 8-10 minutes when looking at VM performance. However, Nutanix has only just now tested U3. My question is: has anyone migrated to U3 yet, and if so, how has it been so far?
Hi, new to the forum. We recently had our Nutanix servers delivered and are planning to do the install in the coming weeks. We haven't decided which hypervisor to go with yet; it's either going to be Hyper-V or AHV. I just wanted to hear about people's experiences. We currently use Hyper-V, so we are comfortable with that, but all the nice features seem to be heading to AHV/Prism. I would be grateful for your thoughts and comments. Thank you.
Hello folks, recently I was running some resiliency testing, powering down a node using IPMI (Power Off Server - Immediate) to ensure VMware High Availability worked as expected for a simulated power outage. When I finished, I powered the node back on via IPMI. I was surprised the CVM did not automatically start. Is it expected that the CVM would not restart when a node is powered back on? Thank you for your help.
Hello, lately I have been adding the Nutanix SCOM Management Pack version 188.8.131.52. It worked fine from the start once I handed over the cluster information via Nutanix Cluster Discovery. But now, a few weeks later, SCOM will not display any performance data for the clusters anymore. I am not able to find out what exactly caused this problem. I have a mix of AHV and ESXi clusters, as well as multiple OS versions, running. Currently SCOM displays only one cluster with its information. For the other clusters, no performance data is shown. All the other clusters were discovered correctly (and in the same way) but won't show their data anymore. My situation looks like this - the dashboard shows one cluster's information, while all other clusters (and their nodes) show no data; their dashboards stay empty. (Screenshot: https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/c1d55b84-0d1c-4404-b139-9b48e7e361fd.jpg) What could have happened so that the data is not reaching SCOM or …
I'm looking for some advice. My company recently purchased a used 3460-G4. We're a professional services firm and we want it for our lab so we can stand up stuff like ERA, CALM, etc. and bang around on it. You know, lab stuff! We have been trying to Foundation (184.108.40.206 & 220.127.116.11) the block, but I am running into a problem with Foundation failing when trying to mount the Phoenix image on the nodes. Here's how things currently stand:
- BIOS has been upgraded to the latest recommended version on the Nutanix Support site (G4G5T6.0).
- BMC firmware has been upgraded to the latest recommended version on the Nutanix Support site (3.64).
- Each node has 2x SSDs, which the system appears to recognize (I ran an ESXi installer on one of the nodes to test; the installer saw the drives).
- Each node has 64GB of RAM; the RAM is confirmed to be compatible according to SuperMicro's site.
- The motherboard is the SuperMicro X10DRT-P.
- IPMI has been set on each node and I can log into the IPMI management page.
At first I was wo…
Hi guys, I can't seem to perform an inventory in LCM any more. [b]It fails on the check[/b]: [i]Check 'test_upgrade_in_progress' failed with 'Failure reason: Another Upgrade operation is in progress. Please wait for that operation to complete before starting an LCM operation.'[/i] However, running [b]progress_monitor_cli --fetchall[/b] does not show anything in progress. [b]host list[/b] - all 3 hosts show [i]false (life_cycle_management)[/i], which is as expected. ~/data/logs$ [b]upgrade_status[/b]
2019-09-04 14:47:32 INFO zookeeper_session.py:131 upgrade_status is attempting to connect to Zookeeper
2019-09-04 14:47:32 INFO upgrade_status:38 Target release version: el7.3-release-euphrates-5.10.6-stable-294f5f671ba8982a0199e18b756e8ef3a453af9a
2019-09-04 14:47:32 INFO upgrade_status:43 Cluster upgrade method is set to: automatic rolling upgrade
2019-09-04 14:47:32 INFO upgrade_status:96 SVM 10.x.x.x is up to date
2019-09-04 14:47:32 INFO upgrade_status:96 SVM 10.x…
Purchased 7/15/2016; set up/installed 10/24/2016 (by a Nutanix tech); office move 4/1/2017 (in late April there was a power outage in the San Francisco financial district - I was on the bus on the way to work when the power went out and never got to shut down the Nutanix cluster properly; the next morning I powered up the cluster and it ran fine. Awesome! No issues since then.)
Hardware: 2x NX-8235-G5, 1x NX-8135-G5. NOS: 4.7.2. NCC: 2.3.0. BIOS: 20160516. BMC: 03.28. Hypervisor: VMware vCenter Server 5.5 Update 2e (vSphere on Windows Server 2008 R2).
I need to upgrade vSphere from 5.5 U2e to the latest version of 6.5, and also Nutanix NOS from 4.7.2 to 5.5.8. Do I:
1 - upgrade VMware to 6.5, then upgrade Nutanix to 5.5.8, or
2 - upgrade Nutanix to 5.5.8, then upgrade VMware to 6.5, or
3 - upgrade VMware to a compatible intermediate version and then upgrade Nutanix to a compatible intermediate version, then upgrade VMware to 6.5 and then upgrade Nutanix to 5.5.8, or
4 - upgrade Nutanix to a compatible intermediate version and then upgrade VMwar…
A Nutanix cluster works with the vDS, and you can use the following guidelines and recommendations to configure the vmkernel and VM interfaces to be part of the vDS. Nutanix recommendations for implementation: keep the vSwitchNutanix, the vmkernel port (vmk-iscsi-pg), and the Nutanix Controller VM's virtual machine port group (svm-iscsi-pg) configuration intact. It should remain a standard vSwitch and should not be migrated over to the vDS. Migrating vSwitchNutanix to the vDS causes issues with upgrades as well as with Controller VM data-path communication. Only migrate one host to a dvSwitch at a time. After migrating the host to the dvSwitch, confirm that the Controller VM can communicate with all other Controller VMs in the cluster (see the sketch below). This ensures that the cluster services running on all Controller VMs continue to function during the migration. In general, only one Controller VM can be off the network at a given time while the others continue to provide access to the datastore.
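A quick way to do that check after moving each host is a simple reachability test. This is a minimal sketch run from any CVM; the CVM addresses are placeholders for your own, and the final command assumes the standard Controller VM command line.
[code]
# Run from any Controller VM after migrating a host to the dvSwitch.
# Replace the addresses below with your cluster's CVM IPs (placeholder values).
for cvm in 10.0.0.11 10.0.0.12 10.0.0.13; do
  if ping -c 2 -W 2 "$cvm" > /dev/null 2>&1; then
    echo "$cvm reachable"
  else
    echo "$cvm UNREACHABLE - fix before migrating the next host"
  fi
done

# Optionally confirm that cluster services are still up before moving on.
cluster status
[/code]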
Is there a KB or documentation providing the JSON format and any additional things that may be coded in? For example, in going through the install.nutanix.com portal to generate the JSON file it doesn't ask for NTP server of the hypervisor. In looking at the file, though, there's this line: "hypervisor_nameserver": "18.104.22.168", I'd like to have our Nutanix deployments in code as much as possible (right now we're looking to various scripts for vSphere configurations). Being able to configure as much as possible up front and possibly make this a template for future deployments.
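For the deployments-as-code idea, here is a rough sketch of keeping a reusable template on disk and filling in site-specific values. The only field name taken from an actual generated file is hypervisor_nameserver (quoted above); every other key is an illustrative placeholder, not a documented schema, and should be compared against a JSON produced by install.nutanix.com.
[code]
# Rough sketch only: keep a deployment template in version control.
# Field names other than "hypervisor_nameserver" are illustrative placeholders,
# not a documented schema - diff against a JSON generated by install.nutanix.com.
cat > foundation-template.json <<'EOF'
{
  "hypervisor_nameserver": "10.0.0.53",
  "hypervisor_gateway": "10.0.0.1",
  "hypervisor_netmask": "255.255.255.0",
  "cvm_gateway": "10.0.0.1",
  "cvm_netmask": "255.255.255.0"
}
EOF
[/code]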
Syslog, or ‘System Logging Protocol’, is used by routers, switches, access points, servers and, of course, Nutanix. It is used to send events and logs to a remote syslog server that collects, organizes and filters these logs. In Nutanix, we provide different log modules for the core services that can be enabled separately, and you can configure the logging level required for each of them as well. For example, if you wish to forward just the warning log messages for Acropolis, this would be the command: ncli rsyslog-config add-module server-name=<server_name> module-name=ACROPOLIS level=WARNING For a more comprehensive look at the various modules and log levels, check out the syslog server documentation. Bonus: would you like to know who powered off that VM while you were sleeping? Or, in general, WHO did WHAT on WHICH OBJECT, at what TIME, from WHERE, and what was the OUTCOME? You can forward these audit logs to your syslog server as well from Prism Central. More information on this can be foun…
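One way to sanity-check the forwarding path, assuming the common default of UDP port 514 and a Linux box as the remote syslog server, is to watch for the messages arriving on the receiving side:
[code]
# On the remote syslog server (assumes default UDP port 514; adjust if you use TCP or another port).
sudo tcpdump -i any -n udp port 514

# Or watch the collected log once the module is enabled
# (the destination file depends on your rsyslog configuration).
tail -f /var/log/messages | grep -i acropolis
[/code]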
We have had two instances where a node detected/reported a fault event and reset, rebooting VMs on each occasion. There seems to be no reason for this to have happened. Details: "Host 192.168.xx.x4 appears to have failed. High Availability is restarting VMs on hosts throughout the cluster." 08-17-16, 02:01:41am. "Host 192.168.xx.x4 appears to have failed. High Availability is restarting VMs on hosts throughout the cluster." 08-11-16, 07:19:48am. We updated AHV and NCC, and since then have had a repeat last night of the first instance from last week. Is there a potential hardware fault with the host that has not yet been detected or checked?
Every production infrastructure team knows the importance of load balancing network traffic to increase efficiency. Let's say you have multiple links in your environment and want to use the potential of all of them, or want a backup configuration in case a link fails; load balancing will come to your rescue. Today we will talk about two load-balancing modes: active-backup and balance-slb. To learn more about load-balancing configuration and AHV networking in detail, give the AHV Networking Best Practices Guide a read. So how do you decide between active-backup and balance-slb? This comparison might help you (a command sketch also follows below). Advantages of the active-backup bond mode: active-backup is the default bond mode. One interface in the bond carries traffic, and all the other interfaces in the bond are used only when the active link fails. Active-backup is the simplest bond mode and easily allows connections to multiple upstream switches without any additional switch configuration. Disadva…
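For reference, this is roughly what checking and changing the bond mode looks like, assuming the default bridge br0 and bond name br0-up; verify the exact manage_ovs syntax for your AOS/AHV release against the Best Practices Guide mentioned above.
[code]
# On an AHV host: show the current bond mode and which member link is active
# (assumes the default bond name br0-up).
ovs-appctl bond/show br0-up

# From a Controller VM: switch the bond to balance-slb. Sketch only - confirm the
# exact manage_ovs options for your AOS version in the Best Practices Guide.
manage_ovs --bridge_name br0 --interfaces 10g --bond_mode balance-slb update_uplinks
[/code]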
While it is not as straightforward a process as we would like it to be, there is an option to add a NIC to your Move VM:
- Log in to Prism Element.
- Add a new NIC to the Nutanix-Move appliance and select the network.
- Launch the console of the Nutanix Move appliance.
- Switch to the root user.
- Use vi or any other editor of your choice to open the file /etc/network/interfaces.
- Add the second interface (eth1) configuration in the format below, based on DHCP or static IP addressing.
- Restart the networking service.
Please note: if you are using Move 3.0.3 or above you can skip Step-7 and Step-8; that will be taken care of automatically. There is an existing script named "start-xtract" under "/opt/xtract/bin". Overwrite that script with the one provided in the KB (see link below), change the permissions for the script, then stop iptables and restart the Move services. Verify the new eth1 interface configuration using "ifconfig eth1". See KB7399 - Procedure to add a second NIC interface on Move v3.0.2 - for detailed instructions.
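As a reference for the eth1 stanza mentioned in the steps above, this is what the standard /etc/network/interfaces format looks like; the addresses are placeholders, and the exact restart command can differ between Move releases.
[code]
# Static addressing (placeholder values - use your own network details).
auto eth1
iface eth1 inet static
    address 192.168.50.10
    netmask 255.255.255.0

# Or, for DHCP addressing:
# auto eth1
# iface eth1 inet dhcp

# Then restart networking (command may vary by Move version).
service networking restart
[/code]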
Hi all, I would like to know how I can move VMs from one Nutanix block to another Nutanix block using vMotion.
- Each Nutanix block is configured as a standalone Nutanix cluster.
- Both Nutanix blocks are under the same VMware datacenter and cluster, and managed by one vCenter.
But when I run vMotion for one virtual machine from the vSphere client, I cannot choose the target Nutanix storage. Thanks
I completed an AOS upgrade from 22.214.171.124 to 5.10.8 on two different clusters. On one cluster, the health page shows a warning symbol (no data) and the message: “An error has occurred. Show me why.” When I hover over “Show me why” I see the message “java.lang.NullPointerException”. I have a ticket open with Nutanix but haven't received any solutions that have worked yet. I'd appreciate help if someone knows how to resolve this.
I have encountered a fault when installing the guest tools on Linux. The guest agent fails to start, and attempts to start it exhibit the same behavior. Linux is CentOS 6.8 and Nutanix is 4.6.2 with AHV. Process to reproduce the problem:
/media/installer/linux/install_ngt.py
Using Linux Installer for centos linux distribution.
Setting up Nutanix Guest Tools - VM mobility drivers.
Successfully set up Nutanix Guest Tools - VM mobility drivers.
Installing Nutanix Guest Agent Service.
Successfully installed Nutanix Guest Agent Service.
Waiting for Nutanix Guest Agent Service to start.
Nutanix Guest Agent Service failed to start.
Check /usr/local/nutanix/logs/guest_agent_stdout.log for info.
more /usr/local/nutanix/logs/guest_agent_stdout.log
Traceback (most recent call last):
File "/usr/local/nutanix/bin/guest_agent_service.py", line 239, in start()
File "/usr/local/nutanix/bin/guest_agent_service.py", line 56, in start service = NgtGuestAgentService()
File "/usr/local/nutanix/bin/guest_agent_service.py", line 1…
Hi, we purchased Huawei servers at the end of last year (checked against the compatibility matrix published by Nutanix itself: https://docplayer.net/136878906-Huawei-hardware-compatibility-list.html) and, due to COVID-19, we were not able to install them until now. Currently we have 4 servers with part number 2288Hv5-12 from the compatibility matrix, and 2x 10G switches for interconnection. Each server has M.2 SSD drives for OS installation configured in RAID 1, while the rest of the disks (1.92TB SSDs) are in JBOD mode, since there are 2 RAID cards, one for the M.2 drives and the other for the SSD drives respectively. The virtual drive has status OK. During the installation process, we receive an error message that says the following: “INFO Performing firmware checks for platform Huawei 2288HV5-12 WARNING Some error occurred during firmware checks. This can be due to some unsupported hardware. Please report this to foundation team.” The final error message is: Unable to find SSD in node. Please help us resolve this issue in form o…