We have had a couple of recent instances where making network changes affected our clusters. The change caused the lead host to restart after it detected a network loss, which then resulted in system outages. The cluster is configured with dual network ports in active/passive mode, and our understanding was that it would fail over if any change or failure was detected, without producing error events or taking systems down.
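For reference, on AHV the dual-port uplink is an Open vSwitch bond, so the active/passive behaviour can be inspected from the host before and after a network change. A minimal sketch of such a check, run on the AHV host itself (the bond and port names vary per cluster and are placeholders here):

```shell
# List all bonds known to Open vSwitch with their mode and member state.
# For an active-backup bond, exactly one member should be the active slave.
ovs-appctl bond/show

# Confirm the bond mode recorded in the OVS database for each port
# (the bond reported above should show bond_mode=active-backup).
ovs-vsctl --columns=name,bond_mode list Port
```

Capturing this output before and after the change would show whether the failover actually happened at the OVS layer or whether the host lost both uplinks at once.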
I have encountered a fault when installing the guest tools on Linux. The guest agent fails on start, and attempts to restart it exhibit the same behaviour. Linux is CentOS 6.8 and AHV is Nutanix 4.6.2.

Process to reproduce the problem:

/media/installer/linux/install_ngt.py
Using Linux Installer for centos linux distribution.
Setting up Nutanix Guest Tools - VM mobility drivers.
Successfully set up Nutanix Guest Tools - VM mobility drivers.
Installing Nutanix Guest Agent Service.
Successfully installed Nutanix Guest Agent Service.
Waiting for Nutanix Guest Agent Service to start.
Nutanix Guest Agent Service failed to start.
Check /usr/local/nutanix/logs/guest_agent_stdout.log for info.

more /usr/local/nutanix/logs/guest_agent_stdout.log
Traceback (most recent call last):
  File "/usr/local/nutanix/bin/guest_agent_service.py", line 239, in <module>
    start()
  File "/usr/local/nutanix/bin/guest_agent_service.py", line 56, in start
    service = NgtGuestAgentService()
  File "/usr/local/nutanix/bin/guest_agent_service.py", line 1
The password for the nutanix user was reset on the Prism Central system, and now I am unable to log in even though I am sure the password is correct. It may have been entered incorrectly due to the console's handling of special characters. In this case, how do we recover login access to the node? I did not set up another sudo or other user on this system either.
We have had two instances where a node detected/reported a fault event and reset, rebooting its VMs on each occasion. There seems to be no reason for this to have happened.

Details:

Host 192.168.xx.x4 appears to have failed. High Availability is restarting VMs on hosts throughout the cluster. 08-17-16, 02:01:41am
Host 192.168.xx.x4 appears to have failed. High Availability is restarting VMs on hosts throughout the cluster. 08-11-16, 07:19:48am

We updated AHV and NCC, and have since had a repeat last night of the first instance from last week. Is there a potential hardware fault with the host that has not yet been detected or checked?
We had an issue when the cluster was first set up which involved the HDDs. These were moved and replaced, and now we see an error reported. It was initially cleared and has now resurfaced, despite an AHV update from 4.6.2 to 4.6.3 and an NCC update to 2.6.6.

Error:

Detailed information for disk_online_check:
Node 192.168.xx.x4:
FAIL: Disk '/home/nutanix/data/stargate-storage/disks/S460AATC' failed on node with ip u'192.168.xx.x3'. Disk '/home/nutanix/data/stargate-storage/disks/S460AATC' failed on node with ip u'192.168.xx.x4'.
Refer to KB 1536 for details on disk_online_check, or recheck with: ncc health_checks hardware_checks disk_checks disk_online_check

smartctl on the HDD shows no errors. There seems to be an issue with retaining or updating info/config on the cluster nodes.
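For completeness, the recheck suggested by the alert, plus a smartctl pass against the physical disk, can be run from a CVM roughly as follows (the device path /dev/sdX is a placeholder for whichever device backs serial S460AATC on the affected node):

```shell
# Re-run only the failing NCC check, as the alert text suggests.
ncc health_checks hardware_checks disk_checks disk_online_check

# Cross-check the physical disk's SMART health directly;
# replace /dev/sdX with the device backing serial S460AATC.
sudo smartctl -a /dev/sdX
```

If smartctl continues to report the drive healthy while disk_online_check fails, that would support the suspicion that the cluster's recorded disk configuration, rather than the hardware, is stale.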