Hello, seeking advice: I have a 3-node cluster, and node B has a problem with an undetected DIMM. I'm planning to reseat the DIMM. If I manually move the VMs to nodes A and C and then shut down node B without entering maintenance mode, will the VMs be able to rejoin node B afterwards? Could you also please share a link to a document describing the procedure?
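A minimal sketch of the manual-evacuation steps being asked about, assuming an AHV cluster (the post does not state the hypervisor) and placeholder VM/host names; the recommended practice is still to put the host into maintenance mode first rather than skipping it:

```shell
# From any CVM, live-migrate each VM off node B to a surviving host
# (<vm-name> and <host-B-ip> are placeholders, not values from the post)
acli vm.migrate <vm-name> host=<host-A-ip>

# Preferred: let AHV evacuate and fence the host in one step
acli host.enter_maintenance_mode <host-B-ip>

# Shut down node B's CVM cleanly before powering off the host
# (run on node B's CVM)
cvm_shutdown -P now

# After the DIMM reseat and host power-on, bring the host back
acli host.exit_maintenance_mode <host-B-ip>
```

VMs do not automatically migrate back after the host rejoins; they stay where they were moved (or are rebalanced by ADS over time), so you may need to `acli vm.migrate` them back manually.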
Hi, I cannot expand the cluster: the pre expand-cluster tests fail on Prism Element with the error "Failure in pre expand-cluster tests. Errors: Failed to get vlan tag of node: XXXXX". The CVM and hypervisor networks use tagged VLANs and a port channel on the L2 switch ports. Pinging the hypervisor's management IP from another network segment works, and of course the CVM and the hypervisor's management IP are in the same segment. Is there anything else I should check? Versions are below.・DELL XC640・AOS: 5.10.6・ESXi: 6.7 Update 2 (Build 13006603)
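One thing worth verifying before re-running the expansion is that the new node's port groups actually carry the expected VLAN tag. A sketch of how to inspect this on the ESXi host (standard vSwitch assumed; the port-group names are placeholders):

```shell
# On the ESXi host being added: list vSwitches, port groups, and their VLAN IDs
esxcfg-vswitch -l

# Equivalent esxcli form, showing VLAN ID per port group
esxcli network vswitch standard portgroup list

# If the management port group's VLAN is wrong, set it
# (<PortGroupName> and <vlan-id> are placeholders)
esxcli network vswitch standard portgroup set -p <PortGroupName> -v <vlan-id>
```

The pre-check reads the VLAN tag of the management/CVM port groups on the node being added, so a mismatch (or a port group tagged 0 while the switch expects a tagged VLAN) can produce exactly this failure even though ping works.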
AOS version 220.127.116.11, ESXi 6.7, Chrome 86.0.4240.75. When opening a VM console I always get connection closed (error 1006): "There was an error connecting to the VM. This could be due to an invalid or untrusted certificate chain or the VM being powered off. Please use the latest version of Chrome, Firefox or Internet Explorer if the problem persists."
Hi all, a newbie question. It seems I still have an old crash dump directory on one of my AHV hosts: an `ls -lahtr /var/crash` shows a single directory from back in April. The issue was resolved at the time, and the faulty DIMM that caused it was replaced, but clearly the dump file was never removed. To clean this up, is it OK to delete the dump directory inside /var/crash and then rerun the NCC health check? Or is there a better method for clearing crash dumps from Nutanix clusters? Many thanks, Rob
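A sketch of the cleanup being described, assuming the stale kdump directory under /var/crash is the only remaining artifact (`<crash-dir>` is a placeholder for the April directory name):

```shell
# On the affected AHV host: confirm what is there, then remove the stale dump
ls -lahtr /var/crash
rm -rf /var/crash/<crash-dir>

# From any CVM: rerun the health checks so the stale-dump alert clears
ncc health_checks run_all
```

Deleting the directory only removes the evidence of the old crash, not anything the host needs at runtime, so this is generally safe once the underlying hardware issue (here, the replaced DIMM) is resolved.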
Hi everyone, I'm trying to use LCM to update the firmware of the host machines in my cluster. It all went well until the post-action phase, when LCM failed with: "Operation failed. Reason: LCM failed performing action reboot_from_phoenix in phase PostActions on ip address xx". I searched the KB and found KB9177, but it is about mixed-hypervisor clusters; my cluster runs solely AHV, so it doesn't apply. I still tried KB9177's suggestion anyway, upgraded my cluster's Foundation to 4.5.3, and retried the LCM firmware update on another host, but got the same "LCM failed performing action reboot_from_phoenix" error. I used the workaround in that KB to take the two affected hosts' CVMs out of maintenance mode; that worked, and the cluster is back to normal. Then I logged on to the affected hosts' IMM and found that the primary IMM2 firmware had actually already been updated by LCM (the backup IMM2 firmware was not upgraded), and when I go to the LCM section