One CVM Upgrade Stuck in 3 node Cluster from 6.8.1 to 7.0 | Nutanix Community
Question

One CVM Upgrade Stuck in 3 node Cluster from 6.8.1 to 7.0

  • February 20, 2025
  • 5 replies
  • 83 views

  • Trailblazer
  • 11 replies

One CVM upgrade has been stuck at “Waiting for reboot and upgrade completion” for almost 3 hours, while the other two nodes upgraded successfully.

Any help will be appreciated.

 

This topic has been closed for comments

5 replies

Is this a production setup? If yes, always open a case with support and involve them.

If not:

Which version of AOS are you on, and which version are you upgrading to?

Log in to one of the CVMs and collect the output of the commands below:

#cluster status

#svmips | wc -w

#nodetool -h 0 ring

#allssh "ls -ltr data/logs/*.FATAL"

#upgrade_status

#host_upgrade_status
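The `allssh "ls -ltr data/logs/*.FATAL"` step checks every CVM for recent FATAL logs from core services. If you want to script that check for a single log directory, a minimal POSIX-shell sketch (the `check_fatals` function name and the throwaway demo directory are illustrative, not Nutanix tooling — on a real CVM you would just run the commands above directly):

```shell
# check_fatals: list .FATAL files in a log directory, oldest first,
# and report how many there are. Directory path is the first argument.
check_fatals() {
    logdir="$1"
    fatals=$(ls -1tr "$logdir"/*.FATAL 2>/dev/null)
    if [ -z "$fatals" ]; then
        echo "no FATAL logs in $logdir"
    else
        echo "$fatals"
        echo "count: $(printf '%s\n' "$fatals" | wc -l)"
    fi
}

# Demo on a throwaway directory (assumption: not a real CVM path).
demo=$(mktemp -d)
touch "$demo/stargate.FATAL" "$demo/curator.FATAL"
check_fatals "$demo"
```

A healthy cluster should report no FATAL logs; anything recent on the stuck node is worth attaching to a support case.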

 

 


  • Author
  • Trailblazer
  • 11 replies
  • February 20, 2025

This is not a prod setup.

AOS is upgrading from 6.8.1 to 7.0.
#cluster status
UP

#svmips | wc -w

3

#nodetool -h 0 ring

 

 

#upgrade_status

2025-02-20 14:29:36,845Z INFO MainThread zookeeper_session.py:136 Using multithreaded Zookeeper client library: 1
2025-02-20 14:29:36,846Z INFO MainThread zookeeper_session.py:248 Parsed cluster id: 764684476876251384, cluster incarnation id: 1628494248447328
2025-02-20 14:29:36,847Z INFO MainThread zookeeper_session.py:270 upgrade_status is attempting to connect to Zookeeper, host port list zk3:9876,zk1:9876,zk2:9876
2025-02-20 14:29:36,849Z INFO Dummy-1 zookeeper_session.py:840 ZK session establishment complete, sessionId=0x39523b0628a01c9, negotiated timeout=20 secs
2025-02-20 14:29:36,850Z INFO MainThread upgrade_status:49 Target release version: el8.5-release-ganges-7.0-stable-2b780c6331a28149892d8df46287b7827617d906
2025-02-20 14:29:36,850Z INFO MainThread upgrade_status:62 Cluster upgrade method is set to: automatic rolling upgrade
2025-02-20 14:29:36,982Z INFO MainThread upgrade_status:115 SVM 192.168.10.34 is up to date
2025-02-20 14:29:36,984Z INFO MainThread upgrade_status:115 SVM 192.168.10.35 still needs to be upgraded. Installed release version: el8.5-release-fraser-6.8.1-stable-a8aad732dfbfaa2b3bcea0b0c27fbd51d8480f4e
2025-02-20 14:29:36,985Z INFO MainThread upgrade_status:115 SVM 192.168.10.36 is up to date
2025-02-20 14:29:36,986Z INFO Dummy-2 zookeeper_session.py:940 Calling c_impl.close() for session 0x39523b0628a01c9
 

#host_upgrade_status
2025-02-20 14:30:13,863Z INFO MainThread zookeeper_session.py:136 Using multithreaded Zookeeper client library: 1
2025-02-20 14:30:13,865Z INFO MainThread zookeeper_session.py:248 Parsed cluster id: 764684476876251384, cluster incarnation id: 1628494248447328
2025-02-20 14:30:13,865Z INFO MainThread zookeeper_session.py:270 host_upgrade_status is attempting to connect to Zookeeper, host port list zk3:9876,zk1:9876,zk2:9876
2025-02-20 14:30:13,868Z INFO Dummy-1 zookeeper_session.py:840 ZK session establishment complete, sessionId=0x39523b0628a01d2, negotiated timeout=20 secs
Automatic Hypervisor upgrade: Enabled
Target host version: None
2025-02-20 14:30:13,884Z INFO Dummy-2 zookeeper_session.py:940 Calling c_impl.close() for session 0x39523b0628a01d2
 
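The `upgrade_status` transcript above already isolates the problem: 192.168.10.35 is still on 6.8.1 while the other two SVMs are up to date. As a small aside, a saved transcript can be grepped for the pending SVMs; a sketch (the `pending_svms` helper is illustrative, and the sample lines are abbreviated from the output above):

```shell
# Extract the IPs of SVMs that still need upgrading from a saved
# upgrade_status transcript (sample abbreviated from the output above).
log=$(mktemp)
cat > "$log" <<'EOF'
INFO MainThread upgrade_status:115 SVM 192.168.10.34 is up to date
INFO MainThread upgrade_status:115 SVM 192.168.10.35 still needs to be upgraded. Installed release version: el8.5-release-fraser-6.8.1-stable
INFO MainThread upgrade_status:115 SVM 192.168.10.36 is up to date
EOF

pending_svms() {
    grep 'still needs to be upgraded' "$1" | sed 's/.*SVM \([0-9.]*\) .*/\1/'
}

pending_svms "$log"   # prints 192.168.10.35
```

That pending SVM is the one whose genesis and upgrade logs support will want to see.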


  • Voyager
  • 1 reply
  • February 20, 2025

Migrate the VMs manually to the other hosts.


  • Author
  • Trailblazer
  • 11 replies
  • February 21, 2025

This is resolved. Thanks for all the help.

One more help is required:

How can I re-add the blacklisted cluster to Prism Central?

 


JeroenTielen
  • Vanguard
  • 1358 replies
  • February 21, 2025

You can't. But if the cluster has support, then involve support.