Problems while upgrading with uplinks at 1 Gbps | Nutanix Community

Recently, we upgraded AOS and AHV on a cluster of six NX-1065-G5 nodes. Those nodes have only 1 Gbps interfaces, and the upgrade process got stuck several times while live-migrating guest VMs off each node as it entered maintenance mode. Tech support told us this happened because the new AOS and AHV versions are designed to run over 10 Gbps uplinks, which we know is the best practice, but in our scenario that would require a hardware upgrade.

Question: I'm guessing that configuring the four uplink interfaces as a balance-slb bond may help, giving a total bandwidth of 4 Gbps under optimal conditions (a sketch of what I have in mind is below). Has anybody gone through this problem and applied this cure? Was it useful or… not enough?
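
For reference, this is roughly the change I am considering, assuming the default bridge br0, bond br0-up and interfaces eth0-eth3 (names may differ on your cluster), run from the CVM on one node at a time while that node is in maintenance mode:

# Sketch only: group all four 1 Gb interfaces into a single balance-slb bond
manage_ovs --bridge_name br0 --bond_name br0-up --interfaces eth0,eth1,eth2,eth3 --bond_mode balance-slb update_uplinks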

 

Thanks in advance

Hi, 

Please refer to this link for the details:

 

https://portal.nutanix.com/page/documents/solutions/details?targetId=BP-2071-AHV-Networking:BP-2071-AHV-Networking 

 

4x 10 Gb (2 + 2) and 2x 1 Gb separated

Use to physically separate CVM traffic such as storage and Nutanix Volumes from user VM traffic while still providing 10 Gb connectivity for both traffic types. The four 10 Gb adapters are divided into two separate pairs. Compatible with any load balancing algorithm. This case is not illustrated in the diagrams below.
4x 10 Gb combined and 2x 1 Gb separated

Use to provide additional bandwidth and failover capacity to the CVM and user VMs sharing four 10 Gb adapters in the same bond. We recommend using LACP with balance-tcp to take advantage of all adapters. This case is not illustrated in the diagrams below.
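
Before changing anything, it is worth checking how the uplinks are currently grouped. As a quick sketch, this command (run from any CVM) lists the bond name, bond mode and member interfaces on every host:

# Show the current bond layout on all hosts
allssh manage_ovs show_uplinks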

 


Hi Jlperezdlp,

I am surprised to hear that you do not have 10 Gb interfaces, as the specs say they should be there. Not that I don't believe you, but could you please share the output of this command:

allssh manage_ovs show_interfaces

To utilise the bond in full I would suggest bonding using balance-tcp. It is admittedly slightly more complicated to implement, as it requires configuration of the corresponding ports on the physical external switch as well as on the cluster, but it will let a single VM use the full 4 Gbps across all links, whereas with balance-slb each VM is limited to 1 Gbps on a single link. A sketch of the cluster-side change is below.
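
As a rough sketch only, assuming the default bridge br0, bond br0-up and interfaces eth0-eth3 (adjust to your environment), run per node with the node in maintenance mode and only after the matching LACP port-channel has been configured on the switch:

# Sketch: enable LACP with balance-tcp on the bond
# The corresponding switch ports must form an LACP port-channel
# (e.g. channel-group <N> mode active on Cisco-style switches)
manage_ovs --bridge_name br0 --bond_name br0-up --interfaces eth0,eth1,eth2,eth3 --bond_mode balance-tcp --lacp_mode fast --lacp_fallback true update_uplinks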

Explanation and examples can be found in AHV Networking: Load Balancing within Bond Interfaces.
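
To check that LACP has negotiated and traffic is hashing across all links, you can run these standard Open vSwitch commands on the AHV host afterwards (br0-up is an assumption; substitute your bond name):

# Show bond mode, LACP status and per-link hash distribution
ovs-appctl bond/show br0-up
# Show LACP negotiation details for the bond
ovs-appctl lacp/show br0-up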


Thanks Alona,

It seems this configuration was allowed at the time to lower the price. Now we know 10 Gbps is a must! Here's the requested output: