Network Load Balancing with Acropolis Hypervisor

by Community Manager, 12-03-2015 10:12 AM (edited 12-21-2015 07:00 AM)

In the first article of our four-part Acropolis Networking series, we tackled bridges and bonds so that we could split traffic among the multiple network interfaces on a physical Nutanix node.

 

[Image: blog2.png]

 

Now that CVM traffic is routed over the 10gb interfaces and User VM traffic can be routed over either the 10gb or 1gb adapters, we're ready to address load balancing within the OVS bonds. There are two primary concerns: fault tolerance and throughput.

 

To handle fault tolerance, we ensure that each bond is created with at least two adapters, as in the diagram above. Once a bond has two or more adapters, we can then manage the aggregate throughput provided by its member interfaces. All of the following bond modes provide fault tolerance.

 

For a video walkthrough of the different load balancing modes with the Acropolis Hypervisor and Open vSwitch, check out the following nu.school recording. The video also shows some extra shortcuts, such as "allssh", to speed up deployment of this configuration.
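As a quick example of that shortcut (a sketch, assuming you run it from a bash prompt on any CVM; allssh simply repeats the quoted command on every CVM in the cluster), you could check the bond state on every AHV host at once:

nutanix@CVM$ allssh 'ssh root@192.168.5.1 "ovs-appctl bond/show"'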

 

 

Within a bond, traffic is distributed between multiple physical interfaces according to the bond mode. The default bond mode is active-backup, where one interface in the bond carries traffic and other interfaces in the bond are used only when the active link fails.

 

View the bond mode and active interface with the following AHV command:

 

nutanix@CVM$ ssh root@192.168.5.1 "ovs-appctl bond/show"

 

In the default configuration of active-backup, the output will be similar to the following, where eth2 is the active interface and eth3 is the backup:

 

---- bond0 ----
bond_mode: active-backup
bond-hash-basis: 0
updelay: 0 ms
downdelay: 0 ms
lacp_status: off

slave eth2: enabled
      active slave
      may_enable: true

slave eth3: enabled
      may_enable: true

 

Active-Backup

Active-backup bond mode is the simplest, easily allowing connections to multiple upstream switches without any additional switch configuration. The downside is that traffic from all VMs uses only the single active link within the bond; all backup links remain unused. In a system with dual 10 gigabit Ethernet adapters, the maximum throughput of all VMs running on a Nutanix node is limited to 10 Gbps.

 

 

[Image: blog3.png]

 

 

Active-backup mode is enabled by default, but it can also be set explicitly with the following AHV command:

 

nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port bond0 bond_mode=active-backup" 

 

Balance-slb

To take advantage of the bandwidth provided by multiple upstream switch links, we recommend configuring the bond mode as balance-slb. The balance-slb bond mode in OVS takes advantage of all links in a bond and uses measured traffic load to rebalance VM traffic from highly used to less used interfaces. When the configurable bond-rebalance-interval expires, OVS uses the measured load for each interface and the load for each source MAC hash to spread traffic evenly among links in the bond.

 

Traffic from source MAC hashes may be moved to a less active link to more evenly balance bond member utilization. Perfectly even balancing is not always possible. Each individual virtual machine NIC uses only a single bond member interface, but traffic from multiple virtual machine NICs (multiple source MAC addresses) is distributed across bond member interfaces according to the hashing algorithm. As a result, it is possible for a Nutanix AHV node with two 10 gigabit interfaces to use up to 20 gigabits of network throughput, while individual VMs have a maximum throughput of 10 gigabits per second.
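If you want to see which hash a particular VM NIC maps to, OVS can compute it for you. This is a minimal sketch; the MAC address below is hypothetical, and bond/hash also accepts optional VLAN and basis arguments:

nutanix@CVM$ ssh root@192.168.5.1 "ovs-appctl bond/hash 50:6b:8d:aa:bb:cc"

Compare the returned hash number against the per-hash load lines in the bond/show output to see which bond member that VM is currently using.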

 

 

[Image: blog4.png]

 

 

The default rebalance interval is 10 seconds, but we recommend setting this to 60 seconds to avoid excessive movement of source MAC address hashes between upstream switches. We've tested this configuration using two separate upstream switches with the Acropolis hypervisor. No additional configuration (such as link aggregation) is required on the switch side, as long as the upstream switches are interconnected.

 

The balance-slb algorithm is configured for each bond on all AHV nodes in the Nutanix cluster with the following commands:

 

nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port bond0 bond_mode=balance-slb"

 

nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port bond0 other_config:bond-rebalance-interval=60000"
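To confirm the new interval took effect, you can read the value back from the OVS database (a quick check; the value is reported in milliseconds):

nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl get port bond0 other_config:bond-rebalance-interval"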

 

Verify the proper bond mode with the following command:

 

nutanix@CVM$ ssh root@192.168.5.1 "ovs-appctl bond/show bond0"
---- bond0 ----
bond_mode: balance-slb
bond-hash-basis: 0
updelay: 0 ms
downdelay: 0 ms
next rebalance: 59108 ms
lacp_status: off

slave eth2: enabled
      may_enable: true
      hash 120: 138065 kB load
      hash 182: 20 kB load

slave eth3: enabled
      active slave
      may_enable: true
      hash 27: 0 kB load
      hash 31: 20 kB load
      hash 104: 1802 kB load
      hash 206: 20 kB load

 

 

LACP and Link Aggregation

Because LACP and balance-tcp require upstream switch configuration, and because network connectivity may be disabled if cables from AHV nodes are moved to incorrectly configured switches, Nutanix does not recommend using link aggregation or LACP.

 

However, to take full advantage of the bandwidth provided by multiple links to upstream switches from a single VM, link aggregation in OVS using Link Aggregation Control Protocol (LACP) and balance-tcp is required. Note that appropriate configuration of the upstream switches is also required. With LACP, multiple links to separate physical switches appear as a single Layer-2 link. Traffic can be split between multiple links in an active-active fashion based on a traffic-hashing algorithm.

 

Traffic can be balanced among members in the link without any regard for switch MAC address tables, because the uplinks appear as a single L2 link. We recommend using balance-tcp when LACP is configured, since multiple Layer-4 streams from a single VM could potentially use all available uplink bandwidth in this configuration. With link aggregation, LACP, and balance-tcp, a single user VM with multiple TCP streams could potentially use up to 20 Gbps of bandwidth in an AHV node with two 10Gbps adapters.

 

 

[Image: blog5.png]

 

Configure LACP and balance-tcp with the following commands. Upstream switch configuration of LACP is required.

 

nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port bond0 lacp=active"

 

nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port bond0 bond_mode=balance-tcp"

 

If upstream LACP negotiation fails, the default configuration is to disable the bond, which would block all traffic. The following command allows fallback to active-backup bond mode in the event of LACP negotiation failure.

 

nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port bond0 other_config:lacp-fallback-ab=true"
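After the upstream switch ports are configured, you can check whether LACP actually negotiated (a quick sanity check; lacp/show is a standard ovs-appctl command, and bond/show should now report lacp_status: negotiated):

nutanix@CVM$ ssh root@192.168.5.1 "ovs-appctl lacp/show bond0"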

 

Finding the right balance

Use your virtualization requirements to choose the bond mode that's right for you! The following methods are arranged from least to most complex configuration. For simple and reliable failover with up to 10Gbps of host throughput and minimal switch configuration, choose active-backup. Where more than 10Gbps of throughput is required from the AHV host, use balance-slb. Where more than 10Gbps of throughput is required from a single VM, use LACP and balance-tcp.
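If you later need to confirm which mode a given bond is using, you can query it directly (a quick check that reads the bond_mode column from the OVS database):

nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl get port bond0 bond_mode"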

 

This post was authored by Jason Burns, Senior Solutions & Performance Engineer at Nutanix

Maximum Performance from Acropolis Hypervisor and Open vSwitch

by Community Manager, 11-25-2015 03:06 PM (edited 12-21-2015 07:01 AM)

Nutanix appliances leverage the data network as the backplane for storage, and the following is aimed at helping you determine the best way to connect the Acropolis Hypervisor to your data center network. Let's start with some background. The Acropolis Hypervisor (AHV) uses the open source Open vSwitch (OVS) to connect the Controller VM, the hypervisor, and guest VMs to each other and to the physical network. The OVS service runs on each AHV node and starts automatically.

 

This blog is part of a series on Acropolis Hypervisor, and will cover networking with Open vSwitch bridges and bonds. Later parts in the series will talk about load balancing, VLANs, and Acropolis managed networks, so stay tuned!

 

Within OVS, bonded ports aggregate the physical interfaces on the AHV host. By default, a bond named bond0 is created in bridge br0. After the node imaging process, all interfaces are placed within a single bond, which is a requirement for the Foundation imaging process. Note that the default configuration should be modified during initial deployment to remove the 1 gigabit ports from bond0, so that only the 10 gigabit ports remain.
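To review the current bridge, bond, and uplink layout before making any changes (a quick check, assuming your AOS release includes the show_uplinks action of the same manage_ovs CLI used later in this post), run the following from any CVM:

nutanix@CVM$ manage_ovs show_uplinks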

 

The following diagram illustrates the networking configuration of a single host immediately after imaging.

 

 

[Image: Picture1.png]

 

Take a look at the following Nutanix nu.school video for more information on the default OVS configuration, along with the commands for modifying the default config. You'll also find some handy tips on our CLI tools like aCLI and allssh.
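If you'd like to inspect the default configuration yourself, ovs-vsctl show lists the bridges, ports, and bonds that OVS currently knows about (a quick, read-only check):

nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl show"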

 

 

 

 

The critical point is that the Nutanix Controller Virtual Machine should have access to the 10gb adapters, ensuring the CVM gets the most bandwidth and the lowest possible latency. Additionally, we may want to physically separate traffic from the various User VMs. This separation is sometimes required by company security policy, or for VMs performing networking functions like routing, firewalling, or load balancing.

 

Here is the recommended AHV OVS configuration, which creates a new bridge including the 1gb network adapters.

 

 

[Image: Picture2.png]

 

The recommended configuration is to separate the 10g and 1g interfaces into separate bonds to ensure that CVM and user VM traffic always traverse the fastest possible link. Here, the 10g interfaces (eth2 and eth3) are grouped into bond0 and dedicated to the CVM and User VM1. The 1g interfaces are grouped into bond1 and used only by a second link on User VM2. Bond0 and bond1 are added into br0 and br1, respectively.

 

With this configuration, the CVM and user VMs use the 10g interfaces. Bridge br1 is available for VMs that require physical network separation from the CVM and VMs on br0. Devices eth0 and eth1 could alternatively be plugged into a different pair of upstream switches for further separation.

 

Two physical upstream switches are used, and the interfaces within each bond are plugged into separate physical switches for high availability. Within each bond, only one physical interface will be active when using the default active-backup OVS bond mode. See the Load Balancing section for more information and alternate configurations.

 

Perform the following actions for each Nutanix node in the cluster. On each Acropolis host, add bridge br1. The Acropolis hypervisor local to each CVM can be reached at the internal 192.168.5.1 interface address.

 

nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl add-br br1"
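You can confirm the bridge was created with ovs-vsctl list-br, which simply lists all bridges on the host (a quick, read-only check):

nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl list-br"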

 

From the CVM, remove eth0 and eth1 from the default bridge br0 on all CVMs. These interfaces are removed by specifying that only eth2 and eth3 will remain in the bridge. The 10g shortcut lets you include all 10g interfaces without having to explicitly specify the interfaces by name.

 

nutanix@CVM$ manage_ovs --bridge_name br0 --bond_name bond0 --interfaces 10g update_uplinks

 

Add the eth0 and eth1 uplinks to br1 in the CVM using the 1g interface shortcut.

 

nutanix@CVM$ manage_ovs --bridge_name br1 --bond_name bond1 --interfaces 1g update_uplinks

 

Now that a bridge, br1, exists just for the 1gb interfaces, networks can be created for "User VM2" with the following aCLI command. Putting the bridge name in the network name is helpful when viewing networks in the Prism GUI.

 

nutanix@cvm$ acli net.create br1_vlan99 vswitch_name=br1 vlan=99
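To confirm the network was created, list the configured networks with aCLI (a quick check; net.list shows the configured networks along with their VLANs):

nutanix@cvm$ acli net.list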

 

Now we have successfully configured a single Acropolis Hypervisor to connect the CVM via the 10gb interfaces. User VMs can connect via either 10gb or 1gb. Watch the YouTube video above for tricks on performing these commands on all nodes in the cluster.
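As one sketch of that cluster-wide approach (assuming a bash prompt on the CVM; allssh runs the quoted command on every CVM in the cluster), the bridge creation step could be repeated on every host with a single line:

nutanix@CVM$ allssh 'ssh root@192.168.5.1 "ovs-vsctl add-br br1"'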

 

Download the Acropolis Hypervisor Best Practice Guide for more detailed information and a handy cheat sheet with all of the CLI commands used here.

 

Up next, we'll take a look at configuring load balancing within our OVS bonds!

 

This post was authored by Jason Burns, Senior Solutions & Performance Engineer at Nutanix
