Maximum Performance from Acropolis Hypervisor and Open vSwitch

26 November 2015
Nutanix appliances leverage the data network as the backplane for storage, and the following is aimed at helping you determine the best way to connect the Acropolis Hypervisor to your data center network. Let's start with some background. The Acropolis Hypervisor (AHV) uses the open source Open vSwitch (OVS) to connect the Controller VM (CVM), the hypervisor, and guest VMs to each other and to the physical network. The OVS service runs on each AHV node and starts automatically.
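If you're curious what OVS has built on a given node, you can inspect it directly on the hypervisor. This is a read-only check; the root@AHV# prompt below is just illustrative, and the output will vary with your hardware and software version.

root@AHV# ovs-vsctl show

This lists every bridge, bond, and port that OVS currently manages on that host.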

This blog is part of a series on Acropolis Hypervisor, and will cover networking with Open vSwitch bridges and bonds. Later parts in the series will talk about load balancing, VLANs, and Acropolis managed networks, so stay tuned!

Within OVS, bonded ports aggregate the physical interfaces on the AHV host. By default, a bond named bond0 is created in bridge br0. After the node imaging process, all interfaces are placed within this single bond, which is a requirement for the Foundation imaging process. Note that the default configuration should be modified during initial deployment to remove the 1 Gb ports from bond0, leaving only the 10 Gb ports.
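Before making any changes, it's worth confirming which interfaces currently sit in each bond. A minimal check from the CVM, assuming the default bridge and bond names, is:

nutanix@CVM$ manage_ovs show_uplinks

The exact output format varies between releases, but on a freshly imaged node it should show eth0 through eth3 all grouped in bond0.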

The following diagram illustrates the networking configuration of a single host immediately after imaging.

Take a look at the following Nutanix video for more information on the default OVS configuration, along with the commands for modifying the default config. You'll also find some handy tips on our CLI tools like aCLI and allssh.
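As a small taste of allssh, the uplink check from earlier can be fanned out to every CVM in the cluster in one line, which makes it easy to spot a node whose bond membership differs from the rest:

nutanix@CVM$ allssh "manage_ovs show_uplinks"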

The critical point is that the Nutanix Controller Virtual Machine should have access to the 10 Gb adapters. This gives the CVM the most bandwidth and the lowest possible latency. Additionally, we may want to physically separate traffic from the various User VMs. This separation may sometimes be required by a company security policy, or for VMs performing networking functions like routing, firewalling, or load balancing.

Here is the recommended AHV OVS configuration, which creates a new bridge including the 1 Gb network adapters.

The recommended configuration is to separate the 10g and 1g interfaces into separate bonds to ensure that CVM and user VM traffic always traverse the fastest possible link. Here, the 10g interfaces (eth2 and eth3) are grouped into bond0 and dedicated to the CVM and User VM1. The 1g interfaces are grouped into bond1 and used only by a second link on User VM2. Bond0 and bond1 are added into br0 and br1, respectively.

With this configuration, the CVM and user VMs use the 10g interfaces. Bridge br1 is available for VMs that require physical network separation from the CVM and VMs on br0. Devices eth0 and eth1 could alternatively be plugged into a different pair of upstream switches for further separation.

Two upstream physical switches are used, and the two interfaces within each bond are plugged into separate switches for high availability. Within each bond, only one physical interface will be active when using the default active-backup OVS bond mode. See the Load Balancing section for more information and alternate configurations.
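To verify which bond mode is in effect and which member is currently carrying traffic, OVS can report on the bond directly. The command below runs against the local host over the internal address described in the next step, and assumes the default bond0 name:

nutanix@CVM$ ssh root@192.168.5.1 "ovs-appctl bond/show bond0"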

Perform the following actions for each Nutanix node in the cluster. On each Acropolis host, add bridge br1. The Acropolis hypervisor local to the CVM can be reached at the internal interface address 192.168.5.1.

nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl add-br br1"
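If you'd like to confirm the bridge was created before moving on, listing the bridges on the same host is a quick sanity check and should now show both br0 and br1:

nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl list-br"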

From the CVM, remove eth0 and eth1 from the default bridge br0. These interfaces are removed by specifying that only the 10g interfaces (eth2 and eth3) will remain in the bond. The 10g shortcut lets you include all 10 Gb interfaces without having to explicitly specify them by name. Repeat this on every node in the cluster.

nutanix@CVM$ manage_ovs --bridge_name br0 --bond_name bond0 --interfaces 10g update_uplinks
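A quick follow-up check, assuming the default names, confirms that bond0 now contains only the 10g interfaces:

nutanix@CVM$ manage_ovs --bridge_name br0 show_uplinks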

Add the eth0 and eth1 uplinks to br1 in the CVM using the 1g interface shortcut.

nutanix@CVM$ manage_ovs --bridge_name br1 --bond_name bond1 --interfaces 1g update_uplinks
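The same check against the new bridge should list eth0 and eth1 under bond1:

nutanix@CVM$ manage_ovs --bridge_name br1 show_uplinks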

Now that a bridge, br1, exists just for the 1 Gb interfaces, networks can be created for "User VM2" with the following aCLI command. Putting the bridge name in the network name is helpful when viewing networks in the Prism GUI.

nutanix@cvm$ acli net.create br1_vlan99 vswitch_name=br1 vlan=99
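With the network defined, attaching a NIC from User VM2 to it takes one more aCLI call. The VM name below is just an illustration; substitute the actual name of your VM, and use acli net.list to confirm the network's VLAN and vswitch assignment:

nutanix@CVM$ acli vm.nic_create UserVM2 network=br1_vlan99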

Now we have successfully configured a single Acropolis Hypervisor to connect the CVM via the 10 Gb interfaces. User VMs can connect via either 10 Gb or 1 Gb. Watch the YouTube video above for tips on performing these commands on all nodes in the cluster, and see the sketch below for one way to do it from the command line.
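As a rough sketch, the hostssh helper on the CVM (if your release includes it) runs a single command against every hypervisor host in the cluster, so the bridge creation step could be repeated cluster-wide in one line; verify the result on each node afterwards:

nutanix@CVM$ hostssh "ovs-vsctl add-br br1"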

Download the Acropolis Hypervisor Best Practice Guide for more detailed information and a handy cheat sheet with all of the CLI commands used here.

Up next, we'll take a look at configuring load balancing within our OVS bonds!

This post was authored by Jason Burns, Senior Solutions & Performance Engineer at Nutanix.
