Nutanix Connect Blog


Maximum Performance from Acropolis Hypervisor and Open vSwitch

Nutanix appliances leverage the data network as the backplane for storage, and this post is aimed at helping you determine the best way to connect the Acropolis Hypervisor to your data center network. Let's start with some background. The Acropolis Hypervisor (AHV) uses the open source Open vSwitch (OVS) to connect the Controller VM, the hypervisor, and guest VMs to each other and to the physical network. The OVS service runs on each AHV node and starts automatically.
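
If you want to see what OVS has built on a host, the standard ovs-vsctl utility can be queried directly on the AHV host. As a quick sketch (run here from the CVM over the internal 192.168.5.1 address used later in this post; the exact output depends on your configuration), the following lists the bridges, bonds, and ports OVS knows about:

nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl show"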

 

This blog is part of a series on Acropolis Hypervisor, and will cover networking with Open vSwitch bridges and bonds. Later parts in the series will talk about load balancing, VLANs, and Acropolis managed networks, so stay tuned!

 

Within OVS, bonded ports aggregate the physical interfaces on the AHV host. By default, a bond named bond0 is created in bridge br0. After the node imaging process, all interfaces are placed within a single bond, which is a requirement for the Foundation imaging process. Note that the default configuration should be modified during initial deployment to remove the 1 gigabit ports from bond0, so that only the 10 gigabit ports remain.
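
Before modifying anything, it can be helpful to confirm which interfaces are currently in the bond. One way to do this from the CVM is the manage_ovs tool used later in this post; as a sketch, the following should show all interfaces grouped under bond0 on a freshly imaged node:

nutanix@CVM$ manage_ovs show_uplinks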

 

The following diagram illustrates the networking configuration of a single host immediately after imaging.

 

 

[Figure: Default network configuration of a single AHV host after imaging, with all interfaces in bond0 on bridge br0]

 

Take a look at the following Nutanix nu.school video for more information on the default OVS configuration, along with the commands for modifying the default config. You'll also find some handy tips on our CLI tools like aCLI and allssh.

 

 

 

 

The critical point is that the Nutanix Controller Virtual Machine (CVM) should have access to the 10g adapters, which ensures that the CVM gets the most bandwidth and the lowest possible latency. Additionally, we may want to physically separate the traffic of certain User VMs from CVM traffic. This separation may sometimes be required by a company security policy, or for VMs performing networking functions like routing, firewalling, or load balancing.

 

Here is the recommended AHV OVS configuration, which creates a new bridge that includes the 1g network adapters.

 

 

[Figure: Recommended AHV OVS configuration, with the 10g interfaces in bond0 on br0 and the 1g interfaces in bond1 on br1]

 

The recommended configuration is to separate the 10g and 1g interfaces into separate bonds to ensure that CVM and user VM traffic always traverse the fastest possible link. Here, the 10g interfaces (eth2 and eth3) are grouped into bond0 and dedicated to the CVM and User VM1. The 1g interfaces are grouped into bond1 and used only by a second link on User VM2. Bond0 and bond1 are added into br0 and br1, respectively.

 

With this configuration, the CVM and user VMs use the 10g interfaces. Bridge br1 is available for VMs that require physical network separation from the CVM and VMs on br0. Devices eth0 and eth1 could alternatively be plugged into a different pair of upstream switches for further separation.

 

Two physical upstream switches are used, and the two interfaces within each bond are split across the two switches for high availability. Within each bond, only one physical interface will be active when using the default active-backup OVS bond mode. See the upcoming load balancing post in this series for more information and alternate configurations.
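
If you want to see which interface is currently active in the default active-backup mode, OVS can report the bond status on the host. A minimal sketch (using the default bond0 name from this post; the output includes the bond mode and the currently active slave interface):

nutanix@CVM$ ssh root@192.168.5.1 "ovs-appctl bond/show bond0"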

 

Perform the following actions for each Nutanix node in the cluster. On each Acropolis host, add bridge br1. The Acropolis hypervisor local to the CVM can be reached with the local 192.168.5.1 interface address.

 

nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl add-br br1"
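
To confirm the bridge was created, you can list the bridges on the host; br0 and br1 should both appear:

nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl list-br"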

 

From the CVM, remove eth0 and eth1 from the default bridge br0 on each host. These interfaces are removed by specifying that only eth2 and eth3 will remain in the bridge. The 10g shortcut lets you include all 10g interfaces without having to specify each interface by name.

 

nutanix@CVM$ manage_ovs --bridge_name br0 --bond_name bond0 --interfaces 10g update_uplinks

 

From the CVM, add the eth0 and eth1 uplinks to br1 using the 1g interface shortcut.

 

nutanix@CVM$ manage_ovs --bridge_name br1 --bond_name bond1 --interfaces 1g update_uplinks
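
At this point it is worth verifying that only the 10g interfaces remain in bond0 and only the 1g interfaces are in bond1. One way to check from the CVM:

nutanix@CVM$ manage_ovs --bridge_name br0 show_uplinks

nutanix@CVM$ manage_ovs --bridge_name br1 show_uplinks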

 

Now that a bridge, br1, exists just for the 1g interfaces, networks can be created for "User VM2" with the following aCLI command. Putting the bridge name in the network name is helpful when viewing networks in the Prism GUI.

 

nutanix@cvm$ acli net.create br1_vlan99 vswitch_name=br1 vlan=99
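
To confirm the network exists, and to attach it to a guest, aCLI can be used as well. A brief sketch; the VM name "UserVM2" below is just a placeholder for whichever guest should use the 1g bridge:

nutanix@cvm$ acli net.list

nutanix@cvm$ acli vm.nic_create UserVM2 network=br1_vlan99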

 

Now we have successfully configured a single Acropolis Hypervisor host to connect the CVM via the 10g interfaces. User VMs can connect via either the 10g or the 1g interfaces. Watch the YouTube video above for tricks on performing these commands on all nodes in the cluster.
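
As a reference for that approach, the allssh and hostssh helpers on the CVM repeat a command on every CVM or every AHV host in the cluster. A hedged sketch only; be aware of the caveats of changing uplinks on a running cluster (covered in the video) and consider rolling through nodes one at a time in production:

nutanix@CVM$ hostssh "ovs-vsctl add-br br1"

nutanix@CVM$ allssh "manage_ovs --bridge_name br1 --bond_name bond1 --interfaces 1g update_uplinks"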

 

Download the Acropolis Hypervisor Best Practice Guide for more detailed information and a handy cheat sheet with all of the CLI commands used here.

 

Up next, we'll take a look at configuring load balancing within our OVS bonds!

 

This post was authored by Jason Burns, Senior Solutions & Performance Engineer at Nutanix

4 Comments
Pathfinder

In the recommended AHV OVS configuration, the diagram shows that only br0 is connected to AHV and br1 is not connected to AHV. Is that right?

 

[Attached screenshot of the recommended configuration diagram]

Doesn't br1 require a connection to the AHV host? If not, how does it work? How will a VM connect to br1 without the involvement of the AHV host?

Vanguard

Hi,

It is correct; just think of br0 as VMware vSwitch0 and br1 as vSwitch1. The CVM is connected to the external network for management and CVM-to-CVM connectivity using br0. By default in AHV all the NICs (1g and 10g) are part of br0. It is recommended to have homogeneous adapters in a virtual switch, so you keep the 10g NICs in br0 and split the 1g NICs (or any additional 10g NICs, if required) out into br1.

Hope that clears up your concerns.

F>P

 

Pathfinder

Hi Farhan,

 

Thank you for clearing up my query.

 

I have 1 more query.

 

In active-backup load balancing, how is a link failure detected? How is failure detected so that traffic fails over to the backup link? How can we tell whether the Ethernet card has failed or the physical switch has failed?

 

[Attached screenshot of the active-backup bond diagram]

 

Thanks in advance.

 

Regards,

Krishna

Vanguard

Hi,

To monitor the links it uses the physical link status, with a few additional ARP checks, to identify whether an interface is active or not. Please note that active-backup provides only redundancy, not load balancing. It is recommended to use balance-slb, as it gives load-balanced traffic as well as high availability, but again this depends on your network topology.
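
For reference, the bond status (including the currently active interface) can be inspected, and the bond mode changed, with standard OVS commands on each host. A sketch, assuming the bond is still named bond0 as in the post above:

nutanix@CVM$ ssh root@192.168.5.1 "ovs-appctl bond/show bond0"

nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port bond0 bond_mode=balance-slb"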

 

Have a look at the AHV networking best practices guide, and do not hesitate to post back if you need further assistance.

https://www.nutanix.com/go/ahv-networking.html

 

F>P

 

 

 
