
Virtual Networks for Virtual Machines in Acropolis Hypervisor

by Community Manager | 12-18-2015 09:04 AM (edited 12-21-2015 06:59 AM)

Today we'll look at VLANs and networks for the VMs running on AHV. We'll focus on managed and unmanaged networks, two different ways of providing VM connectivity. With unmanaged networks, VMs get a direct connection to the VLAN of their choice. With managed networks, AHV can perform IP address management for VMs, handing out IP addresses from configurable DHCP pools.

 

Before we get started, here's a look at what we've covered so far in the series: part one tackled bridges and bonds in Open vSwitch, part two addressed network load balancing, and part three covered VLAN placement for the AHV host and CVM.

 

AHV makes network management for VMs incredibly simple, connecting VMs with just a few clicks. Check out the following YouTube video for a lightboard walkthrough of AHV VM networking concepts, including CLI and Prism examples:

 

 

Read on for a description of what's covered in the video, along with a few screenshots.

 

For user VMs, the networks that a virtual NIC uses can be created and managed in the Prism GUI, the Acropolis CLI (aCLI), or the REST API. Each virtual network that Acropolis creates is bound to a single VLAN. A virtual NIC created and assigned to a VM is associated with a single network, and hence a single VLAN. Multiple virtual NICs (each with its own network and VLAN) can be provisioned for a user VM.

 

Virtual networks can be viewed and created under the VM page by selecting Network Config:

 

blog41.png

 

Under Create Network, friendly names and VLANs can be assigned, such as the following unmanaged network named Production on VLAN 27.

 

blog42.png
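
The same unmanaged network can also be created from the aCLI. A minimal sketch, assuming the Production name and VLAN 27 shown above (parameter syntax may differ slightly between AOS releases):

# Create an unmanaged network named Production bound to VLAN 27
nutanix@CVM$ acli net.create Production vlan=27

# Confirm the new network appears in the list
nutanix@CVM$ acli net.list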

 

You can see individual VM NIC and network details under the Table view on the VM page by selecting the desired VM and choosing Update:

 

blog43.png
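
The same NIC and network details can also be pulled from the aCLI. A quick sketch, assuming a VM named VM1 as in the aCLI listing further down:

# Dump the full VM configuration, including its NICs and the networks they attach to
nutanix@CVM$ acli vm.get VM1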

 

Networks can also be configured from the aCLI over an SSH session to the CVM, as shown below. Press <Tab> after typing net. in the aCLI for a complete list of context-sensitive options.

 

 nutanix@CVM$ acli

<acropolis> net.list

Network name      Network UUID                              Type  Identifier
Production        ea8468ec-c1ca-4220-bc51-714483c6a266      VLAN  27
vlan.0            a1850d8a-a4e0-4dc9-b247-1849ec97b1ba      VLAN  0

<acropolis> net.list_vms vlan.0

VM UUID                                   VM name     MAC address
7956152a-ce08-468f-89a7-e377040d5310      VM1         52:54:00:db:2d:11
47c3a7a2-a7be-43e4-8ebf-c52c3b26c738      VM2         52:54:00:be:ad:bc
501188a6-faa7-4be0-9735-0e38a419a115      VM3         52:54:00:0c:15:35

 

In addition to simple network creation and VLAN management, Acropolis Hypervisor also supports IP address management (IPAM). IPAM enables AHV to automatically assign IP addresses to virtual machines using DHCP. Each virtual network and associated VLAN can be configured with a specific IP subnet, associated domain settings, and IP address pools available for assignment. Acropolis uses VXLAN and OpenFlow rules in OVS to intercept outbound DHCP requests from user VMs so that the configured IP address pools and settings are provided to VMs.

 

blog44.png
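
For reference, a managed network with an IP pool can also be defined from the aCLI. The following is only a sketch; the Development name, subnet, and pool range are hypothetical, and parameter names may vary by AOS release:

# Create a managed network on VLAN 100; ip_config sets the gateway and prefix for the subnet
nutanix@CVM$ acli net.create Development vlan=100 ip_config=10.10.100.1/24

# Add a pool of addresses that AHV's DHCP service can hand out to VM NICs on this network
nutanix@CVM$ acli net.add_dhcp_pool Development start=10.10.100.50 end=10.10.100.200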

 

An IP address is assigned from the pool of addresses when a managed VM NIC is created; the address is released back to the pool when the VM NIC or VM is deleted. Be sure to work with your network team to reserve a range of addresses for VMs before enabling the IPAM feature to avoid address overlap.
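
As an illustration, attaching a NIC to a managed network from the aCLI is enough for the VM to receive an address from the pool. A sketch using the hypothetical names from above:

# Create a NIC on VM1 attached to the managed Development network;
# an IP address is allocated from the DHCP pool at creation time
nutanix@CVM$ acli vm.nic_create VM1 network=Development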

 

Administrators can use Acropolis with IPAM to deliver a complete virtualization deployment, including network management, from the Prism interface. This radically simplifies the traditionally complex network management associated with provisioning virtual machines.

 

This wraps up my four-part Acropolis networking series. Hopefully the information presented here will help you design and implement a full-featured virtual environment, with the ability to configure both the physical and virtual networks to suit your needs. For more information remember to check out the Acropolis Hypervisor Best Practices Guide and follow the nu.school YouTube channel. 

 

This post was authored by Jason Burns, Senior Solutions & Performance Engineer at Nutanix

Virtual LANs for your Acropolis Hypervisor Virtual Machines

by Community Manager | 12-14-2015 08:14 AM (edited 12-21-2015 07:00 AM)

 

In the first article of our four-part Acropolis networking series we tackled bridges and bonds. The second part of the series addressed load balancing. Today we'll look at placing the Acropolis Hypervisor and Controller Virtual Machine (CVM) in the correct VLAN for traffic segmentation.

 

Storage and management traffic are typically separated from user virtual machine traffic, and Nutanix AHV is no exception. VLANs provide a convenient way to segment the different traffic types, and even to segment between types of user VMs. With virtualization, many VLANs are often trunked to the physical servers to account for the different networks used by virtual machines.

 

On Nutanix, the recommended VLAN configuration is to place the CVM and Acropolis Hypervisor host in the default untagged (or native) VLAN, as shown below.

 

 

Picture1.png

 

 

Note that in this default configuration VLANs 101 and 102 are still trunked to the AHV host for user VMs. Configuration of VM networks will be covered in our next blog post and video using both Prism and aCLI!

 

Traffic destined to AHV and the CVM will not contain a VLAN tag. If the default configuration of sending untagged traffic to the AHV and CVM is not desired, or is disallowed by security policy, VLAN tags can be added to the host and the CVM with the following configuration.

 

 

Picture2.png

 

 

Configure VLAN tags on br0 on each AHV host in the cluster, repeating the following commands on every host.

 

nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port br0 tag=10"

nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl list port br0"

 

Configure the VLAN tag on every CVM in the Nutanix cluster, one host at a time. I prefer to do this manually per host rather than using the "allssh" command, just in case my network admin hasn't actually trunked the switch ports properly!

 

nutanix@CVM$ change_cvm_vlan 10
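
After tagging each CVM, confirm that the node is still reachable and that cluster services are healthy before moving on to the next host. A minimal check from another CVM (10.0.10.31 is a hypothetical CVM address on the new VLAN):

# Verify the reconfigured CVM responds on the new VLAN
nutanix@CVM$ ping -c 3 10.0.10.31

# Verify all cluster services are still up
nutanix@CVM$ cluster status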

 

In this design, the AHV host and CVM traffic will be tagged with a VLAN ID of 10. Again, user VM traffic will be tagged in the network as configured in Prism or the aCLI.

 

Storage data and management traffic for the CVM will all be carried together in VLAN 10 in the previous example. If network segmentation between storage data and management traffic is required to meet security requirements, see Nutanix KB-2748.

 

Now the CVM and AHV hosts can communicate on their own network, separate from user VM traffic. Make sure to follow up in our next blog post for information on how to bring VLANs to VMs on Nutanix AHV.

 

This post was authored by Jason Burns, Senior Solutions & Performance Engineer at Nutanix

Network Load Balancing with Acropolis Hypervisor

by Community Manager | 12-03-2015 10:12 AM (edited 12-21-2015 07:00 AM)

In the first article of our four-part Acropolis networking series we tackled bridges and bonds, so that we could split traffic among the multiple network interfaces on a physical Nutanix node.

 

blog2.png

 

Now that CVM traffic is routed over the 10Gb interfaces, and user VM traffic can be routed over either the 10Gb or 1Gb adapters, we're ready to address load balancing within the OVS bonds. There are two primary concerns: fault tolerance and throughput.

 

To handle fault tolerance, we ensure that each bond is created with at least two adapters as in the diagram above. Once the bond has two or more adapters we can then move to managing the available throughput provided by the collective interfaces in a single bond. All of the following bond modes provide fault tolerance.

 

For a video walkthrough of the different load balancing modes with the Acropolis Hypervisor and Open vSwitch check out the following nu.school recording. The video shows some extra shortcuts such as "allssh" to speed up deployment of this configuration.

 

 

Within a bond, traffic is distributed between multiple physical interfaces according to the bond mode. The default bond mode is active-backup, where one interface in the bond carries traffic and other interfaces in the bond are used only when the active link fails.

 

View the bond mode and active interface with the following AHV command:

 

nutanix@CVM$ ssh root@192.168.5.1 "ovs-appctl bond/show"

 

In the default configuration of active-backup, output will be similar to the following, where eth2 is the active and eth3 is the backup interface:

 

---- bond0 ----

bond_mode: active-backup

bond-hash-basis: 0

updelay: 0 ms

downdelay: 0 ms

lacp_status: off

 

slave eth2: enabled

      active slave

      may_enable: true

 

slave eth3: enabled

      may_enable: true
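
To run the same check on every node at once, the allssh shortcut mentioned above can wrap the command, since each CVM reaches its local AHV host at 192.168.5.1. A sketch; exact quoting may need adjusting for your shell:

# Show the bond state on every AHV host in the cluster
nutanix@CVM$ allssh 'ssh root@192.168.5.1 "ovs-appctl bond/show"'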

 

Active-Backup

Active-backup bond mode is the simplest, easily allowing connections to multiple upstream switches without any additional switch configuration. The downside is that traffic from all VMs uses only the single active link within the bond; all backup links remain unused. In a system with dual 10 gigabit Ethernet adapters, the maximum throughput of all VMs running on a Nutanix node is limited to 10 Gbps.

 

 

blog3.png

 

 

Active-backup mode is enabled by default, but can be configured with the following AHV command:

 

nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port bond0 bond_mode=active-backup" 

 

Balance-slb

To take advantage of the bandwidth provided by multiple upstream switch links, we recommend configuring the bond mode as balance-slb. The balance-slb bond mode in OVS takes advantage of all links in a bond and uses measured traffic load to rebalance VM traffic from highly used to less used interfaces. When the configurable bond-rebalance-interval expires, OVS uses the measured load for each interface and the load for each source MAC hash to spread traffic evenly among links in the bond.

 

Traffic from source MAC hashes may be moved to a less active link to more evenly balance bond member utilization. Perfectly even balancing is not always possible. Each individual virtual machine NIC uses only a single bond member interface, but traffic from multiple virtual machine NICs (multiple source MAC addresses) is distributed across bond member interfaces according to the hashing algorithm. As a result, it is possible for a Nutanix AHV node with two 10 gigabit interfaces to use up to 20 gigabits of network throughput, while individual VMs have a maximum throughput of 10 gigabits per second.

 

 

blog4.png

 

 

The default rebalance interval is 10 seconds, but we recommend setting this to 60 seconds to avoid excessive movement of source MAC address hashes between upstream switches. We've tested this configuration using two separate upstream switches with the Acropolis hypervisor. No additional configuration (such as link aggregation) is required on the switch side, as long as the upstream switches are interconnected.

 

The balance-slb algorithm is configured for each bond on all AHV nodes in the Nutanix cluster with the following commands:

 

nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port bond0 bond_mode=balance-slb"

 

nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port bond0 other_config:bond-rebalance-interval=60000"

 

Verify the proper bond mode with the following commands:

 

nutanix@CVM$ ssh root@192.168.5.1 "ovs-appctl bond/show bond0"

---- bond0 ----

bond_mode: balance-slb

bond-hash-basis: 0

updelay: 0 ms

downdelay: 0 ms

next rebalance: 59108 ms

lacp_status: off

 

slave eth2: enabled

      may_enable: true

      hash 120: 138065 kB load

      hash 182: 20 kB load

 

slave eth3: enabled

      active slave

      may_enable: true

      hash 27: 0 kB load

      hash 31: 20 kB load

      hash 104: 1802 kB load

      hash 206: 20 kB load

 

 

LACP and Link Aggregation

Because LACP and balance-tcp require upstream switch configuration, and because network connectivity may be disabled if cables from AHV nodes are moved to incorrectly configured switches, Nutanix does not recommend using link aggregation or LACP.

 

However, to take full advantage of the bandwidth provided by multiple links to upstream switches from a single VM, link aggregation in OVS using Link Aggregation Control Protocol (LACP) and balance-tcp is required. Note that appropriate configuration of the upstream switches is also required. With LACP, multiple links to separate physical switches appear as a single Layer-2 link. Traffic can be split between multiple links in an active-active fashion based on a traffic-hashing algorithm.

 

Traffic can be balanced among members in the link without any regard for switch MAC address tables, because the uplinks appear as a single L2 link. We recommend using balance-tcp when LACP is configured, since multiple Layer-4 streams from a single VM could potentially use all available uplink bandwidth in this configuration. With link aggregation, LACP, and balance-tcp, a single user VM with multiple TCP streams could potentially use up to 20 Gbps of bandwidth in an AHV node with two 10Gbps adapters.

 

 

blog5.png

 

Configure LACP and balance-tcp with the following commands. Upstream switch configuration of LACP is required.

 

nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port bond0 lacp=active"

 

nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port bond0 bond_mode=balance-tcp"

 

If upstream LACP negotiation fails, the default configuration is to disable the bond, which would block all traffic. The following command allows fallback to active-backup bond mode in the event of LACP negotiation failure.

 

nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port bond0 other_config:lacp-fallback-ab=true"
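
Once the upstream switch ports are configured, LACP negotiation can be verified from the host. A quick sketch reusing the bond/show command from earlier, plus OVS's lacp/show command:

# lacp_status should report negotiated once the switch side is configured correctly
nutanix@CVM$ ssh root@192.168.5.1 "ovs-appctl bond/show bond0"

# Detailed LACP state for each member interface
nutanix@CVM$ ssh root@192.168.5.1 "ovs-appctl lacp/show bond0"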

 

Finding the right balance

Use your virtualization requirements to choose the bond mode that's right for you! The options below are arranged from least to most complex configuration. For simple and reliable failover with up to 10Gbps of host throughput and minimal switch configuration, choose active-backup. Where more than 10Gbps of throughput is required from the AHV host, use balance-slb. Where more than 10Gbps of throughput is required from a single VM, use LACP with balance-tcp.

 

This post was authored by Jason Burns, Senior Solutions & Performance Engineer at Nutanix
