Blog

Virtual Networks for Virtual Machines in Acropolis Hypervisor

  • 18 December 2015
This post was authored by Jason Burns, Senior Solutions & Performance Engineer at Nutanix
Today we'll look at VLANs and networks for the VMs running on AHV. We'll focus on managed and unmanaged networks, two different ways of providing VM connectivity. With unmanaged networks, VMs get a direct connection to the VLAN of their choice. With managed networks, AHV can perform IP address management (IPAM) for VMs, handing out IP addresses via configurable DHCP pools.
Before we get started, here's a look at what we've covered so far in the series:
Bridges and Bonds
Load Balancing
VLANs for AHV Host
AHV makes network management for VMs incredibly simple, connecting VMs with just a few clicks. Check out the following YouTube video for a lightboard walkthrough of AHV VM networking concepts, including CLI and Prism examples:
Read on for a description of what's covered in the video, along with a few screen shots.
For user VMs, the networks that a virtual NIC uses can be created and managed in the Prism GUI, Acropolis CLI (aCLI), or using REST. Each virtual network that Acropolis creates is bound to a single VLAN. A virtual NIC created and assigned to a VM is associated with a single network and hence a single VLAN. Multiple virtual NICs (each with a single VLAN or network) can be provisioned for a user VM.
Virtual networks can be viewed and created under the VM page by selecting Network Config:


Under Create Network, friendly names and VLANs can be assigned such as the following unmanaged network in VLAN 27 named Production.
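The same unmanaged network can also be created from aCLI. As a minimal sketch, reusing the name and VLAN from the Prism example above:

nutanix@CVM$ acli net.create Production vlan=27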

You can see individual VM NIC and network details under the Table view on the VM page by selecting the desired VM and choosing Update:


SSH access to the CVM also allows network configuration via aCLI, as follows. Try pressing the Tab key in aCLI after typing net. to see a complete list of context-sensitive options.
nutanix@CVM$ acli net.list
Network name  Network UUID                          Type  Identifier
Production    ea8468ec-c1ca-4220-bc51-714483c6a266  VLAN  27
vlan.0        a1850d8a-a4e0-4dc9-b247-1849ec97b1ba  VLAN  0

nutanix@CVM$ acli net.list_vms vlan.0
VM UUID                               VM name  MAC address
7956152a-ce08-468f-89a7-e377040d5310  VM1      52:54:00:db:2d:11
47c3a7a2-a7be-43e4-8ebf-c52c3b26c738  VM2      52:54:00:be:ad:bc
501188a6-faa7-4be0-9735-0e38a419a115  VM3      52:54:00:0c:15:35
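A virtual NIC can likewise be attached to one of these networks from aCLI. As a hedged sketch using vm.nic_create, with VM1 taken from the listing above and Production from the earlier example:

nutanix@CVM$ acli vm.nic_create VM1 network=Production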
In addition to simple network creation and VLAN management, Acropolis Hypervisor also supports IP address management (IPAM). IPAM enables AHV to automatically assign IP addresses to virtual machines using DHCP. Each virtual network and associated VLAN can be configured with a specific IP subnet, associated domain settings, and IP address pools available for assignment. Acropolis uses VXLAN and OpenFlow rules in OVS to intercept outbound DHCP requests from user VMs so that the configured IP address pools and settings are provided to VMs.

An IP address is assigned from the pool of addresses when a managed VM NIC is created; the address is released back to the pool when the VM NIC or VM is deleted. Be sure to work with your network team to reserve a range of addresses for VMs before enabling the IPAM feature to avoid address overlap.
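As a sketch of the aCLI side of this, with a placeholder network name, subnet, and pool range: passing ip_config to net.create marks the network as managed, and net.add_dhcp_pool defines the assignable address range.

nutanix@CVM$ acli net.create Production-Managed vlan=27 ip_config=10.10.27.1/24
nutanix@CVM$ acli net.add_dhcp_pool Production-Managed start=10.10.27.100 end=10.10.27.200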
Administrators can use Acropolis with IPAM to deliver a complete virtualization deployment, including network management, from the Prism interface. This radically simplifies the traditionally complex network management associated with provisioning virtual machines.
This wraps up my four-part Acropolis networking series. Hopefully the information presented here will help you design and implement a full-featured virtual environment, with the ability to configure both the physical and virtual networks to suit your needs. For more information remember to check out the Acropolis Hypervisor Best Practices Guide and follow the nu.school YouTube channel.

8 replies

Hi there!

I'd like to know how I can enable IP Address Management (IPAM) when the VLAN ID is already created.

Thank you

Hello, I would also like to know how to enable IP Address Management when the VLAN ID is already created.

More than that, via the Prism interface it is only possible to create VLANs on br0 and to enable IPAM there.

I would like to do this on a second bridge (br1) we have enabled for 1Gb Network Interfaces.

Thank you for your support.

Regards, Kai


Argenis, today it’s not possible to enable IPAM on a network that has already been created. You can create a NEW network that uses the same VLAN ID and enable IPAM while creating the new network if desired.
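For example, if the existing unmanaged network sits on VLAN 27, a sketch of that workaround might look like this (the network name, gateway, and subnet are placeholders, and VM NICs would then need to be re-created on the new network):

nutanix@CVM$ acli net.create Production-IPAM vlan=27 ip_config=10.10.27.1/24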


@kai.urbschat-66520 To create networks for bridges other than br0, you’ll need to use acli net.create. For example:

nutanix@CVM$ acli
<acropolis> net.create <network-name> vlan=100 vswitch_name=br1

For example:

<acropolis> net.create br1.Vlan100 vlan=100 vswitch_name=br1

It’s helpful to include the bridge name in the network name so you can tell in the GUI which bridge the network belongs to.

@bbbburns Thank you for the reply. 

But how do I enable IPAM for VLANs on bridges other than br0?

I could not find any acli command for this.

So I conclude that it is not possible to enable IPAM for VLANs that are not on br0.

Hi

Try this:

<acropolis> net.create mynet vlan=0 ip_config=192.168.5.254/22 vswitch_name=prod

https://portal.nutanix.com/page/documents/details/?targetId=Command-Ref-AOS-v510:man-acli-c.html

If you have two AHV clusters with the same VLAN/subnet/VM network configured on each, how would you recommend configuring the IP address pool on each of the managed networks on these two clusters?

Would AOS/AHV manage the situation where the same pool of addresses was allocated on each cluster/subnet? Or would it require discrete address ranges to be allocated to each?

Assuming each cluster/network had the same IP pool range configured, I imagine each cluster could allocate the same IP to a VM on its respective cluster, so there is the potential for IP address duplication.

So instead, in a similar scenario where we assign different IP address pools on each cluster, this sounds perfect until we look at a situation where we want to migrate a VM (with a pool-assigned address) from cluster A to cluster B. Imagine this is a DR scenario and we are making a Protection Domain active on cluster B, which results in the creation of the VM on cluster B with its original configuration. After migration, the VM still has the IP address it was allocated from the pool configured on cluster A. There is less chance of a duplicate IP address with other VMs on cluster B in this scenario, but if the original VM was deleted on cluster A, its original IP address could be reallocated to a new VM on that cluster, and once again there is the potential for an IP address clash.

So I don’t see any perfect scenario where this is being fully managed. It appears that manual management of IP addresses is still required in any event where a VM is migrated. Does Nutanix provide any ability for clusters/networks to share a cross-cluster network configuration? If not, maybe something for a future enhancement?

 


“in a similar scenario where we assign different IP address pools on each cluster, this sounds perfect until we look at a situation where we want to migrate a VM (with a pool-assigned address) from cluster A to cluster B.”

“Does Nutanix provide any ability for clusters/networks to share a cross-cluster network configuration? If not - maybe something for a future enhancement?”

 

Paul,

Let me address each of these points. You’re definitely right in your understanding.

First, in the scenario with two clusters acting as DR backup targets, AHV IPAM is not a good solution if you want to keep the same IP addresses after the failover. You should use an external IPAM method for this use case until AHV can support multi-cluster IPAM.

Second, no, Nutanix AHV does not support multi-cluster IPAM today. This is definitely something we have considered, but don't have yet.