What’s New in AHV Networking - Part 2

  • 5 September 2017
This post was authored by Kate Guillemette, Program Manager, and Jason Burns, Solutions Architect, at Nutanix.

This blog series highlights the AHV features announced at Nutanix .NEXT that can help you build a one-click network, emphasizing visualization, automation, and security.

Part 1: AHV Network Visualization
Part 2: AHV Network Automation and Integration
Part 3: AHV Network Microsegmentation
Part 4: AHV Network Function Chains

AHV Network Automation and Integration

In our previous post, we looked at how network visualization gives us insight into our application connections and traffic flows. Before we can start looking at traffic flows and connections, though, we have to connect our application to the network. We also have to make sure that the application stays connected in a rapidly changing virtual environment.

Let’s consider VLAN connectivity first. With virtualization, the server team has to configure the required networks and VLANs on the hypervisor and pass this exact same set of requirements to the physical network team for provisioning. Each new network request triggers another back-and-forth exchange between the two groups, and we can’t connect our application until that exchange is completed.

Worse, we can’t always predict where a given VM will run. Sometimes the network team may trunk all VLANs to all switch ports because it’s easier than responding to each individual request. Network best practices dictate that we trunk only the required VLANs to a physical switch port to limit our broadcast domains and increase security, but when best practices get in the way of deployment, they’re often ignored.

There is a better way.

Nutanix VM life cycle event notification passes a VM’s details directly to the network controller, so the controller can take appropriate action. In the scenario diagrammed below, a mailbox VM boots on AHV node 1, requiring access to our mail network with VLAN 100. The network controller has subscribed to VM events, so AHV uses a standard webhooks API to notify the network controller of this VM event. Because this process involves a standard API, there is no specific vendor requirement or lock-in. Any network control vendor can subscribe to VM events in AHV.
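To make the subscription step concrete, here is a minimal sketch of registering a webhook through the Prism v3 REST API. The endpoint path and event names follow the v3 webhooks resource, but treat the field names, event strings, and all addresses below as illustrative assumptions; check the API explorer for your AOS version before relying on them.

```python
import json

# Hypothetical Prism Central address -- replace with your own.
PRISM = "https://prism-central.example.com:9440"

def build_webhook_spec(name, callback_url, events):
    """Build the request body for POST /api/nutanix/v3/webhooks.

    The network controller at callback_url will receive an HTTP POST
    for each subscribed VM life cycle event.
    """
    return {
        "spec": {
            "name": name,
            "resources": {
                "post_url": callback_url,
                "events_filter_list": events,
            },
        },
        "metadata": {"kind": "webhook"},
        "api_version": "3.0",
    }

body = build_webhook_spec(
    "vlan-sync",
    "http://network-controller.example.com/events",  # hypothetical receiver
    ["VM.ON", "VM.OFF", "VM.MIGRATE"],               # illustrative event names
)
print(json.dumps(body, indent=2))
# To submit: requests.post(f"{PRISM}/api/nutanix/v3/webhooks", json=body,
#                          auth=(user, password), verify=True)
```

Because this is plain JSON over HTTP, any vendor's controller can register the same way; there is nothing AHV-specific in the receiver.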

Even better, when the VM is powered off or migrated to another node in the cluster, we can remove VLAN 100 from the switch port. This capability means no more manual VLAN provisioning on switch ports, and no more overprovisioning trunked VLANs to handle VM migration. Add only the VLANs you need, and remove them the moment they’re no longer required. With this feature, following the best practice actually means less work for you.
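The controller-side bookkeeping described above can be sketched as follows: track which VLANs each host-facing switch port needs based only on the VMs currently running on that host, so the port trunks exactly the required set and nothing more. The event names, port identifiers, and payload fields here are assumptions for illustration, not the controller's real API.

```python
from collections import defaultdict

class VlanTracker:
    """Track required VLANs per switch port from VM life cycle events."""

    def __init__(self):
        # port -> {vm_uuid: vlan} for VMs currently running behind that port
        self.vms_on_port = defaultdict(dict)

    def handle_event(self, event, vm_uuid, port, vlan):
        if event == "VM.ON":
            self.vms_on_port[port][vm_uuid] = vlan
        elif event in ("VM.OFF", "VM.MIGRATE"):
            # Powered off, or migrated away from this host: drop it here.
            self.vms_on_port[port].pop(vm_uuid, None)

    def required_vlans(self, port):
        """The VLANs this port needs trunked right now -- nothing more."""
        return sorted(set(self.vms_on_port[port].values()))

t = VlanTracker()
t.handle_event("VM.ON", "mail-vm-1", "eth1/1", 100)  # mailbox VM boots on node 1
print(t.required_vlans("eth1/1"))  # [100] -> controller adds VLAN 100
t.handle_event("VM.MIGRATE", "mail-vm-1", "eth1/1", 100)
print(t.required_vlans("eth1/1"))  # []    -> controller removes VLAN 100
```

A real controller would push these deltas to the switch; the point is that the trunked set shrinks and grows automatically with the VMs.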

Network controllers aren’t the only consumers of VM events. Firewalls and load balancers also benefit from knowledge of VM events. Consider a farm of mail server VMs in front of a load balancer and firewall. When a new mail server powers on, we have to find some way to add it to the load balancing pool and update the firewall rules with its address. With VM life cycle event notification, the load balancer can add the VM to the pool when it’s powered on and remove it when it’s powered off. A firewall subscribing to these VM events can update rules to match the exact address of the new mail VM.
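A load-balancer consumer of the same events can be sketched the same way. The pool name, event strings, and addresses below are illustrative assumptions; a real integration would call the load balancer's own API rather than mutate a local set.

```python
class MailPool:
    """Keep a load-balancing pool in sync with VM power events."""

    def __init__(self):
        self.members = set()  # IP addresses of live mail servers

    def handle_event(self, event, vm_ip):
        if event == "VM.ON":
            self.members.add(vm_ip)      # new mail server joins the pool
        elif event == "VM.OFF":
            self.members.discard(vm_ip)  # removed the moment it powers off

pool = MailPool()
pool.handle_event("VM.ON", "10.0.100.21")
pool.handle_event("VM.ON", "10.0.100.22")
pool.handle_event("VM.OFF", "10.0.100.21")
print(sorted(pool.members))  # ['10.0.100.22']
```

A firewall subscriber follows the identical pattern, updating rules instead of pool membership.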

These firewalls and load balancers can be either physical or virtual devices. If they’re virtual devices, we can provision them automatically on Nutanix using a Calm blueprint, which we’ll cover in the last part of our series.

In our next post, we'll explore how to create application policies to allow traffic flows. This flexible policy model enables us to implement microsegmentation, which secures the network side of your application.

© 2017 Nutanix, Inc. All rights reserved. Nutanix, the Enterprise Cloud Platform, and the Nutanix logo are registered trademarks or trademarks of Nutanix, Inc. in the United States and other countries.
