
VMware NSX on Nutanix: Build a Software-Defined Datacenter

by Community Manager, 02-23-2016 10:07 AM (edited 02-23-2016 10:44 AM)

VMware NSX for vSphere and the Nutanix Xtreme Computing Platform (XCP) combine to make the software-defined datacenter a reality. XCP lets administrators build an invisible virtualization infrastructure free from the limits of traditional compute, storage, and storage networking architectures. The VMware NSX and Nutanix XCP solution ensures that VMs always have access to fast local storage and compute, as well as consistent network addressing and security, all without the burden of physical infrastructure constraints.

 

Nutanix has tested two crucial NSX deployment scenarios to validate that VMware NSX for vSphere operates seamlessly in a Nutanix cluster. The first scenario has the Nutanix Controller VM (CVM) connected to a traditional VLAN network, with user VMs inside NSX virtual networks. In the second scenario, both the CVM and user VMs are connected to NSX virtual networks. Connecting the CVM to an NSX virtual network increases configuration complexity, but it also provides access to features such as isolation and microsegmentation for the Nutanix cluster.

 

Use Cases and Benefits

Before we dive into the deployment scenarios, let's take a look at a commonly used example from the VMware NSX design guide and examine a few of the benefits of software-defined networking. Here we see three virtual machine tiers: web, application, and database. Traffic from these VMs is carried in three separate VXLAN-based virtual networks (VNI 5001 through 5003).

 

[Figure nsx1.png: Web, application, and database tiers in VXLAN virtual networks VNI 5001-5003]

 

Each isolated virtual network is abstracted from the physical network thanks to VXLAN encapsulation. The distributed logical router (DLR) on each hypervisor can connect disparate layer-3 virtual networks without hairpinning traffic to a physical router; routing happens right at the hypervisor. Further, the distributed firewall (DFW) can apply security policy at the virtual NIC level of each VM, and that policy follows the VM regardless of the location of physical firewalls or layer-3 boundaries.

 

[Figure nsx2.png: Distributed logical router and distributed firewall running in each hypervisor]

 

The DFW and DLR operate at the hypervisor level and exist in every hypervisor within the compute cluster. Because these components are distributed, routing and firewall actions happen close to the VMs without depending on the underlying network infrastructure. The physical network has only two requirements: provide connectivity between hypervisors and allow jumbo frames to accommodate VXLAN encapsulation overhead.
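
A quick way to confirm that the physical network really does pass jumbo frames end to end is a do-not-fragment ping at the VXLAN frame size between VTEPs. The check below is only a sketch: it assumes the hosts use the dedicated vxlan TCP/IP stack and that 10.20.10.12 is the VTEP address of a remote host, so substitute your own values.

root@esxi# vmkping ++netstack=vxlan -d -s 1572 10.20.10.12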

 

Separation from the physical network means that VMs can be addressed and managed without impacting the physical network. Isolated network enclaves can be created within virtual networks and quickly duplicated as required. One example is building a complete new environment for each developer, created on demand. Another is restoring a snapshot of the production environment into a fully featured backup virtual network without impacting production or changing VM IP addresses.

 

[Figure nsx3.png: Isolated virtual network enclaves created and duplicated on demand]

 

For further information on each of these use cases, check out the full solution note.

 

Scenario 1 - NSX for User Virtual Machines

The recommended configuration for NSX on Nutanix is to enable NSX for user VMs, such as the web, application, and database VMs in our three-tier application example. The Nutanix CVM is connected to a standard VLAN-backed port group to keep the configuration as simple as possible.

 

Storage traffic is transmitted between nodes in VLAN 101, shown here. User VM traffic can take advantage of the NSX benefits highlighted above. In our example, we ensure that the distributed firewall allows traffic between the CVMs and the ESXi hosts, either by adding explicit rules or by adding a default allow policy to the rule set.
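
As an illustration only, an explicit rule near the top of the DFW rule set might look like the following. The object names here are hypothetical IP sets you would create for your own environment; the key point is simply that CVM-to-host storage and management traffic must never fall through to a block rule.

Name: Allow-Nutanix-Infra | Source: Nutanix-CVMs (IP set) | Destination: ESXi-Hosts (IP set) | Service: Any | Action: Allow (plus a matching rule for the reverse direction)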

 

[Figure nsx4.png: Scenario 1 - CVM on VLAN 101, user VMs in NSX virtual networks]

 

Scenario 2 - NSX for the Nutanix CVM and User VMs

The next example shows the alternate scenario, where both the Nutanix CVM and the user VMs are connected to an NSX virtual network. An additional storage VMkernel (vmk) adapter in virtual network 5000 ensures that the L2 and L3 connectivity requirements between the CVM and the ESXi hosts in the cluster are met. The management and VXLAN vmk addresses in the figure illustrate two nodes in separate racks with an L3 boundary between the racks. The VXLAN-encapsulated storage traffic in VNI 5000 spans this L3 boundary.
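
To verify that the new storage VMkernel adapter really has L2/L3 reachability to the CVM across VNI 5000, a simple interface-specific ping from the ESXi host works well. This is a sketch; vmk2 and 10.30.30.12 are placeholders for your storage vmk and the CVM's storage-network address.

root@esxi# vmkping -I vmk2 10.30.30.12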

 

[Figure nsx5.png: Scenario 2 - CVM and user VMs in NSX virtual networks, storage vmk in VNI 5000]

 

With a little added complexity, this configuration allows the Nutanix CVM to take advantage of features such as microsegmentation and isolation. You can isolate the CVM and storage network in a single virtual network that spans physical layer-3 boundaries while still keeping the CVM and storage vmk adapters in the same layer-3 network.

 

It is critical to note that the leaf-spine topology recommendations for the physical network still hold true between Nutanix nodes. Addressing between ESXi hosts in separate racks may cross a layer-3 boundary, but the network must still meet the requirements for high throughput and low latency between Nutanix nodes.

 

Conclusion

Running VMware NSX on Nutanix enables administrators to architect powerful and agile solutions free from traditional physical storage and networking constraints. The Xtreme Computing Platform (XCP) delivers the invisible compute and storage infrastructure, while NSX provides network functions abstracted from the underlying physical network. Nutanix has verified that these tools integrate seamlessly to deliver the advantages associated with a software-defined datacenter: logical separation from the storage infrastructure, isolated virtual networks that can span physical networks, security policies that follow ever-moving virtual machines, and true workflow automation.

 

Deploying Nutanix with VMware NSX lets administrators focus on building scalable applications, confident that, wherever a VM resides, it will have access to essential compute, storage, and network resources. A stable and robust physical infrastructure provides the underlay for a malleable and responsive virtual overlay ready to meet challenges on demand.

 

For more information on VMware NSX use cases and testing with Nutanix, please check out the full solution note here and continue the conversation on the community forums.

 

This post was authored by Jason Burns, Senior Solutions & Performance Engineer at Nutanix 

We're happy to announce the release of a much-requested solution note describing VMware NSX for vSphere on Nutanix. Talking about the benefits of virtualized infrastructure using Nutanix leads naturally to discussion of network virtualization. In this solution note we take a look at common customer use cases and advantages that VMware NSX software-defined networking brings to the table. We also test and validate two virtual network deployment scenarios with NSX and Nutanix: one places the CVM in a traditional VLAN-based network, and the other locates the CVM in an NSX virtual network.

 

For complete configuration details, recommendations, and NSX use cases, download the solution note. Find out how to make the software-defined datacenter a reality with Nutanix and VMware NSX for vSphere.

 

Stay tuned to the NEXT community blog for an upcoming post that zooms in on the most important parts of our NSX solution note, and continue the conversation in the community forums.

 

This post was authored by Jason Burns, Senior Solutions & Performance Engineer at Nutanix

Maximum Performance from Acropolis Hypervisor and Open vSwitch

by Community Manager, 11-25-2015 03:06 PM (edited 12-21-2015 07:01 AM)

Nutanix appliances leverage the data network as the backplane for storage, and the following is aimed at helping you determine the best way to connect the Acropolis Hypervisor to your datacenter network. Let's start with some background. The Acropolis Hypervisor (AHV) uses the open source Open vSwitch (OVS) to connect the Controller VM, the hypervisor, and guest VMs to each other and to the physical network. The OVS service runs on each AHV node and starts automatically.

 

This blog is part of a series on the Acropolis Hypervisor and covers networking with Open vSwitch bridges and bonds. Later parts in the series will cover load balancing, VLANs, and Acropolis managed networks, so stay tuned!

 

Within OVS, bonded ports aggregate the physical interfaces on the AHV host. By default, a bond named bond0 is created in bridge br0, and the imaging process places all interfaces within this single bond, which Foundation requires. Note that the default configuration should be modified during initial deployment to remove the 1 gigabit ports from bond0, leaving only the 10 gigabit ports.
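
If you'd like to see this default layout for yourself before changing anything, you can dump the OVS configuration from the CVM using the same local hypervisor address used later in this post. This is a read-only check, so it is safe to run at any time.

nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl show"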

 

The following diagram illustrates the networking configuration of a single host immediately after imaging.

 

 

[Figure Picture1.png: Default OVS configuration of a single AHV host after imaging]

 

Take a look at the following Nutanix nu.school video for more information on the default OVS configuration, along with the commands for modifying the default config. You'll also find some handy tips on our CLI tools like aCLI and allssh.

 

 

 

 

The critical point is that the Nutanix Controller VM should have access to the 10gb adapters. This ensures that the CVM gets the most bandwidth and the lowest possible latency. Additionally, we may want to physically separate the traffic of certain user VMs. This separation may be required by company security policy, or for VMs performing network functions such as routing, firewalling, or load balancing.

 

Here is the recommended AHV OVS configuration, which creates a new bridge including the 1gb network adapters.

 

 

[Figure Picture2.png: Recommended OVS configuration with a separate bridge, br1, for the 1g adapters]

 

The recommended configuration splits the 10g and 1g interfaces into separate bonds to ensure that CVM and user VM traffic always traverses the fastest possible link. Here, the 10g interfaces (eth2 and eth3) are grouped into bond0 and dedicated to the CVM and User VM1. The 1g interfaces are grouped into bond1 and used only by a second link on User VM2. Bond0 and bond1 are added to br0 and br1, respectively.

 

With this configuration, the CVM and user VMs use the 10g interfaces. Bridge br1 is available for VMs that require physical network separation from the CVM and VMs on br0. Devices eth0 and eth1 could alternatively be plugged into a different pair of upstream switches for further separation.

 

Two physical upstream switches are used, and the two interfaces within each bond are plugged into separate physical switches for high availability. Within each bond, only one physical interface is active when using the default active-backup OVS bond mode. See the Load Balancing section for more information and alternate configurations.
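
You can confirm the bond mode and see which interface is currently active straight from OVS. A quick read-only check from the CVM, again using the local 192.168.5.1 hypervisor address introduced below:

nutanix@CVM$ ssh root@192.168.5.1 "ovs-appctl bond/show bond0"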

 

Perform the following actions for each Nutanix node in the cluster. On each Acropolis host, add bridge br1. The Acropolis hypervisor local to each CVM can be reached at the internal 192.168.5.1 interface address.

 

nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl add-br br1"

 

From each CVM, remove eth0 and eth1 from the default bridge br0. These interfaces are removed by specifying that only eth2 and eth3 will remain in the bridge. The 10g shortcut lets you include all 10g interfaces without explicitly specifying them by name.

 

nutanix@CVM$ manage_ovs --bridge_name br0 --bond_name bond0 --interfaces 10g update_uplinks
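
After the update, it's worth confirming that bond0 now contains only the 10g interfaces. Assuming your AOS release includes the show_uplinks action of manage_ovs, a quick check is:

nutanix@CVM$ manage_ovs show_uplinks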

 

Next, add the eth0 and eth1 uplinks to br1 from the CVM using the 1g interface shortcut.

 

nutanix@CVM$ manage_ovs --bridge_name br1 --bond_name bond1 --interfaces 1g update_uplinks

 

Now that bridge br1 exists just for the 1gb interfaces, networks can be created for "User VM2" with the following aCLI command. Putting the bridge name in the network name is helpful when viewing networks in the Prism GUI.

 

nutanix@cvm$ acli net.create br1_vlan99 vswitch_name=br1 vlan=99
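
To place "User VM2" on this new network, attach a NIC to it with aCLI. The VM name below is hypothetical; substitute your own.

nutanix@cvm$ acli vm.nic_create UserVM2 network=br1_vlan99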

 

We have now configured a single Acropolis Hypervisor host so that the CVM connects via the 10gb interfaces, while user VMs can connect via either the 10gb or 1gb interfaces. Watch the YouTube video above for tips on running these commands on all nodes in the cluster.
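
If you prefer the command line to the video, the allssh helper mentioned earlier runs a command on every CVM in the cluster, so the same bridge and uplink changes can be applied in one pass. A sketch, assuming the bridge and bond names used above; double-check them against your environment before running cluster-wide.

nutanix@CVM$ allssh "ssh root@192.168.5.1 ovs-vsctl add-br br1"
nutanix@CVM$ allssh "manage_ovs --bridge_name br1 --bond_name bond1 --interfaces 1g update_uplinks"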

 

Download the Acropolis Hypervisor Best Practice Guide for more detailed information and a handy cheat sheet with all of the CLI commands used here.

 

Up next, we'll take a look at configuring load balancing within our OVS bonds!

 

This post was authored by Jason Burns, Senior Solutions & Performance Engineer at Nutanix
