VMware NSX on Nutanix: Build a Software-Defined Datacenter
VMware NSX for vSphere and the Nutanix Xtreme Computing Platform (XCP) combine to make the software-defined datacenter a reality. XCP lets administrators build an invisible virtualization infrastructure free from the limits of traditional compute, storage, and networking architectures. The VMware NSX and Nutanix XCP solution ensures that VMs always have access to fast local storage and compute, as well as consistent network addressing and security, all without the burden of physical infrastructure constraints.
Nutanix has tested two crucial NSX deployment scenarios to validate that VMware NSX for vSphere operates seamlessly in a Nutanix cluster. The first scenario has the Nutanix Controller VM (CVM) connected to a traditional VLAN network, with user VMs inside NSX virtual networks. In the second scenario, both the CVM and user VMs are connected to NSX virtual networks. Connecting the CVM to an NSX virtual network increases configuration complexity, but it also provides access to features such as isolation and microsegmentation for the Nutanix cluster.
Use Cases and Benefits
Before we dive into the different scenarios, let's take a look at a commonly used example from the VMware NSX design guide and examine a few of the benefits of software-defined networking. Here we see three virtual machine tiers: web, application, and database. Traffic from these VMs is carried in three separate VXLAN-based virtual networks (VNIs 5001-5003).
Each isolated virtual network is abstracted from the physical network thanks to VXLAN encapsulation. The distributed logical router (DLR) on each hypervisor can connect disparate layer-3 virtual networks without hairpinning traffic to a physical router; routing happens right at the hypervisor. Further, the distributed firewall (DFW) can apply security policy at the VM's virtual NIC level, so the policy follows the VM regardless of the location of physical firewalls or layer-3 boundaries.
The DFW and DLR operate at the hypervisor level and exist in every hypervisor within the compute cluster. Because these components are distributed, routing and firewall actions happen close to the VMs without depending on the underlying network infrastructure. The physical network has only two requirements: provide connectivity between hypervisors and allow jumbo frames to accommodate VXLAN overhead.
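The jumbo-frame requirement follows directly from VXLAN's encapsulation overhead. A minimal Python sketch of the arithmetic (header sizes are standard per RFC 7348; the 1600-byte underlay MTU is VMware's commonly cited recommendation, not a figure from this post):

```python
# VXLAN overhead arithmetic: the physical (underlay) MTU must carry the
# guest's full Ethernet frame plus the encapsulation headers.
INNER_MTU = 1500        # guest VM's standard MTU (inner IP packet size)
INNER_ETHERNET = 14     # inner Ethernet header, carried inside the tunnel
VXLAN_HEADER = 8        # VXLAN header holding the 24-bit VNI
OUTER_UDP = 8           # outer UDP header
OUTER_IP = 20           # outer IPv4 header (no options)

def required_underlay_mtu(inner_mtu: int = INNER_MTU) -> int:
    """Minimum MTU the physical network must support for VXLAN traffic."""
    return inner_mtu + INNER_ETHERNET + VXLAN_HEADER + OUTER_UDP + OUTER_IP

print(required_underlay_mtu())  # 1550
```

With a standard 1500-byte guest MTU, the underlay needs at least 1550 bytes; rounding up to a 1600-byte MTU leaves headroom, which is why jumbo frames are required on the physical switches.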
Separation from the physical network means that VMs can be addressed and managed without impacting the physical network. Administrators can create isolated network enclaves within virtual networks and duplicate them quickly as required. One example is building a complete new environment on demand for each developer. Another is restoring a snapshot of the production environment into a full-featured backup virtual network without impacting production or changing VM IP addresses.
For further information on each of these use cases check out the full solution note (INSERT LP LINK HERE).
Scenario 1 - NSX for User Virtual Machines
The recommended configuration for NSX on Nutanix is to enable NSX for user VMs, such as the web, application, and database tiers in our three-tiered application example. The Nutanix CVM is connected to a regular VLAN-backed port group to keep configuration as simple as possible.
Storage traffic travels between nodes in VLAN 101, shown here. User VM traffic can take advantage of the NSX benefits highlighted above. In our example, we ensure that the distributed firewall allows traffic between CVMs and ESXi hosts, either by adding explicit rules or by including a default allow policy in the rule set.
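As a rough illustration of the explicit-rule approach, the sketch below composes the XML body for a DFW allow rule programmatically. The element names follow the general shape of the NSX for vSphere DFW XML schema, but the rule name and addresses are hypothetical placeholders, and the code only builds the payload; submitting it to NSX Manager is left to other tooling.

```python
# Hedged sketch: build an XML body for a distributed firewall rule that
# allows CVM <-> ESXi host storage traffic. All names/addresses are
# illustrative placeholders, not values from the solution note.
import xml.etree.ElementTree as ET

def build_allow_rule(name: str, source_cidr: str, dest_cidr: str) -> str:
    rule = ET.Element("rule", disabled="false", logged="false")
    ET.SubElement(rule, "name").text = name
    ET.SubElement(rule, "action").text = "allow"
    sources = ET.SubElement(rule, "sources", excluded="false")
    src = ET.SubElement(sources, "source")
    ET.SubElement(src, "type").text = "Ipv4Address"
    ET.SubElement(src, "value").text = source_cidr
    destinations = ET.SubElement(rule, "destinations", excluded="false")
    dst = ET.SubElement(destinations, "destination")
    ET.SubElement(dst, "type").text = "Ipv4Address"
    ET.SubElement(dst, "value").text = dest_cidr
    return ET.tostring(rule, encoding="unicode")

# Hypothetical example: allow the storage VLAN subnet to reach itself,
# covering CVM-to-host and CVM-to-CVM traffic on VLAN 101.
payload = build_allow_rule("Allow-Nutanix-CVM", "10.1.101.0/24", "10.1.101.0/24")
```

Whichever approach you choose, the goal is the same: the DFW must never block CVM-to-host or CVM-to-CVM storage traffic, or cluster I/O will fail.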
Scenario 2 - NSX for the Nutanix CVM and User VMs
The next example shows the alternate scenario, where both the Nutanix CVM and user VMs are connected to an NSX virtual network. An additional storage VMk adapter in virtual network VNI 5000 ensures that the L2 and L3 connectivity requirements between the CVM and the ESXi cluster hosts are met. The management and VXLAN VMk addresses in the figure illustrate addressing two nodes in separate racks with an L3 boundary between them. The VXLAN-encapsulated storage network traffic in VNI 5000 spans this L3 boundary.
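The connectivity requirement above boils down to a simple check: the CVM's storage interface and each host's storage VMk adapter must land in the same layer-3 network inside the virtual wire, even when the underlying VXLAN VTEP addresses sit in different per-rack subnets. A small Python sketch of that check, using illustrative placeholder addresses:

```python
# Sketch of the L2/L3 connectivity requirement: CVM and storage VMk
# adapters must share one subnet inside VNI 5000, while VTEP addresses
# may differ per rack. Addresses below are hypothetical examples.
import ipaddress

def same_l3_network(addr_a: str, addr_b: str, prefix: str) -> bool:
    """True when both adapter addresses fall inside the same subnet."""
    network = ipaddress.ip_network(prefix)
    return (ipaddress.ip_address(addr_a) in network
            and ipaddress.ip_address(addr_b) in network)

# CVM storage interface and a host's storage VMk inside VNI 5000:
print(same_l3_network("192.168.5.10", "192.168.5.20", "192.168.5.0/24"))  # True
# VXLAN VMk (VTEP) addresses in separate racks can legitimately differ:
print(same_l3_network("10.20.1.5", "10.30.1.5", "10.20.1.0/24"))  # False
```

The first check must pass for storage traffic to flow; the second failing is expected and harmless, because VXLAN encapsulation carries the storage subnet across the racks' L3 boundary.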
With a little added complexity, this configuration allows the Nutanix CVM to take advantage of features such as microsegmentation and isolation. You can isolate the CVM and storage network in a single virtual network that can span physical network layer-3 boundaries while still keeping the CVM and storage VMk adapters in the same layer-3 network.
It is critical to note that leaf-spine topology recommendations for the physical network still hold true between Nutanix nodes. Addressing between ESXi hosts in separate racks may cross a layer-3 boundary, but the network must still meet the requirement for high throughput and low latency between Nutanix nodes.
Running VMware NSX on Nutanix enables administrators to architect powerful and agile solutions free from traditional physical storage and networking constraints. The Xtreme Computing Platform (XCP) delivers the invisible compute and storage infrastructure, while NSX provides network functions abstracted from the underlying physical network. Nutanix has verified that these tools integrate seamlessly to deliver the advantages of a software-defined datacenter: logical separation from the storage infrastructure, isolated virtual networks that span physical networks, security policies that follow ever-moving virtual machines, and true workflow automation.
Deploying Nutanix with VMware NSX lets administrators focus on building scalable applications, confident that, wherever a VM resides, it will have access to essential compute, storage, and network resources. A stable and robust physical infrastructure provides the underlay for a malleable and responsive virtual overlay ready to meet challenges on demand.
For more information on VMware NSX use cases and testing with Nutanix, please check out the full solution note here and continue the conversation on the community forums.
This post was authored by Jason Burns, Senior Solutions & Performance Engineer at Nutanix