
Nutanix Invisible Infrastructure and Avaya Networking

6 July 2015
This post was authored by Jason Burns, Senior Solutions & Performance Engineer at Nutanix

Over the last few months, I’ve had a chance to work with the Avaya Virtual Services Platform (VSP) switches. In this blog, I want to capture best practices for using the Nutanix Xtreme Computing Platform with Avaya VSP 7000 and 8000 network switches.

The Avaya VSP provides high-throughput, low-latency, top-of-rack (TOR) switching for Nutanix clusters using 10 Gb Ethernet interfaces. In addition to performance testing, I also verified node addition in the Avaya switching environment, which is seamless thanks to Nutanix auto-discovery.


The Avaya Virtual Services Platform introduces three features relevant to Nutanix.

  • Fabric Connect (FC) - Provides L2 and L3 network virtualization by using Shortest Path Bridging (SPB) in topologies where spanning tree would limit bandwidth and redundancy.

  • Fabric Interconnect (FI) - Allows high-speed connections between TOR switches.

  • Link Aggregation (LAG) - In most cases, Nutanix recommends avoiding link aggregation at the host level to reduce configuration complexity. However, in some instances it may be desired. Link aggregation allows multiple physical interfaces to be combined and presented as a single logical interface. This allows bandwidth pooling from the combined links, as well as the ability to treat multiple links as a single Layer 2 entity.
Figure 1 below illustrates possible network connections to a Nutanix node.



Figure 2 shows the Avaya network and Nutanix node topologies I’ve tested.



The right side uses a traditional leaf-spine architecture with a maximum of three hops between Nutanix nodes. The left side shows Nutanix nodes that are joined to the cluster but separated by several low-latency hops across the Avaya Fabric Connect backbone.

Avaya Fabric Connect

Avaya Fabric Connect allows greater network flexibility with lower administrative overhead than is typically associated with a real-world, VM-centric network. In a traditional spanning tree environment, multiple redundant links would be blocked to prevent loops, reducing the available bandwidth. Fabric Connect keeps redundant links active by using the IS-IS protocol and Shortest Path Bridging (SPB) to prevent loops while preserving bandwidth. Even better, new Virtual Service Networks (VSNs) and VLANs added at the edge of the network become available dynamically, without any extra configuration in the core network. In the example here, VSNs were extended for VM and management traffic so the new nodes on the left could be quickly discovered and added to the Nutanix cluster. No more error-prone, hop-by-hop switch configuration required!
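
To make the contrast with spanning tree concrete, here is a small, purely conceptual Python sketch (not Avaya code) that models a four-switch fabric as a graph. Blocking redundant links, as spanning tree would, forces some traffic onto a longer path; an SPB-style shortest-path computation keeps every link usable. The topology and switch names are made up for illustration.

```python
from collections import deque

# Hypothetical four-switch fabric with redundant links (illustration only).
FABRIC = {
    "spine1": {"spine2", "leaf1", "leaf2"},
    "spine2": {"spine1", "leaf1", "leaf2"},
    "leaf1":  {"spine1", "spine2"},
    "leaf2":  {"spine1", "spine2"},
}

def hop_count(graph, src, dst):
    """Breadth-first search: hops on the shortest path between two switches."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, hops = queue.popleft()
        if node == dst:
            return hops
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, hops + 1))
    return None

# Spanning tree (simplified): the spine1-spine2 and spine2-leaf2 links are
# blocked so that only a loop-free tree remains.
STP_TREE = {
    "spine1": {"leaf1", "leaf2"},
    "spine2": {"leaf1"},
    "leaf1":  {"spine1", "spine2"},
    "leaf2":  {"spine1"},
}

print(hop_count(STP_TREE, "leaf2", "spine2"))  # 3 hops, via spine1 and leaf1
print(hop_count(FABRIC,   "leaf2", "spine2"))  # 1 hop: the direct link stays active
```

The point is simply that, unlike spanning tree, SPB does not have to idle any link to stay loop-free.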

In Figure 2, every network device is part of the FC backbone, and the Layer 2 Virtual Service Networks (L2-VSNs) bring the required VLANs to the access layer.

Avaya Fabric Connect gives data center and network administrators the flexibility to expand beyond leaf-spine architectures while still delivering optimal performance. Because Fabric Connect is a core network technology between switches, no additional configuration is required at the Nutanix node level.

Avaya Fabric Interconnect

Avaya Fabric Interconnect (FI) is a mechanism for connecting multiple TOR switches into a single stack using purpose-built ports on the back of the VSP. Fabric Interconnect leverages Fabric Connect to eliminate loops and provide low latency and high throughput without spanning tree limitations. Figure 3 shows the simple two-switch Fabric Interconnect Mesh network used to test Nutanix cluster throughput.





Connecting a Nutanix cluster to an Avaya FI Mesh switch fabric does not require any additional VMware or Nutanix configuration, as long as administrators do not use link aggregation. When using FI Mesh, the switches remain separate devices at the MAC layer. Network links from Nutanix nodes can be connected active/active to separate VSP switches, and the standard vSphere load balancing recommendations apply.
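
As a rough illustration of why no switch-side configuration is needed in this topology, the sketch below models vSphere's default "Route based on originating virtual port ID" teaming in Python: each virtual NIC is pinned to exactly one active uplink, so each VSP switch only ever sees a given VM MAC address on one port. The uplink names and the modulo assignment are simplifications for illustration, not VMware code.

```python
# Conceptual model of the default vSphere teaming policy
# ("Route based on originating virtual port ID"); names are hypothetical.
UPLINKS = ["vmnic0 -> VSP switch A", "vmnic1 -> VSP switch B"]

def uplink_for_port(virtual_port_id: int, uplinks=UPLINKS) -> str:
    """Pin each virtual switch port (one per vNIC) to a single active uplink."""
    return uplinks[virtual_port_id % len(uplinks)]

# Four VMs with one vNIC each: traffic is spread across both switches, but
# any given MAC address appears on only one uplink at a time, so the two
# VSP switches need no SMLT/LAG configuration toward the host.
for port_id, vm in enumerate(["CVM", "app-vm1", "app-vm2", "db-vm"]):
    print(f"{vm:8s} (port {port_id}) -> {uplink_for_port(port_id)}")
```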

Avaya Link Aggregation and InterSwitch Trunking

Using Split Multi-Link Trunking (SMLT) between two VSP TOR switches makes the pair appear as a single device performing link aggregation. An inter-switch trunk between the two switches passes information back and forth to maintain the appearance of a single switch at the MAC layer.
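
The sketch below is a heavily simplified, hypothetical Python model of that behavior (not Avaya code): when one SMLT peer learns a MAC address on its aggregated port, it shares that learning with its partner over the inter-switch trunk, so either switch can forward frames to the host on its own local link.

```python
# Toy model of two SMLT peer switches sharing MAC learning over an
# inter-switch trunk (hypothetical and heavily simplified; not Avaya code).
class SmltPeer:
    def __init__(self, name: str):
        self.name = name
        self.mac_table: dict[str, str] = {}   # MAC address -> local port
        self.peer: "SmltPeer | None" = None

    def learn(self, mac: str, port: str) -> None:
        """Learn a MAC locally and sync SMLT-attached MACs to the peer."""
        self.mac_table[mac] = port
        if self.peer is not None and port.startswith("smlt"):
            # The peer records the MAC against its own member of the same SMLT,
            # so it forwards locally instead of crossing the inter-switch trunk.
            self.peer.mac_table.setdefault(mac, port)

    def egress_port(self, mac: str) -> str:
        return self.mac_table.get(mac, "flood")

vsp_a, vsp_b = SmltPeer("VSP-A"), SmltPeer("VSP-B")
vsp_a.peer, vsp_b.peer = vsp_b, vsp_a

# A Nutanix node's aggregated NICs hash a frame onto the link toward VSP-A:
vsp_a.learn("00:50:56:aa:bb:cc", "smlt-1")

# Traffic arriving at VSP-B for that MAC still goes straight out VSP-B's
# own SMLT member port -- the pair behaves like one switch to the host.
print(vsp_b.egress_port("00:50:56:aa:bb:cc"))   # smlt-1
```

The real protocol does much more (continuous state synchronization, failure handling), but the net effect for the Nutanix node is the same: its aggregated links terminate on what looks like a single switch.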



Nutanix does not recommend link aggregation between Nutanix nodes and VSP switches, but in certain situations, such as when the bandwidth of multiple links is needed for one virtual machine, it may be a useful tool. See VMware vSphere Networking on Nutanix for a more detailed explanation of recommended networking configuration.


Additional vSphere hypervisor networking configuration is required when using link aggregation technologies. Administrators must select the IP Hash load balancing method and use the vSphere Distributed Switch on the ESXi host. This allows the VMkernel to accept inbound connections to the same MAC address on both physical links, which is not possible in certain versions of vSphere when using the default load balancing methods. See this VMware KB article for full configuration details and requirements for supporting link aggregation in vSphere.
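
To illustrate why IP Hash matters in a link-aggregated setup, here is a simplified Python approximation of the "Route based on IP hash" idea: the uplink is chosen from the source and destination IP addresses, so one VM talking to several peers can spread its flows across both links, while any single flow stays on one link. The hash below is illustrative only and is not the exact algorithm ESXi uses.

```python
import ipaddress

UPLINKS = ["vmnic0", "vmnic1"]   # both members of the SMLT/LAG to the VSP pair

def ip_hash_uplink(src_ip: str, dst_ip: str, uplinks=UPLINKS) -> str:
    """Pick an uplink from the source/destination IP pair (illustrative hash,
    not the exact ESXi algorithm)."""
    src = int(ipaddress.ip_address(src_ip))
    dst = int(ipaddress.ip_address(dst_ip))
    return uplinks[(src ^ dst) % len(uplinks)]

vm_ip = "10.0.0.50"
for dst in ["10.0.0.10", "10.0.0.11", "10.0.0.12", "10.0.0.13"]:
    print(f"{vm_ip} -> {dst}: {ip_hash_uplink(vm_ip, dst)}")

# One VM's flows land on different uplinks depending on the destination,
# so its MAC address is seen on both physical links.
```

Because a single VM's MAC address now shows up on both physical links, the upstream switches must treat those links as one aggregated interface (SMLT on the VSP pair), which is why the vSphere Distributed Switch and IP Hash teaming are required together.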

Here’s a summary of recommendations when using Avaya VSP switches.

Feature: Fabric Connect
Recommendations:
  • Leaf-spine architecture recommended for maximum throughput and lowest latency
  • More complex architectures and topologies are possible with Avaya FC SPB to provide increased resiliency and flexibility

Feature: Fabric Interconnect
Recommendations:
  • No special considerations unless link aggregation is configured
  • See VMware vSphere Networking on Nutanix

Feature: Link Aggregation (SMLT, MLT, IST, vIST)
Recommendations:
  • Use the vSphere Distributed Switch
  • Use IP Hash load balancing

I’d love to hear from you. Please email us at info@nutanix.com or reach out to me on Twitter with feedback on this blog, or to set up a customized 1:1 technical briefing on leveraging the power of simple network design and web-scale architecture for workloads such as unified communications, call center, and enterprise applications.


For more information, download the Nutanix Best Practices Guides for Avaya IP Office and Avaya Aura, or the Nutanix solution brief for Avaya UC solutions.
