Solved

Do you have to use both the NICs and SFP ports on an NX-6035-G4?

  • 2 August 2022
  • 4 replies
  • 210 views

This may be an obvious question, but it's probably so obvious that I'm overthinking it.

Model: NX-6035-G4

I came into an environment where there are 5 clusters. Each one has both NICs on each node connected directly to the core switch, with the VLAN access specified on the core switch ports.

Now the SFP ports are running DAC cables to a 10 Gig switch that is just in trunk mode. That switch then trunks to the core switch.

So are you supposed to use both? I thought you only had to use one or the other. Normally this wouldn't matter, but the core switch is oversaturated and I could free up 10 unnecessary cables.

I drew a diagram in case this was wordy.

 


Best answer by bcaballero 3 August 2022, 11:04


4 replies


Hi @bravoj 

Saw your post on Reddit; I'll be answering here.

I assume the diagram you drew looks something like the one below. Am I right?

I assume you have a pair of SG550-16 10 Gig SFP+ switches that are “stacked,” “linked,” or whatever fancy name the technology has, and also a pair of Base-T core switches running at 1 Gig.

 

How are your virtual switches configured? (Active-Backup, Balance-SLB, or Balance-TCP)
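
If you are not sure, a quick way to check is from any CVM. This is just a sketch assuming the standard AHV tooling and default bridge names, so verify against your AOS version:

nutanix@cvm$ manage_ovs show_uplinks          # bond mode and member NICs for each bridge on this host
nutanix@cvm$ allssh manage_ovs show_uplinks   # same output collected from every CVM in the cluster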

 

By default CVMs use VS0, and they have to run at 10 Gig for storage replication to the other nodes. This will be your VS0 connected to the SG550-16 switches.
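
As a rough sketch of how that split is usually enforced (assuming the default br0/br0-up names and an older AOS where uplinks are still managed from the CLI; on current AOS you would edit the virtual switch in Prism instead):

nutanix@cvm$ manage_ovs --bridge_name br0 --bond_name br0-up --interfaces 10g update_uplinks   # keep only the 10 Gig NICs in the VS0/br0 bond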

 

Regarding your diagram, you should also have a VS1 configured, connected to the core switches, for the UVM traffic.
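
If that second virtual switch does not exist yet, the classic CLI way to put the 1 Gig ports on their own bridge looked roughly like this (again a sketch with assumed default names; newer AOS versions do this through the Prism virtual switch workflow):

nutanix@cvm$ manage_ovs --bridge_name br1 create_single_bridge                 # create the extra bridge on this host
nutanix@cvm$ manage_ovs --bridge_name br1 --interfaces 1g update_uplinks       # attach the 1 Gig NICs to it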

 

At this point your storage replication and management traffic is flowing through the SG550-16 at 10 Gig, and your VMs are connecting to the “external” world through the core switches at 1 Gig. I had a customer with the same configuration running for a long time because he wanted to split the UVM traffic from the CVM traffic, and he also had G4s (2 x SFP+ 10 Gig and 2 x Base-T 1 Gig on each node).

 

One point of caution here is your nodes. The NX-6035-G4 reached its End of Support date in February 2022 (they are 7 years old), so be careful if you are running production workloads on them.

https://portal.nutanix.com/page/documents/eol/list?type=platform

 

If there are no other political/design constraints and you want to save cables on the core switches, the way to go is to configure VLANs 1, 2, 3 and 4 on VS0 connected to the 10 Gig switches. In the end you will have a cleaner design and fewer points of failure.
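
For what it's worth, the VM networks themselves are just VLAN tags on the virtual switch, so consolidating is mostly an acli/Prism exercise plus allowing those VLANs on the 10 Gig trunk ports. A minimal sketch with made-up network names (repeat per VLAN, and double-check the switch trunk configuration first):

nutanix@cvm$ acli net.create vlan2-net vlan=2   # example for VLAN 2; repeat for VLANs 1, 3 and 4
nutanix@cvm$ acli net.list                      # confirm the networks were created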

Right now, if you lose your core switches, your cluster will keep running with no issues because the CVMs are connected to the SG550-16, but your VMs can't connect to the “external” world.

 

Hope this helps

 

Regards!

 

@bcaballero Thank you so much! That actually helps. Yes, I'm aware of the EOL for this Nutanix, and have already bought a new one to migrate the VMs to.

 

What added to the confusion is that the new Nutanix only has the 10 Gig DAC cables going over to the SG550-16 in trunk mode, plus 1 IPMI cable going to the core switch.

 

To add, the old Nutanix nodes with TX cables going to the core switch do have VLAN access, but none of those VLANs are needed except for one, which is the VLAN we keep all of our servers on.

As for the diagram, we only have one core switch.

All VMs seem to be connected to just one VLAN, the one I mentioned as our server VLAN.

The CVM network just contains the CVM VMs.

The backplane network just has the CVM VMs as well.

 

 


Hi @bravoj 

I’m glad it helped.

Regards!

Can we install Windows 2016, 2019, and 2022 OS?
