Solved

NIC Usage on Nutanix Servers

  • 17 October 2020
  • 17 replies
  • 3951 views

Badge +2

Hi, I'm planning to add 2 more NX-3060-G6 blocks with 2 nodes in each block. Currently I have 2 Nutanix NX-3060-G6 blocks with 2 nodes in each block.

The current topology is the 2 Nutanix blocks connected to a Cisco 2960X, with all eth interfaces connected to it. My question is: the NX-3060-G6 has 3 Ethernet interfaces, 1 for MGMT (IPMI) plus 2 other eth interfaces, and 2 SFP+ ports. For networking purposes,

  1. Do we need to use both eth interfaces on each node for it to work, or is 1 eth interface per node enough?
  2. We're adding 1 more switch, a Juniper EX4600. If we need to use both eth interfaces, can we split them: 1 eth interface to the Cisco 2960X and the other to the EX4600?
  3. Can I have a different VLAN on each interface (excluding MGMT) that I connect to the switch?

Sorry if my question isn't clear enough, but I hope you understand it. Thank you.

NX-3060-G6
Networking Topology (Just for Eth Interfaces)

 


Best answer by UPX 18 October 2020, 19:30


This topic has been closed for comments

17 replies

Userlevel 3
Badge +4

Hi KarangDika,

The first question that comes to my mind, just out of curiosity: why are you using the NX-3060-G6 with only 2 nodes per block when you could have 4 nodes per block? Rack fault tolerance?

About your network configuration:

  1. Assuming you can cable all the NICs, my suggestion is:

For redundancy, use both SFP+ ports per node in the first bond (usually br0-up) for the intra-cluster network (management, storage, Zookeeper communications). I would suggest LACP, but I don't know how to configure your two vendors' switches; you could instead safely use an active-backup configuration.

For guest VM traffic, use both eth ports per node in a separate bond (usually br1-up) with trunking (this is the default for AHV, but you have to check the switch-side configuration) and allow all the VLAN tags you need there. A command sketch for this layout follows after the second option.

  2. Otherwise, if you can't connect all the NICs but only the eth ports, I guess you have two choices.

The first (which I suggest): connect and bond both eth ports, configure the trunk on the switch side, tag the management VLAN on the host and CVM, and allow all the other tags you need for guest VMs. This way all the traffic will go through the single bond, but at least you have redundancy.

The second: create two separate bonds (using a single eth port per node), one for intra-node traffic and the other for guest VM traffic (with trunk). You will get the same result with a slightly easier configuration, but you will lose redundancy.
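
If you go with the first option, the bridge/bond layout could be set from any CVM roughly like this (a minimal sketch only, assuming eth0/eth1 are the 1G RJ45 ports and eth2/eth3 the SFP+ ports, default bridge names, and active-backup bonding; check the manage_ovs syntax for your AOS version and change one node at a time):

# intra-cluster bond on br0 using the two SFP+ ports (active-backup, no LACP)
allssh "manage_ovs --bridge_name br0 --bond_name br0-up --interfaces eth2,eth3 --bond_mode active-backup update_uplinks"

# guest VM bond on a second bridge br1 using the two 1G ports (br1 must exist first)
allssh "manage_ovs --bridge_name br1 create_single_bridge"
allssh "manage_ovs --bridge_name br1 --bond_name br1-up --interfaces eth0,eth1 --bond_mode active-backup update_uplinks"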

Note: configuring a cluster to use only 1G connections is not a best practice for production environments, and it leads to some system limits.

Hope this helps

 

 

Badge +2


Hi @UPX 

 

Thanks for the insight.

Answering your curiosity: we are using 2 nodes per block for rack fault tolerance. Since there will be rack migrations a few times in the future, we just want to use the HA feature from Nutanix by migrating 1 block at a time.

 

  1. For the intra-cluster network, can we use only the eth ports themselves? Those 2 SFP+ ports have never been used, since we have no idea what they are for or how to configure them. Are they for FCoE, or are they 10Gig eth ports? Sorry, newbie on networking.
  2. For the plan you're suggesting, I'm planning to go with the first option (bonding both eth ports). But just to make sure, can we have a different VLAN on each eth port? I'm planning to create a few more network segments for the guest VMs.

Thanks for sharing; it helps me understand a lot. Sorry if these are silly questions, I'm just learning virtualization and networking too.

Badge +2

Hi @UPX, adding to my comment.

 

Do those 2 eth ports have to run at 10Gig speed, or can we just use 1G speed on them and also use the 2 SFP+ 10G ports?

 

If that's the case, which one should carry the guest VM traffic: the 1G eth ports or the 10G SFP+ FCoE ports?

 

Sorry for adding more questions; they just came to mind.

Userlevel 3
Badge +4

Hi KarangDika,

No question is silly.

The rack fault tolerance is a really good choice.

Replying to your questions:

  1. "For the intra-cluster network, can we use only the eth ports themselves? Those 2 SFP+ ports have never been used, since we have no idea what they are for or how to configure them. Are they for FCoE, or are they 10Gig eth ports?"
    1. You can certainly use only the eth ports (1G copper), but I suggest using the SFP+ ports. The SFP+ NIC is a normal Ethernet NIC that gives you the advantage of a 10G connection (10G is the best practice for production environments), but of course you will need 10G-capable switches and all the GBICs/DACs needed for cabling.

To be clear, you can use:

  1. RJ45 Cat5e/6 patch cords to cable the 1G eth ports
  2. SFP+ GBICs with LC-LC fiber patch cords, or DACs, to cable the 10G SFP+ ports

In both cases you can use the eth or the SFP+ ports the same way as NICs (for bonding purposes, for example), but not in the same bond (never bond 1G with 10G ports).

  2. "For the plan you're suggesting, I'm planning to go with the first option (bonding both eth ports). But just to make sure, can we have a different VLAN on each eth port? I'm planning to create a few more network segments for the guest VMs."
    1. When you bond two ports, the VLAN tags are allowed on the entire bond, not on a single NIC, so my suggestion is to allow all the VLAN tags on a dedicated bond for guest VM traffic. Otherwise, if you need more physical segmentation, you can certainly use a single NIC and separate the tags you need, but since you are using a single NIC you will lose redundancy. A sketch of how guest networks map onto VLAN tags follows below.
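
For example, once those VLANs are allowed on the trunk towards the guest bond, the guest networks in AHV are just tagged networks on that bridge. A minimal sketch, assuming br1 is the guest bridge and that your AOS version's acli net.create accepts the vswitch_name parameter (the network names and VLAN IDs here are made up):

# create two tagged guest networks on the guest bridge
acli net.create guest-vlan10 vswitch_name=br1 vlan=10
acli net.create guest-vlan20 vswitch_name=br1 vlan=20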

I've another question about the switches you are planning to use.

As far as I can see, the Cisco 2960 supports 1G RJ45 interfaces while the Juniper 4600 supports 10G with SFP+.

Wouldn't it be better to use 2 identical Juniper switches for the ToR role and the Cisco for OOBM (IPMI)?

 

Userlevel 3
Badge +4


As far as I remember, on the NX-3060-G6 you have 4 standard ports:

2 x RJ45 1G Eth

2 x SFP+ 10G Eth

If you plan to use the 10G SFP+ ports at 1G speed, you can use a 1G transceiver and a standard Cat 5e/6 patch cord for cabling.

 

The way you bond the interfaces determines how the network traffic will be split.

It's up to you how to use the bonds. The only point to remember is not to bond interfaces with different speeds (1G and 10G); you can check each NIC's speed and bond membership as sketched after the example below.

For example:

  1. bond 1 (br0-up) with the 2 x 1G eth ports for guest traffic
  2. bond 2 (br1-up) with the 2 x 10G SFP+ ports for intra-cluster traffic and management (Prism)
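
To see which NICs run at which speed and how they are currently bonded, you can run the following from any CVM. These are standard manage_ovs views, though the output format may differ between AOS versions:

# list NIC names, link status and speed on every host
allssh "manage_ovs show_interfaces"

# show the current bridge/bond/uplink layout on every host
allssh "manage_ovs show_uplinks"
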
Badge +2


Thank you so much for clearing things up for me.

 

After reading your comment a few times, it looks like I finally understand what I should set up.

 

  1. Okay, if I use a single NIC and separate the tags, I will lose redundancy. But what if I don't separate the tags, so both eth ports carry the same VLANs, and inside Nutanix I create different VLANs and network segments for the guest VMs? Will that work? I think not, but how is it?
  2. Can we use 1G for those 2 eth + MGMT ports? If that works, I will connect all the eth + MGMT ports to my Cisco 2960X and the SFP+ ports to the Juniper 4600.
  3. What do these interfaces mean? Currently I'm connecting only the MGMT port and the 2 eth ports, so if I connect those 2 SFP+ ports to my Juniper 4600, will eth2 and eth3 show "true"? Or are eth2 and eth3 different ports from the SFP+?
    One of my CVM network interfaces configuration

     

For your question: since I'm starting to understand what I should do, I'm planning to connect the 1G Cisco 2960X (for OOB) to the Nutanix eth and MGMT ports, and to connect the SFP+ ports, using FCoE, to the Juniper 4600. And we're planning to add 1 more Juniper switch.

Userlevel 3
Badge +4


 

Looking at the pic, there is good news: all your NICs are capable of 10G speed, and this is good for what you need to achieve.

As you have connected both RJ45 NICs, we can assume:

eth0 and eth1 are the RJ45 ports (link true)

eth2 and eth3 are the SFP+ ports (link false)

What you could do now:

Separate the NICs into two different bridges and bonds using:

# The command below puts only eth0 and eth1 in the first bond br0-up and leaves the other NICs free

allssh "manage_ovs --bridge_name br0 --bond_name br0-up --interfaces eth0,eth1 --require_link=false update_uplinks"

Remember, if you configure the switch ports in trunk mode, you will have to set the VLAN tag for the hosts and CVMs.

Now it's up to you how to use the free NICs; a sketch of one way to use them follows below.
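
For example, to put the freed SFP+ NICs into a second bridge and to tag the hosts and CVMs once the trunk is in place, something like the following could work. This is a sketch only, assuming eth2/eth3 are the SFP+ ports and VLAN 10 is your management VLAN; verify the manage_ovs and change_cvm_vlan syntax for your AOS version and change one node at a time:

# create a second bridge on every host and move eth2/eth3 into its bond
allssh "manage_ovs --bridge_name br1 create_single_bridge"
allssh "manage_ovs --bridge_name br1 --bond_name br1-up --interfaces eth2,eth3 --require_link=false update_uplinks"

# tag the AHV host management port on every host, then tag each CVM
# (change_cvm_vlan is run on each CVM and briefly interrupts its connectivity)
hostssh "ovs-vsctl set port br0 tag=10"
change_cvm_vlan 10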

Download this doc; here you can find a lot of useful information about how to configure networking with AHV:

https://www.nutanix.com/it/go/ahv-networking

 

 

 

Badge +2


So, since both of my NX-3060-G6 RJ45 ports support up to 10G speed, my question is: do we have to use that? I mean, can I keep using 1G speed on those 2 RJ45 ports? The current topology uses 1G, and I think getting a switch with 10G capability is quite expensive. Maybe in the future we can get a 10G switch to make full use of both 10G RJ45 ports.

So, there will be 2 bridges and bonds: one 1G bridge/bond for the guest VMs and the other at 10G for the internal management. Is that correct?

Oh yeah, 1 more question: there is a warning that my CVM didn't get a 10G uplink. Is it because

1. my 10G RJ45 ports are connected to a 1G switch, and/or

2. my 2 SFP+ ports aren't connected to anything?

Are those 2 points correct? And can I fix it by resolving only 1 of them?

Thanks anyway for helping me understand Nutanix networking better, and also for the references. Now I just need to look up how to configure FCoE for the Nutanix SFP+ ports.

Userlevel 3
Badge +4

You can definitely keep using the RJ45 ports at 1G speed instead of 10G until you get a 10G switch, bearing in mind that using 1G is not the best practice in production. As I said, the AHV cluster is designed to be connected via 10G, at least for the intra-cluster and management side.


"So, there will be 2 bridges and bonds, one 1G bridge/bond for the guest VMs and the other at 10G for internal management. Is that correct?"

Very correct!

"Another question: there is a warning that my CVM did not get a 10G uplink. Why is that?"

The warning you are facing is due to not matching the connectivity best practice, and it will clear automatically as soon as you use 10G speed on the interfaces for the internal management network (RJ45 at 10G or SFP+ at 10G, it doesn't matter).

You are currently using 1G, so the alert is triggered. You can confirm the negotiated speed of each uplink as sketched below.
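
A quick way to confirm which speed each uplink actually negotiated (ethtool is standard on the AHV host; eth0 here is just an example interface name):

# run on an AHV host, or on all hosts at once from a CVM; look at the "Speed:" line
ethtool eth0
hostssh "ethtool eth0 | grep -i speed"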

If you have access to the Nutanix Support Portal you will find lots of documentation on best practices and more. Even if you don't have such access, you can find many sites/blogs about AHV networking.

Of course, practicing on a real-world environment like yours, where you can get your hands on switches, NICs, and the Open vSwitch setup, would be the best teacher you could have.

Have fun with your AHV cluster!

 

P.S.: if your questions have been answered, please mark this thread as answered so that others with the same doubts can find what they are looking for.

 

Thank you, stay safe and do well!

 

Badge +2

Dear @UPX, ahh, thank you so much for clearing things up.

 

Now I just have to look for a guide on configuring FCoE for the Nutanix SFP+ ports towards the Juniper EX4600.

 

I'm so grateful that you could give me insight and clear up my newbie problem. Thank you so much.

Userlevel 3
Badge +4


Since Nutanix relies on standard GigE (Ethernet) technology, there is no need to look for an FCoE configuration.

You can configure your switch as you would for a normal network setup and treat the connections as simple Ethernet ports.

The Juniper configuration will likely be equivalent to the one you have on the Cisco; a rough example follows below.
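
For what it's worth, a trunk port towards a node on the EX4600 could look roughly like this. This is a sketch only, assuming ELS-style Junos syntax; the interface and VLAN names are made up, so check the exact syntax for your Junos release:

set interfaces xe-0/0/0 description "nutanix-node-1-sfp"
set interfaces xe-0/0/0 unit 0 family ethernet-switching interface-mode trunk
set interfaces xe-0/0/0 unit 0 family ethernet-switching vlan members [ mgmt-vlan guest-vlan-10 guest-vlan-20 ]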

Have a nice day!

Badge +2


Yeah, I mean I need to look at configuring the xe interfaces on the Juniper. But the kind of cable that Nutanix uses on the SFP+ ports is FCoE, right?

Userlevel 3
Badge +4

The type of cable is not related to the protocol you will use but to the interfaces to be connected.

To wire from the NX nodes to the Juniper switch you can choose:

  1. (my suggestion) SFP+ to SFP+ DAC cables
  2. 10G SFP+ GBICs (Finisar will work) + LC-LC FC patch cords

The first is easier and cheaper.

Badge +2

If I'm not mistaken, there are already SFP+ transceivers plugged into the SFP+ ports. So if we use the 1st option, we need to take the original SFP+ transceivers out of the node, right?

 

And for the 2nd option, we can actually use that, since we have 10G Finisar SFP+ transceivers to plug into the Juniper EX4600.

 

Oh, and since you mentioned protocols, what is the best protocol for the intra-cluster network? Sorry for the question.

Userlevel 3
Badge +4


 

You are lucky: if you have the GBICs, you only need LC-LC FC patch cords, which are really cheap.

About the protocol: you will use Ethernet in both cases (1G or 10G), so I guess you don't need to worry about that.

Badge +2

 

 


Yeah, thank you so much for the insight and solutions.

 

And can I ask another question?

You mentioned that if I want to create physically segmented networks, it will make me lose redundancy, right?

So if I want another physically segmented network, would it be best to create another cluster on a different segment and integrate them using Prism Central?

Userlevel 3
Badge +4

The physical segmentation depends on the number of physical interfaces you have.

For example, if you have 3 nodes with 4 x 10G eth each, you would have several choices:

  1. no physical segmentation, with redundancy
    1. bond all 4 eth ports and put them in trunk; all network traffic will go through the single bond, separated by VLAN tags
  2. 2 physical segments, with redundancy
    1. create 2 different bridges (on different physical networks) and bond 2 eth ports per bridge
  3. since AOS 5.10.4 and newer you can set up a bond with a single NIC; this way you can put every NIC on a different bridge and a different network, but you will lose redundancy (see the sketch below)

Of course, the more NICs you have available per node, the better.
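
For the third option, a single-NIC bond per bridge could be set up roughly like this (a sketch only, assuming AOS 5.10.4 or newer, a made-up bridge name br2, and eth3 as the spare NIC; check the manage_ovs syntax for your version):

# create an extra bridge and give it a bond containing a single NIC
allssh "manage_ovs --bridge_name br2 create_single_bridge"
allssh "manage_ovs --bridge_name br2 --bond_name br2-up --interfaces eth3 --require_link=false update_uplinks"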