Solved

NX-3060-G4 - Cable to Core Switch

  • 2 January 2020
  • 9 replies
  • 2298 views


Hi,

We have an NX-3060-G4 with three VMware hosts that is currently connected to our Cisco core switch cluster using 10Gb Twinax cables. Long story short, I need to remove these and connect the NTX to a single 1Gb Cisco switch (WS-C3850-12S-S).

My question is: can I connect a 1Gb SFP link between the NTX and the new core switch using the 10Gb NTX Ethernet port? If so, will the NTX detect that the connected cable can only run at 1Gb?

 

Thanks in advance.


Best answer by sbarab 7 January 2020, 16:40


This topic has been closed for comments

9 replies


@gjay I was reviewing a public Nutanix KB and thought it may help with your question:

https://portal.nutanix.com/#/page/kbs/details?targetId=kA060000000TSG1CAO

 

From there you can see:

  • If LEDs for both NICs are lit and report as up status, then the compatibility issue is between the SFP module and the upstream switch. Ask the switch vendor to recommend a compatible SFP module.

  • If only the 1Gig LED lights up or shows up status in ESXi and the 10Gig NIC shows down status, then the compatibility issue is between the SFP module and the 10Gig adapter.

Also:

10Gig NICs shipped with Nutanix nodes (most of them) are equipped to work with SFP+ cables. There is no concept of negotiation.

and

Nutanix recommends that the speed/duplex settings be the same, else the 10Gig NIC shows an incorrect speed or may report a down status in ESXi.

 

Based on the above, this may not end up working, but it won't hurt to communicate with the switch vendor (Cisco) as well, providing them the information about the SFP and NIC on the Nutanix side and those on the switch side.
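If you want to confirm from the ESXi side what the KB describes (link state and negotiated speed of each NIC), a quick sketch of the check, assuming SSH or console access to the host (the vmnic name is a placeholder; your adapter names will vary):

```shell
# List all physical NICs with their link status and current speed.
# A 1Gb SFP that negotiated correctly shows "Up" with Speed 1000.
esxcli network nic list

# Inspect a single adapter in more detail (vmnic0 is an assumption;
# pick the NIC from the list above).
esxcli network nic get -n vmnic0
```

If the 10Gb port shows "Down" here while its LED is lit, that matches the KB's SFP-to-adapter compatibility case.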

 

Hope this helps a little,

 

-Said


Thanks for the reply, @sbarab. I was made aware of an easier way: linking the spare 1Gb NIC to the same standard vSwitch and setting it as the active adapter in VMware, which simplifies things!

My only concern is: if, after changing to the 1Gb NIC, I have issues with network connectivity to the ESXi hosts, how could I regain access? Is there an alternative way through the Nutanix side? I'm not a NTX expert, so apologies if this is a basic question.


@gjay Thanks for your reply.

In that case, it appears to me that you are not using SFP. You are just using the RJ45 connection, and the SFP connection is a backup that is not used presently.

If you are using the 1Gb NIC as active and the 10Gb NIC (SFP-connected) as backup, you still have the same problem of whether the physical switch will work with the 10Gb NICs connected via SFP when you need it (when the backup connection has to become active).

As for why you cannot connect to the ESXi hosts, my assumption is that you might have had a different vSwitch and/or port group for ESXi console connectivity (the management network), and when you moved the interface from that vSwitch or port group to accommodate the Nutanix network communication, you lost network connectivity to the console of the ESXi nodes. You will need to make sure the port group you use for ESXi console access has a NIC associated with it. My guess is that if you don't have console connectivity at this point, you may have to use the IPMI to access the console of the ESXi host remotely and run manual network commands to fix this.
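If it comes to that, a rough sketch of the kind of manual network commands involved, run from the host console over IPMI (vSwitch0 and vmnic2 are placeholder names, not taken from this thread; list your own first):

```shell
# See which standard vSwitches exist and which uplinks they have.
esxcli network vswitch standard list

# Attach the 1Gb NIC as an uplink to the vSwitch carrying the
# Management Network (names below are assumptions; substitute yours).
esxcli network vswitch standard uplink add -u vmnic2 -v vSwitch0

# Make that uplink the active adapter in the failover order.
esxcli network vswitch standard policy failover set -v vSwitch0 -a vmnic2

# Confirm the management VMkernel interface still has its address.
esxcli network ip interface ipv4 get
```

This restores the uplink for the management port group without touching the VMs themselves.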

 

Let me know if that is the case.

 

-Said


Hi @sbarab. We will be using the RJ45 ports from the 1Gb NICs. I've verified the approach to take, but I'm now having an issue connecting to the ESXi host via the IPMI remote SOL viewer: it sits there blinking and doesn't respond to any input. I can open another thread if necessary. Thanks again!


@gjay OK, can you use the iKVM/HTML5 option to access the console instead of remote SOL?

If not, you may want to go to the "Maintenance" menu and run "Unit Reset". It is harmless to the VMs; it just resets the IPMI, and you can log back in (it takes around 60 seconds or less to come back).

 

-Said


@sbarab I’ve just reset the unit, but it’s still the same. What browser would you recommend? I’m using Chrome 79 at the moment.


@sbarab Just had some success with it. Thanks a lot!


@gjay Are you saying none of the "Maintenance" options are working? Have you tried launching HTML5 or console redirection (which is really the Java console)?

Also, any relatively recent version of Chrome should be fine.

You can also try going to the CVM running on this node and then SSH to nutanix@192.168.5.1 to see if you can reach the host from there.
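As a sketch of that internal-path check, from a shell on the CVM (192.168.5.1 is the host's internal backplane address as seen from the CVM; the username below is an assumption, since ESXi hosts commonly accept root on that interface):

```shell
# From the CVM, verify the internal 192.168.5.x link to the local host.
ping -c 3 192.168.5.1

# Then open an SSH session to the host's internal address.
# (Username is an assumption; use whatever account your hosts accept.)
ssh root@192.168.5.1
```

If this works, the host is healthy and only its external management uplink needs fixing.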

Let me know.

 

-Said

 


@gjay I am glad that is the case. Let me know how it goes.

 

-Said