I set up a new cluster and I don’t know why “Uplink Configuration” shows “No Uplinks”. What can be the reason for that? (See: “PRISM/console/#page/network” » "+ Uplink Configuration" » “NIC Configuration” » Dropdown shows only one unselectable option: “No Uplinks”).
Hi @Ibenamar If it is an AHV hypervisor, could you please run the following commands from any CVM and share the output:

allssh "manage_ovs show_interfaces"
allssh "manage_ovs show_uplinks"
@Neel Kotak See the following screens I was preparing in parallel:
I tried to connect to the physical switch (Cisco SG350X) via Trunk/LACP but then I can’t access Nutanix Hosts/CVMs/Prism. Access works using “Access” configuration:
I would like to use “Trunk” to put VMs in different vLANs.
Hi @Ibinema
By default, AHV hosts and CVMs have to be in the same VLAN. I usually set that VLAN as the native (default) VLAN on the trunk; with that configuration the CVMs and hosts work with no tag, and I can still separate the other VLANs for the VMs. I don’t know if that’s possible on your switch, though, or how to set it up.
Anyway, if you need to use a tagged VLAN for any reason, you can just run these commands from the CVM on each host:

change_cvm_vlan VLAN_TAG (this will change the CVM VLAN)
ssh root@192.168.5.1 "ovs-vsctl set port br0 tag=VLAN_TAG" (this will change the AHV host VLAN)
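As a concrete sketch (VLAN tag 10 here is a made-up example, not from the thread), and keeping in mind that changing the VLAN can cut off your own session, so it’s safest with IPMI/console access handy:

```shell
# Run from the CVM of each node, one node at a time.
# WARNING: after this, the CVM/host are only reachable on VLAN 10 (tagged).
change_cvm_vlan 10                                        # retags the CVM's network
ssh root@192.168.5.1 "ovs-vsctl set port br0 tag=10"      # retags the AHV host's br0 port
```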
I don’t know why you are using a “mgmt” bridge on node1; you only need br0, as everything (CVM and AHV host) is managed through br0.
Hope this helps
Regards!
@bcaballero Thank you for your input. I did not configure any VLAN on hosts or CVMs (I think I read in the docs that this is the preferred way). Therefore I expect the hosts/CVMs/Prism to be in the default VLAN 1 (or is it called 0?). How do I check whether this assumption is correct? I have all other ports (connected devices) on the physical switch in explicit VLANs, so only the Nutanix cluster is in the default VLAN.
I want to configure the ports for br0 (4×1Gb × 4 hosts) on the physical switch as “trunk” with LACP and “all VLANs”. But then I can’t connect to the Nutanix hosts/CVMs/Prism via the physical switch. How can I achieve that, or what am I missing?
The bridge “mgmt” was just a test that I was not able to delete at the time. It shouldn’t be a problem for now, right?
I have a concept in my head where the 4×1Gb links per host go from the physical switch “directly” to a main virtual switch that connects hosts/CVMs/Prism. If I trunk two physical switches, VLANs work between those switches. But I can only get Nutanix access to work via the 4×1Gb connections by configuring those ports as “access” with one specific VLAN on the physical switch, instead of the desired trunk (all VLANs) connection.
Hi @Ibinema
First things first. To check the CVM VLAN, run the command ssh root@192.168.5.1 ovs-vsctl show from the CVM. Search the output for “vnet0”; you should not see any tag (in my case there is one, because my CVM is on tagged VLAN 105). The same applies to the AHV host, but in that case look at “br0”.
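As a rough sketch of what to look for (the exact bridges and ports will differ on your cluster; the excerpt below is illustrative, not your output):

```shell
# Run from the CVM; 192.168.5.1 is the internal address of the local AHV host.
ssh root@192.168.5.1 ovs-vsctl show
# Illustrative excerpt: a tagged port carries a "tag:" line,
# an untagged one simply has no "tag:" line at all.
#     Bridge "br0"
#         Port "vnet0"
#             tag: 105
#             Interface "vnet0"
```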
To delete the “test” bridge you can take a look at this link; it explains the different switches of manage_ovs.
You are right, the “best practice” is to leave the AHV host and CVM on an “untagged” VLAN. If you set your port in Access mode on whatever VLAN, it works, of course. But I suspect that when you set your ports to Trunk mode there is no default VLAN, so untagged traffic is dropped. What I recommend is to set up a trunk with the desired VLANs and configure a native VLAN for the CVM and AHV host on your LAG ports (whatever VLAN you’ve created before).
Let’s take, for example, an evil VLAN 666 for AHV and CVM, and suppose that your switch has VLANs 10, 20 and 30 created.
This could be a good example for your IOS switch (sorry if it’s not very accurate, it’s been a long time since I’ve seen a Cisco switch):
interface LAG1234 (interface to set up)
 switchport mode trunk (interface mode; by default all VLANs 10, 20 and 30 will be inside the trunk)
 switchport trunk native vlan 666 (set the native VLAN)
Hope this helps
Regards!
Hi @bcaballero
Thank you for your inputs!
As desired, I don’t see any tags (except with “vnet3”):
Is there any more configuration needed on Nutanix side (like a native VLAN ID for the virtual switch or something like that)?
I would like to use VLAN 1 as the native VLAN, and that is what I tried:
On the LAG interfaces (it looks like “Po” stands for the link aggregation):

interface Po1
 switchport trunk native vlan 1
interface Po2
 switchport trunk native vlan 1
…
I didn’t use “switchport mode trunk” because I had already configured the ports as “trunk” via the GUI. I don’t know why I am able to configure this both per port and per LAG; in my view, if a LAG is enabled, I shouldn’t be allowed to configure the individual ports of that LAG.
I’m not sure whether I have to store the configuration or apply it when using the command line, and I didn’t find out how to read those settings back (to check whether I did it right).
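On the SG350X’s IOS-like CLI, reading the settings back and persisting them usually looks like this (a sketch from memory, so double-check against your switch’s documentation):

```shell
# On the SG350X CLI:
show running-config interface Po1    # read back the trunk / native-VLAN settings on the LAG
copy running-config startup-config   # save the running config so it survives a switch reboot
```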
What I do know is that, again, I can’t access the cluster. Do I need to change something on the Nutanix hosts (NIC config)? And what is the reason that “Uplink Configuration” shows “No Uplinks” in Nutanix?
Hi @Ibinema
No, you don’t need to configure anything on Nutanix. It will work by default on the switch port’s default untagged VLAN, unless you set up a tagged VLAN for your host and CVM.
My bet is that there is a misconfiguration somewhere on the switch. As I’ve seen, you configured the bond as Balance-TCP; is the LAG on the switch set up as LACP (802.3ad)?
I don’t have an “enterprise” switch at home but my Netgear SX10 maybe can help you a bit.
The default VLAN is the one with the asterisk (in this case VLAN 1). I’ve set it up as the PVID, which means that all untagged traffic entering or leaving the port will be on VLAN 1 without any configuration on the Nutanix side. If I were using VLAN 1 for hosts and CVM (remember, I’m using 105), I would have to do NOTHING on the Nutanix side; it should work like a normal host.
On the other hand, regarding the LAG: I’ve set up LAG2 with ports 1 and 2 as LACP, not Static. Check that on your switch; I’ve seen some that default to “Static” instead of “LACP 802.3ad”, and you need LACP for Balance-TCP mode.
I hope this clarifies things a bit
Regards!
I feel like there is a lot going on here. And I will start from the beginning so please forgive me if any or all of the below is too easy for you. My intention is to help only.
I would suggest having a diagram of your setup. Even if it’s on a sheet of paper or in your notebook. I always have that. It helps me with checking if I have applied config everywhere and understand how the traffic will flow and how it won’t. Maybe it’s just me but I do not have enough rendering capacity in my mind to keep a steady and focused picture of a setup.
LAG is link aggregation, which can use more than one algorithm. For Nutanix it’s LACP.
For LACP to work (correct me if I’m wrong here), it must be configured on both ends, the timers must match, and at least one side must be an initiator (be in Active mode). In the output that you included, lacp_speed is set to fast. Is that matched on the switch side?
In your original post the screenshot has Active-Backup radio button on. Active-Backup mode does not require link aggregation because it uses one link at a time. Active-Active with MAC pinning still does not require link aggregation because it uses one link at a time to forward traffic with the same source MAC address. If you want to engage all links then Active-Active with LAG (the last option on the screenshot) is the way to go.
Change the configuration to something default and simple. Make sure things work: no LACP, access ports. Confirm all ports are up and running on both sides, Nutanix and the switch.
Then change the configuration to LACP, with no change of speed and failback on. See if the port-channel interface comes up and is not blocked by spanning-tree.
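To see the bond and LACP negotiation state from the Nutanix side, OVS has built-in commands. A sketch, with one assumption: “br0-up” is a common bond name, but yours may differ (e.g. “bond0” on older setups), so take the real name from the manage_ovs show_uplinks output:

```shell
# Run on the AHV host (or prefix with: ssh root@192.168.5.1 from the CVM).
# NOTE: "br0-up" is an assumed bond name; substitute the one your cluster uses.
ovs-appctl bond/show br0-up   # bond mode, member interfaces and their link state
ovs-appctl lacp/show br0-up   # LACP status; members listed as "current" have negotiated
```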
@Neel Kotak @bcaballero @Alona thank you for your help!
I can report back with a classic: “did you try to turn it off and on again?”. It was a lot of hassle just because I didn’t reboot the hosts after reconfiguring the NICs. What a fool.
Now I have to find out how to fix the routing and I am ready for some VMs.