Hi community,
I have some QLogic 10G/25G NICs in my cluster which, as I understand it, are not officially supported, but they do seem to be detected correctly to some extent.
manage_ovs show_interfaces displays the output below, which leads me to believe that the driver is loaded. However, those ports are currently connected to a 10G switch, and the speed and link state are not being reported correctly. Is there a way to set these by hand? (I sketch what I would try with ethtool after the output.)
manage_ovs show_interfaces
name       mode   link   speed
enp5s0f0   25000  False  None
enp5s0f1   25000  False  None
eth0       1000   False  None
eth1       1000   False  None
eth2       1000   False  None
eth3       1000   False  None
eth4       1000   False  None
eth5       1000   False  None
eth6       1000   False  None
eth7       1000   True   1000
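On a plain Linux host I would try forcing the link with ethtool, roughly like the sketch below, but I don't know whether that is safe or persistent on an AHV host; the 10000 is just my guess to match the 10G switch:

# force the 25G ports down to 10G full duplex with autoneg off (my assumption, untested)
ethtool -s enp5s0f0 speed 10000 duplex full autoneg off
ethtool -s enp5s0f1 speed 10000 duplex full autoneg off
# bring the ports back up afterwards
ip link set enp5s0f0 up
ip link set enp5s0f1 up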
When I try to add those cards to a virtual bridge, I get the warnings below.
manage_ovs --bridge_name br1 --bond_name bond1 --interfaces 25g update_uplinks
2021-03-31 18:05:50,402Z WARNING manage_ovs:437 Interface enp5s0f1 does not have link state
2021-03-31 18:05:50,403Z WARNING manage_ovs:437 Interface enp5s0f0 does not have link state
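I also wondered whether the --require_link=false flag would let update_uplinks go through despite the missing link state; I haven't tried it yet, so this is just a guess:

manage_ovs --bridge_name br1 --bond_name bond1 --interfaces 25g --require_link=false update_uplinks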
And manage_ovs show_uplinks confirms that those ports have not been added:
manage_ovs show_uplinks
Bridge: br0
  Bond: eth7
    bond_mode: active-backup
    interfaces: eth7
    lacp: off
    lacp-fallback: false
    lacp_speed: slow
Bridge: br1
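In case it helps with diagnosis, these are the standard Linux checks I can run from the AHV host (I believe the QLogic driver here is qed/qede, but I may be wrong about that); happy to post the output:

# which driver is bound to the 25G ports, and its version
ethtool -i enp5s0f0
# negotiated link settings and detected link state
ethtool enp5s0f0
# any driver messages from boot
dmesg | grep -i qed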
I’m running CE 2020.09.16.
Your help is appreciated.