Enable RSS for virtio application (DPDK version 21.11)

I'm using a Nutanix virtual machine to run a DPDK (version 21.11) based application. The application fails during rte_eth_dev_configure(). Our application requires RSS support.

    eth_config.rxmode.mq_mode = ETH_MQ_RX_RSS;

    static uint8_t hashKey[] = {
        0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A,
        0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A,
        0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A,
        0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A,
        0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A,
    };

    eth_config.rx_adv_conf.rss_conf.rss_key = hashKey;
    eth_config.rx_adv_conf.rss_conf.rss_key_len = sizeof(hashKey);
    eth_config.rx_adv_conf.rss_conf.rss_hf = 260; /* = ETH_RSS_IPV4 | ETH_RSS_IPV6 */
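
For reference, here is a minimal sketch (my assumptions: port 0, the DPDK 21.11 field names, and a helper name mask_rss_request that is purely illustrative) of masking the requested hash types with what the device actually reports before calling rte_eth_dev_configure():

    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Illustrative helper: query the RSS hash types the device offers and
     * drop any requested types it does not support. */
    static void mask_rss_request(uint16_t port_id, struct rte_eth_conf *conf)
    {
        struct rte_eth_dev_info dev_info;

        if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
            return;

        printf("port %u device RSS offloads: 0x%" PRIx64 "\n",
               port_id, dev_info.flow_type_rss_offloads);

        /* If this leaves rss_hf == 0, the device offers no RSS at all and
         * mq_mode ETH_MQ_RX_RSS is expected to fail. */
        conf->rx_adv_conf.rss_conf.rss_hf &= dev_info.flow_type_rss_offloads;
    }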



With the above RSS configuration, the application does not come up. The same application runs without any issues on a VMware virtual machine.

When I set

    eth_config.rxmode.mq_mode = ETH_MQ_RX_NONE;
    eth_config.rx_adv_conf.rss_conf.rss_hf = 0;

the application starts working fine. Since our application requires RSS support, I cannot set eth_config.rxmode.mq_mode = ETH_MQ_RX_NONE.
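
One pattern I could use as a stopgap (a sketch only, assuming the eth_config above; nb_rx/nb_tx stand in for our real queue counts) is to attempt the RSS configuration first and retry without RSS only if the PMD rejects it:

    #include <errno.h>
    #include <rte_ethdev.h>

    /* Illustrative fallback: try RSS first; only if the device rejects the
     * configuration, retry with RSS disabled so the port still comes up. */
    static int configure_with_fallback(uint16_t port_id, uint16_t nb_rx,
                                       uint16_t nb_tx, struct rte_eth_conf *conf)
    {
        int ret = rte_eth_dev_configure(port_id, nb_rx, nb_tx, conf);

        if (ret == -ENOTSUP || ret == -EINVAL) {
            conf->rxmode.mq_mode = ETH_MQ_RX_NONE;
            conf->rx_adv_conf.rss_conf.rss_hf = 0;
            ret = rte_eth_dev_configure(port_id, nb_rx, nb_tx, conf);
        }
        return ret;
    }

This keeps the application up, but it loses the RSS distribution we need, so it is not a real fix.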

I looked at the DPDK 21.11 release notes, and they mention that virtio_net supports RSS.


In this application, traffic is tapped to a capture port. I have also created two queues using ACLI commands.

<acropolis> vm.nic_create nutms1-ms type=kNetworkFunctionNic network_function_nic_type=kTap queues=2

<acropolis> vm.nic_get testvm
xx:xx:xx:xx:xx:xx {
  mac_addr: "xx:xx:xx:xx:xx:xx"
  network_function_nic_type: "kTap"
  network_type: "kNativeNetwork"
  queues: 2
  type: "kNetworkFunctionNic"
  uuid: "9c26c704-bcb3-4483-bdaf-4b64bb9233ef"
}


Additionally, I've turned on DPDK logging. Please find below the DPDK log output.

EAL: PCI device 0000:00:05.0 on NUMA socket 0
EAL:   probe driver: 1af4:1000 net_virtio
EAL: Probe PCI driver: net_virtio (1af4:1000) device: 0000:00:05.0 (socket 0)
EAL:   PCI memory mapped at 0x940000000
EAL:   PCI memory mapped at 0x940001000
virtio_read_caps(): [98] skipping non VNDR cap id: 11
virtio_read_caps(): [84] cfg type: 5, bar: 0, offset: 0000, len: 0
virtio_read_caps(): [70] cfg type: 2, bar: 4, offset: 3000, len: 4096
virtio_read_caps(): [60] cfg type: 4, bar: 4, offset: 2000, len: 4096
virtio_read_caps(): [50] cfg type: 3, bar: 4, offset: 1000, len: 4096
virtio_read_caps(): [40] cfg type: 1, bar: 4, offset: 0000, len: 4096
virtio_read_caps(): found modern virtio pci device.
virtio_read_caps(): common cfg mapped at: 0x940001000
virtio_read_caps(): device cfg mapped at: 0x940003000
virtio_read_caps(): isr cfg mapped at: 0x940002000
virtio_read_caps(): notify base: 0x940004000, notify off multiplier: 4
vtpci_init(): modern virtio pci detected.
virtio_ethdev_negotiate_features(): guest_features before negotiate = 8000005f10ef8028
virtio_ethdev_negotiate_features(): host_features before negotiate = 130ffffa7
virtio_ethdev_negotiate_features(): features after negotiate = 110ef8020
virtio_init_device(): PORT MAC: 50:6B:8D:A9:09:62
virtio_init_device(): link speed = -1, duplex = 1
virtio_init_device(): config->max_virtqueue_pairs=2
virtio_init_device(): config->status=1
virtio_init_device(): PORT MAC: 50:6B:8D:A9:09:62
virtio_init_queue(): setting up queue: 0 on NUMA node 0
virtio_init_queue(): vq_size: 256
virtio_init_queue(): vring_size: 10244, rounded_vring_size: 12288
virtio_init_queue(): vq->vq_ring_mem: 0x7fffab000
virtio_init_queue(): vq->vq_ring_virt_mem: 0x17ffab000
virtio_init_vring():  >>
modern_setup_queue(): queue 0 addresses:
modern_setup_queue():    desc_addr: 7fffab000
modern_setup_queue():    aval_addr: 7fffac000
modern_setup_queue():    used_addr: 7fffad000
modern_setup_queue():    notify addr: 0x940004000 (notify offset: 0)
virtio_init_queue(): setting up queue: 1 on NUMA node 0
virtio_init_queue(): vq_size: 256
virtio_init_queue(): vring_size: 10244, rounded_vring_size: 12288
virtio_init_queue(): vq->vq_ring_mem: 0x7fffa6000
virtio_init_queue(): vq->vq_ring_virt_mem: 0x17ffa6000
virtio_init_vring():  >>
modern_setup_queue(): queue 1 addresses:
modern_setup_queue():    desc_addr: 7fffa6000
modern_setup_queue():    aval_addr: 7fffa7000
modern_setup_queue():    used_addr: 7fffa8000
modern_setup_queue():    notify addr: 0x940004004 (notify offset: 1)
virtio_init_queue(): setting up queue: 2 on NUMA node 0
virtio_init_queue(): vq_size: 256
virtio_init_queue(): vring_size: 10244, rounded_vring_size: 12288
virtio_init_queue(): vq->vq_ring_mem: 0x7fff98000
virtio_init_queue(): vq->vq_ring_virt_mem: 0x17ff98000
virtio_init_vring():  >>
modern_setup_queue(): queue 2 addresses:
modern_setup_queue():    desc_addr: 7fff98000
modern_setup_queue():    aval_addr: 7fff99000
modern_setup_queue():    used_addr: 7fff9a000
modern_setup_queue():    notify addr: 0x940004008 (notify offset: 2)
virtio_init_queue(): setting up queue: 3 on NUMA node 0
virtio_init_queue(): vq_size: 256
virtio_init_queue(): vring_size: 10244, rounded_vring_size: 12288
virtio_init_queue(): vq->vq_ring_mem: 0x7fff93000
virtio_init_queue(): vq->vq_ring_virt_mem: 0x17ff93000
virtio_init_vring():  >>
modern_setup_queue(): queue 3 addresses:
modern_setup_queue():    desc_addr: 7fff93000
modern_setup_queue():    aval_addr: 7fff94000
modern_setup_queue():    used_addr: 7fff95000
modern_setup_queue():    notify addr: 0x94000400c (notify offset: 3)
virtio_init_queue(): setting up queue: 4 on NUMA node 0
virtio_init_queue(): vq_size: 64
virtio_init_queue(): vring_size: 4612, rounded_vring_size: 8192
virtio_init_queue(): vq->vq_ring_mem: 0x7fff87000
virtio_init_queue(): vq->vq_ring_virt_mem: 0x17ff87000
virtio_init_vring():  >>
modern_setup_queue(): queue 4 addresses:
modern_setup_queue():    desc_addr: 7fff87000
modern_setup_queue():    aval_addr: 7fff87400
modern_setup_queue():    used_addr: 7fff88000
modern_setup_queue():    notify addr: 0x940004010 (notify offset: 4)
eth_virtio_pci_init(): port 0 vendorID=0x1af4 deviceID=0x1000
EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
EAL: lib.telemetry log level changed from disabled to debug
TELEMETRY: Attempting socket bind to path '/var/run/dpdk/rte/dpdk_telemetry.v2'
TELEMETRY: Initial bind to socket '/var/run/dpdk/rte/dpdk_telemetry.v2' failed.
TELEMETRY: Attempting unlink and retrying bind
TELEMETRY: Socket creation and binding ok
TELEMETRY: Telemetry initialized ok
TELEMETRY: No legacy callbacks, legacy socket not created
[Wed Jul 26 04:44:42 2023]ems_dpi: 28098] DPDK Initialised
[Wed Jul 26 04:44:42 2023]ams_dpi: 28098] Finished DPDK logging session


Running the RSS configuration command in testpmd produces the following output.

testpmd> port config all rss all
Port 0 modified RSS hash function based on hardware support,requested:0x17f83fffc configured:0
Multi-queue RSS mode isn't enabled.
Configuration of RSS hash at ethernet port 0 failed with error (95): Operation not supported.
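
If I read the negotiation lines in the log correctly, this matches the feature bits: VIRTIO_NET_F_RSS is feature bit 60 in the virtio spec, and it is absent from both host_features (0x130ffffa7) and the negotiated set (0x110ef8020), so the host side never offered RSS. A tiny self-contained check of those values (the masks below are copied from the log above):

    #include <stdint.h>
    #include <stdio.h>

    #define VIRTIO_NET_F_RSS 60 /* feature bit number from the virtio spec */

    int main(void)
    {
        uint64_t host = 0x130ffffa7ULL;       /* host_features from the log */
        uint64_t negotiated = 0x110ef8020ULL; /* features after negotiate   */

        printf("host offers RSS: %s\n",
               (host >> VIRTIO_NET_F_RSS) & 1 ? "yes" : "no");
        printf("RSS negotiated:  %s\n",
               (negotiated >> VIRTIO_NET_F_RSS) & 1 ? "yes" : "no");
        return 0;
    }

Both print "no" here, which would explain why testpmd reports configured:0 and error 95 (ENOTSUP).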


Any suggestions on how to enable RSS support in this situation would be greatly appreciated.

Thank you for your assistance. 

Hi Shiv,

You should open a case with Nutanix Support; this needs engineering involvement, and combined efforts may be required to qualify/certify this application on AHV.

@bbbburns can guide you, Nutanix always welcomes new applications onboard.

Is it Palo Alto, by any chance?

F>P


@bbbburns yes, a service chain is already in place for tapping the traffic to the capture port. The application is designed to work with the tapped interface. The same setup works on the VMware platform.


@sl.farhanparkar

The underlying OS is Linux (kernel 4.9).

Virtio driver version 1.0.0 is included in the Linux kernel 4.9 that powers the VM.

ethtool -i eth1
driver: virtio_net
version: 1.0.0
firmware-version:
expansion-rom-version:
bus-info: 0000:00:04.0

The Nutanix documentation states: "Ensure the AHV UVM is running the latest Nutanix VirtIO driver package. Nutanix VirtIO 1.1.6 or higher is required for RSS support." Linux kernel 5.4 and later ship VirtIO 1.1.6.

Since the application is built on DPDK, the PMD drives the Ethernet device directly rather than going through the driver the kernel provides; I apologise if I'm mistaken. The DPDK PMD version in use supports RSS.
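
To double-check from the guest side without DPDK, I believe the feature bits the device offers are visible in sysfs while the NIC is still bound to the kernel's virtio_net (a sketch; the virtio1 index below is a guess and must be adjusted to the actual device):

    #include <stdio.h>

    /* Sketch: the virtio bus exposes the device feature bits as a string of
     * '0'/'1' characters, one per bit; check bit 60 (VIRTIO_NET_F_RSS per
     * the virtio spec). */
    int main(void)
    {
        char bits[128] = {0};
        FILE *f = fopen("/sys/bus/virtio/devices/virtio1/features", "r");

        if (f == NULL) {
            perror("fopen");
            return 1;
        }
        if (fgets(bits, sizeof(bits), f) == NULL) {
            fclose(f);
            return 1;
        }
        fclose(f);
        printf("VIRTIO_NET_F_RSS offered: %s\n",
               bits[60] == '1' ? "yes" : "no");
        return 0;
    }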

Because of the client-centric nature of this application, upgrading the kernel will be challenging.


I see you’re using a NIC type of kNetworkFunctionNic. Do you have this hooked up to a bridge chain for traffic redirection? 

Have you tried this same operation with a kNormalNic to see if it behaves any differently there?


Hi Shiv,

What OS are you using, and what's the application?

You can always open a case with Nutanix; an SRE can certainly help you.

F>P


I appreciate your response. I've already looked over these. I created the virtio queues using the described process. Even though I was able to create the queues, it still did not work.


Hi,

 

Checking the following may help you:

https://portal.nutanix.com/page/documents/kbs/details?targetId=kA00e000000Cqu6CAC

https://portal.nutanix.com/page/documents/details?targetId=AHV-Admin-Guide-v6_6:ahv-virtio-net-multi-queue-enable-t.html

 

F>P