
DPDK documentation

Release 17.05.0-rc0

February 18, 2017


Contents

1 Getting Started Guide for Linux

2 Getting Started Guide for FreeBSD

3 Sample Applications User Guides

4 Programmer’s Guide

5 HowTo Guides

6 DPDK Tools User Guides

7 Testpmd Application User Guide

8 Network Interface Controller Drivers

9 Crypto Device Drivers

10 Xen Guide

11 Contributor’s Guidelines

12 Release Notes

13 FAQ
CHAPTER 1

Getting Started Guide for Linux

Introduction

This document contains instructions for installing and configuring the Data Plane Development
Kit (DPDK) software. It is designed to get customers up and running quickly. The document
describes how to compile and run a DPDK application in a Linux application (linuxapp) envi-
ronment, without going deeply into detail.

Documentation Roadmap

The following is a list of DPDK documents in the suggested reading order:


• Release Notes: Provides release-specific information, including supported features, lim-
itations, fixed issues, known issues and so on. Also, provides the answers to frequently
asked questions in FAQ format.
• Getting Started Guide (this document): Describes how to install and configure the DPDK;
designed to get users up and running quickly with the software.
• Programmer’s Guide: Describes:
– The software architecture and how to use it (through examples), specifically in a
Linux application (linuxapp) environment
– The content of the DPDK, the build system (including the commands that can be
used in the root DPDK Makefile to build the development kit and an application) and
guidelines for porting an application
– Optimizations used in the software and those that should be considered for new
development
A glossary of terms is also provided.
• API Reference: Provides detailed information about DPDK functions, data structures and
other programming constructs.
• Sample Applications User Guide: Describes a set of sample applications. Each chap-
ter describes a sample application that showcases specific functionality and provides
instructions on how to compile, run and use the sample application.


System Requirements

This chapter describes the packages required to compile the DPDK.

Note: If the DPDK is being used on an Intel® Communications Chipset 89xx Series platform,
please consult the Intel® Communications Chipset 89xx Series Software for Linux Getting
Started Guide.

BIOS Setting Prerequisite on x86

For the majority of platforms, no special BIOS settings are needed to use basic DPDK func-
tionality. However, for additional HPET timer and power management functionality, and high
performance of small packets on 40G NIC, BIOS setting changes may be needed. Consult the
section on Enabling Additional Functionality for more information on the required changes.

Compilation of the DPDK

Required Tools:

Note: Testing has been performed using Fedora 18. The setup commands and installed
packages needed on other systems may be different. For details on other Linux distributions
and the versions tested, please consult the DPDK Release Notes.

• GNU make.
• coreutils: cmp, sed, grep, arch, etc.
• gcc: version 4.9 or later is recommended for all platforms. On some distributions, some
specific compiler flags and linker flags are enabled by default and affect performance
(-fstack-protector, for example). Please refer to the documentation of your distri-
bution and to gcc -dumpspecs.
• libc headers, often packaged as gcc-multilib (glibc-devel.i686 /
libc6-dev-i386; glibc-devel.x86_64 / libc6-dev for 64-bit compilation
on Intel architecture; glibc-devel.ppc64 for 64 bit IBM Power architecture;)
• Linux kernel headers or sources required to build kernel modules. (kernel-devel.x86_64;
kernel-devel.ppc64)
• Additional packages required for 32-bit compilation on 64-bit systems are:
– glibc.i686, libgcc.i686, libstdc++.i686 and glibc-devel.i686 for Intel i686/x86_64;
– glibc.ppc64, libgcc.ppc64, libstdc++.ppc64 and glibc-devel.ppc64 for IBM ppc_64;

Note: x86_x32 ABI is currently supported with distribution packages only on Ubuntu
releases newer than 13.10 or recent Debian distributions. The only supported compiler is gcc 4.9+.

• Python, version 2.7+ or 3.2+, to use various helper scripts included in the DPDK package.


Optional Tools:
• Intel® C++ Compiler (icc). For installation, additional libraries may be required. See the
icc Installation Guide found in the Documentation directory under the compiler installa-
tion.
• IBM® Advance ToolChain for Powerlinux. This is a set of open source development
tools and runtime libraries which allows users to take leading edge advantage of IBM’s
latest POWER hardware features on Linux. To install it, see the IBM official installation
document.
• libpcap headers and libraries (libpcap-devel) to compile and use the libpcap-based
poll-mode driver. This driver is disabled by default and can be enabled by setting
CONFIG_RTE_LIBRTE_PMD_PCAP=y in the build time config file.
• libarchive headers and library are needed for some unit tests using tar to get their re-
sources.

Running DPDK Applications

To run a DPDK application, some customization may be required on the target machine.

System Software

Required:
• Kernel version >= 2.6.34
The kernel version in use can be checked using the command:
uname -r

• glibc >= 2.7 (for features related to cpuset)


The version can be checked using the ldd --version command.
• Kernel configuration
In the Fedora OS and other common distributions, such as Ubuntu, or Red Hat Enter-
prise Linux, the vendor supplied kernel configurations can be used to run most DPDK
applications.
For other kernel builds, options which should be enabled for DPDK include:
– UIO support
– HUGETLBFS
– PROC_PAGE_MONITOR support
– HPET and HPET_MMAP configuration options should also be enabled if HPET sup-
port is required. See the section on High Precision Event Timer (HPET) Functional-
ity for more details.


Use of Hugepages in the Linux Environment

Hugepage support is required for the large memory pool allocation used for packet buffers (the
HUGETLBFS option must be enabled in the running kernel as indicated in the previous section).
By using hugepage allocations, performance is increased since fewer pages, and therefore
fewer Translation Lookaside Buffer (TLB) entries (high speed translation caches), are needed,
which reduces the time it takes to translate a virtual page address to a physical page address.
Without hugepages, high TLB miss rates would occur with the standard 4k page size, slowing
performance.

Reserving Hugepages for DPDK Use

The allocation of hugepages should be done at boot time or as soon as possible after system
boot to prevent memory from being fragmented in physical memory. To reserve hugepages at
boot time, a parameter is passed to the Linux kernel on the kernel command line.
For 2 MB pages, just pass the hugepages option to the kernel. For example, to reserve 1024
pages of 2 MB, use:
hugepages=1024

For other hugepage sizes, for example 1G pages, the size must be specified explicitly and can
also be optionally set as the default hugepage size for the system. For example, to reserve 4G
of hugepage memory in the form of four 1G pages, the following options should be passed to
the kernel:
default_hugepagesz=1G hugepagesz=1G hugepages=4

Note: The hugepage sizes that a CPU supports can be determined from the CPU flags on Intel
architecture. If pse exists, 2M hugepages are supported; if pdpe1gb exists, 1G hugepages are
supported. On IBM Power architecture, the supported hugepage sizes are 16MB and 16GB.

Note: For 64-bit applications, it is recommended to use 1 GB hugepages if the platform
supports them.

In the case of a dual-socket NUMA system, the number of hugepages reserved at boot time is
generally divided equally between the two sockets (on the assumption that sufficient memory
is present on both sockets).
See the Documentation/kernel-parameters.txt file in your Linux source tree for further details
of these and other kernel options.
Alternative:
For 2 MB pages, there is also the option of allocating hugepages after the system has booted.
This is done by echoing the number of hugepages required to a nr_hugepages file in the
/sys/devices/ directory. For a single-node system, the command to use is as follows (as-
suming that 1024 pages are required):
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

On a NUMA machine, pages should be allocated explicitly on separate nodes:


echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages


echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages

Note: For 1G pages, it is not possible to reserve the hugepage memory after the system has
booted.
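
Whether the pages were reserved successfully can be verified from the kernel’s hugepage
counters, for example:
grep Huge /proc/meminfo
cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages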

Using Hugepages with the DPDK

Once the hugepage memory is reserved, to make the memory available for DPDK use, perform
the following steps:
mkdir /mnt/huge
mount -t hugetlbfs nodev /mnt/huge

The mount point can be made permanent across reboots, by adding the following line to the
/etc/fstab file:
nodev /mnt/huge hugetlbfs defaults 0 0

For 1GB pages, the page size must be specified as a mount option:
nodev /mnt/huge_1GB hugetlbfs pagesize=1GB 0 0
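
Equivalently, a 1 GB hugetlbfs instance can be mounted manually by passing the same option
to the mount command, for example:
mkdir /mnt/huge_1GB
mount -t hugetlbfs -o pagesize=1GB nodev /mnt/huge_1GB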

Xen Domain0 Support in the Linux Environment

The existing memory management implementation is based on the Linux kernel hugepage
mechanism. On the Xen hypervisor, hugepage support for DomainU (DomU) Guests means
that DPDK applications work as normal for guests.
However, Domain0 (Dom0) does not support hugepages. To work around this limitation, a new
kernel module rte_dom0_mm is added to facilitate the allocation and mapping of memory via
IOCTL (allocation) and MMAP (mapping).

Enabling Xen Dom0 Mode in the DPDK

By default, Xen Dom0 mode is disabled in the DPDK build configuration files. To support
Xen Dom0, the CONFIG_RTE_LIBRTE_XEN_DOM0 setting should be changed to “y”, which
enables the Xen Dom0 mode at compile time.
Furthermore, the CONFIG_RTE_EAL_ALLOW_INV_SOCKET_ID setting should also be
changed to “y” in the case of the wrong socket ID being received.

Loading the DPDK rte_dom0_mm Module

To run any DPDK application on Xen Dom0, the rte_dom0_mm module must be loaded into
the running kernel with rsv_memsize option. The module is found in the kmod sub-directory
of the DPDK target directory. This module should be loaded using the insmod command as
shown below (assuming that the current directory is the DPDK target directory):
sudo insmod kmod/rte_dom0_mm.ko rsv_memsize=X

The value X cannot be greater than 4096 (MB).


Configuring Memory for DPDK Use

After the rte_dom0_mm.ko kernel module has been loaded, the user must configure the mem-
ory size for DPDK usage. This is done by echoing the memory size to a memsize file in the
/sys/devices/ directory. Use the following command (assuming that 2048 MB is required):
echo 2048 > /sys/kernel/mm/dom0-mm/memsize-mB/memsize

The user can also check how much memory has already been used:
cat /sys/kernel/mm/dom0-mm/memsize-mB/memsize_rsvd

Xen Domain0 does not support NUMA configuration; as a result, the --socket-mem command
line option is invalid for Xen Domain0.

Note: The memsize value cannot be greater than the rsv_memsize value.

Running the DPDK Application on Xen Domain0

To run the DPDK application on Xen Domain0, an extra command line option --xen-dom0 is
required.

Compiling the DPDK Target from Source

Note: Parts of this process can also be done using the setup script described in the Quick
Start Setup Script section of this document.

Install the DPDK and Browse Sources

First, uncompress the archive and move to the uncompressed DPDK source directory:
tar xJf dpdk-<version>.tar.xz
cd dpdk-<version>

The DPDK is composed of several directories:


• lib: Source code of DPDK libraries
• drivers: Source code of DPDK poll-mode drivers
• app: Source code of DPDK applications (automatic tests)
• examples: Source code of DPDK application examples
• config, buildtools, mk: Framework-related makefiles, scripts and configuration

Installation of DPDK Target Environments

The format of a DPDK target is:


ARCH-MACHINE-EXECENV-TOOLCHAIN


where:
• ARCH can be: i686, x86_64, ppc_64
• MACHINE can be: native, power8
• EXECENV can be: linuxapp, bsdapp
• TOOLCHAIN can be: gcc, icc
The targets to be installed depend on the 32-bit and/or 64-bit packages and compilers installed
on the host. Available targets can be found in the DPDK/config directory. The defconfig_ prefix
should not be used.

Note: Configuration files are provided with the RTE_MACHINE optimization level set. Within
the configuration files, the RTE_MACHINE configuration value is set to native, which means that
the compiled software is tuned for the platform on which it is built. For more information on this
setting, and its possible values, see the DPDK Programmers Guide.

When using the Intel® C++ Compiler (icc), one of the following commands should be invoked
for 64-bit or 32-bit use respectively. Notice that the shell scripts update the $PATH variable and
therefore should not be performed in the same session. Also, verify the compiler’s installation
directory since the path may be different:
source /opt/intel/bin/iccvars.sh intel64
source /opt/intel/bin/iccvars.sh ia32

To install and make targets, use the make install T=<target> command in the top-level
DPDK directory.
For example, to compile a 64-bit target using icc, run:
make install T=x86_64-native-linuxapp-icc

To compile a 32-bit build using gcc, the make command should be:
make install T=i686-native-linuxapp-gcc

To prepare a target without building it, for example, if the configuration changes need to be
made before compilation, use the make config T=<target> command:
make config T=x86_64-native-linuxapp-gcc

Warning: Any kernel modules to be used, e.g. igb_uio, kni, must be compiled with
the same kernel as the one running on the target. If the DPDK is not being built on the
target machine, the RTE_KERNELDIR environment variable should be used to point the
compilation at a copy of the kernel version to be used on the target machine.

Once the target environment is created, the user may move to the target environment directory
and continue to make code changes and re-compile. The user may also make modifications to
the compile-time DPDK configuration by editing the .config file in the build directory. (This is a
build-local copy of the defconfig file from the top- level config directory).
cd x86_64-native-linuxapp-gcc
vi .config
make

In addition, the make clean command can be used to remove any existing compiled files for a
subsequent full, clean rebuild of the code.


Browsing the Installed DPDK Environment Target

Once a target is created it contains all libraries, including poll-mode drivers, and header files
for the DPDK environment that are required to build customer applications. In addition, the test
and testpmd applications are built under the build/app directory, which may be used for testing.
A kmod directory is also present that contains kernel modules which may be loaded if needed.
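
For example, listing a freshly built x86_64-native-linuxapp-gcc target might show something
like the following (the exact contents can vary between releases):
ls x86_64-native-linuxapp-gcc
app build include kmod lib Makefile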

Loading Modules to Enable Userspace IO for DPDK

To run any DPDK application, a suitable uio module can be loaded into the running kernel.
In many cases, the standard uio_pci_generic module included in the Linux kernel can
provide the uio capability. This module can be loaded using the command:
sudo modprobe uio_pci_generic

As an alternative to the uio_pci_generic, the DPDK also includes the igb_uio module which
can be found in the kmod subdirectory referred to above. It can be loaded as shown below:
sudo modprobe uio
sudo insmod kmod/igb_uio.ko

Note: For some devices which lack support for legacy interrupts, e.g. virtual function (VF)
devices, the igb_uio module may be needed in place of uio_pci_generic.

DPDK releases 1.7 and later provide VFIO support, so the use of UIO is optional for platforms
that support VFIO.

Loading VFIO Module

To run a DPDK application and make use of VFIO, the vfio-pci module must be loaded:
sudo modprobe vfio-pci

Note that in order to use VFIO, your kernel must support it. VFIO kernel modules have been
included in the Linux kernel since version 3.6.0 and are usually present by default; however,
please consult your distribution’s documentation to make sure that is the case.
Also, to use VFIO, both kernel and BIOS must support and be configured to use IO virtualiza-
tion (such as Intel® VT-d).
For proper operation of VFIO when running DPDK applications as a non-privileged user, cor-
rect permissions should also be set up. This can be done by using the DPDK setup script
(called dpdk-setup.sh and located in the usertools directory).
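
As a quick sanity check that the IOMMU is enabled and VFIO will be usable, the IOMMU
groups can be inspected, for example:
ls /sys/kernel/iommu_groups
If no groups are listed, the IOMMU is most likely not enabled in the BIOS or on the kernel
command line.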

Binding and Unbinding Network Ports to/from the Kernel Modules

As of release 1.4, DPDK applications no longer automatically unbind all supported network
ports from the kernel driver in use. Instead, all ports that are to be used by a DPDK ap-
plication must be bound to the uio_pci_generic, igb_uio or vfio-pci module before
the application is run. Any network ports under Linux* control will be ignored by the DPDK
poll-mode drivers and cannot be used by the application.


Warning: The DPDK will, by default, no longer automatically unbind network ports from
the kernel driver at startup. Any ports to be used by a DPDK application must be unbound
from Linux* control and bound to the uio_pci_generic, igb_uio or vfio-pci module
before the application is run.

To bind ports to the uio_pci_generic, igb_uio or vfio-pci module for DPDK use, and
then subsequently return ports to Linux* control, a utility script called dpdk-devbind.py is
provided in the usertools subdirectory. This utility can be used to provide a view of the current
state of the network ports on the system, and to bind and unbind those ports from the different
kernel modules, including the uio and vfio modules. The following are some examples of how
the script can be used. A full description of the script and its parameters can be obtained by
calling the script with the --help or --usage options. Note that the uio or vfio kernel modules
to be used should be loaded into the kernel before running the dpdk-devbind.py script.

Warning: Due to the way VFIO works, there are certain limitations as to which devices can
be used with VFIO. Mainly it comes down to how IOMMU groups work. Any Virtual Function
device can be used with VFIO on its own, but physical devices will require either all ports
bound to VFIO, or some of them bound to VFIO while the others are not bound to anything at
all.
If your device is behind a PCI-to-PCI bridge, the bridge will then be part of the IOMMU
group that your device is in. Therefore, the bridge driver should also be unbound from
the bridge PCI device for VFIO to work with devices behind the bridge.

Warning: While any user can run the dpdk-devbind.py script to view the status of the
network ports, binding or unbinding network ports requires root privileges.

To see the status of all network ports on the system:


./usertools/dpdk-devbind.py --status

Network devices using DPDK-compatible driver


============================================
0000:82:00.0 '82599EB 10-GbE NIC' drv=uio_pci_generic unused=ixgbe
0000:82:00.1 '82599EB 10-GbE NIC' drv=uio_pci_generic unused=ixgbe

Network devices using kernel driver


===================================
0000:04:00.0 'I350 1-GbE NIC' if=em0 drv=igb unused=uio_pci_generic *Active*
0000:04:00.1 'I350 1-GbE NIC' if=eth1 drv=igb unused=uio_pci_generic
0000:04:00.2 'I350 1-GbE NIC' if=eth2 drv=igb unused=uio_pci_generic
0000:04:00.3 'I350 1-GbE NIC' if=eth3 drv=igb unused=uio_pci_generic

Other network devices


=====================
<none>

To bind device eth1, 04:00.1, to the uio_pci_generic driver:


./usertools/dpdk-devbind.py --bind=uio_pci_generic 04:00.1

or, alternatively,
./usertools/dpdk-devbind.py --bind=uio_pci_generic eth1

To restore device 82:00.0 to its original kernel binding:


./usertools/dpdk-devbind.py --bind=ixgbe 82:00.0


Compiling and Running Sample Applications

This chapter describes how to compile and run applications in a DPDK environment. It also
provides a pointer to where sample applications are stored.

Note: Parts of this process can also be done using the setup script described in the Quick Start
Setup Script section of this document.

Compiling a Sample Application

Once a DPDK target environment directory has been created (such as
x86_64-native-linuxapp-gcc), it contains all libraries and header files required to
build an application.
When compiling an application in the Linux* environment on the DPDK, the following variables
must be exported:
• RTE_SDK - Points to the DPDK installation directory.
• RTE_TARGET - Points to the DPDK target environment directory.
The following is an example of creating the helloworld application, which runs in the DPDK
Linux environment. This example may be found in the ${RTE_SDK}/examples directory.
The directory contains the main.c file. This file, when combined with the libraries in the
DPDK target environment, calls the various functions to initialize the DPDK environment, then
launches an entry point (dispatch application) for each core to be utilized. By default, the binary
is generated in the build directory.
cd examples/helloworld/
export RTE_SDK=$HOME/DPDK
export RTE_TARGET=x86_64-native-linuxapp-gcc

make
CC main.o
LD helloworld
INSTALL-APP helloworld
INSTALL-MAP helloworld.map

ls build/app
helloworld helloworld.map

Note: In the above example, helloworld was in the directory structure of the DPDK. How-
ever, it could have been located outside the directory structure to keep the DPDK structure
intact. In the following case, the helloworld application is copied to a new directory as a
new starting point.
export RTE_SDK=/home/user/DPDK
cp -r ${RTE_SDK}/examples/helloworld my_rte_app
cd my_rte_app/
export RTE_TARGET=x86_64-native-linuxapp-gcc

make
CC main.o
LD helloworld


INSTALL-APP helloworld
INSTALL-MAP helloworld.map

Running a Sample Application

Warning: The UIO drivers and hugepages must be setup prior to running an application.

Warning: Any ports to be used by the application must be already bound to an appropriate
kernel module, as described in Binding and Unbinding Network Ports to/from the Kernel
Modules, prior to running the application.

The application is linked with the DPDK target environment’s Environmental Abstraction Layer
(EAL) library, which provides some options that are generic to every DPDK application.
The following is the list of options that can be given to the EAL:
./rte-app -c COREMASK [-n NUM] [-b <domain:bus:devid.func>] \
[--socket-mem=MB,...] [-m MB] [-r NUM] [-v] [--file-prefix] \
[--proc-type <primary|secondary|auto>] [--xen-dom0]

The EAL options are as follows:


• -c COREMASK: A hexadecimal bitmask of the cores to run on. Note that core number-
ing can change between platforms and should be determined beforehand.
• -n NUM: Number of memory channels per processor socket.
• -b <domain:bus:devid.func>: Blacklisting of ports; prevent EAL from using speci-
fied PCI device (multiple -b options are allowed).
• --use-device: use the specified Ethernet device(s) only. Use comma-separated
[domain:]bus:devid.func values. Cannot be used with the -b option.
• --socket-mem: Memory to allocate from hugepages on specific sockets.
• -m MB: Memory to allocate from hugepages, regardless of processor socket. It is rec-
ommended that --socket-mem be used instead of this option.
• -r NUM: Number of memory ranks.
• -v: Display version information on startup.
• --huge-dir: The directory where hugetlbfs is mounted.
• --file-prefix: The prefix text used for hugepage filenames.
• --proc-type: The type of process instance.
• --xen-dom0: Support application running on Xen Domain0 without hugetlbfs.
• --vmware-tsc-map: Use VMware TSC map instead of native RDTSC.
• --base-virtaddr: Specify base virtual address.
• --vfio-intr: Specify interrupt type to be used by VFIO (has no effect if VFIO is not
used).


The -c option is mandatory; the others are optional.


Copy the DPDK application binary to your target, then run the application as follows (assuming
the platform has four memory channels per processor socket, and that cores 0-3 are present
and are to be used for running the application):
./helloworld -c f -n 4

Note: The --proc-type and --file-prefix EAL options are used for running multiple
DPDK processes. See the “Multi-process Sample Application” chapter in the DPDK Sample
Applications User Guide and the DPDK Programmers Guide for more details.

Logical Core Use by Applications

The coremask parameter is always mandatory for DPDK applications. Each bit of the mask
corresponds to the equivalent logical core number as reported by Linux. Since these logical
core numbers, and their mapping to specific cores on specific NUMA sockets, can vary from
platform to platform, it is recommended that the core layout for each platform be considered
when choosing the coremask to use in each case.
On initialization of the EAL layer by a DPDK application, the logical cores to be used and their
socket location are displayed. This information can also be determined for all cores on the
system by examining the /proc/cpuinfo file, for example, by running cat /proc/cpuinfo.
The physical id attribute listed for each processor indicates the CPU socket to which it belongs.
This can be useful when using other processors to understand the mapping of the logical cores
to the sockets.
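
For example, the mapping of each logical core to its socket can be listed directly, with the
physical id field giving the socket:
grep -E '^(processor|physical id)' /proc/cpuinfo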

Note: A more graphical view of the logical core layout may be obtained using the lstopo
Linux utility. On Fedora Linux, this may be installed and run using the following command:
sudo yum install hwloc
./lstopo

Warning: The logical core layout can change between different board layouts and should
be checked before selecting an application coremask.

Hugepage Memory Use by Applications

When running an application, it is recommended to use the same amount of memory as that
allocated for hugepages. This is done automatically by the DPDK application at startup, if no
-m or --socket-mem parameter is passed to it when run.
If more memory is requested by explicitly passing a -m or --socket-mem value, the applica-
tion fails. However, the application itself can also fail if the user requests less memory than the
reserved amount of hugepage-memory, particularly if using the -m option. The reason is as
follows. Suppose the system has 1024 reserved 2 MB pages in socket 0 and 1024 in socket 1.
If the user requests 128 MB of memory, the 64 pages may not match the constraints:
• The hugepage memory may be given to the application by the kernel in socket 1 only. In
this case, if the application attempts to create an object, such as a ring or memory pool


in socket 0, it fails. To avoid this issue, it is recommended that the --socket-mem option
be used instead of the -m option.
• These pages can be located anywhere in physical memory, and, although the DPDK EAL
will attempt to allocate memory in contiguous blocks, it is possible that the pages will not
be contiguous. In this case, the application is not able to allocate big memory pools.
The socket-mem option can be used to request specific amounts of memory for specific sock-
ets. This is accomplished by supplying the --socket-mem flag followed by amounts of mem-
ory requested on each socket, for example, supply --socket-mem=0,512 to try and reserve
512 MB for socket 1 only. Similarly, on a four socket system, to allocate 1 GB memory on
each of sockets 0 and 2 only, the parameter --socket-mem=1024,0,1024 can be used.
No memory will be reserved on any CPU socket that is not explicitly referenced, for example,
socket 3 in this case. If the DPDK cannot allocate enough memory on each socket, the EAL
initialization fails.
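
As an illustration, a hypothetical helloworld invocation that reserves 512 MB on each of
sockets 0 and 1 would be:
./helloworld -c 0x3 -n 4 --socket-mem=512,512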

Additional Sample Applications

Additional sample applications are included in the ${RTE_SDK}/examples directory. These
sample applications may be built and run in a manner similar to that described in earlier sec-
tions in this manual. In addition, see the DPDK Sample Applications User Guide for a descrip-
tion of the application, specific instructions on compilation and execution and some explanation
of the code.

Additional Test Applications

In addition, there are two other applications that are built when the libraries are created. The
source files for these are in the DPDK/app directory and are called test and testpmd. Once the
libraries are created, they can be found in the build/app directory.
• The test application provides a variety of specific tests for the various functions in the
DPDK.
• The testpmd application provides a number of different packet throughput tests and ex-
amples of features such as how to use the Flow Director found in the Intel® 82599 10
Gigabit Ethernet Controller.

Enabling Additional Functionality

High Precision Event Timer (HPET) Functionality

BIOS Support

The High Precision Event Timer (HPET) must be enabled in the platform BIOS if the HPET is to be
used. Otherwise, the Time Stamp Counter (TSC) is used by default. The BIOS is typically
accessed by pressing F2 while the platform is starting up. The user can then navigate to
the HPET option. On the Crystal Forest platform BIOS, the path is: Advanced -> PCH-IO
Configuration -> High Precision Timer -> (Change from Disabled to Enabled if necessary).
On a system that has already booted, the following command can be issued to check if HPET
is enabled:


grep hpet /proc/timer_list

If no entries are returned, HPET must be enabled in the BIOS (as per the instructions above)
and the system rebooted.

Linux Kernel Support

The DPDK makes use of the platform HPET timer by mapping the timer counter into the pro-
cess address space, and as such, requires that the HPET_MMAP kernel configuration option be
enabled.

Warning: On Fedora, and other common distributions such as Ubuntu, the HPET_MMAP
kernel option is not enabled by default. To recompile the Linux kernel with this option en-
abled, please consult your distribution’s documentation for the relevant instructions.

Enabling HPET in the DPDK

By default, HPET support is disabled in the DPDK build configuration files. To use HPET,
the CONFIG_RTE_LIBEAL_USE_HPET setting should be changed to y, which will enable the
HPET settings at compile time.
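
For example, assuming the x86_64-native-linuxapp-gcc target built earlier, the build-local
configuration could be changed and the target rebuilt as follows (a sketch; the setting defaults
to n):
cd x86_64-native-linuxapp-gcc
sed -i 's/CONFIG_RTE_LIBEAL_USE_HPET=n/CONFIG_RTE_LIBEAL_USE_HPET=y/' .config
make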
For an application to use the rte_get_hpet_cycles() and rte_get_hpet_hz() API
calls, and optionally to make the HPET the default time source for the rte_timer library, the
new rte_eal_hpet_init() API call should be called at application initialization. This API
call will ensure that the HPET is accessible, returning an error to the application if it is not, for
example, if HPET_MMAP is not enabled in the kernel. The application can then determine what
action to take, if any, if the HPET is not available at run-time.

Note: For applications that require timing APIs, but not the HPET timer specifically, it is recom-
mended that the rte_get_timer_cycles() and rte_get_timer_hz() API calls be used
instead of the HPET-specific APIs. These generic APIs can work with either TSC or HPET time
sources, depending on what is requested by an application call to rte_eal_hpet_init(),
if any, and on what is available on the system at runtime.

Running DPDK Applications Without Root Privileges

Although applications using the DPDK use network ports and other hardware resources di-
rectly, with a number of small permission adjustments it is possible to run these applications
as a user other than “root”. To do so, the ownership, or permissions, on the following Linux file
system objects should be adjusted to ensure that the Linux user account being used to run the
DPDK application has access to them:
• All directories which serve as hugepage mount points, for example, /mnt/huge
• The userspace-io device files in /dev, for example, /dev/uio0, /dev/uio1, and so on
• The userspace-io sysfs config and resource files, for example for uio0:
/sys/class/uio/uio0/device/config
/sys/class/uio/uio0/device/resource*


• If the HPET is to be used, /dev/hpet

Note: On some Linux installations, /dev/hugepages is also a hugepage mount point cre-
ated by default.
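
For example, access to the objects listed above could be granted to a hypothetical dpdk group
as follows (a sketch; the mount point and device names will vary between systems):
chgrp dpdk /mnt/huge /dev/uio0
chmod g+rwx /mnt/huge
chmod g+rw /dev/uio0 \
    /sys/class/uio/uio0/device/config \
    /sys/class/uio/uio0/device/resource*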

Power Management and Power Saving Functionality

Enhanced Intel SpeedStep® Technology must be enabled in the platform BIOS if the
power management feature of DPDK is to be used. Otherwise, the sysfs directory
/sys/devices/system/cpu/cpu0/cpufreq will not exist, and CPU frequency-based
power management cannot be used. Consult the relevant BIOS documentation to determine
how these settings can be accessed.
For example, on some Intel reference platform BIOS variants, the path to Enhanced Intel
SpeedStep® Technology is:
Advanced
-> Processor Configuration
-> Enhanced Intel SpeedStep® Tech

In addition, C3 and C6 should be enabled as well for power management. The paths to C3 and
C6 on the same platform BIOS are:
Advanced
-> Processor Configuration
-> Processor C3
Advanced
-> Processor Configuration
-> Processor C6

Using Linux Core Isolation to Reduce Context Switches

While the threads used by a DPDK application are pinned to logical cores on the system,
it is possible for the Linux scheduler to run other tasks on those cores also. To help prevent
additional workloads from running on those cores, it is possible to use the isolcpus Linux
kernel parameter to isolate them from the general Linux scheduler.
For example, if DPDK applications are to run on logical cores 2, 4 and 6, the following should
be added to the kernel parameter list:
isolcpus=2,4,6
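
On recent kernels, the resulting isolated core list can be verified after boot, for example:
cat /sys/devices/system/cpu/isolated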

Loading the DPDK KNI Kernel Module

To run the DPDK Kernel NIC Interface (KNI) sample application, an extra kernel module (the
kni module) must be loaded into the running kernel. The module is found in the kmod sub-
directory of the DPDK target directory. Similar to the loading of the igb_uio module, this
module should be loaded using the insmod command as shown below (assuming that the
current directory is the DPDK target directory):
insmod kmod/rte_kni.ko


Note: See the “Kernel NIC Interface Sample Application” chapter in the DPDK Sample Appli-
cations User Guide for more details.

Using Linux IOMMU Pass-Through to Run DPDK with Intel® VT-d

To enable Intel® VT-d in a Linux kernel, a number of kernel configuration options must be set.
These include:
• IOMMU_SUPPORT
• IOMMU_API
• INTEL_IOMMU
In addition, to run the DPDK with Intel® VT-d, the iommu=pt kernel parameter must be
used when using the igb_uio driver. This results in pass-through of the DMAR (DMA Remap-
ping) lookup in the host. Also, if INTEL_IOMMU_DEFAULT_ON is not set in the kernel, the
intel_iommu=on kernel parameter must be used too. This ensures that the Intel IOMMU is
being initialized as expected.
Please note that while the use of iommu=pt is compulsory for the igb_uio driver, the vfio-pci
driver can actually work with both iommu=pt and iommu=on.
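
Whether the IOMMU was actually enabled at boot can usually be confirmed from the kernel
log, for example:
dmesg | grep -e DMAR -e IOMMU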

High Performance of Small Packets on 40G NIC

As the latest firmware image might contain fixes that improve performance, a firmware update
might be needed to achieve high performance. Check with your local Intel Network Division
application engineers for firmware updates. Users should consult the release notes specific to
a DPDK release to identify the validated firmware version for a NIC using the i40e driver.

Use 16 Byte RX Descriptor Size

The i40e PMD supports both 16 and 32 byte RX descriptor sizes, and the 16 byte size can help
achieve high performance with small packets. The CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC
setting in the config files can be changed to use 16 byte RX descriptors.
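
For example, the setting could be changed in the build-local .config of the target built earlier
and the target then rebuilt (a sketch; the setting defaults to n):
sed -i 's/CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n/CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=y/' \
    x86_64-native-linuxapp-gcc/.config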

High Performance and per Packet Latency Tradeoff

Due to the hardware design, an interrupt signal inside the NIC is needed for per packet de-
scriptor write-back. The minimum interval between interrupts can be set at compile time by
CONFIG_RTE_LIBRTE_I40E_ITR_INTERVAL in the configuration files. Though there is a default
configuration, the interval can be tuned with that configuration item depending on whether the
user cares more about performance or per packet latency.


Quick Start Setup Script

The dpdk-setup.sh script, found in the usertools subdirectory, allows the user to perform the
following tasks:
• Build the DPDK libraries
• Insert and remove the DPDK IGB_UIO kernel module
• Insert and remove VFIO kernel modules
• Insert and remove the DPDK KNI kernel module
• Create and delete hugepages for NUMA and non-NUMA cases
• View network port status and reserve ports for DPDK application use
• Set up permissions for using VFIO as a non-privileged user
• Run the test and testpmd applications
• Look at hugepages in the meminfo
• List hugepages in /mnt/huge
• Remove built DPDK libraries
Once these steps have been completed for one of the EAL targets, the user may compile their
own application that links in the EAL libraries to create the DPDK image.

Script Organization

The dpdk-setup.sh script is logically organized into a series of steps that a user performs in
sequence. Each step provides a number of options that guide the user to completing the
desired task. The following is a brief synopsis of each step.
Step 1: Build DPDK Libraries
Initially, the user must select a DPDK target to choose the correct target type and compiler
options to use when building the libraries.
The user must have all libraries, modules, updates and compilers installed in the system prior
to this, as described in the earlier chapters in this Getting Started Guide.
Step 2: Setup Environment
The user configures the Linux* environment to support the running of DPDK applications.
Hugepages can be set up for NUMA or non-NUMA systems. Any existing hugepages will
be removed. The DPDK kernel module that is needed can also be inserted in this step, and
network ports may be bound to this module for DPDK application use.
Step 3: Run an Application
The user may run the test application once the other steps have been performed. The test
application allows the user to run a series of functional tests for the DPDK. The testpmd appli-
cation, which supports the receiving and sending of packets, can also be run.
Step 4: Examining the System
This step provides some tools for examining the status of hugepage mappings.


Step 5: System Cleanup


The final step has options for restoring the system to its original state.

Use Cases

The following are some examples of how to use the dpdk-setup.sh script. The script should be
run using the source command. Some options in the script prompt the user for further data
before proceeding.

Warning: The dpdk-setup.sh script should be run with root privileges.

source usertools/dpdk-setup.sh

------------------------------------------------------------------------

RTE_SDK exported as /home/user/rte

------------------------------------------------------------------------

Step 1: Select the DPDK environment to build

------------------------------------------------------------------------

[1] i686-native-linuxapp-gcc

[2] i686-native-linuxapp-icc

[3] ppc_64-power8-linuxapp-gcc

[4] x86_64-native-bsdapp-clang

[5] x86_64-native-bsdapp-gcc

[6] x86_64-native-linuxapp-clang

[7] x86_64-native-linuxapp-gcc

[8] x86_64-native-linuxapp-icc

------------------------------------------------------------------------

Step 2: Setup linuxapp environment

------------------------------------------------------------------------

[11] Insert IGB UIO module

[12] Insert VFIO module

[13] Insert KNI module

[14] Setup hugepage mappings for non-NUMA systems

[15] Setup hugepage mappings for NUMA systems

[16] Display current Ethernet device settings

[17] Bind Ethernet device to IGB UIO module


[18] Bind Ethernet device to VFIO module

[19] Setup VFIO permissions

------------------------------------------------------------------------

Step 3: Run test application for linuxapp environment

------------------------------------------------------------------------

[20] Run test application ($RTE_TARGET/app/test)

[21] Run testpmd application in interactive mode ($RTE_TARGET/app/testpmd)

------------------------------------------------------------------------

Step 4: Other tools

------------------------------------------------------------------------

[22] List hugepage info from /proc/meminfo

------------------------------------------------------------------------

Step 5: Uninstall and system cleanup

------------------------------------------------------------------------

[23] Uninstall all targets

[24] Unbind NICs from IGB UIO driver

[25] Remove IGB UIO module

[26] Remove VFIO module

[27] Remove KNI module

[28] Remove hugepage mappings

[29] Exit Script

Option:
The following selection demonstrates the creation of the x86_64-native-linuxapp-gcc
DPDK library.
Option: 7

================== Installing x86_64-native-linuxapp-gcc

Configuration done
== Build lib
...
Build complete
RTE_TARGET exported as x86_64-native-linuxapp-gcc

The following selection demonstrates the loading of the DPDK UIO driver.
Option: 11

Unloading any existing DPDK UIO module


Loading DPDK UIO module


The following selection demonstrates the creation of hugepages in a NUMA system. 1024 2
MB pages are assigned to each node. The result is that the application should be started with
-m 4096 in order to access both memory areas (this is done automatically if the -m option is
not provided).

Note: If prompts are displayed to remove temporary files, type ‘y’.

Option: 15

Removing currently reserved hugepages


mounting /mnt/huge and removing directory
Input the number of 2MB pages for each node
Example: to have 128MB of hugepages available per node,
enter '64' to reserve 64 * 2MB pages on each node
Number of pages for node0: 1024
Number of pages for node1: 1024
Reserving hugepages
Creating /mnt/huge and mounting as hugetlbfs

The following selection demonstrates the launch of the test application to run on a single core.
Option: 20

Enter hex bitmask of cores to execute test app on


Example: to execute app on cores 0 to 7, enter 0xff
bitmask: 0x01
Launching app
EAL: coremask set to 1
EAL: Detected lcore 0 on socket 0
...
EAL: Master core 0 is ready (tid=1b2ad720)
RTE>>

Applications

Once the user has run the dpdk-setup.sh script, built one of the EAL targets and set up
hugepages (if using one of the Linux EAL targets), the user can then move on to building
and running their application or one of the examples provided.
The examples in the /examples directory provide a good starting point to gain an understanding
of the operation of the DPDK. The following command sequence shows how the helloworld
sample application is built and run. As recommended in the section Logical Core Use by
Applications, the logical core layout of the platform should be determined when selecting a
core mask to use for an application.
cd helloworld/
make
CC main.o
LD helloworld
INSTALL-APP helloworld
INSTALL-MAP helloworld.map

sudo ./build/app/helloworld -c 0xf -n 3


[sudo] password for rte:

EAL: coremask set to f


EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 0 on socket 1


EAL: Detected lcore 2 as core 1 on socket 0


EAL: Detected lcore 3 as core 1 on socket 1
EAL: Setting up hugepage memory...
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f0add800000 (size = 0x200000)
EAL: Ask a virtual area of 0x3d400000 bytes
EAL: Virtual area found at 0x7f0aa0200000 (size = 0x3d400000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f0a9fc00000 (size = 0x400000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f0a9f600000 (size = 0x400000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f0a9f000000 (size = 0x400000)
EAL: Ask a virtual area of 0x800000 bytes
EAL: Virtual area found at 0x7f0a9e600000 (size = 0x800000)
EAL: Ask a virtual area of 0x800000 bytes
EAL: Virtual area found at 0x7f0a9dc00000 (size = 0x800000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f0a9d600000 (size = 0x400000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f0a9d000000 (size = 0x400000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f0a9ca00000 (size = 0x400000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f0a9c600000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f0a9c200000 (size = 0x200000)
EAL: Ask a virtual area of 0x3fc00000 bytes
EAL: Virtual area found at 0x7f0a5c400000 (size = 0x3fc00000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f0a5c000000 (size = 0x200000)
EAL: Requesting 1024 pages of size 2MB from socket 0
EAL: Requesting 1024 pages of size 2MB from socket 1
EAL: Master core 0 is ready (tid=de25b700)
EAL: Core 1 is ready (tid=5b7fe700)
EAL: Core 3 is ready (tid=5a7fc700)
EAL: Core 2 is ready (tid=5affd700)
hello from core 1
hello from core 2
hello from core 3
hello from core 0

How to get best performance with NICs on Intel platforms

This document is a step-by-step guide for getting high performance from DPDK applications
on Intel platforms.

Hardware and Memory Requirements

For best performance use an Intel Xeon class server system such as Ivy Bridge, Haswell or
newer.
Ensure that each memory channel has at least one memory DIMM inserted, and that the mem-
ory size for each is at least 4GB. Note: this has one of the most direct effects on performance.
You can check the memory configuration using dmidecode as follows:
dmidecode -t memory | grep Locator


Locator: DIMM_A1
Bank Locator: NODE 1
Locator: DIMM_A2
Bank Locator: NODE 1
Locator: DIMM_B1
Bank Locator: NODE 1
Locator: DIMM_B2
Bank Locator: NODE 1
...
Locator: DIMM_G1
Bank Locator: NODE 2
Locator: DIMM_G2
Bank Locator: NODE 2
Locator: DIMM_H1
Bank Locator: NODE 2
Locator: DIMM_H2
Bank Locator: NODE 2

The sample output above shows a total of 8 channels, from A to H, where each channel has 2
DIMMs.
You can also use dmidecode to determine the memory frequency:
dmidecode -t memory | grep Speed

Speed: 2133 MHz


Configured Clock Speed: 2134 MHz
Speed: Unknown
Configured Clock Speed: Unknown
Speed: 2133 MHz
Configured Clock Speed: 2134 MHz
Speed: Unknown
...
Speed: 2133 MHz
Configured Clock Speed: 2134 MHz
Speed: Unknown
Configured Clock Speed: Unknown
Speed: 2133 MHz
Configured Clock Speed: 2134 MHz
Speed: Unknown
Configured Clock Speed: Unknown

The output shows a speed of 2133 MHz (DDR4) for populated slots and Unknown for empty
slots. This aligns with the previous output, which showed two DIMM slots per channel with
one memory module populated in each channel.

Network Interface Card Requirements

Use a DPDK supported high end NIC such as the Intel XL710 40GbE.
Make sure each NIC has been flashed with the latest version of NVM/firmware.
Use PCIe Gen3 slots, such as Gen3 x8 or Gen3 x16 because PCIe Gen2 slots don’t provide
enough bandwidth for 2 x 10GbE and above. You can use lspci to check the speed of a PCI
slot using something like the following:
lspci -s 03:00.1 -vv | grep LnkSta

LnkSta: Speed 8GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- ...
LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ ...

When inserting NICs into PCI slots, always check the caption, such as CPU0 or CPU1, which
indicates the socket to which the slot is connected.


Care should be taken with NUMA. If you are using 2 or more ports from different NICs, it is best
to ensure that these NICs are on the same CPU socket. An example of how to determine this
is shown further below.

BIOS Settings

The following are some recommendations on BIOS settings. Different platforms will have dif-
ferent BIOS naming so the following is mainly for reference:
1. Before starting consider resetting all BIOS settings to their default.
2. Disable all power saving options such as: Power performance tuning, CPU P-State, CPU
C3 Report and CPU C6 Report.
3. Select Performance as the CPU Power and Performance policy.
4. Disable Turbo Boost to ensure the performance scaling increases with the number of
cores.
5. Set memory frequency to the highest available number, NOT auto.
6. Disable all virtualization options when you test the physical function of the NIC, and turn
on VT-d if you want to use VFIO.

Linux boot command line

The following are some recommendations on GRUB boot settings:


1. Use the default grub file as a starting point.
2. Reserve 1G huge pages via grub configurations. For example to reserve 8 huge pages
of 1G size:
default_hugepagesz=1G hugepagesz=1G hugepages=8

3. Isolate CPU cores which will be used for DPDK. For example:
isolcpus=2,3,4,5,6,7,8

4. If VFIO is to be used, add the following grub parameters (a combined example follows
this list):

iommu=pt intel_iommu=on
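
Putting the items above together, a hypothetical kernel command line in /etc/default/grub
might look like the following (a sketch; regenerate the grub configuration afterwards in the way
your distribution requires):
GRUB_CMDLINE_LINUX="default_hugepagesz=1G hugepagesz=1G hugepages=8 isolcpus=2,3,4,5,6,7,8 iommu=pt intel_iommu=on"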

Configurations before running DPDK

1. Build the DPDK target and reserve huge pages. See the earlier section on Use of
Hugepages in the Linux Environment for more details.
The following shell commands may help with building and configuration:
# Build DPDK target.
cd dpdk_folder
make install T=x86_64-native-linuxapp-gcc -j

# Get the hugepage size.


awk '/Hugepagesize/ {print $2}' /proc/meminfo

# Get the total huge page numbers.


awk '/HugePages_Total/ {print $2} ' /proc/meminfo


# Unmount the hugepages.


umount `awk '/hugetlbfs/ {print $2}' /proc/mounts`

# Create the hugepage mount folder.


mkdir -p /mnt/huge

# Mount to the specific folder.


mount -t hugetlbfs nodev /mnt/huge

2. Check the CPU layout using the DPDK cpu_layout utility:
cd dpdk_folder

usertools/cpu_layout.py

Or run lscpu to check the cores on each socket.


3. Check your NIC id and related socket id:
# List all the NICs with PCI address and device IDs.
lspci -nn | grep Eth

For example suppose your output was as follows:


82:00.0 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]
82:00.1 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]
85:00.0 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]
85:00.1 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]

Check the PCI device related numa node id:


cat /sys/bus/pci/devices/0000\:xx\:00.x/numa_node

Usually 0x:00.x is on socket 0 and 8x:00.x is on socket 1. Note: To get the best
performance, ensure that the cores and NICs are on the same socket. In the example
above, 85:00.0 is on socket 1 and should be used by cores on socket 1 for the best
performance.
4. Bind the test ports to DPDK compatible drivers, such as igb_uio. For example bind two
ports to a DPDK compatible driver and check the status:
# Bind ports 82:00.0 and 85:00.0 to dpdk driver
./dpdk_folder/usertools/dpdk-devbind.py -b igb_uio 82:00.0 85:00.0

# Check the port driver status


./dpdk_folder/usertools/dpdk-devbind.py --status

See dpdk-devbind.py --help for more details.


For more details about DPDK setup and Linux kernel requirements, see Compiling the DPDK
Target from Source.

Example of getting best performance for an Intel NIC

The following is an example of running the DPDK l3fwd sample application to get high perfor-
mance with an Intel server platform and Intel XL710 NICs. For specific 40G NIC configuration
please refer to the i40e NIC guide.
The example scenario is to get best performance with two Intel XL710 40GbE ports. See Fig.
1.1 for the performance test setup.


Fig. 1.1: Performance Test Setup

1. Add two Intel XL710 NICs to the platform, and use one port per card to get best perfor-
mance. The reason for using two NICs is to overcome a PCIe Gen3 limitation, since a
single slot cannot provide 80G bandwidth for two 40G ports, but two different PCIe Gen3
x8 slots can. Refer to the sample NICs output above, then we can select 82:00.0 and
85:00.0 as test ports:
82:00.0 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]
85:00.0 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]

2. Connect the ports to the traffic generator. For high speed testing, it’s best to use a
hardware traffic generator.
3. Check the PCI devices’ numa node (socket id) and get the core numbers on that exact
socket id. In this case, 82:00.0 and 85:00.0 are both in socket 1, and the cores on
socket 1 in the referenced platform are 18-35 and 54-71. Note: Don’t use 2 logical cores
on the same physical core (e.g. core18 has 2 logical cores, core18 and core54); instead,
use 2 logical cores from different physical cores (e.g. core18 and core19).
4. Bind these two ports to igb_uio.
5. For an XL710 40G port, at least two queue pairs are needed to achieve best performance,
so two queues per port will be required, and each queue pair will need a dedicated
CPU core for receiving/transmitting packets.
6. The DPDK sample application l3fwd will be used for performance testing, using two
ports for bi-directional forwarding. Compile the l3fwd sample with the default lpm
mode.
7. The command line for running l3fwd would be something like the following (see the
coremask note after this list):
./l3fwd -c 0x3c0000 -n 4 -w 82:00.0 -w 85:00.0 \
-- -p 0x3 --config '(0,0,18),(0,1,19),(1,0,20),(1,1,21)'

This means that the application uses core 18 for port 0, queue pair 0 forwarding, core 19
for port 0, queue pair 1 forwarding, core 20 for port 1, queue pair 0 forwarding, and core
21 for port 1, queue pair 1 forwarding.
8. Configure the traffic at a traffic generator.
• Start creating a stream on the packet generator.
• Set the Ethernet II type to 0x0800.
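
As a quick check of the coremask used in step 7, the mask for cores 18-21 (bits 18 through 21
set) can be computed in the shell:
printf '0x%x\n' $(( (1 << 18) | (1 << 19) | (1 << 20) | (1 << 21) ))
# prints 0x3c0000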


CHAPTER 2

Getting Started Guide for FreeBSD

Introduction

This document contains instructions for installing and configuring the Data Plane Development
Kit (DPDK) software. It is designed to get customers up and running quickly and describes
how to compile and run a DPDK application in a FreeBSD application (bsdapp) environment,
without going deeply into detail.
For a comprehensive guide to installing and using FreeBSD, the following handbook is available
from the FreeBSD Documentation Project: FreeBSD Handbook.

Note: The DPDK is now available as part of the FreeBSD ports collection. Installing via the
ports collection infrastructure is now the recommended way to install the DPDK on FreeBSD,
and is documented in the next chapter, Installing DPDK from the Ports Collection.

Documentation Roadmap

The following is a list of DPDK documents in the suggested reading order:


• Release Notes : Provides release-specific information, including supported features,
limitations, fixed issues, known issues and so on. Also, provides the answers to frequently
asked questions in FAQ format.
• Getting Started Guide (this document): Describes how to install and configure the
DPDK; designed to get users up and running quickly with the software.
• Programmer’s Guide: Describes:
– The software architecture and how to use it (through examples), specifically in a
Linux* application (linuxapp) environment
– The content of the DPDK, the build system (including the commands that can be
used in the root DPDK Makefile to build the development kit and an application) and
guidelines for porting an application
– Optimizations used in the software and those that should be considered for new
development
A glossary of terms is also provided.


• API Reference: Provides detailed information about DPDK functions, data structures
and other programming constructs.
• Sample Applications User Guide: Describes a set of sample applications. Each chap-
ter describes a sample application that showcases specific functionality and provides
instructions on how to compile, run and use the sample application.

Installing DPDK from the Ports Collection

The easiest way to get up and running with the DPDK on FreeBSD is to install it from the ports
collection. Details of getting and using the ports collection are documented in the FreeBSD
Handbook.

Note: Testing has been performed using FreeBSD 10.0-RELEASE (x86_64) and requires the
installation of the kernel sources, which should be included during the installation of FreeBSD.

Installing the DPDK FreeBSD Port

On a system with the ports collection installed in /usr/ports, the DPDK can be installed
using the commands:
cd /usr/ports/net/dpdk

make install

After the installation of the DPDK port, instructions will be printed on how to install the kernel
modules required to use the DPDK. A more complete version of these instructions can be
found in the sections Loading the DPDK contigmem Module and Loading the DPDK nic_uio
Module. Normally, lines like those below would be added to the file /boot/loader.conf.
# Reserve 2 x 1G blocks of contiguous memory using contigmem driver:
hw.contigmem.num_buffers=2
hw.contigmem.buffer_size=1073741824
contigmem_load="YES"

# Identify NIC devices for DPDK apps to use and load nic_uio driver:
hw.nic_uio.bdfs="2:0:0,2:0:1"
nic_uio_load="YES"

Compiling and Running the Example Applications

When the DPDK has been installed from the ports collection it installs its example
applications in /usr/local/share/dpdk/examples - also accessible via symlink as
/usr/local/share/examples/dpdk. These examples can be compiled and run as de-
scribed in Compiling and Running Sample Applications. In this case, the required environmen-
tal variables should be set as below:
• RTE_SDK=/usr/local/share/dpdk
• RTE_TARGET=x86_64-native-bsdapp-clang


Note: To install a copy of the DPDK compiled using gcc, please download the official DPDK
package from http://dpdk.org/ and install manually using the instructions given in the next chap-
ter, Compiling the DPDK Target from Source

An example application can therefore be copied to a user’s home directory and compiled and
run as below:
export RTE_SDK=/usr/local/share/dpdk

export RTE_TARGET=x86_64-native-bsdapp-clang

cp -r /usr/local/share/dpdk/examples/helloworld .

cd helloworld/

gmake
CC main.o
LD helloworld
INSTALL-APP helloworld
INSTALL-MAP helloworld.map

sudo ./build/helloworld -c F -n 2

EAL: Contigmem driver has 2 buffers, each of size 1GB


EAL: Sysctl reports 8 cpus
EAL: Detected lcore 0
EAL: Detected lcore 1
EAL: Detected lcore 2
EAL: Detected lcore 3
EAL: Support maximum 64 logical core(s) by configuration.
EAL: Detected 4 lcore(s)
EAL: Setting up physically contiguous memory...
EAL: Mapped memory segment 1 @ 0x802400000: len 1073741824
EAL: Mapped memory segment 2 @ 0x842400000: len 1073741824
EAL: WARNING: clock_gettime cannot use CLOCK_MONOTONIC_RAW and HPET
is not available - clock timings may be less accurate.
EAL: TSC frequency is ~3569023 KHz
EAL: PCI scan found 24 devices
EAL: Master core 0 is ready (tid=0x802006400)
EAL: Core 1 is ready (tid=0x802006800)
EAL: Core 3 is ready (tid=0x802007000)
EAL: Core 2 is ready (tid=0x802006c00)
EAL: PCI device 0000:01:00.0 on NUMA socket 0
EAL: probe driver: 8086:10fb rte_ixgbe_pmd
EAL: PCI memory mapped at 0x80074a000
EAL: PCI memory mapped at 0x8007ca000
EAL: PCI device 0000:01:00.1 on NUMA socket 0
EAL: probe driver: 8086:10fb rte_ixgbe_pmd
EAL: PCI memory mapped at 0x8007ce000
EAL: PCI memory mapped at 0x80084e000
EAL: PCI device 0000:02:00.0 on NUMA socket 0
EAL: probe driver: 8086:10fb rte_ixgbe_pmd
EAL: PCI memory mapped at 0x800852000
EAL: PCI memory mapped at 0x8008d2000
EAL: PCI device 0000:02:00.1 on NUMA socket 0
EAL: probe driver: 8086:10fb rte_ixgbe_pmd
EAL: PCI memory mapped at 0x801b3f000
EAL: PCI memory mapped at 0x8008d6000
hello from core 1
hello from core 2
hello from core 3


hello from core 0

Note: To run a DPDK process as a non-root user, adjust the permissions on the
/dev/contigmem and /dev/uio device nodes as described in section Running DPDK
Applications Without Root Privileges

Note: For an explanation of the command-line parameters that can be passed to an DPDK
application, see section Running a Sample Application.

Compiling the DPDK Target from Source

System Requirements

The DPDK and its applications require the GNU make system (gmake) to build on FreeBSD.
Optionally, gcc may also be used in place of clang to build the DPDK, in which case it too
must be installed prior to compiling the DPDK. The installation of these tools is covered in this
section.
Compiling the DPDK requires the FreeBSD kernel sources, which should be included during
the installation of FreeBSD on the development platform. The DPDK also requires the use of
FreeBSD ports to compile and function.
To use the FreeBSD ports system, it is required to update and extract the FreeBSD ports tree
by issuing the following commands:
portsnap fetch
portsnap extract

If the environment requires proxies for external communication, these can be set using:
setenv http_proxy <my_proxy_host>:<port>
setenv ftp_proxy <my_proxy_host>:<port>

The FreeBSD ports below need to be installed prior to building the DPDK. In general these can
be installed using the following set of commands:
cd /usr/ports/<port_location>

make config-recursive

make install

make clean

Each port location can be found using:


whereis <port_name>

The ports required and their locations are as follows:


• dialog4ports: /usr/ports/ports-mgmt/dialog4ports
• GNU make(gmake): /usr/ports/devel/gmake
• coreutils: /usr/ports/sysutils/coreutils


For compiling and using the DPDK with gcc, the compiler must be installed from the ports
collection:
• gcc: version 4.9 is recommended /usr/ports/lang/gcc49. Ensure that CPU_OPTS
is selected (default is OFF).
When running the make config-recursive command, a dialog may be presented to the user.
For the installation of the DPDK, the default options were used.

Note: To avoid multiple dialogs being presented to the user during make install, it is advisable
before running the make install command to re-run the make config-recursive command until
no more dialogs are seen.

Install the DPDK and Browse Sources

First, uncompress the archive and move to the DPDK source directory:
unzip DPDK-<version>.zip
cd DPDK-<version>

The DPDK is composed of several directories:


• lib: Source code of DPDK libraries
• app: Source code of DPDK applications (automatic tests)
• examples: Source code of DPDK applications
• config, buildtools, mk: Framework-related makefiles, scripts and configuration

Installation of the DPDK Target Environments

The format of a DPDK target is:


ARCH-MACHINE-EXECENV-TOOLCHAIN

Where:
• ARCH is: x86_64
• MACHINE is: native
• EXECENV is: bsdapp
• TOOLCHAIN is: gcc | clang
The configuration files for the DPDK targets can be found in the DPDK/config directory in the
form of:
defconfig_ARCH-MACHINE-EXECENV-TOOLCHAIN

Note: Configuration files are provided with the RTE_MACHINE optimization level set. Within
the configuration files, the RTE_MACHINE configuration value is set to native, which means that
the compiled software is tuned for the platform on which it is built. For more information on this
setting, and its possible values, see the DPDK Programmers Guide.


To make the target, use gmake install T=<target>.


For example to compile for FreeBSD use:
gmake install T=x86_64-native-bsdapp-clang

Note: If the compiler binary to be used does not correspond to that given in the TOOLCHAIN
part of the target, the compiler command may need to be explicitly specified. For example,
if compiling for gcc, where the gcc binary is called gcc4.9, the command would need to be
gmake install T=<target> CC=gcc4.9.

Browsing the Installed DPDK Environment Target

Once a target is created, it contains all the libraries and header files for the DPDK environment
that are required to build customer applications. In addition, the test and testpmd applications
are built under the build/app directory, which may be used for testing. A kmod directory is also
present that contains the kernel modules to install.

Loading the DPDK contigmem Module

To run a DPDK application, physically contiguous memory is required. In the absence of non-
transparent superpages, the included sources for the contigmem kernel module provides the
ability to present contiguous blocks of memory for the DPDK to use. The contigmem module
must be loaded into the running kernel before any DPDK is run. The module is found in the
kmod sub-directory of the DPDK target directory.
The amount of physically contiguous memory along with the number of physically contiguous
blocks to be reserved by the module can be set at runtime prior to module loading using:
kenv hw.contigmem.num_buffers=n
kenv hw.contigmem.buffer_size=m

The kernel environment variables can also be specified during boot by placing the following in
/boot/loader.conf:
hw.contigmem.num_buffers=n hw.contigmem.buffer_size=m

The variables can be inspected using the following command:


sysctl -a hw.contigmem

Where n is the number of blocks and m is the size in bytes of each area of contiguous memory.
A default of two buffers of size 1073741824 bytes (1 Gigabyte) each is set during module load
if they are not specified in the environment.
The module can then be loaded using kldload (assuming that the current directory is the DPDK
target directory):
kldload ./kmod/contigmem.ko

It is advisable to include the loading of the contigmem module during the boot process to
avoid issues with potential memory fragmentation during later system up time. This can be
achieved by copying the module to the /boot/kernel/ directory and placing the following
into /boot/loader.conf:
contigmem_load="YES"


Note: The contigmem_load directive should be placed after any definitions of hw.contigmem.num_buffers and hw.contigmem.buffer_size if the default values are not to be used.

An error such as:


kldload: can't load ./x86_64-native-bsdapp-gcc/kmod/contigmem.ko:
Exec format error

is generally attributed to not having enough contiguous memory available and can be verified
via dmesg or /var/log/messages:
kernel: contigmalloc failed for buffer <n>

To avoid this error, reduce the number of buffers or the buffer size.

Loading the DPDK nic_uio Module

After loading the contigmem module, the nic_uio module must also be loaded into the run-
ning kernel prior to running any DPDK application. This module must be loaded using the
kldload command as shown below (assuming that the current directory is the DPDK target
directory).
kldload ./kmod/nic_uio.ko

Note: If the ports to be used are currently bound to an existing kernel driver then the
hw.nic_uio.bdfs sysctl value will need to be set before loading the module. Setting
this value is described in the next section below.

Currently loaded modules can be seen by using the kldstat command and a module can be
removed from the running kernel by using kldunload <module_name>.
To load the module during boot, copy the nic_uio module to /boot/kernel and place the
following into /boot/loader.conf:
nic_uio_load="YES"

Note: nic_uio_load="YES" must appear after the contigmem_load directive, if it exists.

By default, the nic_uio module will take ownership of network ports if they are recognized
DPDK devices and are not owned by another module. However, since the FreeBSD kernel
includes support, either built-in, or via a separate driver module, for most network card devices,
it is likely that the ports to be used are already bound to a driver other than nic_uio. The
following sub-section describe how to query and modify the device ownership of the ports to
be used by DPDK applications.

Binding Network Ports to the nic_uio Module

Device ownership can be viewed using the pciconf -l command. The example below shows
four Intel® 82599 network ports under if_ixgbe module ownership.


pciconf -l
ix0@pci0:1:0:0: class=0x020000 card=0x00038086 chip=0x10fb8086 rev=0x01 hdr=0x00
ix1@pci0:1:0:1: class=0x020000 card=0x00038086 chip=0x10fb8086 rev=0x01 hdr=0x00
ix2@pci0:2:0:0: class=0x020000 card=0x00038086 chip=0x10fb8086 rev=0x01 hdr=0x00
ix3@pci0:2:0:1: class=0x020000 card=0x00038086 chip=0x10fb8086 rev=0x01 hdr=0x00

The first column comprises three components:


1. Device name: ixN
2. Unit name: pci0
3. Selector (Bus:Device:Function): 1:0:0
Where no driver is associated with a device, the device name will be none.
By default, the FreeBSD kernel will include built-in drivers for the most common devices; a
kernel rebuild would normally be required to either remove the drivers or configure them as
loadable modules.
To avoid building a custom kernel, the nic_uio module can detach a network port from its
current device driver. This is achieved by setting the hw.nic_uio.bdfs kernel environment
variable prior to loading nic_uio, as follows:
hw.nic_uio.bdfs="b:d:f,b:d:f,..."

Where a comma separated list of selectors is set, the list must not contain any whitespace.
For example to re-bind ix2@pci0:2:0:0 and ix3@pci0:2:0:1 to the nic_uio module
upon loading, use the following command:
kenv hw.nic_uio.bdfs="2:0:0,2:0:1"

The variable can also be specified during boot by placing the following into
/boot/loader.conf, before the previously-described nic_uio_load line - as shown:
hw.nic_uio.bdfs="2:0:0,2:0:1"
nic_uio_load="YES"

Binding Network Ports Back to their Original Kernel Driver

If the original driver for a network port has been compiled into the kernel, it is necessary to
reboot FreeBSD to restore the original device binding. Before doing so, update or remove the
hw.nic_uio.bdfs in /boot/loader.conf.
If rebinding to a driver that is a loadable module, the network port binding can be reset without
rebooting. To do so, unload both the target kernel module and the nic_uio module, modify
or clear the hw.nic_uio.bdfs kernel environment (kenv) value, and reload the two drivers -
first the original kernel driver, and then the nic_uio driver. Note: the latter does not need
to be reloaded unless there are ports that are still to be bound to it.
Example commands to perform these steps are shown below:
kldunload nic_uio
kldunload <original_driver>

# To clear the value completely:


kenv -u hw.nic_uio.bdfs

# To update the list of ports to bind:


kenv hw.nic_uio.bdfs="b:d:f,b:d:f,..."


kldload <original_driver>

kldload nic_uio # optional

Compiling and Running Sample Applications

The chapter describes how to compile and run applications in a DPDK environment. It also
provides a pointer to where sample applications are stored.

Compiling a Sample Application

Once a DPDK target environment directory has been created (such as


x86_64-native-bsdapp-clang), it contains all libraries and header files required to
build an application.
When compiling an application in the FreeBSD environment on the DPDK, the following vari-
ables must be exported:
• RTE_SDK - Points to the DPDK installation directory.
• RTE_TARGET - Points to the DPDK target environment directory. For FreeBSD, this is the
x86_64-native-bsdapp-clang or x86_64-native-bsdapp-gcc directory.
The following is an example of creating the helloworld application, which runs in the DPDK
FreeBSD environment. While the example demonstrates compiling using gcc version 4.9,
compiling with clang will be similar, except that the CC= parameter can probably be omitted.
The helloworld example may be found in the ${RTE_SDK}/examples directory.
The directory contains the main.c file. This file, when combined with the libraries in the
DPDK target environment, calls the various functions to initialize the DPDK environment, then
launches an entry point (dispatch application) for each core to be utilized. By default, the binary
is generated in the build directory.
setenv RTE_SDK $HOME/DPDK
setenv RTE_TARGET x86_64-native-bsdapp-gcc
cd ${RTE_SDK}/examples/helloworld

gmake CC=gcc49
CC main.o
LD helloworld
INSTALL-APP helloworld
INSTALL-MAP helloworld.map

ls build/app
helloworld helloworld.map

Note: In the above example, helloworld was in the directory structure of the DPDK. How-
ever, it could have been located outside the directory structure to keep the DPDK structure
intact. In the following case, the helloworld application is copied to a new directory as a
new starting point.

setenv RTE_SDK /home/user/DPDK

cp -r ${RTE_SDK}/examples/helloworld my_rte_app


cd my_rte_app/
setenv RTE_TARGET x86_64-native-bsdapp-gcc

gmake CC=gcc49
CC main.o
LD helloworld
INSTALL-APP helloworld
INSTALL-MAP helloworld.map

Running a Sample Application

1. The contigmem and nic_uio modules must be set up prior to running an application.
2. Any ports to be used by the application must be already bound to the nic_uio module,
as described in section Binding Network Ports to the nic_uio Module, prior to running the
application. The application is linked with the DPDK target environment’s Environment
Abstraction Layer (EAL) library, which provides some options that are generic to every
DPDK application.
The following is the list of options that can be given to the EAL:
./rte-app -c COREMASK [-n NUM] [-b <domain:bus:devid.func>] \
[-r NUM] [-v] [--proc-type <primary|secondary|auto>]

Note: EAL has a common interface between all operating systems and is based on the Linux
notation for PCI devices. For example, a FreeBSD device selector of pci0:2:0:1 is referred
to as 02:00.1 in EAL.

The EAL options for FreeBSD are as follows:


• -c COREMASK: A hexadecimal bit mask of the cores to run on. Note that core numbering
can change between platforms and should be determined beforehand.
• -n NUM: Number of memory channels per processor socket.
• -b <domain:bus:devid.func>: Blacklisting of ports; prevent EAL from using speci-
fied PCI device (multiple -b options are allowed).
• --use-device: Use the specified Ethernet device(s) only. Use comma-separate
[domain:]bus:devid.func values. Cannot be used with -b option.
• -r NUM: Number of memory ranks.
• -v: Display version information on startup.
• --proc-type: The type of process instance.
Other options, specific to Linux and are not supported under FreeBSD are as follows:
• socket-mem: Memory to allocate from hugepages on specific sockets.
• --huge-dir: The directory where hugetlbfs is mounted.
• --file-prefix: The prefix text used for hugepage filenames.
• -m MB: Memory to allocate from hugepages, regardless of processor socket. It is rec-
ommended that --socket-mem be used instead of this option.


The -c option is mandatory; the others are optional.


Copy the DPDK application binary to your target, then run the application as follows (assuming
the platform has four memory channels, and that cores 0-3 are present and are to be used for
running the application):
./helloworld -c f -n 4

Note: The --proc-type and --file-prefix EAL options are used for running multiple
DPDK processes. See the “Multi-process Sample Application” chapter in the DPDK Sample
Applications User Guide and the DPDK Programmers Guide for more details.

Running DPDK Applications Without Root Privileges

Although applications using the DPDK use network ports and other hardware resources di-
rectly, with a number of small permission adjustments, it is possible to run these applications
as a user other than “root”. To do so, the ownership, or permissions, on the following file sys-
tem objects should be adjusted to ensure that the user account being used to run the DPDK
application has access to them:
• The userspace-io device files in /dev, for example, /dev/uio0, /dev/uio1, and so on
• The userspace contiguous memory device: /dev/contigmem

Note: Please refer to the DPDK Release Notes for supported applications.


CHAPTER 3

Sample Applications User Guides

Introduction

This document describes the sample applications that are included in the Data Plane Devel-
opment Kit (DPDK). Each chapter describes a sample application that showcases specific
functionality and provides instructions on how to compile, run and use the sample application.

Documentation Roadmap

The following is a list of DPDK documents in suggested reading order:


• Release Notes : Provides release-specific information, including supported features,
limitations, fixed issues, known issues and so on. Also, provides the answers to frequently
asked questions in FAQ format.
• Getting Started Guides : Describes how to install and configure the DPDK software for
your operating system; designed to get users up and running quickly with the software.
• Programmer’s Guide: Describes:
– The software architecture and how to use it (through examples), specifically in a
Linux* application (linuxapp) environment.
– The content of the DPDK, the build system (including the commands that can be
used in the root DPDK Makefile to build the development kit and an application) and
guidelines for porting an application.
– Optimizations used in the software and those that should be considered for new
development
A glossary of terms is also provided.
• API Reference : Provides detailed information about DPDK functions, data structures
and other programming constructs.
• Sample Applications User Guide : Describes a set of sample applications. Each chap-
ter describes a sample application that showcases specific functionality and provides
instructions on how to compile, run and use the sample application.


Command Line Sample Application

This chapter describes the Command Line sample application that is part of the Data Plane
Development Kit (DPDK).

Overview

The Command Line sample application is a simple application that demonstrates the use of
the command line interface in the DPDK. This application is a readline-like interface that can
be used to debug a DPDK application, in a Linux* application environment.

Note: The rte_cmdline library should not be used in production code since it is not validated
to the same standard as other DPDK libraries. See also the “rte_cmdline library should not
be used in production code due to limited testing” item in the “Known Issues” section of the
Release Notes.

The Command Line sample application supports some of the features of the GNU readline
library such as, completion, cut/paste and some other special bindings that make configuration
and debug faster and easier.
The application shows how the rte_cmdline application can be extended to handle a list of
objects. There are three simple commands:
• add obj_name IP: Add a new object with an IP/IPv6 address associated to it.
• del obj_name: Delete the specified object.
• show obj_name: Show the IP associated with the specified object.

Note: To terminate the application, use Ctrl-d.

Compiling the Application

1. Go to example directory:
export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/cmdline

2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc

Refer to the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make

Running the Application

To run the application in linuxapp environment, issue the following command:


$ ./build/cmdline -c f -n 4


Refer to the DPDK Getting Started Guide for general information on running applications and
the Environment Abstraction Layer (EAL) options.

Explanation

The following sections provide some explanation of the code.

EAL Initialization and cmdline Start

The first task is the initialization of the Environment Abstraction Layer (EAL). This is achieved
as follows:
int main(int argc, char **argv)
{
ret = rte_eal_init(argc, argv);
if (ret < 0)
rte_panic("Cannot init EAL\n");

Then, a new command line object is created and started to interact with the user through the
console:
cl = cmdline_stdin_new(main_ctx, "example> ");
cmdline_interact(cl);
cmdline_stdin_exit(cl);

The cmdline_interact() function returns when the user types Ctrl-d, and in this case the application exits.

Defining a cmdline Context

A cmdline context is a list of commands that are listed in a NULL-terminated table, for example:
cmdline_parse_ctx_t main_ctx[] = {
(cmdline_parse_inst_t *) &cmd_obj_del_show,
(cmdline_parse_inst_t *) &cmd_obj_add,
(cmdline_parse_inst_t *) &cmd_help,
NULL,
};

Each command (of type cmdline_parse_inst_t) is defined statically. It contains a pointer to a


callback function that is executed when the command is parsed, an opaque pointer, a help
string and a list of tokens in a NULL-terminated table.
The rte_cmdline application provides a list of pre-defined token types:
• String Token: Match a static string, a list of static strings or any string.
• Number Token: Match a number that can be signed or unsigned, from 8-bit to 32-bit.
• IP Address Token: Match an IPv4 or IPv6 address or network.
• Ethernet* Address Token: Match a MAC address.
In this example, a new token type obj_list is defined and implemented in the parse_obj_list.c
and parse_obj_list.h files.
For example, the cmd_obj_del_show command is defined as shown below:


struct cmd_obj_del_show_result {
    cmdline_fixed_string_t action;
    struct object *obj;
};

static void cmd_obj_del_show_parsed(void *parsed_result,
    struct cmdline *cl,
    __attribute__((unused)) void *data)
{
    /* ... */
}

cmdline_parse_token_string_t cmd_obj_action =
    TOKEN_STRING_INITIALIZER(struct cmd_obj_del_show_result,
        action, "show#del");

parse_token_obj_list_t cmd_obj_obj =
    TOKEN_OBJ_LIST_INITIALIZER(struct cmd_obj_del_show_result,
        obj, &global_obj_list);

cmdline_parse_inst_t cmd_obj_del_show = {
    .f = cmd_obj_del_show_parsed, /* function to call */
    .data = NULL, /* 2nd arg of func */
    .help_str = "Show/del an object",
    .tokens = { /* token list, NULL terminated */
        (void *)&cmd_obj_action,
        (void *)&cmd_obj_obj,
        NULL,
    },
};

This command is composed of two tokens:


• The first token is a string token that can be show or del.
• The second token is an object that was previously added using the add command in the
global_obj_list variable.
Once the command is parsed, the rte_cmdline application fills a cmd_obj_del_show_result
structure. A pointer to this structure is given as an argument to the callback function and can
be used in the body of this function.
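As an illustration (a minimal sketch, not the sample's exact code), the callback body might branch on the parsed action string and use the resolved object, with the library's cmdline_printf() helper providing console output; the sample's cmdline and parse_obj_list headers are assumed to be included:
#include <string.h>
#include <cmdline.h>

/* Sketch of a possible callback body: the fields parsed from the
 * command line arrive through the result structure. */
static void
cmd_obj_del_show_parsed(void *parsed_result, struct cmdline *cl,
    __attribute__((unused)) void *data)
{
    struct cmd_obj_del_show_result *res = parsed_result;

    if (strcmp(res->action, "del") == 0) {
        cmdline_printf(cl, "Object %s deleted\n", res->obj->name);
        /* ... unlink res->obj from global_obj_list and free it ... */
    } else {
        /* "show": print the IP address stored with the object. */
        cmdline_printf(cl, "Object %s\n", res->obj->name);
    }
}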

Ethtool Sample Application

The Ethtool sample application shows an implementation of an ethtool-like API and provides a
console environment that allows its use to query and change Ethernet card parameters. The
sample is based upon a simple L2 frame reflector.

Compiling the Application

To compile the application:


1. Go to the sample application directory:
export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/ethtool

2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc

See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:


make

Running the Application

The application requires an available core for each port, plus one. The only available options
are the standard ones for the EAL:
./ethtool-app/ethtool-app/${RTE_TARGET}/ethtool [EAL options]

Refer to the DPDK Getting Started Guide for general information on running applications and
the Environment Abstraction Layer (EAL) options.

Using the application

The application is console-driven using the cmdline DPDK interface:


EthApp>

From this interface the available commands and descriptions of what they do are as follows:
• drvinfo: Print driver info
• eeprom: Dump EEPROM to file
• link: Print port link states
• macaddr: Gets/sets MAC address
• mtu: Set NIC MTU
• open: Open port
• pause: Get/set port pause state
• portstats: Print port statistics
• regs: Dump port register(s) to file
• ringparam: Get/set ring parameters
• rxmode: Toggle port Rx mode
• stop: Stop port
• validate: Check that given MAC address is valid unicast address
• vlan: Add/remove VLAN id
• quit: Exit program

Explanation

The sample program has two parts: A background packet reflector that runs on a slave core,
and a foreground Ethtool Shell that runs on the master core. These are described below.


Packet Reflector

The background packet reflector is intended to demonstrate basic packet processing on NIC
ports controlled by the Ethtool shim. Each incoming MAC frame is rewritten so that it is returned
to the sender, using the port in question’s own MAC address as the source address, and is then
sent out on the same port.

Ethtool Shell

The foreground part of the Ethtool sample is a console-based interface that accepts commands
as described in using the application. Individual call-back functions handle the detail associ-
ated with each command, which make use of the functions defined in the Ethtool interface to
the DPDK functions.

Ethtool interface

The Ethtool interface is built as a separate library, and implements the following functions:
• rte_ethtool_get_drvinfo()
• rte_ethtool_get_regs_len()
• rte_ethtool_get_regs()
• rte_ethtool_get_link()
• rte_ethtool_get_eeprom_len()
• rte_ethtool_get_eeprom()
• rte_ethtool_set_eeprom()
• rte_ethtool_get_pauseparam()
• rte_ethtool_set_pauseparam()
• rte_ethtool_net_open()
• rte_ethtool_net_stop()
• rte_ethtool_net_get_mac_addr()
• rte_ethtool_net_set_mac_addr()
• rte_ethtool_net_validate_addr()
• rte_ethtool_net_change_mtu()
• rte_ethtool_net_get_stats64()
• rte_ethtool_net_vlan_rx_add_vid()
• rte_ethtool_net_vlan_rx_kill_vid()
• rte_ethtool_net_set_rx_mode()
• rte_ethtool_get_ringparam()
• rte_ethtool_set_ringparam()
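
For illustration, a hedged sketch of how an application might call into this library follows; it assumes the sample's rte_ethtool.h header and the standard struct ethtool_drvinfo fields, and trims error handling:
#include <stdio.h>
#include "rte_ethtool.h"

/* Print a short summary for a port via the Ethtool-like API (sketch). */
static void
show_port_summary(uint8_t port_id)
{
    struct ethtool_drvinfo info;

    if (rte_ethtool_get_drvinfo(port_id, &info) == 0)
        printf("Port %u driver: %s (version %s)\n",
            port_id, info.driver, info.version);

    /* rte_ethtool_get_link() reports the current link state. */
    printf("Port %u link: %s\n", port_id,
        rte_ethtool_get_link(port_id) ? "up" : "down");
}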


Exception Path Sample Application

The Exception Path sample application is a simple example that demonstrates the use of the
DPDK to set up an exception path for packets to go through the Linux* kernel. This is done
by using virtual TAP network interfaces. These can be read from and written to by the DPDK
application and appear to the kernel as a standard network interface.

Overview

The application creates two threads for each NIC port being used. One thread reads from
the port and writes the data unmodified to a thread-specific TAP interface. The second thread
reads from a TAP interface and writes the data unmodified to the NIC port.
The packet flow through the exception path application is as shown in the following figure.

Fig. 3.1: Packet Flow

To make throughput measurements, kernel bridges must be setup to forward data between the
bridges appropriately.

Compiling the Application

1. Go to example directory:
export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/exception_path

2. Set the target (a default target will be used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc

This application is intended as a linuxapp only. See the DPDK Getting Started Guide for
possible RTE_TARGET values.
3. Build the application:
make

Running the Application

The application requires a number of command line options:


./build/exception_path [EAL options] -- -p PORTMASK -i IN_CORES -o OUT_CORES

where:
• -p PORTMASK: A hex bitmask of ports to use
• -i IN_CORES: A hex bitmask of cores which read from NIC
• -o OUT_CORES: A hex bitmask of cores which write to NIC
Refer to the DPDK Getting Started Guide for general information on running applications and
the Environment Abstraction Layer (EAL) options.


The number of bits set in each bitmask must be the same. The coremask -c parameter of
the EAL options should include IN_CORES and OUT_CORES. The same bit must not be set
in IN_CORES and OUT_CORES. The affinities between ports and cores are set beginning
with the least significant bit of each mask, that is, the port represented by the lowest bit in
PORTMASK is read from by the core represented by the lowest bit in IN_CORES, and written
to by the core represented by the lowest bit in OUT_CORES.
For example to run the application with two ports and four cores:
./build/exception_path -c f -n 4 -- -p 3 -i 3 -o c

Getting Statistics

While the application is running, statistics on packets sent and received can be displayed by
sending the SIGUSR1 signal to the application from another terminal:
killall -USR1 exception_path

The statistics can be reset by sending a SIGUSR2 signal in a similar way.
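A minimal sketch of how this signal handling can be wired up is shown below; print_stats() and lcore_stats are illustrative names rather than necessarily the sample's exact symbols:
#include <signal.h>
#include <string.h>

/* SIGUSR1 prints the per-lcore counters, SIGUSR2 clears them
 * (sketch; print_stats() and lcore_stats are assumed symbols). */
static void
signal_handler(int signum)
{
    if (signum == SIGUSR1)
        print_stats();
    else if (signum == SIGUSR2)
        memset(&lcore_stats, 0, sizeof(lcore_stats));
}

/* Registered once in main(), before the forwarding loops start: */
signal(SIGUSR1, signal_handler);
signal(SIGUSR2, signal_handler);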

Explanation

The following sections provide some explanation of the code.

Initialization

Setup of the mbuf pool, driver and queues is similar to the setup done in the L2 Forwarding
Sample Application (in Real and Virtualized Environments). In addition, the TAP interfaces
must also be created. A TAP interface is created for each lcore that is being used. The code
for creating the TAP interface is as follows:
/*
* Create a tap network interface, or use existing one with same name.
* If name[0]='\0' then a name is automatically assigned and returned in name.
*/

static int tap_create(char *name)


{
struct ifreq ifr;
int fd, ret;

fd = open("/dev/net/tun", O_RDWR);
if (fd < 0)
return fd;

memset(&ifr, 0, sizeof(ifr));

/* TAP device without packet information */

ifr.ifr_flags = IFF_TAP | IFF_NO_PI;


    if (name && *name)
        snprintf(ifr.ifr_name, IFNAMSIZ, "%s", name);

    ret = ioctl(fd, TUNSETIFF, (void *) &ifr);
    if (ret < 0) {
        close(fd);
        return ret;
    }

    if (name)
        snprintf(name, IFNAMSIZ, "%s", ifr.ifr_name);

    return fd;
}

The other step in the initialization process that is unique to this sample application is the asso-
ciation of each port with two cores:
• One core to read from the port and write to a TAP interface
• A second core to read from a TAP interface and write to the port
This is done using an array called port_ids[], which is indexed by the lcore IDs. The population
of this array is shown below:
tx_port = 0;
rx_port = 0;

RTE_LCORE_FOREACH(i) {
if (input_cores_mask & (1ULL << i)) {
/* Skip ports that are not enabled */
while ((ports_mask & (1 << rx_port)) == 0) {
rx_port++;
if (rx_port > (sizeof(ports_mask) * 8))
goto fail; /* not enough ports */
}
port_ids[i] = rx_port++;
} else if (output_cores_mask & (1ULL << i)) {
/* Skip ports that are not enabled */
while ((ports_mask & (1 << tx_port)) == 0) {
tx_port++;
if (tx_port > (sizeof(ports_mask) * 8))
goto fail; /* not enough ports */
}
port_ids[i] = tx_port++;
}
}

Packet Forwarding

After the initialization steps are complete, the main_loop() function is run on each lcore.
This function first checks the lcore_id against the user provided input_cores_mask and out-
put_cores_mask to see if this core is reading from or writing to a TAP interface.
For the case that reads from a NIC port, the packet reception is the same as in the L2 Forward-
ing sample application (see Receive, Process and Transmit Packets). The packet transmission
is done by calling write() with the file descriptor of the appropriate TAP interface and then
explicitly freeing the mbuf back to the pool.
/* Loop forever reading from NIC and writing to tap */

for (;;) {
struct rte_mbuf *pkts_burst[PKT_BURST_SZ];
unsigned i;

const unsigned nb_rx = rte_eth_rx_burst(port_ids[lcore_id], 0, pkts_burst, PKT_BURST_SZ);


lcore_stats[lcore_id].rx += nb_rx;

for (i = 0; likely(i < nb_rx); i++) {


struct rte_mbuf *m = pkts_burst[i];
int ret = write(tap_fd, rte_pktmbuf_mtod(m, void*),

rte_pktmbuf_data_len(m));
rte_pktmbuf_free(m);
if (unlikely(ret<0))
lcore_stats[lcore_id].dropped++;
else
lcore_stats[lcore_id].tx++;
}
}

For the other case that reads from a TAP interface and writes to a NIC port, packets are
retrieved by doing a read() from the file descriptor of the appropriate TAP interface. This fills in
the data into the mbuf, then other fields are set manually. The packet can then be transmitted
as normal.
/* Loop forever reading from tap and writing to NIC */

for (;;) {
int ret;
struct rte_mbuf *m = rte_pktmbuf_alloc(pktmbuf_pool);

if (m == NULL)
continue;

ret = read(tap_fd, m->pkt.data, MAX_PACKET_SZ);
lcore_stats[lcore_id].rx++;


if (unlikely(ret < 0)) {
FATAL_ERROR("Reading from %s interface failed", tap_name);
}

m->pkt.nb_segs = 1;
m->pkt.next = NULL;
m->pkt.data_len = (uint16_t)ret;

ret = rte_eth_tx_burst(port_ids[lcore_id], 0, &m, 1);


if (unlikely(ret < 1)) {
rte_pktmbuf_free(m);
lcore_stats[lcore_id].dropped++;
}
else {
lcore_stats[lcore_id].tx++;
}
}

To set up loops for measuring throughput, TAP interfaces can be connected using bridging.
The steps to do this are described in the section that follows.

Managing TAP Interfaces and Bridges

The Exception Path sample application creates TAP interfaces with names of the format
tap_dpdk_nn, where nn is the lcore ID. These TAP interfaces need to be configured for use:
ifconfig tap_dpdk_00 up

To set up a bridge between two interfaces so that packets sent to one interface can be read
from another, use the brctl tool:


brctl addbr "br0"


brctl addif br0 tap_dpdk_00
brctl addif br0 tap_dpdk_03
ifconfig br0 up

The TAP interfaces created by this application exist only when the application is running, so
the steps above need to be repeated each time the application is run. To avoid this, persistent
TAP interfaces can be created using openvpn:
openvpn --mktun --dev tap_dpdk_00

If this method is used, then the steps above have to be done only once and the same TAP
interfaces can be reused each time the application is run. To remove bridges and persistent
TAP interfaces, the following commands are used:
ifconfig br0 down
brctl delbr br0
openvpn --rmtun --dev tap_dpdk_00

Hello World Sample Application

The Hello World sample application is an example of the simplest DPDK application that can
be written. The application simply prints a “helloworld” message on every enabled lcore.

Compiling the Application

1. Go to the example directory:


export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/helloworld

2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc

See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make

Running the Application

To run the example in a linuxapp environment:


$ ./build/helloworld -c f -n 4

Refer to DPDK Getting Started Guide for general information on running applications and the
Environment Abstraction Layer (EAL) options.

Explanation

The following sections provide some explanation of code.


EAL Initialization

The first task is to initialize the Environment Abstraction Layer (EAL). This is done in the main()
function using the following code:
int

main(int argc, char **argv)

{
ret = rte_eal_init(argc, argv);
if (ret < 0)
rte_panic("Cannot init EAL\n");

This call finishes the initialization process that was started before main() is called (in the case of a linuxapp environment). The argc and argv arguments are provided to the rte_eal_init() function. The value returned is the number of parsed arguments.
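Applications that accept their own options typically advance past the arguments consumed by the EAL before parsing them; a common idiom (a sketch, not shown in this sample) is:
int ret = rte_eal_init(argc, argv);
if (ret < 0)
    rte_panic("Cannot init EAL\n");

/* Skip the arguments consumed by the EAL so that application
 * options can be parsed from what remains. */
argc -= ret;
argv += ret;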

Starting Application Unit Lcores

Once the EAL is initialized, the application is ready to launch a function on an lcore. In this
example, lcore_hello() is called on every available lcore. The following is the definition of the
function:
static int
lcore_hello( attribute ((unused)) void *arg)
{
unsigned lcore_id;

lcore_id = rte_lcore_id();
printf("hello from core %u\n", lcore_id);
return 0;
}

The code that launches the function on each lcore is as follows:


/* call lcore_hello() on every slave lcore */

RTE_LCORE_FOREACH_SLAVE(lcore_id) {
rte_eal_remote_launch(lcore_hello, NULL, lcore_id);
}

/* call it on master lcore too */

lcore_hello(NULL);

The following code is equivalent and simpler:


rte_eal_mp_remote_launch(lcore_hello, NULL, CALL_MASTER);

Refer to the DPDK API Reference for detailed information on the rte_eal_mp_remote_launch()
function.
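Before main() returns, the master lcore typically waits for all the launched lcores to finish, for example:
/* Block until every slave lcore has returned from lcore_hello(). */
rte_eal_mp_wait_lcore();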

Basic Forwarding Sample Application

The Basic Forwarding sample application is a simple skeleton example of a forwarding appli-
cation.


It is intended as a demonstration of the basic components of a DPDK forwarding application.


For more detailed implementations see the L2 and L3 forwarding sample applications.

Compiling the Application

To compile the application export the path to the DPDK source tree and go to the example
directory:
export RTE_SDK=/path/to/rte_sdk

cd ${RTE_SDK}/examples/skeleton

Set the target, for example:


export RTE_TARGET=x86_64-native-linuxapp-gcc

See the DPDK Getting Started Guide for possible RTE_TARGET values.
Build the application as follows:
make

Running the Application

To run the example in a linuxapp environment:


./build/basicfwd -c 2 -n 4

Refer to DPDK Getting Started Guide for general information on running applications and the
Environment Abstraction Layer (EAL) options.

Explanation

The following sections provide an explanation of the main components of the code.
All DPDK library functions used in the sample code are prefixed with rte_ and are explained
in detail in the DPDK API Documentation.

The Main Function

The main() function performs the initialization and calls the execution threads for each lcore.
The first task is to initialize the Environment Abstraction Layer (EAL). The argc and argv
arguments are provided to the rte_eal_init() function. The value returned is the number
of parsed arguments:
int ret = rte_eal_init(argc, argv);
if (ret < 0)
rte_exit(EXIT_FAILURE, "Error with EAL initialization\n");

The main() also allocates a mempool to hold the mbufs (Message Buffers) used by the ap-
plication:
mbuf_pool = rte_mempool_create("MBUF_POOL",
NUM_MBUFS * nb_ports,
MBUF_SIZE,
MBUF_CACHE_SIZE,
sizeof(struct rte_pktmbuf_pool_private),

rte_pktmbuf_pool_init, NULL,
rte_pktmbuf_init, NULL,
rte_socket_id(),
0);

Mbufs are the packet buffer structure used by DPDK. They are explained in detail in the “Mbuf
Library” section of the DPDK Programmer’s Guide.
The main() function also initializes all the ports using the user defined port_init() function
which is explained in the next section:
for (portid = 0; portid < nb_ports; portid++) {
if (port_init(portid, mbuf_pool) != 0) {
rte_exit(EXIT_FAILURE,
"Cannot init port %" PRIu8 "\n", portid);
}
}

Once the initialization is complete, the application is ready to launch a function on an lcore. In
this example lcore_main() is called on a single lcore.
lcore_main();

The lcore_main() function is explained below.

The Port Initialization Function

The main functional part of the port initialization used in the Basic Forwarding application is
shown below:
static inline int
port_init(uint8_t port, struct rte_mempool *mbuf_pool)
{
struct rte_eth_conf port_conf = port_conf_default;
const uint16_t rx_rings = 1, tx_rings = 1;
struct ether_addr addr;
int retval;
uint16_t q;

if (port >= rte_eth_dev_count())


return -1;

/* Configure the Ethernet device. */


retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
if (retval != 0)
return retval;

/* Allocate and set up 1 RX queue per Ethernet port. */


for (q = 0; q < rx_rings; q++) {
retval = rte_eth_rx_queue_setup(port, q, RX_RING_SIZE,
rte_eth_dev_socket_id(port), NULL, mbuf_pool);
if (retval < 0)
return retval;
}

/* Allocate and set up 1 TX queue per Ethernet port. */


for (q = 0; q < tx_rings; q++) {
retval = rte_eth_tx_queue_setup(port, q, TX_RING_SIZE,
rte_eth_dev_socket_id(port), NULL);
if (retval < 0)
return retval;
}


/* Start the Ethernet port. */


retval = rte_eth_dev_start(port);
if (retval < 0)
return retval;

/* Enable RX in promiscuous mode for the Ethernet device. */


rte_eth_promiscuous_enable(port);

return 0;
}

The Ethernet ports are configured with default settings using the
rte_eth_dev_configure() function and the port_conf_default struct:
static const struct rte_eth_conf port_conf_default = {
.rxmode = { .max_rx_pkt_len = ETHER_MAX_LEN }
};

For this example the ports are set up with 1 RX and 1 TX queue using the
rte_eth_rx_queue_setup() and rte_eth_tx_queue_setup() functions.
The Ethernet port is then started:
retval = rte_eth_dev_start(port);

Finally the RX port is set in promiscuous mode:


rte_eth_promiscuous_enable(port);
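
The struct ether_addr addr declared in port_init() can be used to retrieve and display the port's MAC address; a hedged sketch of that step (as found in similar samples) is:
/* Retrieve and display the MAC address of the port (sketch). */
rte_eth_macaddr_get(port, &addr);
printf("Port %u MAC: %02x:%02x:%02x:%02x:%02x:%02x\n",
    (unsigned)port,
    addr.addr_bytes[0], addr.addr_bytes[1], addr.addr_bytes[2],
    addr.addr_bytes[3], addr.addr_bytes[4], addr.addr_bytes[5]);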

The Lcores Main

As we saw above the main() function calls an application function on the available lcores. For
the Basic Forwarding application the lcore function looks like the following:
static __attribute__((noreturn)) void
lcore_main(void)
{
const uint8_t nb_ports = rte_eth_dev_count();
uint8_t port;

/*
* Check that the port is on the same NUMA node as the polling thread
* for best performance.
*/
for (port = 0; port < nb_ports; port++)
if (rte_eth_dev_socket_id(port) > 0 &&
rte_eth_dev_socket_id(port) !=
(int)rte_socket_id())
printf("WARNING, port %u is on remote NUMA node to "
"polling thread.\n\tPerformance will "
"not be optimal.\n", port);

printf("\nCore %u forwarding packets. [Ctrl+C to quit]\n",


rte_lcore_id());

/* Run until the application is quit or killed. */


for (;;) {
/*
* Receive packets on a port and forward them on the paired
* port. The mapping is 0 -> 1, 1 -> 0, 2 -> 3, 3 -> 2, etc.
*/
for (port = 0; port < nb_ports; port++) {


/* Get burst of RX packets, from first port of pair. */


struct rte_mbuf *bufs[BURST_SIZE];
const uint16_t nb_rx = rte_eth_rx_burst(port, 0,
bufs, BURST_SIZE);

if (unlikely(nb_rx == 0))
continue;

/* Send burst of TX packets, to second port of pair. */


const uint16_t nb_tx = rte_eth_tx_burst(port ^ 1, 0,
bufs, nb_rx);

/* Free any unsent packets. */


if (unlikely(nb_tx < nb_rx)) {
uint16_t buf;
for (buf = nb_tx; buf < nb_rx; buf++)
rte_pktmbuf_free(bufs[buf]);
}
}
}
}

The main work of the application is done within the loop:


for (;;) {
for (port = 0; port < nb_ports; port++) {

/* Get burst of RX packets, from first port of pair. */


struct rte_mbuf *bufs[BURST_SIZE];
const uint16_t nb_rx = rte_eth_rx_burst(port, 0,
bufs, BURST_SIZE);

if (unlikely(nb_rx == 0))
continue;

/* Send burst of TX packets, to second port of pair. */


const uint16_t nb_tx = rte_eth_tx_burst(port ^ 1, 0,
bufs, nb_rx);

/* Free any unsent packets. */


if (unlikely(nb_tx < nb_rx)) {
uint16_t buf;
for (buf = nb_tx; buf < nb_rx; buf++)
rte_pktmbuf_free(bufs[buf]);
}
}
}

Packets are received in bursts on the RX ports and transmitted in bursts on the TX ports.
The ports are grouped in pairs with a simple mapping scheme using an XOR on the port
number:
0 -> 1
1 -> 0

2 -> 3
3 -> 2

etc.

The rte_eth_tx_burst() function frees the memory buffers of packets that are transmitted. If packets fail to transmit (nb_tx < nb_rx), then they must be freed explicitly using rte_pktmbuf_free().
The forwarding loop can be interrupted and the application closed using Ctrl-C.

RX/TX Callbacks Sample Application

The RX/TX Callbacks sample application is a packet forwarding application that demonstrates
the use of user defined callbacks on received and transmitted packets. The application per-
forms a simple latency check, using callbacks, to determine the time packets spend within the
application.
In the sample application a user defined callback is applied to all received packets to add a
timestamp. A separate callback is applied to all packets prior to transmission to calculate the
elapsed time, in CPU cycles.

Compiling the Application

To compile the application export the path to the DPDK source tree and go to the example
directory:
export RTE_SDK=/path/to/rte_sdk

cd ${RTE_SDK}/examples/rxtx_callbacks

Set the target, for example:


export RTE_TARGET=x86_64-native-linuxapp-gcc

See the DPDK Getting Started Guide for possible RTE_TARGET values.
The callbacks feature requires that the CONFIG_RTE_ETHDEV_RXTX_CALLBACKS setting is on in the config/common_* configuration file that applies to the target (for example, config/common_base). This is generally on by default:
CONFIG_RTE_ETHDEV_RXTX_CALLBACKS=y

Build the application as follows:


make

Running the Application

To run the example in a linuxapp environment:


./build/rxtx_callbacks -c 2 -n 4

Refer to DPDK Getting Started Guide for general information on running applications and the
Environment Abstraction Layer (EAL) options.

Explanation

The rxtx_callbacks application is mainly a simple forwarding application based on the


Basic Forwarding Sample Application. See that section of the documentation for more details
of the forwarding part of the application.
The sections below explain the additional RX/TX callback code.


The Main Function

The main() function performs the application initialization and calls the execution threads for
each lcore. This function is effectively identical to the main() function explained in Basic
Forwarding Sample Application.
The lcore_main() function is also identical.
The main difference is in the user defined port_init() function where the callbacks are
added. This is explained in the next section:

The Port Initialization Function

The main functional part of the port initialization is shown below with comments:
static inline int
port_init(uint8_t port, struct rte_mempool *mbuf_pool)
{
struct rte_eth_conf port_conf = port_conf_default;
const uint16_t rx_rings = 1, tx_rings = 1;
struct ether_addr addr;
int retval;
uint16_t q;

if (port >= rte_eth_dev_count())


return -1;

/* Configure the Ethernet device. */


retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
if (retval != 0)
return retval;

/* Allocate and set up 1 RX queue per Ethernet port. */


for (q = 0; q < rx_rings; q++) {
retval = rte_eth_rx_queue_setup(port, q, RX_RING_SIZE,
rte_eth_dev_socket_id(port), NULL, mbuf_pool);
if (retval < 0)
return retval;
}

/* Allocate and set up 1 TX queue per Ethernet port. */


for (q = 0; q < tx_rings; q++) {
retval = rte_eth_tx_queue_setup(port, q, TX_RING_SIZE,
rte_eth_dev_socket_id(port), NULL);
if (retval < 0)
return retval;
}

/* Start the Ethernet port. */


retval = rte_eth_dev_start(port);
if (retval < 0)
return retval;

/* Enable RX in promiscuous mode for the Ethernet device. */


rte_eth_promiscuous_enable(port);

/* Add the callbacks for RX and TX.*/


rte_eth_add_rx_callback(port, 0, add_timestamps, NULL);
rte_eth_add_tx_callback(port, 0, calc_latency, NULL);


return 0;
}

The RX and TX callbacks are added to the ports/queues as function pointers:


rte_eth_add_rx_callback(port, 0, add_timestamps, NULL);
rte_eth_add_tx_callback(port, 0, calc_latency, NULL);

More than one callback can be added and additional information can be passed to callback
function pointers as a void*. In the examples above NULL is used.
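For example, a pointer to application state could be passed instead of NULL and recovered from the callback's user parameter; a sketch with a hypothetical struct:
/* Hypothetical per-application state handed to the callback. */
struct app_state {
    uint64_t rx_pkts;
};

static struct app_state state;

/* Pass &state instead of NULL when registering the callback;
 * it is then delivered back as the callback's user parameter. */
rte_eth_add_rx_callback(port, 0, add_timestamps, &state);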
The add_timestamps() and calc_latency() functions are explained below.

The add_timestamps() Callback

The add_timestamps() callback is added to the RX port and is applied to all packets re-
ceived:
static uint16_t
add_timestamps(uint8_t port __rte_unused, uint16_t qidx __rte_unused,
struct rte_mbuf **pkts, uint16_t nb_pkts, void *_ __rte_unused)
{
unsigned i;
uint64_t now = rte_rdtsc();

for (i = 0; i < nb_pkts; i++)


pkts[i]->udata64 = now;

return nb_pkts;
}

The DPDK function rte_rdtsc() is used to add a cycle count timestamp to each packet (see
the cycles section of the DPDK API Documentation for details).

The calc_latency() Callback

The calc_latency() callback is added to the TX port and is applied to all packets prior to
transmission:
static uint16_t
calc_latency(uint8_t port __rte_unused, uint16_t qidx __rte_unused,
struct rte_mbuf **pkts, uint16_t nb_pkts, void *_ __rte_unused)
{
uint64_t cycles = 0;
uint64_t now = rte_rdtsc();
unsigned i;

for (i = 0; i < nb_pkts; i++)


cycles += now - pkts[i]->udata64;

latency_numbers.total_cycles += cycles;
latency_numbers.total_pkts += nb_pkts;

if (latency_numbers.total_pkts > (100 * 1000 * 1000ULL)) {


printf("Latency = %"PRIu64" cycles\n",
latency_numbers.total_cycles / latency_numbers.total_pkts);

latency_numbers.total_cycles = latency_numbers.total_pkts = 0;
}


return nb_pkts;
}

The calc_latency() function accumulates the total number of packets and the total number
of cycles used. Once more than 100 million packets have been transmitted the average cycle
count per packet is printed out and the counters are reset.
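The reported figure is in TSC cycles; to express it as wall-clock time, the calibrated TSC frequency can be queried, e.g. (a sketch, not part of the sample):
/* Convert the average cycle count into nanoseconds using the
 * TSC frequency reported by the EAL. */
uint64_t avg_cycles = latency_numbers.total_cycles /
    latency_numbers.total_pkts;
double ns = (double)avg_cycles * 1E9 / rte_get_tsc_hz();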

IP Fragmentation Sample Application

The IPv4 Fragmentation application is a simple example of packet processing using the Data
Plane Development Kit (DPDK). The application does L3 forwarding with IPv4 and IPv6 packet
fragmentation.

Overview

The application demonstrates the use of zero-copy buffers for packet fragmentation. The ini-
tialization and run-time paths are very similar to those of the L2 Forwarding Sample Application
(in Real and Virtualized Environments). This guide highlights the differences between the two
applications.
There are three key differences from the L2 Forwarding sample application:
• The first difference is that the IP Fragmentation sample application makes use of indirect
buffers.
• The second difference is that the forwarding decision is taken based on information read
from the input packet’s IP header.
• The third difference is that the application differentiates between IP and non-IP traffic by
means of offload flags.
The Longest Prefix Match table (LPM for IPv4, LPM6 for IPv6) is used to store/look up the outgoing port number associated with a destination IP address. Any unmatched packets are forwarded to the originating port.
By default, input frame sizes up to 9.5 KB are supported. Before forwarding, the input IP packet
is fragmented to fit into the “standard” Ethernet* v2 MTU (1500 bytes).

Building the Application

To build the application:


1. Go to the sample application directory:
export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/ip_fragmentation

2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc

See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make


Running the Application

The LPM object is created and loaded with the pre-configured entries read from global
l3fwd_ipv4_route_array and l3fwd_ipv6_route_array tables. For each input packet, the packet
forwarding decision (that is, the identification of the output interface for the packet) is taken as
a result of LPM lookup. If the IP packet size is greater than default output MTU, then the input
packet is fragmented and several fragments are sent via the output interface.
Application usage:
./build/ip_fragmentation [EAL options] -- -p PORTMASK [-q NQ]

where:
• -p PORTMASK is a hexadecimal bitmask of ports to configure
• -q NQ is the number of queues (= ports) per lcore (the default is 1)
To run the example in linuxapp environment with 2 lcores (2,4) over 2 ports(0,2) with 1 RX
queue per lcore:
./build/ip_fragmentation -c 0x14 -n 3 -- -p 5
EAL: coremask set to 14
EAL: Detected lcore 0 on socket 0
EAL: Detected lcore 1 on socket 1
EAL: Detected lcore 2 on socket 0
EAL: Detected lcore 3 on socket 1
EAL: Detected lcore 4 on socket 0
...

Initializing port 0 on lcore 2... Address:00:1B:21:76:FA:2C, rxq=0 txq=2,0 txq=4,1
done: Link Up - speed 10000 Mbps - full-duplex
Skipping disabled port 1
Initializing port 2 on lcore 4... Address:00:1B:21:5C:FF:54, rxq=0 txq=2,0 txq=4,1
done: Link Up - speed 10000 Mbps - full-duplex
Skipping disabled port 3
IP_FRAG: Socket 0: adding route 100.10.0.0/16 (port 0)
IP_FRAG: Socket 0: adding route 100.20.0.0/16 (port 1)
...
IP_FRAG: Socket 0: adding route 0101:0101:0101:0101:0101:0101:0101:0101/48 (port 0)
IP_FRAG: Socket 0: adding route 0201:0101:0101:0101:0101:0101:0101:0101/48 (port 1)
...
IP_FRAG: entering main loop on lcore 4
IP_FRAG: -- lcoreid=4 portid=2
IP_FRAG: entering main loop on lcore 2
IP_FRAG: -- lcoreid=2 portid=0

To run the example in linuxapp environment with 1 lcore (4) over 2 ports(0,2) with 2 RX queues
per lcore:
./build/ip_fragmentation -c 0x10 -n 3 -- -p 5 -q 2

To test the application, flows should be set up in the flow generator that match the values in
the l3fwd_ipv4_route_array and/or l3fwd_ipv6_route_array table.
The default l3fwd_ipv4_route_array table is:
struct l3fwd_ipv4_route l3fwd_ipv4_route_array[] = {
    {IPv4(100, 10, 0, 0), 16, 0},
    {IPv4(100, 20, 0, 0), 16, 1},
    {IPv4(100, 30, 0, 0), 16, 2},
    {IPv4(100, 40, 0, 0), 16, 3},
    {IPv4(100, 50, 0, 0), 16, 4},
    {IPv4(100, 60, 0, 0), 16, 5},
    {IPv4(100, 70, 0, 0), 16, 6},
    {IPv4(100, 80, 0, 0), 16, 7},
};

The default l3fwd_ipv6_route_array table is:


struct l3fwd_ipv6_route l3fwd_ipv6_route_array[] = {
{{1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}, 48, 0},
{{2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}, 48, 1},
{{3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}, 48, 2},
{{4, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}, 48, 3},
{{5, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}, 48, 4},
{{6, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}, 48, 5},
{{7, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}, 48, 6},
{{8, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}, 48, 7},
};

For example, for an input IPv4 packet with destination address 100.10.1.1 and packet length 9198 bytes, seven IPv4 packets will be sent out from port #0 to the destination address 100.10.1.1: six of those packets will have length 1500 bytes and one packet will have length 318 bytes. Each 1500-byte fragment carries 1480 bytes of payload after the 20-byte IPv4 header, so six fragments carry 8880 of the 9178 payload bytes; the remaining 298 payload bytes plus the header give the final 318-byte packet.
The IP Fragmentation sample application provides basic NUMA support in that all the memory structures are allocated on all sockets that have active lcores on them.
Refer to the DPDK Getting Started Guide for general information on running applications and
the Environment Abstraction Layer (EAL) options.

IPv4 Multicast Sample Application

The IPv4 Multicast application is a simple example of packet processing using the Data Plane
Development Kit (DPDK). The application performs L3 multicasting.

Overview

The application demonstrates the use of zero-copy buffers for packet forwarding. The initial-
ization and run-time paths are very similar to those of the L2 Forwarding Sample Application
(in Real and Virtualized Environments). This guide highlights the differences between the two
applications. There are two key differences from the L2 Forwarding sample application:
• The IPv4 Multicast sample application makes use of indirect buffers.
• The forwarding decision is taken based on information read from the input packet’s IPv4
header.
The lookup method is the Four-byte Key (FBK) hash-based method. The lookup table is com-
posed of pairs of destination IPv4 address (the FBK) and a port mask associated with that IPv4
address.
For convenience and simplicity, this sample application does not take IANA-assigned multicast
addresses into account, but instead equates the last four bytes of the multicast group (that is,
the last four bytes of the destination IP address) with the mask of ports to multicast packets
to. Also, the application does not consider the Ethernet addresses; it looks only at the IPv4
destination address for any given packet.

Building the Application

To compile the application:


1. Go to the sample application directory:


export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/ipv4_multicast

2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc

See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make

Note: The compiled application is written to the build subdirectory. To have the application
written to a different location, the O=/path/to/build/directory option may be specified in the make
command.

Running the Application

The application has a number of command line options:


./build/ipv4_multicast [EAL options] -- -p PORTMASK [-q NQ]

where,
• -p PORTMASK: Hexadecimal bitmask of ports to configure
• -q NQ: determines the number of queues per lcore

Note: Unlike the basic L2/L3 Forwarding sample applications, NUMA support is not provided
in the IPv4 Multicast sample application.

Typically, to run the IPv4 Multicast sample application, issue the following command (as root):
./build/ipv4_multicast -c 0x00f -n 3 -- -p 0x3 -q 1

In this command:
• The -c option enables cores 0, 1, 2 and 3
• The -n option specifies 3 memory channels
• The -p option enables ports 0 and 1
• The -q option assigns 1 queue to each lcore
Refer to the DPDK Getting Started Guide for general information on running applications and
the Environment Abstraction Layer (EAL) options.

Explanation

The following sections provide some explanation of the code. As mentioned in the overview
section, the initialization and run-time paths are very similar to those of the L2 Forwarding
Sample Application (in Real and Virtualized Environments). The following sections describe
aspects that are specific to the IPv4 Multicast sample application.


Memory Pool Initialization

The IPv4 Multicast sample application uses three memory pools. Two of the pools are for
indirect buffers used for packet duplication purposes. Memory pools for indirect buffers are
initialized differently from the memory pool for direct buffers:
packet_pool = rte_mempool_create("packet_pool", NB_PKT_MBUF, PKT_MBUF_SIZE,
        32, sizeof(struct rte_pktmbuf_pool_private),
        rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL,
        rte_socket_id(), 0);

header_pool = rte_mempool_create("header_pool", NB_HDR_MBUF, HDR_MBUF_SIZE,
        32, 0, NULL, NULL, rte_pktmbuf_init, NULL, rte_socket_id(), 0);

clone_pool = rte_mempool_create("clone_pool", NB_CLONE_MBUF, CLONE_MBUF_SIZE,
        32, 0, NULL, NULL, rte_pktmbuf_init, NULL, rte_socket_id(), 0);

The reason for this is that indirect buffers are not supposed to hold any packet data and can therefore be initialized with a smaller amount of reserved memory per buffer.
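For illustration, plausible definitions of the three buffer sizes in the spirit of the sample (illustrative only; the authoritative values live in the sample's main.c) might be:

#define PKT_MBUF_SIZE   (2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
#define HDR_MBUF_SIZE   (sizeof(struct rte_mbuf) + 2 * RTE_PKTMBUF_HEADROOM)
#define CLONE_MBUF_SIZE (sizeof(struct rte_mbuf))

A clone (indirect) mbuf needs no data room at all, and the header mbuf only needs enough room for the prepended headers.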

Hash Initialization

The hash object is created and loaded with the pre-configured entries read from a global array:
static int
init_mcast_hash(void)
{
    uint32_t i;

    mcast_hash_params.socket_id = rte_socket_id();

    mcast_hash = rte_fbk_hash_create(&mcast_hash_params);
    if (mcast_hash == NULL) {
        return -1;
    }

    for (i = 0; i < N_MCAST_GROUPS; i++) {
        if (rte_fbk_hash_add_key(mcast_hash,
                mcast_group_table[i].ip,
                mcast_group_table[i].port_mask) < 0) {
            return -1;
        }
    }

    return 0;
}

Forwarding

All forwarding is done inside the mcast_forward() function. Firstly, the Ethernet* header is
removed from the packet and the IPv4 address is extracted from the IPv4 header:
/* Remove the Ethernet header from the input packet */
iphdr = (struct ipv4_hdr *)rte_pktmbuf_adj(m, sizeof(struct ether_hdr));
RTE_ASSERT(iphdr != NULL);

dest_addr = rte_be_to_cpu_32(iphdr->dst_addr);

Then, the packet is checked to see if it has a multicast destination address and if the routing
table has any ports assigned to the destination address:
if (!IS_IPV4_MCAST(dest_addr) ||
(hash = rte_fbk_hash_lookup(mcast_hash, dest_addr)) <= 0 ||
(port_mask = hash & enabled_port_mask) == 0) {
rte_pktmbuf_free(m);
return;
}


Then, the number of ports in the destination portmask is calculated with the help of the bitcnt()
function:
/* Get number of bits set. */
static inline uint32_t bitcnt(uint32_t v)
{
    uint32_t n;

    for (n = 0; v != 0; v &= v - 1, n++)
        ;

    return n;
}

This is done to determine which forwarding algorithm to use. This is explained in more detail
in the next section.
Thereafter, a destination Ethernet address is constructed:
/* construct destination Ethernet address */

dst_eth_addr = ETHER_ADDR_FOR_IPV4_MCAST(dest_addr);

Since Ethernet addresses are also part of the multicast process, each outgoing packet carries
the same destination Ethernet address. The destination Ethernet address is constructed from
the lower 23 bits of the multicast group OR-ed with the Ethernet address 01:00:5e:00:00:00,
as per RFC 1112:
#define ETHER_ADDR_FOR_IPV4_MCAST(x) \
(rte_cpu_to_be_64(0x01005e000000ULL | ((x) & 0x7fffff)) >> 16)
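As a worked illustration (the group address is chosen here for clarity and is not taken from the sample's tables): 224.0.0.101 is 0xe0000065, its lower 23 bits are 0x000065, and OR-ing them into 01:00:5e:00:00:00 yields the MAC address 01:00:5e:00:00:65. The result is conveniently kept in a union so it can be used both as an integer and as a MAC address (as dst_eth_addr.as_addr is below):

union {
    uint64_t as_int;
    struct ether_addr as_addr;
} dst_eth_addr;

dst_eth_addr.as_int = ETHER_ADDR_FOR_IPV4_MCAST(IPv4(224, 0, 0, 101));
/* dst_eth_addr.as_addr now holds 01:00:5e:00:00:65 */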

Then, packets are dispatched to the destination ports according to the portmask associated
with a multicast group:
for (port = 0; use_clone != port_mask; port_mask >>= 1, port++) {
    /* Prepare output packet and send it out. */
    if ((port_mask & 1) != 0) {
        if (likely((mc = mcast_out_pkt(m, use_clone)) != NULL))
            mcast_send_pkt(mc, &dst_eth_addr.as_addr, qconf, port);
        else if (use_clone == 0)
            rte_pktmbuf_free(m);
    }
}

The actual packet transmission is done in the mcast_send_pkt() function:
static inline void
mcast_send_pkt(struct rte_mbuf *pkt, struct ether_addr *dest_addr,
        struct lcore_queue_conf *qconf, uint8_t port)
{
    struct ether_hdr *ethdr;
    uint16_t len;

    /* Construct Ethernet header. */
    ethdr = (struct ether_hdr *)rte_pktmbuf_prepend(pkt,
            (uint16_t)sizeof(*ethdr));
    RTE_ASSERT(ethdr != NULL);

    ether_addr_copy(dest_addr, &ethdr->d_addr);
    ether_addr_copy(&ports_eth_addr[port], &ethdr->s_addr);
    ethdr->ether_type = rte_be_to_cpu_16(ETHER_TYPE_IPv4);

    /* Put new packet into the output queue */
    len = qconf->tx_mbufs[port].len;
    qconf->tx_mbufs[port].m_table[len] = pkt;
    qconf->tx_mbufs[port].len = ++len;

    /* Transmit packets */
    if (unlikely(MAX_PKT_BURST == len))
        send_burst(qconf, port);
}

Buffer Cloning

This is the most important part of the application, since it demonstrates the use of zero-copy buffer cloning. There are two approaches for creating the outgoing packet and, although both are based on the data zero-copy idea, there are some differences in the detail.
The first approach creates a clone of the input packet, that is, it walks through all segments of the input packet and, for each segment, creates a new buffer and attaches that new buffer to the segment (refer to rte_pktmbuf_clone() in the rte_mbuf library for more details). A new buffer is then allocated for the packet header and is prepended to the cloned buffer.
The second approach does not make a clone; it just increments the reference counter for all input packet segments, allocates a new buffer for the packet header and prepends it to the input packet.
Basically, the first approach reuses only the input packet's data, but creates its own copy of the packet's metadata. The second approach reuses both the input packet's data and metadata.
The advantage of the first approach is that each outgoing packet has its own copy of the metadata, so the data pointer of the input packet can be modified safely. That allows the copy to be skipped when the output packet is destined for the last port: the input packet's header is modified in place, so for N destination ports mcast_out_pkt() needs to be invoked only (N-1) times.
The advantage of the second approach is that there is less work to be done for each outgoing packet, that is, the "clone" operation is skipped completely. However, there is a price to pay: the input packet's metadata must remain intact, so for N destination ports mcast_out_pkt() needs to be invoked N times.
Therefore, for a small number of outgoing ports (and segments in the input packet), the first approach is faster. As the number of outgoing ports (and/or input segments) grows, the second approach becomes preferable.
Depending on the number of segments or the number of ports in the outgoing portmask, either
the first (with cloning) or the second (without cloning) approach is taken:
use_clone = (port_num <= MCAST_CLONE_PORTS && m->nb_segs <= MCAST_CLONE_SEGS);
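The two thresholds are small compile-time constants defined near the top of the sample's main.c; the values below match the shipped defaults but are worth re-checking against your source tree:

#define MCAST_CLONE_PORTS 2
#define MCAST_CLONE_SEGS  2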

It is the mcast_out_pkt() function that performs the packet duplication (either with or without
actually cloning the buffers):
static inline struct rte_mbuf *mcast_out_pkt(struct rte_mbuf *pkt, int use_clone)
{
    struct rte_mbuf *hdr;

    /* Create new mbuf for the header. */
    if (unlikely((hdr = rte_pktmbuf_alloc(header_pool)) == NULL))
        return NULL;

    /* If requested, then make a new clone packet. */
    if (use_clone != 0 &&
            unlikely((pkt = rte_pktmbuf_clone(pkt, clone_pool)) == NULL)) {
        rte_pktmbuf_free(hdr);
        return NULL;
    }

    /* prepend new header */
    hdr->next = pkt;

    /* update header's fields */
    hdr->pkt_len = (uint16_t)(hdr->data_len + pkt->pkt_len);
    hdr->nb_segs = (uint8_t)(pkt->nb_segs + 1);

    /* copy metadata from source packet */
    hdr->port = pkt->port;
    hdr->vlan_tci = pkt->vlan_tci;
    hdr->hash = pkt->hash;
    hdr->ol_flags = pkt->ol_flags;

    rte_mbuf_sanity_check(hdr, 1);

    return hdr;
}

IP Reassembly Sample Application

The IP Reassembly application is a simple example of packet processing using the DPDK. The application performs L3 forwarding with reassembly for fragmented IPv4 and IPv6 packets.

Overview

The application demonstrates the use of the DPDK libraries to implement packet forwarding
with reassembly for IPv4 and IPv6 fragmented packets. The initialization and run-time paths
are very similar to those of the L2 Forwarding Sample Application (in Real and Virtualized
Environments). The main difference from the L2 Forwarding sample application is that it re-
assembles fragmented IPv4 and IPv6 packets before forwarding. The maximum allowed size
of reassembled packet is 9.5 KB.
There are two key differences from the L2 Forwarding sample application:
• The first difference is that the forwarding decision is taken based on information read
from the input packet’s IP header.
• The second difference is that the application differentiates between IP and non-IP traffic
by means of offload flags.

The Longest Prefix Match (LPM for IPv4, LPM6 for IPv6) table is used to store/lookup an outgoing port number, associated with that IP address. Any unmatched packets are forwarded to the originating port.

Compiling the Application

To compile the application:


1. Go to the sample application directory:


export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/ip_reassembly

2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc

See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make

Running the Application

The application has a number of command line options:


./build/ip_reassembly [EAL options] -- -p PORTMASK [-q NQ] [--maxflows=FLOWS] [--flowttl=TTL[(s|ms)]]

where:
• -p PORTMASK: Hexadecimal bitmask of ports to configure
• -q NQ: Number of RX queues per lcore
• --maxflows=FLOWS: determines the maximum number of active fragmented flows (1-65535). Default value: 4096.
• --flowttl=TTL[(s|ms)]: determines the maximum Time To Live for a fragmented packet. If all fragments of a packet do not arrive within the given time-out, they are considered invalid and are dropped. Valid range is 1ms - 3600s. Default value: 1s.
To run the example in linuxapp environment with 2 lcores (2,4) over 2 ports(0,2) with 1 RX
queue per lcore:
./build/ip_reassembly -c 0x14 -n 3 -- -p 5
EAL: coremask set to 14
EAL: Detected lcore 0 on socket 0
EAL: Detected lcore 1 on socket 1
EAL: Detected lcore 2 on socket 0
EAL: Detected lcore 3 on socket 1
EAL: Detected lcore 4 on socket 0
...

Initializing port 0 on lcore 2... Address:00:1B:21:76:FA:2C, rxq=0 txq=2,0 txq=4,1
done: Link Up - speed 10000 Mbps - full-duplex
Skipping disabled port 1
Initializing port 2 on lcore 4... Address:00:1B:21:5C:FF:54, rxq=0 txq=2,0 txq=4,1
done: Link Up - speed 10000 Mbps - full-duplex
Skipping disabled port 3
IP_RSMBL: Socket 0: adding route 100.10.0.0/16 (port 0)
IP_RSMBL: Socket 0: adding route 100.20.0.0/16 (port 1)
...
IP_RSMBL: Socket 0: adding route 0101:0101:0101:0101:0101:0101:0101:0101/48 (port 0)
IP_RSMBL: Socket 0: adding route 0201:0101:0101:0101:0101:0101:0101:0101/48 (port 1)
...
IP_RSMBL: entering main loop on lcore 4
IP_RSMBL: -- lcoreid=4 portid=2
IP_RSMBL: entering main loop on lcore 2
IP_RSMBL: -- lcoreid=2 portid=0


To run the example in linuxapp environment with 1 lcore (4) over 2 ports(0,2) with 2 RX queues
per lcore:
./build/ip_reassembly -c 0x10 -n 3 -- -p 5 -q 2

To test the application, flows should be set up in the flow generator that match the values in
the l3fwd_ipv4_route_array and/or l3fwd_ipv6_route_array table.
Please note that in order to test this application, the traffic generator should be generating valid fragmented IP packets. For IPv6, the only supported case is when no extension headers other than the fragment extension header are present in the packet.
The default l3fwd_ipv4_route_array table is:
struct l3fwd_ipv4_route l3fwd_ipv4_route_array[] = {
{IPv4(100, 10, 0, 0), 16, 0},
{IPv4(100, 20, 0, 0), 16, 1},
{IPv4(100, 30, 0, 0), 16, 2},
{IPv4(100, 40, 0, 0), 16, 3},
{IPv4(100, 50, 0, 0), 16, 4},
{IPv4(100, 60, 0, 0), 16, 5},
{IPv4(100, 70, 0, 0), 16, 6},
{IPv4(100, 80, 0, 0), 16, 7},
};

The default l3fwd_ipv6_route_array table is:


struct l3fwd_ipv6_route l3fwd_ipv6_route_array[] = {
{{1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}, 48, 0},
{{2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}, 48, 1},
{{3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}, 48, 2},
{{4, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}, 48, 3},
{{5, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}, 48, 4},
{{6, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}, 48, 5},
{{7, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}, 48, 6},
{{8, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}, 48, 7},
};

For example, for a fragmented input IPv4 packet with destination address 100.10.1.1, a reassembled IPv4 packet will be sent out from port #0 to the destination address 100.10.1.1 once all the fragments are collected.

Explanation

The following sections provide some explanation of the sample application code. As mentioned in the overview section, the initialization and run-time paths are very similar to those of the L2 Forwarding Sample Application (in Real and Virtualized Environments). The following sections describe aspects that are specific to the IP Reassembly sample application.

IPv4 Fragment Table Initialization

This application uses the rte_ip_frag library. Please refer to the Programmer's Guide for a more detailed explanation of how to use this library. The fragment table maintains information about already received fragments of a packet. Each IP packet is uniquely identified by the triple <Source IP address>, <Destination IP address>, <ID>. To avoid lock contention, each RX queue has its own fragment table, i.e. the application cannot handle the situation where different fragments of the same packet arrive through different RX queues. Each table entry can hold information about a packet consisting of up to RTE_LIBRTE_IP_FRAG_MAX_FRAGS fragments.


frag_cycles = (rte_get_tsc_hz() + MS_PER_S - 1) / MS_PER_S * max_flow_ttl;

if ((qconf->frag_tbl[queue] = rte_ip_frag_table_create(max_flow_num,
        IPV4_FRAG_TBL_BUCKET_ENTRIES, max_flow_num, frag_cycles,
        socket)) == NULL) {
    RTE_LOG(ERR, IP_RSMBL, "ip_frag_tbl_create(%u) on "
        "lcore: %u for queue: %u failed\n",
        max_flow_num, lcore, queue);
    return -1;
}

Mempools Initialization

The reassembly application demands a lot of mbufs to be allocated. At any given time up to (2 * max_flow_num * RTE_LIBRTE_IP_FRAG_MAX_FRAGS * <maximum number of mbufs per packet>) can be stored inside the fragment table waiting for remaining fragments. To keep the mempool size under reasonable limits, and to avoid a situation in which one RX queue starves the other queues, each RX queue uses its own mempool.
nb_mbuf = RTE_MAX(max_flow_num, 2UL * MAX_PKT_BURST) * RTE_LIBRTE_IP_FRAG_MAX_FRAGS;
nb_mbuf *= (port_conf.rxmode.max_rx_pkt_len + BUF_SIZE - 1) / BUF_SIZE;
nb_mbuf *= 2; /* ipv4 and ipv6 */
nb_mbuf += RTE_TEST_RX_DESC_DEFAULT + RTE_TEST_TX_DESC_DEFAULT;
nb_mbuf = RTE_MAX(nb_mbuf, (uint32_t)NB_MBUF);

snprintf(buf, sizeof(buf), "mbuf_pool_%u_%u", lcore, queue);

if ((rxq->pool = rte_mempool_create(buf, nb_mbuf, MBUF_SIZE, 0,
        sizeof(struct rte_pktmbuf_pool_private),
        rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL,
        socket, MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET)) == NULL) {
    RTE_LOG(ERR, IP_RSMBL, "mempool_create(%s) failed", buf);
    return -1;
}

Packet Reassembly and Forwarding

For each input packet, the packet forwarding operation is done by the l3fwd_simple_forward() function. If the packet is an IPv4 or IPv6 fragment, it calls rte_ipv4_frag_reassemble_packet() for IPv4 packets, or rte_ipv6_frag_reassemble_packet() for IPv6 packets. These functions either return a pointer to a valid mbuf that contains the reassembled packet, or NULL (if the packet cannot be reassembled for some reason). Then l3fwd_simple_forward() continues with the code for the packet forwarding decision (that is, the identification of the output interface for the packet) and the actual transmit of the packet.
The rte_ipv4_frag_reassemble_packet() and rte_ipv6_frag_reassemble_packet() functions are responsible for:
1. Searching the Fragment Table for entry with packet’s <IP Source Address, IP Destination
Address, Packet ID>
2. If the entry is found, then check if that entry already timed-out. If yes, then free all
previously received fragments, and remove information about them from the entry.
3. If no entry with such a key is found, then try to create a new one in one of two ways:
(a) Use an empty entry.
(b) Delete a timed-out entry, free the mbufs associated with it and store a new entry
with the specified key in it.
4. Update the entry with new fragment information and check if a packet can be reassem-
bled (the packet’s entry contains all fragments).


(a) If yes, then, reassemble the packet, mark table’s entry as empty and return the
reassembled mbuf to the caller.
(b) If no, then just return a NULL to the caller.
If at any stage of packet processing a reassembly function encounters an error (cannot insert a new entry into the fragment table, or an invalid/timed-out fragment), it frees all fragments associated with the packet, marks the table entry as invalid and returns NULL to the caller.
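Putting the pieces together, a hedged sketch of how the IPv4 entry point is typically driven from the RX path (frag_tbl, death_row and PREFETCH_OFFSET are assumed from the setup shown earlier):

struct rte_mbuf *mo;
struct ipv4_hdr *ip_hdr = rte_pktmbuf_mtod_offset(m, struct ipv4_hdr *,
        sizeof(struct ether_hdr));

if (rte_ipv4_frag_pkt_is_fragmented(ip_hdr)) {
    mo = rte_ipv4_frag_reassemble_packet(rxq->frag_tbl,
            &qconf->death_row, m, rte_rdtsc(), ip_hdr);
    if (mo == NULL)
        return; /* more fragments needed, or an error occurred */
    m = mo;     /* reassembly complete: forward m as usual */
}

/* mbufs of expired fragments accumulate in the death row and must
 * be freed periodically: */
rte_ip_frag_free_death_row(&qconf->death_row, PREFETCH_OFFSET);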

Debug logging and Statistics Collection

The RTE_LIBRTE_IP_FRAG_TBL_STAT configuration macro controls statistics collection for the IP fragment table. This macro is disabled by default. To make ip_reassembly print the statistics to standard output, the user must send a USR1, INT or TERM signal to the process. For any of these signals, the ip_reassembly process prints fragment table statistics for each RX queue; INT and TERM additionally cause process termination as usual.

Kernel NIC Interface Sample Application

The Kernel NIC Interface (KNI) is a DPDK control plane solution that allows userspace ap-
plications to exchange packets with the kernel networking stack. To accomplish this, DPDK
userspace applications use an IOCTL call to request the creation of a KNI virtual device in the
Linux* kernel. The IOCTL call provides interface information and the DPDK’s physical address
space, which is re-mapped into the kernel address space by the KNI kernel loadable module
that saves the information to a virtual device context. The DPDK creates FIFO queues for
packet ingress and egress to the kernel module for each device allocated.
The KNI kernel loadable module is a standard net driver, which, upon receiving the IOCTL call, accesses the DPDK's FIFO queues to receive/transmit packets from/to the DPDK userspace application. The FIFO queues contain pointers to data packets in the DPDK. This:
• Provides a faster mechanism to interface with the kernel net stack and eliminates system calls
• Facilitates the DPDK using standard Linux* userspace net tools (tcpdump, ftp, and so on)
• Eliminates the copy_to_user and copy_from_user operations on packets.
The Kernel NIC Interface sample application is a simple example that demonstrates the use of
the DPDK to create a path for packets to go through the Linux* kernel. This is done by creating
one or more kernel net devices for each of the DPDK ports. The application allows the use of
standard Linux tools (ethtool, ifconfig, tcpdump) with the DPDK ports and also the exchange
of packets between the DPDK application and the Linux* kernel.

Overview

The Kernel NIC Interface sample application uses two threads in user space for each physical NIC port being used, and allocates one or more KNI devices for each physical NIC port with the kernel module's support. For a physical NIC port, one thread reads from the port and writes to KNI devices, and another thread reads from KNI devices and writes the data unmodified to the physical NIC port. It is recommended to configure one KNI device for each physical NIC port.


Configuring more than one KNI device for a physical NIC port is intended only for performance testing, or for working together with VMDq support in the future.
The packet flow through the Kernel NIC Interface application is as shown in the following figure.

Fig. 3.2: Kernel NIC Application Packet Flow

Compiling the Application

Compile the application as follows:


1. Go to the example directory:
export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/kni

2. Set the target (a default target is used if not specified)

Note: This application is intended as a linuxapp only.

export RTE_TARGET=x86_64-native-linuxapp-gcc

3. Build the application:


make

Loading the Kernel Module

Loading the KNI kernel module without any parameter is the typical way a DPDK application
gets packets into and out of the kernel net stack. This way, only one kernel thread is created
for all KNI devices for packet receiving in kernel side:


#insmod rte_kni.ko

Pinning the kernel thread to a specific core can be done using a taskset command such as the following:
#taskset -p 100000 `pgrep -fl kni_thread | awk '{print $1}'`

This command line pins the kni_thread to lcore 20 (the mask 0x100000 has bit 20 set; lcore numbering starts at 0), so that lcore must be available on the board. The command must be issued after the application has been launched, as insmod does not start the kni thread.
For optimum performance, the lcore in the mask must be selected to be on the same socket
as the lcores used in the KNI application.
To provide flexibility of performance, the kernel module of the KNI, located in the kmod sub-
directory of the DPDK target directory, can be loaded with parameter of kthread_mode as
follows:
• #insmod rte_kni.ko kthread_mode=single
This mode will create only one kernel thread for all KNI devices for packet receiving in
kernel side. By default, it is in this single kernel thread mode. It can set core affinity for
this kernel thread by using Linux command taskset.
• #insmod rte_kni.ko kthread_mode=multiple
This mode will create a kernel thread for each KNI device for packet receiving in ker-
nel side. The core affinity of each kernel thread is set when creating the KNI device.
The lcore ID for each kernel thread is provided in the command line of launching the
application. Multiple kernel thread mode can provide scalable higher performance.
To measure the throughput in a loopback mode, the kernel module of the KNI, located in the
kmod sub-directory of the DPDK target directory, can be loaded with parameters as follows:
• #insmod rte_kni.ko lo_mode=lo_mode_fifo
This loopback mode will involve ring enqueue/dequeue operations in kernel space.
• #insmod rte_kni.ko lo_mode=lo_mode_fifo_skb
This loopback mode will involve ring enqueue/dequeue operations and sk buffer copies
in kernel space.

Running the Application

The application requires a number of command line options:


kni [EAL options] -- -P -p PORTMASK --config="(port,lcore_rx,lcore_tx[,lcore_kthread,...])[,port,lcore_rx,lcore_tx[,lcore_kthread,...]]"

Where:
• -P: Set all ports to promiscuous mode so that packets are accepted regardless of the
packet’s Ethernet MAC destination address. Without this option, only packets with the
Ethernet MAC destination address set to the Ethernet address of the port are accepted.
• -p PORTMASK: Hexadecimal bitmask of ports to configure.
• --config="(port,lcore_rx,lcore_tx[,lcore_kthread,...])[,port,lcore_rx,lcore_tx[,lcore_kthread,...]]": Determines which lcores of RX, TX and kernel thread are mapped to which ports.


Refer to DPDK Getting Started Guide for general information on running applications and the
Environment Abstraction Layer (EAL) options.
The -c coremask parameter of the EAL options should include the lcores indicated by lcore_rx and lcore_tx, but does not need to include the lcores indicated by lcore_kthread, as those are only used to pin the kernel threads. The -p PORTMASK parameter should include exactly the ports given by the port fields in --config, neither more nor less.
The lcore_kthread field in --config can be given zero, one or more lcore IDs. In multiple kernel thread mode, if none is configured, a KNI device is allocated for each port and no specific lcore affinity is set for its kernel thread. If one or more lcore IDs are configured, one or more KNI devices are allocated for each port, and specific lcore affinity is set for each kernel thread. In single kernel thread mode, if none is configured, a KNI device is allocated for each port. If one or more lcore IDs are configured, one or more KNI devices are allocated for each port, but no lcore affinity is set, as there is only one kernel thread for all KNI devices.
For example, to run the application with two ports served by six lcores, one lcore of RX, one lcore of TX, and one lcore of kernel thread for each port:
./build/kni -c 0xf0 -n 4 -- -P -p 0x3 --config="(0,4,6,8),(1,5,7,9)"

Here, (0,4,6,8) means that port 0 uses lcore 4 for RX, lcore 6 for TX, and pins its kernel thread to lcore 8.

KNI Operations

Once the KNI application is started, one can use different Linux* commands to manage the net interfaces. If more than one KNI device is configured for a physical port, only the first KNI device will be paired to the physical device. Operations on the other KNI devices do not affect the physical port handled in the user space application.
Assigning an IP address:
#ifconfig vEth0_0 192.168.0.1

Displaying the NIC registers:


#ethtool -d vEth0_0

Dumping the network traffic:


#tcpdump -i vEth0_0

When the DPDK userspace application is closed, all the KNI devices are deleted from Linux*.

Explanation

The following sections provide some explanation of the code.

Initialization

Setup of the mbuf pool, driver and queues is similar to the setup done in the L2 Forwarding Sample Application (in Real and Virtualized Environments). In addition, one or more kernel NIC interfaces are allocated for each of the configured ports according to the command line parameters.
The code for allocating the kernel NIC interfaces for a specific port is as follows:
static int
kni_alloc(uint8_t port_id)
{
    uint8_t i;
    struct rte_kni *kni;
    struct rte_kni_conf conf;
    struct kni_port_params **params = kni_port_params_array;

    if (port_id >= RTE_MAX_ETHPORTS || !params[port_id])
        return -1;

    params[port_id]->nb_kni = params[port_id]->nb_lcore_k ?
                params[port_id]->nb_lcore_k : 1;

    for (i = 0; i < params[port_id]->nb_kni; i++) {
        /* Clear conf at first */
        memset(&conf, 0, sizeof(conf));
        if (params[port_id]->nb_lcore_k) {
            snprintf(conf.name, RTE_KNI_NAMESIZE, "vEth%u_%u", port_id, i);
            conf.core_id = params[port_id]->lcore_k[i];
            conf.force_bind = 1;
        } else
            snprintf(conf.name, RTE_KNI_NAMESIZE, "vEth%u", port_id);

        conf.group_id = (uint16_t)port_id;
        conf.mbuf_size = MAX_PACKET_SZ;

        /*
         * The first KNI device associated to a port
         * is the master, for multiple kernel thread
         * environment.
         */
        if (i == 0) {
            struct rte_kni_ops ops;
            struct rte_eth_dev_info dev_info;

            memset(&dev_info, 0, sizeof(dev_info));
            rte_eth_dev_info_get(port_id, &dev_info);

            conf.addr = dev_info.pci_dev->addr;
            conf.id = dev_info.pci_dev->id;

            memset(&ops, 0, sizeof(ops));
            ops.port_id = port_id;
            ops.change_mtu = kni_change_mtu;
            ops.config_network_if = kni_config_network_interface;

            kni = rte_kni_alloc(pktmbuf_pool, &conf, &ops);
        } else
            kni = rte_kni_alloc(pktmbuf_pool, &conf, NULL);

        if (!kni)
            rte_exit(EXIT_FAILURE, "Fail to create kni for "
                        "port: %d\n", port_id);

        params[port_id]->kni[i] = kni;
    }

    return 0;
}

The other step in the initialization process that is unique to this sample application is the asso-
ciation of each port with lcores for RX, TX and kernel threads.
• One lcore to read from the port and write to the associated one or more KNI devices


• Another lcore to read from one or more KNI devices and write to the port
• Other lcores for pinning the kernel threads on one by one
This is done by using the kni_port_params_array[] array, which is indexed by the port ID. The code is as follows:
static int
parse_config(const char *arg)
{
    const char *p, *p0 = arg;
    char s[256], *end;
    unsigned size;
    enum fieldnames {
        FLD_PORT = 0,
        FLD_LCORE_RX,
        FLD_LCORE_TX,
        _NUM_FLD = KNI_MAX_KTHREAD + 3,
    };
    int i, j, nb_token;
    char *str_fld[_NUM_FLD];
    unsigned long int_fld[_NUM_FLD];
    uint8_t port_id, nb_kni_port_params = 0;

    memset(&kni_port_params_array, 0, sizeof(kni_port_params_array));

    while (((p = strchr(p0, '(')) != NULL) &&
            nb_kni_port_params < RTE_MAX_ETHPORTS) {
        p++;
        if ((p0 = strchr(p, ')')) == NULL)
            goto fail;

        size = p0 - p;
        if (size >= sizeof(s)) {
            printf("Invalid config parameters\n");
            goto fail;
        }

        snprintf(s, sizeof(s), "%.*s", size, p);
        nb_token = rte_strsplit(s, sizeof(s), str_fld, _NUM_FLD, ',');
        if (nb_token <= FLD_LCORE_TX) {
            printf("Invalid config parameters\n");
            goto fail;
        }

        for (i = 0; i < nb_token; i++) {
            errno = 0;
            int_fld[i] = strtoul(str_fld[i], &end, 0);
            if (errno != 0 || end == str_fld[i]) {
                printf("Invalid config parameters\n");
                goto fail;
            }
        }

        i = 0;
        port_id = (uint8_t)int_fld[i++];
        if (port_id >= RTE_MAX_ETHPORTS) {
            printf("Port ID %u could not exceed the maximum %u\n",
                    port_id, RTE_MAX_ETHPORTS);
            goto fail;
        }

        if (kni_port_params_array[port_id]) {
            printf("Port %u has been configured\n", port_id);
            goto fail;
        }

        kni_port_params_array[port_id] =
            (struct kni_port_params *)rte_zmalloc("KNI_port_params",
                sizeof(struct kni_port_params), RTE_CACHE_LINE_SIZE);
        kni_port_params_array[port_id]->port_id = port_id;
        kni_port_params_array[port_id]->lcore_rx = (uint8_t)int_fld[i++];
        kni_port_params_array[port_id]->lcore_tx = (uint8_t)int_fld[i++];

        if (kni_port_params_array[port_id]->lcore_rx >= RTE_MAX_LCORE ||
            kni_port_params_array[port_id]->lcore_tx >= RTE_MAX_LCORE) {
            printf("lcore_rx %u or lcore_tx %u ID could not "
                    "exceed the maximum %u\n",
                    kni_port_params_array[port_id]->lcore_rx,
                    kni_port_params_array[port_id]->lcore_tx,
                    (unsigned)RTE_MAX_LCORE);
            goto fail;
        }

        for (j = 0; i < nb_token && j < KNI_MAX_KTHREAD; i++, j++)
            kni_port_params_array[port_id]->lcore_k[j] = (uint8_t)int_fld[i];
        kni_port_params_array[port_id]->nb_lcore_k = j;
    }

    print_config();

    return 0;

fail:
    for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
        if (kni_port_params_array[i]) {
            rte_free(kni_port_params_array[i]);
            kni_port_params_array[i] = NULL;
        }
    }

    return -1;
}

Packet Forwarding

After the initialization steps are completed, the main_loop() function is run on each lcore. This
function first checks the lcore_id against the user provided lcore_rx and lcore_tx to see if this
lcore is reading from or writing to kernel NIC interfaces.
For the case that reads from a NIC port and writes to the kernel NIC interfaces, the packet
reception is the same as in L2 Forwarding sample application (see Receive, Process and
Transmit Packets). The packet transmission is done by sending mbufs into the kernel NIC
interfaces by rte_kni_tx_burst(). The KNI library automatically frees the mbufs after the kernel has successfully copied them.
/**
 * Interface to burst rx and enqueue mbufs into rx_q
 */
static void
kni_ingress(struct kni_port_params *p)
{
    uint8_t i, nb_kni, port_id;
    unsigned nb_rx, num;
    struct rte_mbuf *pkts_burst[PKT_BURST_SZ];

    if (p == NULL)
        return;

    nb_kni = p->nb_kni;
    port_id = p->port_id;

    for (i = 0; i < nb_kni; i++) {
        /* Burst rx from eth */
        nb_rx = rte_eth_rx_burst(port_id, 0, pkts_burst, PKT_BURST_SZ);
        if (unlikely(nb_rx > PKT_BURST_SZ)) {
            RTE_LOG(ERR, APP, "Error receiving from eth\n");
            return;
        }

        /* Burst tx to kni */
        num = rte_kni_tx_burst(p->kni[i], pkts_burst, nb_rx);
        kni_stats[port_id].rx_packets += num;
        rte_kni_handle_request(p->kni[i]);

        if (unlikely(num < nb_rx)) {
            /* Free mbufs not tx to kni interface */
            kni_burst_free_mbufs(&pkts_burst[num], nb_rx - num);
            kni_stats[port_id].rx_dropped += nb_rx - num;
        }
    }
}

For the other case that reads from kernel NIC interfaces and writes to a physical NIC port,
packets are retrieved by reading mbufs from kernel NIC interfaces by rte_kni_rx_burst(). The
packet transmission is the same as in the L2 Forwarding sample application (see Receive,
Process and Transmit Packets).
/**
 * Interface to dequeue mbufs from tx_q and burst tx
 */
static void
kni_egress(struct kni_port_params *p)
{
    uint8_t i, nb_kni, port_id;
    unsigned nb_tx, num;
    struct rte_mbuf *pkts_burst[PKT_BURST_SZ];

    if (p == NULL)
        return;

    nb_kni = p->nb_kni;
    port_id = p->port_id;

    for (i = 0; i < nb_kni; i++) {
        /* Burst rx from kni */
        num = rte_kni_rx_burst(p->kni[i], pkts_burst, PKT_BURST_SZ);
        if (unlikely(num > PKT_BURST_SZ)) {
            RTE_LOG(ERR, APP, "Error receiving from KNI\n");
            return;
        }

        /* Burst tx to eth */
        nb_tx = rte_eth_tx_burst(port_id, 0, pkts_burst, (uint16_t)num);
        kni_stats[port_id].tx_packets += nb_tx;

        if (unlikely(nb_tx < num)) {
            /* Free mbufs not tx to NIC */
            kni_burst_free_mbufs(&pkts_burst[nb_tx], num - nb_tx);
            kni_stats[port_id].tx_dropped += num - nb_tx;
        }
    }
}

Callbacks for Kernel Requests

To execute specific PMD operations in user space requested by some Linux* commands, call-
backs must be implemented and filled in the struct rte_kni_ops structure. Currently, setting a
new MTU and configuring the network interface (up/ down) are supported.
static struct rte_kni_ops kni_ops = {
    .change_mtu = kni_change_mtu,
    .config_network_if = kni_config_network_interface,
};

/* Callback for request of changing MTU */
static int
kni_change_mtu(uint8_t port_id, unsigned new_mtu)
{
    int ret;
    struct rte_eth_conf conf;

    if (port_id >= rte_eth_dev_count()) {
        RTE_LOG(ERR, APP, "Invalid port id %d\n", port_id);
        return -EINVAL;
    }

    RTE_LOG(INFO, APP, "Change MTU of port %d to %u\n", port_id, new_mtu);

    /* Stop specific port */
    rte_eth_dev_stop(port_id);

    memcpy(&conf, &port_conf, sizeof(conf));

    /* Set new MTU */
    if (new_mtu > ETHER_MAX_LEN)
        conf.rxmode.jumbo_frame = 1;
    else
        conf.rxmode.jumbo_frame = 0;

    /* mtu + length of header + length of FCS = max pkt length */
    conf.rxmode.max_rx_pkt_len = new_mtu + KNI_ENET_HEADER_SIZE +
                        KNI_ENET_FCS_SIZE;

    ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
    if (ret < 0) {
        RTE_LOG(ERR, APP, "Fail to reconfigure port %d\n", port_id);
        return ret;
    }

    /* Restart specific port */
    ret = rte_eth_dev_start(port_id);
    if (ret < 0) {
        RTE_LOG(ERR, APP, "Fail to restart port %d\n", port_id);
        return ret;
    }

    return 0;
}

/* Callback for request of configuring network interface up/down */
static int
kni_config_network_interface(uint8_t port_id, uint8_t if_up)
{
    int ret = 0;

    if (port_id >= rte_eth_dev_count() || port_id >= RTE_MAX_ETHPORTS) {
        RTE_LOG(ERR, APP, "Invalid port id %d\n", port_id);
        return -EINVAL;
    }

    RTE_LOG(INFO, APP, "Configure network interface of %d %s\n",
            port_id, if_up ? "up" : "down");

    if (if_up != 0) {
        /* Configure network interface up */
        rte_eth_dev_stop(port_id);
        ret = rte_eth_dev_start(port_id);
    } else
        /* Configure network interface down */
        rte_eth_dev_stop(port_id);

    if (ret < 0)
        RTE_LOG(ERR, APP, "Failed to start port %d\n", port_id);

    return ret;
}

Keep Alive Sample Application

The Keep Alive application is a simple example of a heartbeat/watchdog for packet processing
cores. It demonstrates how to detect ‘failed’ DPDK cores and notify a fault management entity
of this failure. Its purpose is to ensure the failure of the core does not result in a fault that is not
detectable by a management entity.

Overview

The application demonstrates how to protect against 'silent outages' on packet processing cores. A Keep Alive Monitor Agent Core (master) monitors the state of packet processing cores (worker cores) by dispatching pings at a regular time interval (default is 5ms) and monitoring the state of the cores. Core states are: Alive, MIA, Dead or Buried. MIA indicates a missed ping, and Dead indicates two missed pings within the specified time interval. When a core is Dead, a callback function is invoked to restart the packet processing core; a real-life application might use this callback function to notify a higher level fault management entity of the core failure in order to take the appropriate corrective action.
Note: Only the worker cores are monitored. A local (on the host) mechanism or agent to supervise the Keep Alive Monitor Agent Core itself is required to detect its failure.


Note: This application is based on the L2 Forwarding Sample Application (in Real and Virtual-
ized Environments). As such, the initialization and run-time paths are very similar to those of
the L2 forwarding application.

Compiling the Application

To compile the application:


1. Go to the sample application directory:
export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/keep_alive

2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc

See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make

Running the Application

The application has a number of command line options:


./build/l2fwd-keepalive [EAL options] \
-- -p PORTMASK [-q NQ] [-K PERIOD] [-T PERIOD]

where,
• -p PORTMASK: A hexadecimal bitmask of the ports to configure
• -q NQ: A number of queues (=ports) per lcore (default is 1)
• -K PERIOD: Heartbeat check period in ms (5ms default; 86400 max)
• -T PERIOD: statistics will be refreshed each PERIOD seconds (0 to disable, 10 default, 86400 maximum)
To run the application in a linuxapp environment with 4 lcores, 16 ports, 8 RX queues per lcore and a ping interval of 10ms, issue the command:
./build/l2fwd-keepalive -c f -n 4 -- -q 8 -p ffff -K 10

Refer to the DPDK Getting Started Guide for general information on running applications and
the Environment Abstraction Layer (EAL) options.

Explanation

The following sections provide some explanation of the Keep-Alive/'Liveliness' conceptual scheme. As mentioned in the overview section, the initialization and run-time paths are very similar to those of the L2 Forwarding Sample Application (in Real and Virtualized Environments).
The Keep-Alive/'Liveliness' conceptual scheme:
• A Keep-Alive Agent runs every N milliseconds.
• DPDK Cores respond to the keep-alive agent.


• If keep-alive agent detects time-outs, it notifies the fault management entity through a
callback function.
The following sections provide some explanation of the code aspects that are specific to the
Keep Alive sample application.
The keepalive functionality is initialized with a struct rte_keepalive and the callback function to
invoke in the case of a timeout.
rte_global_keepalive_info = rte_keepalive_create(&dead_core, NULL);
if (rte_global_keepalive_info == NULL)
rte_exit(EXIT_FAILURE, "keepalive_create() failed");

The function that issues the pings, rte_keepalive_dispatch_pings(), is configured to run every check_period milliseconds.
if (rte_timer_reset(&hb_timer,
(check_period * rte_get_timer_hz()) / 1000,
PERIODICAL,
rte_lcore_id(),
&rte_keepalive_dispatch_pings,
rte_global_keepalive_info
) != 0 )
rte_exit(EXIT_FAILURE, "Keepalive setup failure.\n");

The rest of the initialization and run-time path follows the same paths as the L2 forwarding application. The only addition to the main processing loop is the mark-alive functionality and the example random failures.
rte_keepalive_mark_alive(rte_global_keepalive_info);

cur_tsc = rte_rdtsc();

/* Die randomly within 7 secs for demo purposes.. */
if (cur_tsc - tsc_initial > tsc_lifetime)
    break;

The rte_keepalive_mark_alive function simply sets the core state to alive.


static inline void
rte_keepalive_mark_alive(struct rte_keepalive *keepcfg)
{
keepcfg->state_flags[rte_lcore_id()] = ALIVE;
}

L2 Forwarding with Crypto Sample Application

The L2 Forwarding with Crypto (l2fwd-crypto) sample application is a simple example of packet
processing using the Data Plane Development Kit (DPDK), in conjunction with the Cryptodev
library.

Overview

The L2 Forwarding with Crypto sample application performs a crypto operation (cipher/hash), specified by the user on the command line (or using default values), on each packet received on an RX_PORT, using a crypto device capable of that operation, and then performs L2 forwarding. The destination port is the adjacent port from the enabled portmask, that is, if the first four ports are enabled (portmask 0xf), ports 0 and 1 forward into each other, and ports 2 and 3 forward into each other. Also, the MAC addresses are affected as follows:


• The source MAC address is replaced by the TX_PORT MAC address


• The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID

Compiling the Application

1. Go to the example directory:


export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/l2fwd-crypto

2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc

See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make

Running the Application

The application requires a number of command line options:


./build/l2fwd-crypto [EAL options] -- [-p PORTMASK] [-q NQ] [-s] [-T PERIOD] \
[--cdev_type HW/SW/ANY] [--chain HASH_CIPHER/CIPHER_HASH/CIPHER_ONLY/HASH_ONLY] \
[--cipher_algo ALGO] [--cipher_op ENCRYPT/DECRYPT] [--cipher_key KEY] \
[--cipher_key_random_size SIZE] [--iv IV] [--iv_random_size SIZE] \
[--auth_algo ALGO] [--auth_op GENERATE/VERIFY] [--auth_key KEY] \
[--auth_key_random_size SIZE] [--aad AAD] [--aad_random_size SIZE] \
[--digest_size SIZE] [--sessionless]

where,
• -p PORTMASK: A hexadecimal bitmask of the ports to configure (default is all the ports)
• -q NQ: A number of queues (=ports) per lcore (default is 1)
• -s: manage all ports from a single core
• -T PERIOD: statistics will be refreshed each PERIOD seconds (0 to disable, 10 default, 86400 maximum)
• --cdev_type: select preferred crypto device type: HW, SW or anything (ANY) (default is ANY)
• --chain: select the operation chaining to perform: Cipher->Hash (CIPHER_HASH), Hash->Cipher (HASH_CIPHER), Cipher (CIPHER_ONLY), Hash (HASH_ONLY) (default is Cipher->Hash)
• --cipher_algo: select the ciphering algorithm (default is AES CBC)
• --cipher_op: select the ciphering operation to perform: ENCRYPT or DECRYPT (default is ENCRYPT)
• --cipher_key: set the ciphering key to be used. Bytes have to be separated with ":"

• --cipher_key_random_size: set the size of the ciphering key, which will be generated randomly. Note that if --cipher_key is used, this will be ignored.
• --iv: set the IV to be used. Bytes have to be separated with ":"
• --iv_random_size: set the size of the IV, which will be generated randomly. Note that if --iv is used, this will be ignored.
• --auth_algo: select the authentication algorithm (default is SHA1-HMAC)
• --auth_op: select the authentication operation to perform: GENERATE or VERIFY (default is GENERATE)
• --auth_key: set the authentication key to be used. Bytes have to be separated with ":"
• --auth_key_random_size: set the size of the authentication key, which will be generated randomly. Note that if --auth_key is used, this will be ignored.
• --aad: set the AAD to be used. Bytes have to be separated with ":"
• --aad_random_size: set the size of the AAD, which will be generated randomly. Note that if --aad is used, this will be ignored.
• --digest_size: set the size of the digest to be generated/verified.
• --sessionless: no crypto session will be created.
The application requires that crypto devices capable of performing the specified crypto operation are available at application initialization. This means that HW crypto device/s must be bound to a DPDK driver, or SW crypto device/s (virtual crypto PMD) must be created (using --vdev).
To run the application in linuxapp environment with 2 lcores, 2 ports and 2 crypto devices, issue
the command:
$ ./build/l2fwd-crypto -c 0x3 -n 4 --vdev "cryptodev_aesni_mb_pmd" \
--vdev "cryptodev_aesni_mb_pmd" -- -p 0x3 --chain CIPHER_HASH \
--cipher_op ENCRYPT --cipher_algo AES_CBC \
--cipher_key 00:01:02:03:04:05:06:07:08:09:0a:0b:0c:0d:0e:0f \
--auth_op GENERATE --auth_algo AES_XCBC_MAC \
--auth_key 10:11:12:13:14:15:16:17:18:19:1a:1b:1c:1d:1e:1f

Refer to the DPDK Getting Started Guide for general information on running applications and
the Environment Abstraction Layer (EAL) options.

Explanation

The L2 forward with Crypto application demonstrates the performance of a crypto operation on
a packet received on a RX PORT before forwarding it to a TX PORT.
The following figure illustrates a sample flow of a packet in the application, from reception until
transmission.
The following sections provide some explanation of the application.


Fig. 3.3: Encryption flow Through the L2 Forwarding with Crypto Application

Crypto operation specification

All the packets received on all the ports get transformed by the crypto device/s (ciphering and/or authentication). The crypto operation to be performed on the packet is parsed from the command line (see the "Running the Application" section for all the options).
If no parameter is passed, the default crypto operation is:
• Encryption with AES-CBC with 128 bit key.
• Authentication with SHA1-HMAC (generation).
• Keys, IV and AAD are generated randomly.
There are two methods to pass keys, IV and AAD from the command line:
• Passing the full key, separated bytes by ”:”:
--cipher_key 00:11:22:33:44

• Passing the size, so the key is generated randomly:
--cipher_key_random_size 16

Note: If the full key is passed (first method) and the size is passed as well (second method), the latter will be ignored.
The sizes of these keys are checked (regardless of the method) before starting the application, to make sure they are supported by the crypto devices.

Crypto device initialization

Once the encryption operation is defined, crypto devices are initialized. The crypto devices
must be either bound to a DPDK driver (if they are physical devices) or created using the EAL
option –vdev (if they are virtual devices), when running the application.
The initialize_cryptodevs() function performs the device initialization. It iterates through the list of the available crypto devices and checks which ones are capable of performing the operation. Each device has a set of capabilities associated with it, which are stored in the device info structure, so the function checks whether the requested operation is among the capabilities of each device.
The following code checks if the device supports the specified cipher algorithm (similar for the
authentication algorithm):
/* Check if device supports cipher algo */
i = 0;
opt_cipher_algo = options->cipher_xform.cipher.algo;
cap = &dev_info.capabilities[i];
while (cap->op != RTE_CRYPTO_OP_TYPE_UNDEFINED) {
    cap_cipher_algo = cap->sym.cipher.algo;
    if (cap->sym.xform_type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
        if (cap_cipher_algo == opt_cipher_algo) {
            if (check_type(options, &dev_info) == 0)
                break;
        }
    }
    cap = &dev_info.capabilities[++i];
}

If a capable crypto device is found, key sizes are checked to see if they are supported (cipher
key and IV for the ciphering):
/*
* Check if length of provided cipher key is supported
* by the algorithm chosen.
*/
if (options->ckey_param) {
if (check_supported_size(
options->cipher_xform.cipher.key.length,
cap->sym.cipher.key_size.min,
cap->sym.cipher.key_size.max,
cap->sym.cipher.key_size.increment)
!= 0) {
printf("Unsupported cipher key length\n");
return -1;
}
/*
* Check if length of the cipher key to be randomly generated
* is supported by the algorithm chosen.
*/
} else if (options->ckey_random_size != -1) {
if (check_supported_size(options->ckey_random_size,
cap->sym.cipher.key_size.min,
cap->sym.cipher.key_size.max,
cap->sym.cipher.key_size.increment)
!= 0) {
printf("Unsupported cipher key length\n");
return -1;
}
options->cipher_xform.cipher.key.length =
options->ckey_random_size;
/* No size provided, use minimum size. */
} else
options->cipher_xform.cipher.key.length =
cap->sym.cipher.key_size.min;

After all the checks, the device is configured and it is added to the crypto device list.
Note: The number of crypto devices that support the specified crypto operation must be at least the number of ports to be used.

Session creation

The crypto operation has a crypto session associated to it, which contains information such as
the transform chain to perform (e.g. ciphering then hashing), pointers to the keys, lengths...
etc.
This session is created and is later attached to the crypto operation:
static struct rte_cryptodev_sym_session *
initialize_crypto_session(struct l2fwd_crypto_options *options,
        uint8_t cdev_id)
{
    struct rte_crypto_sym_xform *first_xform;

    if (options->xform_chain == L2FWD_CRYPTO_CIPHER_HASH) {
        first_xform = &options->cipher_xform;
        first_xform->next = &options->auth_xform;
    } else if (options->xform_chain == L2FWD_CRYPTO_HASH_CIPHER) {
        first_xform = &options->auth_xform;
        first_xform->next = &options->cipher_xform;
    } else if (options->xform_chain == L2FWD_CRYPTO_CIPHER_ONLY) {
        first_xform = &options->cipher_xform;
    } else {
        first_xform = &options->auth_xform;
    }

    /* Setup Cipher Parameters */
    return rte_cryptodev_sym_session_create(cdev_id, first_xform);
}

...

port_cparams[i].session = initialize_crypto_session(options,
        port_cparams[i].dev_id);

Crypto operation creation

Given N packets received from an RX port, N crypto operations are allocated and filled:
if (nb_rx) {
    /*
     * If we can't allocate a crypto_ops, then drop
     * the rest of the burst and dequeue and
     * process the packets to free offload structs
     */
    if (rte_crypto_op_bulk_alloc(
            l2fwd_crypto_op_pool,
            RTE_CRYPTO_OP_TYPE_SYMMETRIC,
            ops_burst, nb_rx) !=
                    nb_rx) {
        for (j = 0; j < nb_rx; j++)
            rte_pktmbuf_free(pkts_burst[j]);

        nb_rx = 0;
    }

After filling the crypto operation (including session attachment), the mbuf which will be
transformed is attached to it:
op->sym->m_src = m;

Since no destination mbuf is set, the source mbuf will be overwritten after the operation is done
(in-place).
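For reference, filling one of the allocated operations before forwarding it to the device might
look as follows. This is a minimal sketch based on the DPDK 17.05 symmetric crypto API
(rte_crypto_op_attach_sym_session() and the rte_crypto_sym_op cipher fields); the payload
offsets and the cparams->iv field names are illustrative assumptions, not the application's
exact code:
/* Attach the session created earlier to this operation. */
rte_crypto_op_attach_sym_session(op, cparams->session);

/* Cipher the payload that follows the Ethernet header (the real
 * application also sets up authentication data where applicable). */
op->sym->cipher.data.offset = sizeof(struct ether_hdr);
op->sym->cipher.data.length = rte_pktmbuf_data_len(m) -
        sizeof(struct ether_hdr);

/* Per-operation initialization vector (field names assumed). */
op->sym->cipher.iv.data = cparams->iv.data;
op->sym->cipher.iv.phys_addr = cparams->iv.phys_addr;
op->sym->cipher.iv.length = cparams->iv.length;

/* Source mbuf; leaving op->sym->m_dst unset selects in-place operation. */
op->sym->m_src = m;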

Crypto operation enqueuing/dequeuing

Once the operation has been created, it has to be enqueued in one of the crypto devices.
Before doing so, for performance reasons, the operation stays in a buffer. When the buffer has
enough operations (MAX_PKT_BURST), they are enqueued in the device, which will perform
the operation at that moment:
static int
l2fwd_crypto_enqueue(struct rte_crypto_op *op,
        struct l2fwd_crypto_params *cparams)
{
    unsigned lcore_id, len;
    struct lcore_queue_conf *qconf;

    lcore_id = rte_lcore_id();

    qconf = &lcore_queue_conf[lcore_id];
    len = qconf->op_buf[cparams->dev_id].len;
    qconf->op_buf[cparams->dev_id].buffer[len] = op;
    len++;

    /* enough ops to be sent */
    if (len == MAX_PKT_BURST) {
        l2fwd_crypto_send_burst(qconf, MAX_PKT_BURST, cparams);
        len = 0;
    }

    qconf->op_buf[cparams->dev_id].len = len;
    return 0;
}

...

static int
l2fwd_crypto_send_burst(struct lcore_queue_conf *qconf, unsigned n,
        struct l2fwd_crypto_params *cparams)
{
    struct rte_crypto_op **op_buffer;
    unsigned ret;

    op_buffer = (struct rte_crypto_op **)
            qconf->op_buf[cparams->dev_id].buffer;

    ret = rte_cryptodev_enqueue_burst(cparams->dev_id,
            cparams->qp_id, op_buffer, (uint16_t) n);

    crypto_statistics[cparams->dev_id].enqueued += ret;
    if (unlikely(ret < n)) {
        crypto_statistics[cparams->dev_id].errors += (n - ret);
        do {
            rte_pktmbuf_free(op_buffer[ret]->sym->m_src);
            rte_crypto_op_free(op_buffer[ret]);
        } while (++ret < n);
    }

    return 0;
}

After this, the operations are dequeued from the device, and the transformed mbuf is extracted
from the operation. Then, the operation is freed and the mbuf is forwarded as it is done in the
L2 forwarding application.
/* Dequeue packets from Crypto device */
do {
    nb_rx = rte_cryptodev_dequeue_burst(
            cparams->dev_id, cparams->qp_id,
            ops_burst, MAX_PKT_BURST);

    crypto_statistics[cparams->dev_id].dequeued +=
            nb_rx;

    /* Forward crypto'd packets */
    for (j = 0; j < nb_rx; j++) {
        m = ops_burst[j]->sym->m_src;

        rte_crypto_op_free(ops_burst[j]);

        l2fwd_simple_forward(m, portid);
    }
} while (nb_rx == MAX_PKT_BURST);

L2 Forwarding Sample Application (in Real and Virtualized Environments) with core load statistics

The L2 Forwarding sample application is a simple example of packet processing using the Data
Plane Development Kit (DPDK) which also takes advantage of Single Root I/O Virtualization
(SR-IOV) features in a virtualized environment.

Note: This application is a variation of the L2 Forwarding sample application. It demonstrates
a possible scheme of job stats library usage; therefore, some parts of this document are
identical to the original L2 Forwarding application documentation.

Overview

The L2 Forwarding sample application, which can operate in real and virtualized environments,
performs L2 forwarding for each packet that is received. The destination port is the adjacent
port from the enabled portmask, that is, if the first four ports are enabled (portmask 0xf), ports
1 and 2 forward into each other, and ports 3 and 4 forward into each other. Also, the MAC
addresses are affected as follows:
• The source MAC address is replaced by the TX port MAC address
• The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID
This application can be used to benchmark performance using a traffic-generator, as shown in
Fig. 3.4.
The application can also be used in a virtualized environment as shown in Fig. 3.5.
The L2 Forwarding application can also be used as a starting point for developing a new
application based on the DPDK.

Fig. 3.4: Performance Benchmark Setup (Basic Environment)

Virtual Function Setup Instructions

This application can use the virtual function available in the system and therefore can be used
in a virtual machine without passing through the whole Network Device into a guest machine
in a virtualized scenario. The virtual functions can be enabled in the host machine or the
hypervisor with the respective physical function driver.
For example, in a Linux* host machine, it is possible to enable a virtual function using the
following command:
modprobe ixgbe max_vfs=2,2


Fig. 3.5: Performance Benchmark Setup (Virtualized Environment)


This command enables two Virtual Functions on each Physical Function of the NIC, with
two physical ports in the PCI configuration space. It is important to note that Virtual
Functions 0 and 2 would belong to Physical Function 0, and Virtual Functions 1 and 3 would
belong to Physical Function 1, in this case enabling a total of four Virtual Functions.
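The created Virtual Functions can be verified from the host before binding them to a DPDK
driver, for example (illustrative; the device naming depends on the NIC):
lspci | grep "Virtual Function"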

Compiling the Application

1. Go to the example directory:


export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/l2fwd-jobstats

2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc

See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make

Running the Application

The application requires a number of command line options:


./build/l2fwd-jobstats [EAL options] -- -p PORTMASK [-q NQ] [-l]

where,
• -p PORTMASK: A hexadecimal bitmask of the ports to configure
• -q NQ: A number of queues (=ports) per lcore (default is 1)
• -l: Use locale thousands separator when formatting big numbers.
To run the application in linuxapp environment with 4 lcores, 16 ports, 8 RX queues per lcore
and thousands separator printing, issue the command:
$ ./build/l2fwd-jobstats -c f -n 4 -- -q 8 -p ffff -l

Refer to the DPDK Getting Started Guide for general information on running applications and
the Environment Abstraction Layer (EAL) options.

Explanation

The following sections provide some explanation of the code.

Command Line Arguments

The L2 Forwarding sample application takes specific parameters, in addition to Environment
Abstraction Layer (EAL) arguments (see Running the Application). The preferred way to parse
parameters is to use the getopt() function, since it is part of a well-defined and portable library.
The parsing of arguments is done in the l2fwd_parse_args() function. The method of argument
parsing is not described here. Refer to the glibc getopt(3) man page for details.


EAL arguments are parsed first, then application-specific arguments. This is done at the
beginning of the main() function:
/* init EAL */
ret = rte_eal_init(argc, argv);
if (ret < 0)
    rte_exit(EXIT_FAILURE, "Invalid EAL arguments\n");

argc -= ret;
argv += ret;

/* parse application arguments (after the EAL ones) */
ret = l2fwd_parse_args(argc, argv);
if (ret < 0)
    rte_exit(EXIT_FAILURE, "Invalid L2FWD arguments\n");

Mbuf Pool Initialization

Once the arguments are parsed, the mbuf pool is created. The mbuf pool contains a set of
mbuf objects that will be used by the driver and the application to store network packet data:
/* create the mbuf pool */
l2fwd_pktmbuf_pool =
rte_mempool_create("mbuf_pool", NB_MBUF,
MBUF_SIZE, 32,
sizeof(struct rte_pktmbuf_pool_private),
rte_pktmbuf_pool_init, NULL,
rte_pktmbuf_init, NULL,
rte_socket_id(), 0);

if (l2fwd_pktmbuf_pool == NULL)
rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n");

The rte_mempool is a generic structure used to handle pools of objects. In this case, it is
necessary to create a pool that will be used by the driver, which expects to have some reserved
space in the mempool structure, sizeof(struct rte_pktmbuf_pool_private) bytes. The number of
allocated pkt mbufs is NB_MBUF, with a size of MBUF_SIZE each. A per-lcore cache of 32
mbufs is kept. The memory is allocated on the socket returned by rte_socket_id(), but it is
possible to extend this code to allocate one mbuf pool per socket.
Two callback pointers are also given to the rte_mempool_create() function:
• The first callback pointer is to rte_pktmbuf_pool_init() and is used to initialize the private
data of the mempool, which is needed by the driver. This function is provided by the mbuf
API, but can be copied and extended by the developer.
• The second callback pointer given to rte_mempool_create() is the mbuf initializer. The
default is used, that is, rte_pktmbuf_init(), which is provided in the rte_mbuf library. If a
more complex application wants to extend the rte_pktmbuf structure for its own needs, a
new function derived from rte_pktmbuf_init() can be created, as sketched below.
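A minimal sketch of such a derived initializer follows. It assumes the application reserves
extra room per object for a private structure and keeps the rte_mempool object callback
signature used by rte_pktmbuf_init(); the my_app_* names and the layout are hypothetical:
/* Hypothetical per-mbuf application data. */
struct my_app_priv {
    uint32_t flow_mark;
};

static void
my_app_pktmbuf_init(struct rte_mempool *mp, void *opaque_arg,
        void *obj, unsigned obj_idx)
{
    struct my_app_priv *priv;

    /* Let the standard initializer set up the mbuf itself. */
    rte_pktmbuf_init(mp, opaque_arg, obj, obj_idx);

    /* Then initialize the application-specific area, assumed here to
     * be reserved directly after the rte_mbuf structure (MBUF_SIZE
     * must be enlarged accordingly). */
    priv = (struct my_app_priv *)((char *)obj + sizeof(struct rte_mbuf));
    priv->flow_mark = 0;
}

This function would then be passed to rte_mempool_create() in place of rte_pktmbuf_init().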

Driver Initialization

The main part of the code in the main() function relates to the initialization of the driver. To fully
understand this code, it is recommended to study the chapters related to the Poll Mode
Driver in the DPDK Programmer’s Guide and the DPDK API Reference.


nb_ports = rte_eth_dev_count();

if (nb_ports == 0)
    rte_exit(EXIT_FAILURE, "No Ethernet ports - bye\n");

/* reset l2fwd_dst_ports */
for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++)
    l2fwd_dst_ports[portid] = 0;

last_port = 0;

/*
 * Each logical core is assigned a dedicated TX queue on each port.
 */
for (portid = 0; portid < nb_ports; portid++) {
    /* skip ports that are not enabled */
    if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
        continue;

    if (nb_ports_in_mask % 2) {
        l2fwd_dst_ports[portid] = last_port;
        l2fwd_dst_ports[last_port] = portid;
    }
    else
        last_port = portid;

    nb_ports_in_mask++;

    rte_eth_dev_info_get((uint8_t) portid, &dev_info);
}

The next step is to configure the RX and TX queues. For each port, there is only one RX queue
(only one lcore is able to poll a given port). The number of TX queues depends on the number
of available lcores. The rte_eth_dev_configure() function is used to configure the number of
queues for a port:
ret = rte_eth_dev_configure((uint8_t)portid, 1, 1, &port_conf);
if (ret < 0)
rte_exit(EXIT_FAILURE, "Cannot configure device: "
"err=%d, port=%u\n",
ret, portid);

The global configuration is stored in a static structure:


static const struct rte_eth_conf port_conf = {
.rxmode = {
.split_hdr_size = 0,
.header_split = 0, /**< Header Split disabled */
.hw_ip_checksum = 0, /**< IP checksum offload disabled */
.hw_vlan_filter = 0, /**< VLAN filtering disabled */
.jumbo_frame = 0, /**< Jumbo Frame Support disabled */
.hw_strip_crc= 0, /**< CRC stripped by hardware */
},

.txmode = {
.mq_mode = ETH_DCB_NONE
},
};


RX Queue Initialization

The application uses one lcore to poll one or several ports, depending on the -q option, which
specifies the number of queues per lcore.
For example, if the user specifies -q 4, the application is able to poll four ports with one lcore.
If there are 16 ports on the target (and if the portmask argument is -p ffff ), the application will
need four lcores to poll all the ports.
ret = rte_eth_rx_queue_setup(portid, 0, nb_rxd,
rte_eth_dev_socket_id(portid),
NULL,
l2fwd_pktmbuf_pool);

if (ret < 0)
rte_exit(EXIT_FAILURE, "rte_eth_rx_queue_setup:err=%d, port=%u\n",
ret, (unsigned) portid);

The list of queues that must be polled for a given lcore is stored in a private structure called
struct lcore_queue_conf.
struct lcore_queue_conf {
    unsigned n_rx_port;
    unsigned rx_port_list[MAX_RX_QUEUE_PER_LCORE];
    struct mbuf_table tx_mbufs[RTE_MAX_ETHPORTS];

    struct rte_timer rx_timers[MAX_RX_QUEUE_PER_LCORE];
    struct rte_jobstats port_fwd_jobs[MAX_RX_QUEUE_PER_LCORE];

    struct rte_timer flush_timer;
    struct rte_jobstats flush_job;
    struct rte_jobstats idle_job;
    struct rte_jobstats_context jobs_context;

    rte_atomic16_t stats_read_pending;
    rte_spinlock_t lock;
} __rte_cache_aligned;

Values of struct lcore_queue_conf:


• n_rx_port and rx_port_list[] are used in the main packet processing loop (see Section
Receive, Process and Transmit Packets later in this chapter).
• rx_timers and flush_timer are used to ensure forced TX on low packet rate.
• flush_job, idle_job and jobs_context are librte_jobstats objects used for managing l2fwd
jobs.
• stats_read_pending and lock are used during job stats read phase.

TX Queue Initialization

Each lcore should be able to transmit on any port. For every port, a single TX queue is
initialized.
/* init one TX queue on each port */

fflush(stdout);
ret = rte_eth_tx_queue_setup(portid, 0, nb_txd,
rte_eth_dev_socket_id(portid),
NULL);


if (ret < 0)
rte_exit(EXIT_FAILURE, "rte_eth_tx_queue_setup:err=%d, port=%u\n",
ret, (unsigned) portid);

Jobs statistics initialization

There are several statistics objects available:


• Flush job statistics
rte_jobstats_init(&qconf->flush_job, "flush", drain_tsc, drain_tsc,
drain_tsc, 0);

rte_timer_init(&qconf->flush_timer);
ret = rte_timer_reset(&qconf->flush_timer, drain_tsc, PERIODICAL,
lcore_id, &l2fwd_flush_job, NULL);

if (ret < 0) {
rte_exit(1, "Failed to reset flush job timer for lcore %u: %s",
lcore_id, rte_strerror(-ret));
}

• Statistics per RX port


rte_jobstats_init(job, name, 0, drain_tsc, 0, MAX_PKT_BURST);
rte_jobstats_set_update_period_function(job, l2fwd_job_update_cb);

rte_timer_init(&qconf->rx_timers[i]);
ret = rte_timer_reset(&qconf->rx_timers[i], 0, PERIODICAL, lcore_id,
l2fwd_fwd_job, (void *)(uintptr_t)i);

if (ret < 0) {
rte_exit(1, "Failed to reset lcore %u port %u job timer: %s",
lcore_id, qconf->rx_port_list[i], rte_strerror(-ret));
}

The following parameters are passed to rte_jobstats_init():
• 0 as minimal poll period
• drain_tsc as maximum poll period
• MAX_PKT_BURST as desired target value (RX burst size)

Main loop

The forwarding path is reworked compared to the original L2 Forwarding application. Three
loops are placed in the l2fwd_main_loop() function.
for (;;) {
    rte_spinlock_lock(&qconf->lock);

    do {
        rte_jobstats_context_start(&qconf->jobs_context);

        /* Do the Idle job:
         * - Read stats_read_pending flag
         * - check if some real job need to be executed
         */
        rte_jobstats_start(&qconf->jobs_context, &qconf->idle_job);

        do {
            uint8_t i;
            uint64_t now = rte_get_timer_cycles();

            need_manage = qconf->flush_timer.expire < now;
            /* Check if we were asked to give the stats. */
            stats_read_pending =
                    rte_atomic16_read(&qconf->stats_read_pending);
            need_manage |= stats_read_pending;

            for (i = 0; i < qconf->n_rx_port && !need_manage; i++)
                need_manage = qconf->rx_timers[i].expire < now;

        } while (!need_manage);
        rte_jobstats_finish(&qconf->idle_job, qconf->idle_job.target);

        rte_timer_manage();
        rte_jobstats_context_finish(&qconf->jobs_context);
    } while (likely(stats_read_pending == 0));

    rte_spinlock_unlock(&qconf->lock);
    rte_pause();
}

The first, infinite for loop minimizes the impact of stats reading: the lock is only taken and
released when requested.
The second, inner while loop does the whole job management. When any job is ready,
rte_timer_manage() is used to call the job handler. Here, the l2fwd_fwd_job() and
l2fwd_flush_job() functions are called when needed. Then rte_jobstats_context_finish() is
called to mark the end of the loop, meaning no other jobs are ready to execute. By this time the
stats are ready to be read, and if stats_read_pending is set, the loop breaks, allowing the stats
to be read.
The third, do-while loop is the idle job (idle stats counter). Its only purpose is to monitor whether
any job is ready or a stats read is pending for this lcore. Statistics from this part of the code are
considered the headroom available for additional processing.
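The reading side of this protocol (not shown in the excerpt above) can be sketched as follows,
assuming the reader runs on a different lcore than the forwarding loop:
/* Ask the worker loop to stop at a safe point, then take the lock. */
rte_atomic16_set(&qconf->stats_read_pending, 1);
rte_spinlock_lock(&qconf->lock);

/* It is now safe to read qconf->port_fwd_jobs[], qconf->flush_job
 * and qconf->idle_job statistics. */

rte_spinlock_unlock(&qconf->lock);
rte_atomic16_set(&qconf->stats_read_pending, 0);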

Receive, Process and Transmit Packets

The main task of the l2fwd_fwd_job() function is to read ingress packets from the RX queue of
a particular port and forward them. This is done using the following code:
total_nb_rx = rte_eth_rx_burst((uint8_t) portid, 0, pkts_burst,
        MAX_PKT_BURST);

for (j = 0; j < total_nb_rx; j++) {
    m = pkts_burst[j];
    rte_prefetch0(rte_pktmbuf_mtod(m, void *));
    l2fwd_simple_forward(m, portid);
}

Packets are read in a burst of size MAX_PKT_BURST. Then, each mbuf in the table is
processed by the l2fwd_simple_forward() function. The processing is very simple: retrieve the
TX port from the RX port, then replace the source and destination MAC addresses.
The rte_eth_rx_burst() function writes the mbuf pointers in a local table and returns the number
of available mbufs in the table.
After the first read, a second read is attempted.
if (total_nb_rx == MAX_PKT_BURST) {
    const uint16_t nb_rx = rte_eth_rx_burst((uint8_t) portid, 0, pkts_burst,
            MAX_PKT_BURST);

    total_nb_rx += nb_rx;
    for (j = 0; j < nb_rx; j++) {
        m = pkts_burst[j];
        rte_prefetch0(rte_pktmbuf_mtod(m, void *));
        l2fwd_simple_forward(m, portid);
    }
}

This second read is important to give the job stats library feedback on how many packets were
processed.
/* Adjust period time in which we are running here. */
if (rte_jobstats_finish(job, total_nb_rx) != 0) {
rte_timer_reset(&qconf->rx_timers[port_idx], job->period, PERIODICAL,
lcore_id, l2fwd_fwd_job, arg);
}

To maximize performance, exactly MAX_PKT_BURST packets (the target value) are expected
to be read for each l2fwd_fwd_job() call. If total_nb_rx is smaller than the target value,
job->period will be increased; if it is greater, the period will be decreased.
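The adjustment itself is performed by the callback registered earlier with
rte_jobstats_set_update_period_function(). A simplified sketch of such a callback, written as a
proportional adjustment clamped to the job's limits and assuming the callback returns the new
period (an illustration, not necessarily the application's exact code):
static uint64_t
example_job_update_cb(struct rte_jobstats *job, int64_t job_result)
{
    uint64_t period = job->period;

    /* Scale the poll period so the next burst tends toward the
     * target: reading fewer packets than the target lengthens the
     * period, reading more shortens it. */
    if (job_result > 0)
        period = period * job->target / job_result;

    if (period < job->min_period)
        period = job->min_period;
    if (period > job->max_period)
        period = job->max_period;

    return period;
}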

Note: In the following code, one line for getting the output port requires some explanation.

During the initialization process, a static array of destination ports (l2fwd_dst_ports[]) is filled
such that for each source port, a destination port is assigned that is either the next or previous
enabled port from the portmask. Naturally, the number of ports in the portmask must be even,
otherwise, the application exits.
static void
l2fwd_simple_forward(struct rte_mbuf *m, unsigned portid)
{
struct ether_hdr *eth;
void *tmp;
unsigned dst_port;

dst_port = l2fwd_dst_ports[portid];

eth = rte_pktmbuf_mtod(m, struct ether_hdr *);

/* 02:00:00:00:00:xx */

tmp = &eth->d_addr.addr_bytes[0];

*((uint64_t *)tmp) = 0x000000000002 + ((uint64_t) dst_port << 40);

/* src addr */

ether_addr_copy(&l2fwd_ports_eth_addr[dst_port], &eth->s_addr);

l2fwd_send_packet(m, (uint8_t) dst_port);


}

Then, the packet is sent using the l2fwd_send_packet (m, dst_port) function. For this test
application, the processing is exactly the same for all packets arriving on the same RX port.
Therefore, it would have been possible to call the l2fwd_send_burst() function directly from the
main loop to send all the received packets on the same TX port, using the burst-oriented send
function, which is more efficient.


However, in real-life applications (such as, L3 routing), packet N is not necessarily forwarded
on the same port as packet N-1. The application is implemented to illustrate that, so the same
approach can be reused in a more complex application.
The l2fwd_send_packet() function stores the packet in a per-lcore and per-txport table. If the
table is full, the whole packets table is transmitted using the l2fwd_send_burst() function:
/* Send the packet on an output interface */

static int
l2fwd_send_packet(struct rte_mbuf *m, uint8_t port)
{
unsigned lcore_id, len;
struct lcore_queue_conf *qconf;

lcore_id = rte_lcore_id();
qconf = &lcore_queue_conf[lcore_id];
len = qconf->tx_mbufs[port].len;
qconf->tx_mbufs[port].m_table[len] = m;
len++;

/* enough pkts to be sent */

if (unlikely(len == MAX_PKT_BURST)) {
l2fwd_send_burst(qconf, MAX_PKT_BURST, port);
len = 0;
}

    qconf->tx_mbufs[port].len = len;

    return 0;
}

To ensure that no packets remain in the tables, a flush job exists: l2fwd_flush_job() is called
periodically on each lcore to drain the TX queue of each port. This technique introduces some
latency when there are not many packets to send; however, it improves performance:
static void
l2fwd_flush_job(__rte_unused struct rte_timer *timer, __rte_unused void *arg)
{
    uint64_t now;
    unsigned lcore_id;
    struct lcore_queue_conf *qconf;
    struct mbuf_table *m_table;
    uint8_t portid;

    lcore_id = rte_lcore_id();
    qconf = &lcore_queue_conf[lcore_id];

    rte_jobstats_start(&qconf->jobs_context, &qconf->flush_job);

    now = rte_get_timer_cycles();
    lcore_id = rte_lcore_id();
    qconf = &lcore_queue_conf[lcore_id];
    for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {
        m_table = &qconf->tx_mbufs[portid];
        if (m_table->len == 0 || m_table->next_flush_time <= now)
            continue;

        l2fwd_send_burst(qconf, portid);
    }

    /* Pass target to indicate that this job is happy of time interval
     * in which it was called. */
    rte_jobstats_finish(&qconf->flush_job, qconf->flush_job.target);
}

L2 Forwarding Sample Application (in Real and Virtualized Environments)

The L2 Forwarding sample application is a simple example of packet processing using the Data
Plane Development Kit (DPDK) which also takes advantage of Single Root I/O Virtualization
(SR-IOV) features in a virtualized environment.

Note: Please note that previously a separate L2 Forwarding in Virtualized Environments
sample application was used; in later DPDK versions these sample applications have been
merged.

Overview

The L2 Forwarding sample application, which can operate in real and virtualized environments,
performs L2 forwarding for each packet that is received on an RX_PORT. The destination port is
the adjacent port from the enabled portmask, that is, if the first four ports are enabled (portmask
0xf), ports 1 and 2 forward into each other, and ports 3 and 4 forward into each other. Also, if
MAC addresses updating is enabled, the MAC addresses are affected as follows:
• The source MAC address is replaced by the TX_PORT MAC address
• The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID
This application can be used to benchmark performance using a traffic-generator, as shown in
the Fig. 3.6, or in a virtualized environment as shown in Fig. 3.7.

Fig. 3.6: Performance Benchmark Setup (Basic Environment)

This application may be used for basic VM to VM communication as shown in Fig. 3.8, when
MAC addresses updating is disabled.
The L2 Forwarding application can also be used as a starting point for developing a new
application based on the DPDK.

Virtual Function Setup Instructions

This application can use the virtual function available in the system and therefore can be used
in a virtual machine without passing through the whole Network Device into a guest machine
in a virtualized scenario. The virtual functions can be enabled in the host machine or the
hypervisor with the respective physical function driver.
For example, in a Linux* host machine, it is possible to enable a virtual function using the
following command:
modprobe ixgbe max_vfs=2,2


Fig. 3.7: Performance Benchmark Setup (Virtualized Environment)

Fig. 3.8: Virtual Machine to Virtual Machine communication.


This command enables two Virtual Functions on each Physical Function of the NIC, with
two physical ports in the PCI configuration space. It is important to note that Virtual
Functions 0 and 2 would belong to Physical Function 0, and Virtual Functions 1 and 3 would
belong to Physical Function 1, in this case enabling a total of four Virtual Functions.

Compiling the Application

1. Go to the example directory:


export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/l2fwd

2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc

See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make

Running the Application

The application requires a number of command line options:


./build/l2fwd [EAL options] -- -p PORTMASK [-q NQ] --[no-]mac-updating

where,
• -p PORTMASK: A hexadecimal bitmask of the ports to configure
• -q NQ: A number of queues (=ports) per lcore (default is 1)
• --[no-]mac-updating: Enable or disable MAC addresses updating (enabled by default).
To run the application in linuxapp environment with 4 lcores, 16 ports and 8 RX queues per
lcore and MAC address updating enabled, issue the command:
$ ./build/l2fwd -c f -n 4 -- -q 8 -p ffff

Refer to the DPDK Getting Started Guide for general information on running applications and
the Environment Abstraction Layer (EAL) options.

Explanation

The following sections provide some explanation of the code.

Command Line Arguments

The L2 Forwarding sample application takes specific parameters, in addition to Environment
Abstraction Layer (EAL) arguments. The preferred way to parse parameters is to use the
getopt() function, since it is part of a well-defined and portable library.
The parsing of arguments is done in the l2fwd_parse_args() function. The method of argument
parsing is not described here. Refer to the glibc getopt(3) man page for details.


EAL arguments are parsed first, then application-specific arguments. This is done at the
beginning of the main() function:
/* init EAL */
ret = rte_eal_init(argc, argv);
if (ret < 0)
    rte_exit(EXIT_FAILURE, "Invalid EAL arguments\n");

argc -= ret;
argv += ret;

/* parse application arguments (after the EAL ones) */
ret = l2fwd_parse_args(argc, argv);
if (ret < 0)
    rte_exit(EXIT_FAILURE, "Invalid L2FWD arguments\n");

Mbuf Pool Initialization

Once the arguments are parsed, the mbuf pool is created. The mbuf pool contains a set of
mbuf objects that will be used by the driver and the application to store network packet data:
/* create the mbuf pool */
l2fwd_pktmbuf_pool = rte_mempool_create("mbuf_pool", NB_MBUF, MBUF_SIZE, 32,
        sizeof(struct rte_pktmbuf_pool_private),
        rte_pktmbuf_pool_init, NULL,
        rte_pktmbuf_init, NULL,
        SOCKET0, 0);

if (l2fwd_pktmbuf_pool == NULL)
    rte_panic("Cannot init mbuf pool\n");

The rte_mempool is a generic structure used to handle pools of objects. In this case, it is
necessary to create a pool that will be used by the driver, which expects to have some reserved
space in the mempool structure, sizeof(struct rte_pktmbuf_pool_private) bytes. The number of
allocated pkt mbufs is NB_MBUF, with a size of MBUF_SIZE each. A per-lcore cache of 32
mbufs is kept. The memory is allocated in NUMA socket 0, but it is possible to extend this code
to allocate one mbuf pool per socket.
Two callback pointers are also given to the rte_mempool_create() function:
• The first callback pointer is to rte_pktmbuf_pool_init() and is used to initialize the private
data of the mempool, which is needed by the driver. This function is provided by the mbuf
API, but can be copied and extended by the developer.
• The second callback pointer given to rte_mempool_create() is the mbuf initializer. The
default is used, that is, rte_pktmbuf_init(), which is provided in the rte_mbuf library. If a
more complex application wants to extend the rte_pktmbuf structure for its own needs, a
new function derived from rte_pktmbuf_init( ) can be created.

Driver Initialization

The main part of the code in the main() function relates to the initialization of the driver. To fully
understand this code, it is recommended to study the chapters related to the Poll Mode
Driver in the DPDK Programmer’s Guide and the DPDK API Reference.
if (rte_eal_pci_probe() < 0)
    rte_exit(EXIT_FAILURE, "Cannot probe PCI\n");


nb_ports = rte_eth_dev_count();

if (nb_ports == 0)
    rte_exit(EXIT_FAILURE, "No Ethernet ports - bye\n");

/* reset l2fwd_dst_ports */
for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++)
    l2fwd_dst_ports[portid] = 0;

last_port = 0;

/*
 * Each logical core is assigned a dedicated TX queue on each port.
 */
for (portid = 0; portid < nb_ports; portid++) {
    /* skip ports that are not enabled */
    if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
        continue;

    if (nb_ports_in_mask % 2) {
        l2fwd_dst_ports[portid] = last_port;
        l2fwd_dst_ports[last_port] = portid;
    }
    else
        last_port = portid;

    nb_ports_in_mask++;

    rte_eth_dev_info_get((uint8_t) portid, &dev_info);
}

Observe that:
• rte_igb_pmd_init_all() simultaneously registers the driver as a PCI driver and as an
Ethernet* Poll Mode Driver.
• rte_eal_pci_probe() parses the devices on the PCI bus and initializes recognized devices.
The next step is to configure the RX and TX queues. For each port, there is only one RX queue
(only one lcore is able to poll a given port). The number of TX queues depends on the number
of available lcores. The rte_eth_dev_configure() function is used to configure the number of
queues for a port:
ret = rte_eth_dev_configure((uint8_t)portid, 1, 1, &port_conf);
if (ret < 0)
rte_exit(EXIT_FAILURE, "Cannot configure device: "
"err=%d, port=%u\n",
ret, portid);

The global configuration is stored in a static structure:


static const struct rte_eth_conf port_conf = {
.rxmode = {
.split_hdr_size = 0,
.header_split = 0, /**< Header Split disabled */
.hw_ip_checksum = 0, /**< IP checksum offload disabled */
.hw_vlan_filter = 0, /**< VLAN filtering disabled */
.jumbo_frame = 0, /**< Jumbo Frame Support disabled */
.hw_strip_crc= 0, /**< CRC stripped by hardware */
},


.txmode = {
.mq_mode = ETH_DCB_NONE
},
};

RX Queue Initialization

The application uses one lcore to poll one or several ports, depending on the -q option, which
specifies the number of queues per lcore.
For example, if the user specifies -q 4, the application is able to poll four ports with one lcore.
If there are 16 ports on the target (and if the portmask argument is -p ffff ), the application will
need four lcores to poll all the ports.
ret = rte_eth_rx_queue_setup((uint8_t) portid, 0, nb_rxd, SOCKET0, &rx_conf, l2fwd_pktmbuf_pool);
if (ret < 0)
    rte_exit(EXIT_FAILURE, "rte_eth_rx_queue_setup: "
            "err=%d, port=%u\n",
            ret, portid);

The list of queues that must be polled for a given lcore is stored in a private structure called
struct lcore_queue_conf.
struct lcore_queue_conf {
    unsigned n_rx_port;
    unsigned rx_port_list[MAX_RX_QUEUE_PER_LCORE];
    struct mbuf_table tx_mbufs[L2FWD_MAX_PORTS];
} __rte_cache_aligned;

struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];

The values n_rx_port and rx_port_list[] are used in the main packet processing loop (see
Receive, Process and Transmit Packets).
The global configuration for the RX queues is stored in a static structure:
static const struct rte_eth_rxconf rx_conf = {
.rx_thresh = {
.pthresh = RX_PTHRESH,
.hthresh = RX_HTHRESH,
.wthresh = RX_WTHRESH,
},
};

TX Queue Initialization

Each lcore should be able to transmit on any port. For every port, a single TX queue is
initialized.
/* init one TX queue on each port */
fflush(stdout);

ret = rte_eth_tx_queue_setup((uint8_t) portid, 0, nb_txd, rte_eth_dev_socket_id(portid), &tx_conf);

if (ret < 0)
    rte_exit(EXIT_FAILURE, "rte_eth_tx_queue_setup:err=%d, port=%u\n", ret, (unsigned) portid);

The global configuration for TX queues is stored in a static structure:


static const struct rte_eth_txconf tx_conf = {


.tx_thresh = {
.pthresh = TX_PTHRESH,
.hthresh = TX_HTHRESH,
.wthresh = TX_WTHRESH,
},
.tx_free_thresh = RTE_TEST_TX_DESC_DEFAULT + 1, /* disable feature */
};

Receive, Process and Transmit Packets

In the l2fwd_main_loop() function, the main task is to read ingress packets from the RX queues.
This is done using the following code:
/*
 * Read packet from RX queues
 */
for (i = 0; i < qconf->n_rx_port; i++) {
    portid = qconf->rx_port_list[i];
    nb_rx = rte_eth_rx_burst((uint8_t) portid, 0, pkts_burst, MAX_PKT_BURST);

    for (j = 0; j < nb_rx; j++) {
        m = pkts_burst[j];
        rte_prefetch0(rte_pktmbuf_mtod(m, void *));
        l2fwd_simple_forward(m, portid);
    }
}

Packets are read in a burst of size MAX_PKT_BURST. The rte_eth_rx_burst() function writes
the mbuf pointers in a local table and returns the number of available mbufs in the table.
Then, each mbuf in the table is processed by the l2fwd_simple_forward() function. The
processing is very simple: retrieve the TX port from the RX port, then replace the source and
destination MAC addresses if MAC addresses updating is enabled.

Note: In the following code, one line for getting the output port requires some explanation.

During the initialization process, a static array of destination ports (l2fwd_dst_ports[]) is filled
such that for each source port, a destination port is assigned that is either the next or previous
enabled port from the portmask. Naturally, the number of ports in the portmask must be even,
otherwise, the application exits.
static void
l2fwd_simple_forward(struct rte_mbuf *m, unsigned portid)
{
    struct ether_hdr *eth;
    void *tmp;
    unsigned dst_port;

    dst_port = l2fwd_dst_ports[portid];

    eth = rte_pktmbuf_mtod(m, struct ether_hdr *);

    /* 02:00:00:00:00:xx */
    tmp = &eth->d_addr.addr_bytes[0];
    *((uint64_t *)tmp) = 0x000000000002 + ((uint64_t) dst_port << 40);

    /* src addr */
    ether_addr_copy(&l2fwd_ports_eth_addr[dst_port], &eth->s_addr);

    l2fwd_send_packet(m, (uint8_t) dst_port);
}

Then, the packet is sent using the l2fwd_send_packet (m, dst_port) function. For this test
application, the processing is exactly the same for all packets arriving on the same RX port.
Therefore, it would have been possible to call the l2fwd_send_burst() function directly from the
main loop to send all the received packets on the same TX port, using the burst-oriented send
function, which is more efficient.
However, in real-life applications (such as, L3 routing), packet N is not necessarily forwarded
on the same port as packet N-1. The application is implemented to illustrate that, so the same
approach can be reused in a more complex application.
The l2fwd_send_packet() function stores the packet in a per-lcore and per-txport table. If the
table is full, the whole packets table is transmitted using the l2fwd_send_burst() function:
/* Send the packet on an output interface */

static int
l2fwd_send_packet(struct rte_mbuf *m, uint8_t port)
{
unsigned lcore_id, len;
struct lcore_queue_conf *qconf;

lcore_id = rte_lcore_id();
qconf = &lcore_queue_conf[lcore_id];
len = qconf->tx_mbufs[port].len;
qconf->tx_mbufs[port].m_table[len] = m;
len++;

/* enough pkts to be sent */

if (unlikely(len == MAX_PKT_BURST)) {
l2fwd_send_burst(qconf, MAX_PKT_BURST, port);
len = 0;
}

    qconf->tx_mbufs[port].len = len;

    return 0;
}

To ensure that no packets remain in the tables, each lcore does a draining of TX queue in its
main loop. This technique introduces some latency when there are not many packets to send,
however it improves performance:
cur_tsc = rte_rdtsc();

/*
 * TX burst queue drain
 */
diff_tsc = cur_tsc - prev_tsc;

if (unlikely(diff_tsc > drain_tsc)) {
    for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {
        if (qconf->tx_mbufs[portid].len == 0)
            continue;

        l2fwd_send_burst(&lcore_queue_conf[lcore_id],
                qconf->tx_mbufs[portid].len, (uint8_t) portid);

        qconf->tx_mbufs[portid].len = 0;
    }

    /* if timer is enabled */
    if (timer_period > 0) {
        /* advance the timer */
        timer_tsc += diff_tsc;

        /* if timer has reached its timeout */
        if (unlikely(timer_tsc >= (uint64_t) timer_period)) {
            /* do this only on master core */
            if (lcore_id == rte_get_master_lcore()) {
                print_stats();

                /* reset the timer */
                timer_tsc = 0;
            }
        }
    }

    prev_tsc = cur_tsc;
}

L2 Forwarding Sample Application with Cache Allocation Technology (CAT)

The Basic Forwarding sample application is a simple skeleton example of a forwarding application.
It has been extended to make use of CAT via extended command line options and by linking
against the libpqos library.
It is intended as a demonstration of the basic components of a DPDK forwarding application
and of the use of the libpqos library to program CAT. For more detailed implementations see the
L2 and L3 forwarding sample applications.
CAT and Code Data Prioritization (CDP) features allow management of the CPU’s last level
cache. CAT introduces classes of service (COS) that are essentially bitmasks. In current
CAT implementations, a bit in a COS bitmask corresponds to one cache way in last level
cache. A CPU core is always assigned to one of the CAT classes. By programming CPU core
assignment and COS bitmasks, applications can be given exclusive, shared, or mixed access
to the CPU’s last level cache. CDP extends CAT so that there are two bitmasks per COS, one
for data and one for code. The number of classes and number of valid bits in a COS bitmask is
CPU model specific and COS bitmasks need to be contiguous. Sample code calls this bitmask
cbm or capacity bitmask. By default, after reset, all CPU cores are assigned to COS 0 and all
classes are programmed to allow fill into all cache ways. CDP is off by default.
For more information about CAT please see:
• https://github.com/01org/intel-cmt-cat
White paper demonstrating example use case:
• Increasing Platform Determinism with Platform Quality of Service for the Data Plane
Development Kit


Compiling the Application

This application requires libpqos from Intel’s intel-cmt-cat software package, hosted in a GitHub
repository. For installation notes, please see the package’s README file.
GIT:
• https://github.com/01org/intel-cmt-cat
To compile the application export the path to PQoS lib and the DPDK source tree and go to the
example directory:
export PQOS_INSTALL_PATH=/path/to/libpqos
export RTE_SDK=/path/to/rte_sdk

cd ${RTE_SDK}/examples/l2fwd-cat

Set the target, for example:


export RTE_TARGET=x86_64-native-linuxapp-gcc

See the DPDK Getting Started Guide for possible RTE_TARGET values.
Build the application as follows:
make

Running the Application

To run the example in a linuxapp environment and enable CAT on cpus 0-2:
./build/l2fwd-cat -c 2 -n 4 -- --l3ca="0x3@(0-2)"

or to enable CAT and CDP on cpus 1,3:


./build/l2fwd-cat -c 2 -n 4 -- --l3ca="(0x00C00,0x00300)@(1,3)"

If CDP is not supported it will fail with following error message:


PQOS: CDP requested but not supported.
PQOS: Requested CAT configuration is not valid!
PQOS: Shutting down PQoS library...
EAL: Error - exiting with code: 1
Cause: PQOS: L3CA init failed!

The option to enable CAT is:


• --l3ca=’<common_cbm@cpus>[,<(code_cbm,data_cbm)@cpus>...]’:
where cbm stands for capacity bitmask and must be expressed in hexadecimal form.
common_cbm is a single mask, for a CDP enabled system, a group of two masks
(code_cbm and data_cbm) is used.
( and ) are necessary if it’s a group.
cpus could be a single digit/range or a group and must be expressed in decimal form.
( and ) are necessary if it’s a group.
e.g. --l3ca=’0x00F00@(1,3),0x0FF00@(4-6),0xF0000@7’
– cpus 1 and 3 share its 4 ways with cpus 4, 5 and 6;
– cpus 4, 5 and 6 share half (4 out of 8 ways) of its L3 with cpus 1 and 3;


– cpus 4, 5 and 6 have exclusive access to 4 out of 8 ways;


– cpu 7 has exclusive access to all of its 4 ways;
e.g. --l3ca=’(0x00C00,0x00300)@(1,3)’ for a CDP-enabled system
– cpus 1 and 3 have access to 2 ways for code and 2 ways for data; the code and data
ways are not overlapping.
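Since each set bit in a capacity bitmask corresponds to one cache way, the sharing
relationships in the first example can be checked with plain bitwise arithmetic. A small
illustrative program:
#include <stdio.h>
#include <stdint.h>

int
main(void)
{
    /* Masks from --l3ca='0x00F00@(1,3),0x0FF00@(4-6),0xF0000@7' */
    uint64_t cos_a = 0x00F00; /* cpus 1 and 3 */
    uint64_t cos_b = 0x0FF00; /* cpus 4-6 */
    uint64_t cos_c = 0xF0000; /* cpu 7 */

    /* Ways shared between two classes are the bits set in both. */
    printf("a and b share: %#llx (4 ways)\n",
            (unsigned long long)(cos_a & cos_b));
    printf("b exclusive:   %#llx (4 ways)\n",
            (unsigned long long)(cos_b & ~cos_a));
    printf("c overlap:     %#llx (cpu 7 is isolated)\n",
            (unsigned long long)(cos_c & (cos_a | cos_b)));
    return 0;
}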
Refer to DPDK Getting Started Guide for general information on running applications and the
Environment Abstraction Layer (EAL) options.
To reset or list the CAT configuration and to control CDP, please use the pqos tool from Intel’s
intel-cmt-cat software package.
To enable or disable CDP:
sudo ./pqos -S cdp-on

sudo ./pqos -S cdp-off

To reset the CAT configuration:
sudo ./pqos -R

To list the CAT configuration:
sudo ./pqos -s

For more info about pqos tool please see its man page or intel-cmt-cat wiki.

Explanation

The following sections provide an explanation of the main components of the code.
All DPDK library functions used in the sample code are prefixed with rte_ and are explained
in detail in the DPDK API Documentation.

The Main Function

The main() function performs the initialization and calls the execution threads for each lcore.
The first task is to initialize the Environment Abstraction Layer (EAL). The argc and argv
arguments are provided to the rte_eal_init() function. The value returned is the number
of parsed arguments:
int ret = rte_eal_init(argc, argv);
if (ret < 0)
rte_exit(EXIT_FAILURE, "Error with EAL initialization\n");

The next task is to initialize the PQoS library and configure CAT. The argc and argv argu-
ments are provided to the cat_init() function. The value returned is the number of parsed
arguments:
int ret = cat_init(argc, argv);
if (ret < 0)
rte_exit(EXIT_FAILURE, "PQOS: L3CA init failed!\n");

cat_init() is a wrapper function which parses the command, validates the requested pa-
rameters and configures CAT accordingly.


Parsing of command line arguments is done in parse_args(...). libpqos is then
initialized with the pqos_init(...) call. Next, libpqos is queried for system CPU
information and L3CA capabilities via pqos_cap_get(...) and pqos_cap_get_type(...,
PQOS_CAP_TYPE_L3CA, ...) calls. When all capability and topology information is
collected, the requested CAT configuration is validated. A check is then performed (on a
per-socket basis) for a sufficient number of un-associated COS. COS are selected and
configured via the pqos_l3ca_set(...) call. Finally, COS are associated to the relevant CPUs via
pqos_l3ca_assoc_set(...) calls.
atexit(...) is used to register cat_exit(...) to be called on a clean exit.
cat_exit(...) performs a simple CAT clean-up, by associating COS 0 to all involved CPUs
via pqos_l3ca_assoc_set(...) calls.
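Putting those calls together, the core of such an initialization might look like the sketch below.
It assumes the libpqos API from the intel-cmt-cat package (pqos_init(),
pqos_l3ca_assoc_set(), pqos_fini()); the configuration structure fields vary between libpqos
versions, so treat this as an outline rather than the application’s exact code:
#include <string.h>
#include <unistd.h>
#include <pqos.h>

static int
example_cat_assoc(unsigned lcore, unsigned class_id)
{
    struct pqos_config cfg;

    memset(&cfg, 0, sizeof(cfg));
    cfg.fd_log = STDOUT_FILENO; /* log to stdout (assumed field) */

    if (pqos_init(&cfg) != PQOS_RETVAL_OK)
        return -1;

    /* Associate the given lcore with the chosen class of service. */
    if (pqos_l3ca_assoc_set(lcore, class_id) != PQOS_RETVAL_OK) {
        pqos_fini();
        return -1;
    }

    return 0;
}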

L3 Forwarding Sample Application

The L3 Forwarding application is a simple example of packet processing using the DPDK. The
application performs L3 forwarding.

Overview

The application demonstrates the use of the hash and LPM libraries in the DPDK to implement
packet forwarding. The initialization and run-time paths are very similar to those of the L2
Forwarding Sample Application (in Real and Virtualized Environments). The main difference
from the L2 Forwarding sample application is that the forwarding decision is made based on
information read from the input packet.
The lookup method is either hash-based or LPM-based and is selected at run time. When the
selected lookup method is hash-based, a hash object is used to emulate the flow classification
stage. The hash object is used in correlation with a flow table to map each input packet to its
flow at runtime.
The hash lookup key is represented by a DiffServ 5-tuple composed of the following fields read
from the input packet: Source IP Address, Destination IP Address, Protocol, Source Port and
Destination Port. The ID of the output interface for the input packet is read from the identified
flow table entry. The set of flows used by the application is statically configured and loaded
into the hash at initialization time. When the selected lookup method is LPM based, an LPM
object is used to emulate the forwarding stage for IPv4 packets. The LPM object is used as the
routing table to identify the next hop for each input packet at runtime.
The LPM lookup key is represented by the Destination IP Address field read from the input
packet. The ID of the output interface for the input packet is the next hop returned by the LPM
lookup. The set of LPM rules used by the application is statically configured and loaded into
the LPM object at initialization time.
In the sample application, hash-based forwarding supports IPv4 and IPv6. LPM-based
forwarding supports IPv4 only.

Compiling the Application

To compile the application:


1. Go to the sample application directory:


export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/l3fwd

2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc

See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make

Running the Application

The application has a number of command line options:


./l3fwd [EAL options] -- -p PORTMASK
[-P]
[-E]
[-L]
--config(port,queue,lcore)[,(port,queue,lcore)]
[--eth-dest=X,MM:MM:MM:MM:MM:MM]
[--enable-jumbo [--max-pkt-len PKTLEN]]
[--no-numa]
[--hash-entry-num]
[--ipv6]
[--parse-ptype]

Where,
• -p PORTMASK: Hexadecimal bitmask of ports to configure
• -P: Optional, sets all ports to promiscuous mode so that packets are accepted regard-
less of the packet’s Ethernet MAC destination address. Without this option, only packets
with the Ethernet MAC destination address set to the Ethernet address of the port are
accepted.
• -E: Optional, enable exact match.
• -L: Optional, enable longest prefix match.
• --config (port,queue,lcore)[,(port,queue,lcore)]: Determines which
queues from which ports are mapped to which cores.
• --eth-dest=X,MM:MM:MM:MM:MM:MM: Optional, ethernet destination for port X.
• --enable-jumbo: Optional, enables jumbo frames.
• --max-pkt-len: Optional, under the premise of enabling jumbo, maximum packet
length in decimal (64-9600).
• --no-numa: Optional, disables numa awareness.
• --hash-entry-num: Optional, specifies the hash entry number in hexadecimal to be
setup.
• --ipv6: Optional, set if running ipv6 packets.
• --parse-ptype: Optional, set to use software to analyze packet type. Without this
option, hardware will check the packet type.


For example, consider a dual processor socket platform with 8 physical cores per socket, where
cores 0-7 and 16-23 appear on socket 0, while cores 8-15 and 24-31 appear on socket 1.
To enable L3 forwarding between two ports, assuming that both ports are in the same socket,
using two cores, cores 1 and 2, (which are in the same socket too), use the following command:
./build/l3fwd -l 1,2 -n 4 -- -p 0x3 --config="(0,0,1),(1,0,2)"

In this command:
• The -l option enables cores 1, 2
• The -p option enables ports 0 and 1
• The --config option enables one queue on each port and maps each (port,queue) pair to
a specific core. The following table shows the mapping in this example:
Port Queue lcore Description
0 0 1 Map queue 0 from port 0 to lcore 1.
1 0 2 Map queue 0 from port 1 to lcore 2.
Refer to the DPDK Getting Started Guide for general information on running applications and
the Environment Abstraction Layer (EAL) options.

Explanation

The following sections provide some explanation of the sample application code. As mentioned
in the overview section, the initialization and run-time paths are very similar to those of the L2
Forwarding Sample Application (in Real and Virtualized Environments). The following sections
describe aspects that are specific to the L3 Forwarding sample application.

Hash Initialization

The hash object is created and loaded with the pre-configured entries read from a global array,
and then generate the expected 5-tuple as key to keep consistence with those of real flow for
the convenience to execute hash performance test on 4M/8M/16M flows.

Note: The hash initialization will set up both the ipv4 and ipv6 hash tables, and populate either
table depending on the value of the variable ipv6. To support the hash performance test with
up to 8M single-direction flows/16M bi-direction flows, the populate_ipv4_many_flow_into_table()
function will populate the hash table with the specified number of hash entries (default 4M).

Note: The value of the global variable ipv6 can be specified with --ipv6 on the command line.
The value of the global variable hash_entry_number, which is used to specify the total number
of hash entries for all used ports in the hash performance test, can be specified with
--hash-entry-num VALUE on the command line; its default value is 4.

#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)

static void
setup_hash(int socketid)
{
    // ...

    if (hash_entry_number != HASH_ENTRY_NUMBER_DEFAULT) {
        if (ipv6 == 0) {
            /* populate the ipv4 hash */
            populate_ipv4_many_flow_into_table(ipv4_l3fwd_lookup_struct[socketid], hash_entry_number);
        } else {
            /* populate the ipv6 hash */
            populate_ipv6_many_flow_into_table(ipv6_l3fwd_lookup_struct[socketid], hash_entry_number);
        }
    } else {
        if (ipv6 == 0) {
            /* populate the ipv4 hash */
            populate_ipv4_few_flow_into_table(ipv4_l3fwd_lookup_struct[socketid]);
        } else {
            /* populate the ipv6 hash */
            populate_ipv6_few_flow_into_table(ipv6_l3fwd_lookup_struct[socketid]);
        }
    }
}
#endif

LPM Initialization

The LPM object is created and loaded with the pre-configured entries read from a global array.
#if (APP_LOOKUP_METHOD == APP_LOOKUP_LPM)

static void
setup_lpm(int socketid)
{
    unsigned i;
    int ret;
    char s[64];

    /* create the LPM table */
    snprintf(s, sizeof(s), "IPV4_L3FWD_LPM_%d", socketid);

    ipv4_l3fwd_lookup_struct[socketid] = rte_lpm_create(s, socketid, IPV4_L3FWD_LPM_MAX_RULES, 0);

    if (ipv4_l3fwd_lookup_struct[socketid] == NULL)
        rte_exit(EXIT_FAILURE, "Unable to create the l3fwd LPM table"
                " on socket %d\n", socketid);

    /* populate the LPM table */
    for (i = 0; i < IPV4_L3FWD_NUM_ROUTES; i++) {
        /* skip unused ports */
        if ((1 << ipv4_l3fwd_route_array[i].if_out & enabled_port_mask) == 0)
            continue;

        ret = rte_lpm_add(ipv4_l3fwd_lookup_struct[socketid], ipv4_l3fwd_route_array[i].ip,
                ipv4_l3fwd_route_array[i].depth, ipv4_l3fwd_route_array[i].if_out);

        if (ret < 0) {
            rte_exit(EXIT_FAILURE, "Unable to add entry %u to the "
                    "l3fwd LPM table on socket %d\n", i, socketid);
        }

        printf("LPM: Adding route 0x%08x / %d (%d)\n",
                (unsigned)ipv4_l3fwd_route_array[i].ip,
                ipv4_l3fwd_route_array[i].depth,
                ipv4_l3fwd_route_array[i].if_out);
    }
}
#endif
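The global array referenced above has a simple (ip, depth, if_out) layout. An illustrative
excerpt, using the IPv4() helper macro from rte_ip.h (the addresses and output ports here are
examples only, not the application’s statically configured rule set):
struct ipv4_l3fwd_route {
    uint32_t ip;
    uint8_t  depth;
    uint8_t  if_out;
};

static struct ipv4_l3fwd_route ipv4_l3fwd_route_array[] = {
    {IPv4(1, 1, 1, 0), 24, 0},
    {IPv4(2, 1, 1, 0), 24, 1},
    {IPv4(3, 1, 1, 0), 24, 2},
    {IPv4(4, 1, 1, 0), 24, 3},
};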

Packet Forwarding for Hash-based Lookups

For each input packet, the packet forwarding operation is done by the l3fwd_simple_forward()
or simple_ipv4_fwd_4pkts() function for IPv4 packets or the simple_ipv6_fwd_4pkts() func-
tion for IPv6 packets. The l3fwd_simple_forward() function provides the basic functionality
for both IPv4 and IPv6 packet forwarding for any number of burst packets received, and the
packet forwarding decision (that is, the identification of the output interface for the packet) for
hash-based lookups is done by the get_ipv4_dst_port() or get_ipv6_dst_port() function. The
get_ipv4_dst_port() function is shown below:
static inline uint8_t
get_ipv4_dst_port(void *ipv4_hdr, uint8_t portid, lookup_struct_t *ipv4_l3fwd_lookup_struct)
{
    int ret = 0;
    union ipv4_5tuple_host key;

    ipv4_hdr = (uint8_t *)ipv4_hdr + offsetof(struct ipv4_hdr, time_to_live);

    __m128i data = _mm_loadu_si128((__m128i *)(ipv4_hdr));

    /* Get 5 tuple: dst port, src port, dst IP address, src IP address and protocol */
    key.xmm = _mm_and_si128(data, mask0);

    /* Find destination port */
    ret = rte_hash_lookup(ipv4_l3fwd_lookup_struct, (const void *)&key);

    return (uint8_t)((ret < 0)? portid : ipv4_l3fwd_out_if[ret]);
}

The get_ipv6_dst_port() function is similar to the get_ipv4_dst_port() function.
The simple_ipv4_fwd_4pkts() and simple_ipv6_fwd_4pkts() functions are optimized for bursts
of four consecutive valid IPv4/IPv6 packets; they leverage the multiple-buffer optimization to
boost the performance of forwarding packets with exact match lookups on the hash table. The
key code snippet of simple_ipv4_fwd_4pkts() is shown below:
static inline void
simple_ipv4_fwd_4pkts(struct rte_mbuf* m[4], uint8_t portid, struct lcore_conf *qconf)
{
    // ...

    data[0] = _mm_loadu_si128((__m128i *)(rte_pktmbuf_mtod(m[0], unsigned char *) +
            sizeof(struct ether_hdr) + offsetof(struct ipv4_hdr, time_to_live)));
    data[1] = _mm_loadu_si128((__m128i *)(rte_pktmbuf_mtod(m[1], unsigned char *) +
            sizeof(struct ether_hdr) + offsetof(struct ipv4_hdr, time_to_live)));
    data[2] = _mm_loadu_si128((__m128i *)(rte_pktmbuf_mtod(m[2], unsigned char *) +
            sizeof(struct ether_hdr) + offsetof(struct ipv4_hdr, time_to_live)));
    data[3] = _mm_loadu_si128((__m128i *)(rte_pktmbuf_mtod(m[3], unsigned char *) +
            sizeof(struct ether_hdr) + offsetof(struct ipv4_hdr, time_to_live)));

    key[0].xmm = _mm_and_si128(data[0], mask0);
    key[1].xmm = _mm_and_si128(data[1], mask0);
    key[2].xmm = _mm_and_si128(data[2], mask0);
    key[3].xmm = _mm_and_si128(data[3], mask0);

    const void *key_array[4] = {&key[0], &key[1], &key[2], &key[3]};

    rte_hash_lookup_bulk(qconf->ipv4_lookup_struct, &key_array[0], 4, ret);

    dst_port[0] = (ret[0] < 0)? portid:ipv4_l3fwd_out_if[ret[0]];
    dst_port[1] = (ret[1] < 0)? portid:ipv4_l3fwd_out_if[ret[1]];
    dst_port[2] = (ret[2] < 0)? portid:ipv4_l3fwd_out_if[ret[2]];
    dst_port[3] = (ret[3] < 0)? portid:ipv4_l3fwd_out_if[ret[3]];

    // ...
}

The simple_ipv6_fwd_4pkts() function is similar to the simple_ipv4_fwd_4pkts() function.
Known issue: IP packets with extension headers, or IP packets which are not TCP/UDP, do not
work well in this mode.

Packet Forwarding for LPM-based Lookups

For each input packet, the packet forwarding operation is done by the l3fwd_simple_forward()
function, but the packet forwarding decision (that is, the identification of the output interface for
the packet) for LPM-based lookups is done by the get_ipv4_dst_port() function below:
static inline uint8_t
get_ipv4_dst_port(struct ipv4_hdr *ipv4_hdr, uint8_t portid, lookup_struct_t *ipv4_l3fwd_lookup_struct)
{
    uint8_t next_hop;

    return (uint8_t) ((rte_lpm_lookup(ipv4_l3fwd_lookup_struct,
            rte_be_to_cpu_32(ipv4_hdr->dst_addr), &next_hop) == 0)? next_hop : portid);
}

L3 Forwarding with Power Management Sample Application

Introduction

The L3 Forwarding with Power Management application is an example of power-aware packet
processing using the DPDK. The application is based on the existing L3 Forwarding sample
application, with power management algorithms to control the P-states and C-states of the Intel
processor via a power management library.

Overview

The application demonstrates the use of the Power libraries in the DPDK to implement packet forwarding. The initialization and run-time paths are very similar to those of the L3 Forwarding Sample Application. The main difference from the L3 Forwarding sample application is that this application introduces power-aware optimization algorithms by leveraging the Power library to control the P-states and C-states of the processor based on packet load.
The DPDK includes poll-mode drivers to configure Intel NIC devices and their receive (Rx) and
transmit (Tx) queues. The design principle of this PMD is to access the Rx and Tx descriptors
directly without any interrupts to quickly receive, process and deliver packets in the user space.
In general, the DPDK executes an endless packet processing loop on dedicated IA cores that
include the following steps:
• Retrieve input packets through the PMD to poll Rx queue


• Process each received packet or provide received packets to other processing cores
through software queues
• Send pending output packets to Tx queue through the PMD
In this way, the PMD achieves better performance than a traditional interrupt-mode driver, at the cost of keeping cores active and running at the highest frequency, hence consuming maximum power all the time. However, during periods of light network traffic, which occur regularly in communication infrastructure systems due to the well-known “tidal effect”, the PMD is still busy waiting for network packets, which wastes a lot of power.
Processor performance states (P-states) are the capability of an Intel processor to switch be-
tween different supported operating frequencies and voltages. If configured correctly, accord-
ing to system workload, this feature provides power savings. CPUFreq is the infrastructure
provided by the Linux* kernel to control the processor performance state capability. CPUFreq
supports a user space governor that enables setting frequency via manipulating the virtual file
device from a user space application. The Power library in the DPDK provides a set of APIs for
manipulating a virtual file device to allow user space application to set the CPUFreq governor
and set the frequency of specific cores.
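
As a quick orientation, the following minimal sketch shows the shape of these librte_power calls; it is an illustration of the library API only, not the sample's exact logic:

#include <stdlib.h>

#include <rte_lcore.h>
#include <rte_power.h>

/* Minimal per-lcore frequency control using librte_power. */
static void
lcore_power_example(void)
{
    unsigned lcore_id = rte_lcore_id();

    /* Switch the CPUFreq governor of this lcore's CPU to "userspace"
     * and take ownership of its frequency setting. */
    if (rte_power_init(lcore_id) != 0)
        exit(EXIT_FAILURE);

    rte_power_freq_up(lcore_id);   /* one frequency step higher */
    rte_power_freq_down(lcore_id); /* one frequency step lower */
    rte_power_freq_max(lcore_id);  /* jump to the highest frequency */
    rte_power_freq_min(lcore_id);  /* jump to the lowest frequency */

    /* Restore the original governor on shutdown. */
    rte_power_exit(lcore_id);
}
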
This application includes a P-state power management algorithm that generates a frequency hint to be sent to CPUFreq. The algorithm uses the number of received and available Rx packets on recent polls to make a heuristic decision to scale frequency up or down. Specifically, some thresholds are checked to see whether a specific core running a DPDK polling thread needs to increase its frequency by a step, based on a trend of polled Rx queues being nearly full. The algorithm also decreases the frequency by a step if the number of packets processed per loop is far less than the expected threshold or the thread's sleeping time exceeds a threshold.
C-States are also known as sleep states. They allow software to put an Intel core into a low power idle state from which it is possible to exit via an event, such as an interrupt. However, there is a tradeoff between the power consumed in the idle state and the time required to wake up from the idle state (exit latency). Therefore, as you go into deeper C-states, the power consumed is lower but the exit latency is increased. Each C-state has a target residency. It is essential that when entering a C-state, the core remains in it for at least as long as the target residency in order to fully realize the benefits of entering the C-state. CPUIdle is the infrastructure provided by the Linux kernel to control the processor C-state capability. Unlike CPUFreq, CPUIdle does not provide a mechanism that allows the application to change C-state. It has its own heuristic algorithms in kernel space that select the target C-state to enter by executing privileged instructions like HLT and MWAIT, based on the speculative sleep duration of the core. In this application, we introduce a heuristic algorithm that allows packet processing cores to sleep for a short period if there is no Rx packet received on recent polls. In this way, CPUIdle automatically forces the corresponding cores to enter deeper C-states instead of remaining in the C0 state busy waiting for packets.

Note: To fully demonstrate the power saving capability of using C-states, it is recommended
to enable deeper C3 and C6 states in the BIOS during system boot up.

Compiling the Application

To compile the application:


1. Go to the sample application directory:


export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/l3fwd-power

2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc

See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make

Running the Application

The application has a number of command line options:


./build/l3fwd_power [EAL options] -- -p PORTMASK [-P] --config(port,queue,lcore)[,(port,queue,lcore)] [--enable-jumbo [--max-pkt-len PKTLEN]] [--no-numa]

where,
• -p PORTMASK: Hexadecimal bitmask of ports to configure
• -P: Sets all ports to promiscuous mode so that packets are accepted regardless of the packet's Ethernet MAC destination address. Without this option, only packets with the Ethernet MAC destination address set to the Ethernet address of the port are accepted.
• --config (port,queue,lcore)[,(port,queue,lcore)]: Determines which queues from which ports are mapped to which cores.
• --enable-jumbo: Optional, enables jumbo frames.
• --max-pkt-len: Optional, maximum packet length in decimal (64-9600).
• --no-numa: Optional, disables NUMA awareness.
See L3 Forwarding Sample Application for details. The L3fwd-power example reuses the L3fwd command line options.

Explanation

The following sections provide some explanation of the sample application code. As mentioned
in the overview section, the initialization and run-time paths are identical to those of the L3
forwarding application. The following sections describe aspects that are specific to the L3
Forwarding with Power Management sample application.

Power Library Initialization

The Power library is initialized in the main routine. It changes the P-state governor to userspace
for specific cores that are under control. The Timer library is also initialized and several timers
are created later on, responsible for checking if it needs to scale down frequency at run time
by checking CPU utilization statistics.

Note: Only the power management related initialization is shown.


int main(int argc, char **argv)
{
struct lcore_conf *qconf;
int ret;
unsigned nb_ports;
uint16_t queueid;
unsigned lcore_id;
uint64_t hz;
uint32_t n_tx_queue, nb_lcores;
uint8_t portid, nb_rx_queue, queue, socketid;

// ...

/* init RTE timer library to be used to initialize per-core timers */

rte_timer_subsystem_init();

// ...

/* per-core initialization */

for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {


if (rte_lcore_is_enabled(lcore_id) == 0)
continue;

/* init power management library for a specified core */

ret = rte_power_init(lcore_id);
if (ret)
rte_exit(EXIT_FAILURE, "Power management library "
"initialization failed on core%d\n", lcore_id);

/* init timer structures for each enabled lcore */

rte_timer_init(&power_timers[lcore_id]);

hz = rte_get_hpet_hz();

rte_timer_reset(&power_timers[lcore_id], hz/TIMER_NUMBER_PER_SECOND, SINGLE, lcore_id, power_timer_cb, NULL);

// ...
}

// ...
}

Monitoring Loads of Rx Queues

In general, the polling nature of the DPDK prevents the OS power management subsystem
from knowing whether the network load is actually heavy or light. In this sample, network load sampling is done by monitoring received and available descriptors on NIC Rx queues in recent polls. Based on the number of returned and available Rx descriptors, this example implements
algorithms to generate frequency scaling hints and speculative sleep duration, and use them
to control P-state and C-state of processors via the power management library. Frequency (P-
state) control and sleep state (C-state) control work individually for each logical core, and the
combination of them contributes to a power efficient packet processing solution when serving
light network loads.
The rte_eth_rx_burst() function and the newly-added rte_eth_rx_queue_count() function are


used in the endless packet processing loop to return the number of received and available Rx
descriptors. Those numbers for a specific queue are passed to the P-state and C-state heuristic algorithms to generate hints based on recent network load trends.

Note: Only power control related code is shown.

static __attribute__((noreturn)) int
main_loop(__attribute__((unused)) void *dummy)
{
// ...

while (1) {
// ...

/**
* Read packet from RX queues
*/

lcore_scaleup_hint = FREQ_CURRENT;
lcore_rx_idle_count = 0;

for (i = 0; i < qconf->n_rx_queue; ++i) {
rx_queue = &(qconf->rx_queue_list[i]);
rx_queue->idle_hint = 0;
portid = rx_queue->port_id;
queueid = rx_queue->queue_id;

nb_rx = rte_eth_rx_burst(portid, queueid, pkts_burst, MAX_PKT_BURST);


stats[lcore_id].nb_rx_processed += nb_rx;

if (unlikely(nb_rx == 0)) {
/**
* no packet received from rx queue, try to
* sleep for a while forcing CPU enter deeper
* C states.
*/

rx_queue->zero_rx_packet_count++;

if (rx_queue->zero_rx_packet_count <= MIN_ZERO_POLL_COUNT)
    continue;

rx_queue->idle_hint = power_idle_heuristic(rx_queue->zero_rx_packet_count);
lcore_rx_idle_count++;
} else {
rx_ring_length = rte_eth_rx_queue_count(portid, queueid);

rx_queue->zero_rx_packet_count = 0;

/**
* do not scale up frequency immediately as
* user to kernel space communication is costly
* which might impact packet I/O for received
* packets.
*/

rx_queue->freq_up_hint = power_freq_scaleup_heuristic(lcore_id, rx_ring_length);


}

/* Prefetch and forward packets */


// ...
}

if (likely(lcore_rx_idle_count != qconf->n_rx_queue)) {
    for (i = 1, lcore_scaleup_hint = qconf->rx_queue_list[0].freq_up_hint;
            i < qconf->n_rx_queue; ++i) {
        rx_queue = &(qconf->rx_queue_list[i]);

        if (rx_queue->freq_up_hint > lcore_scaleup_hint)
            lcore_scaleup_hint = rx_queue->freq_up_hint;
    }

    if (lcore_scaleup_hint == FREQ_HIGHEST)
        rte_power_freq_max(lcore_id);
    else if (lcore_scaleup_hint == FREQ_HIGHER)
        rte_power_freq_up(lcore_id);
} else {
    /**
     * All Rx queues empty in recent consecutive polls,
     * sleep in a conservative manner, meaning sleep as
     * little as possible.
     */

    for (i = 1, lcore_idle_hint = qconf->rx_queue_list[0].idle_hint;
            i < qconf->n_rx_queue; ++i) {
        rx_queue = &(qconf->rx_queue_list[i]);
        if (rx_queue->idle_hint < lcore_idle_hint)
            lcore_idle_hint = rx_queue->idle_hint;
    }

    if (lcore_idle_hint < SLEEP_GEAR1_THRESHOLD)
        /**
         * execute "pause" instruction to avoid context
         * switch for short sleep.
         */
        rte_delay_us(lcore_idle_hint);
    else
        /* long sleep forces the running thread to suspend */
        usleep(lcore_idle_hint);

stats[lcore_id].sleep_time += lcore_idle_hint;
}
}
}

P-State Heuristic Algorithm

The power_freq_scaleup_heuristic() function is responsible for generating a frequency hint for the specified logical core according to the number of available descriptors returned from rte_eth_rx_queue_count(). On every poll for new packets, the number of available descriptors on an Rx queue is evaluated, and the algorithm used for frequency hinting is as follows:
• If the number of available descriptors exceeds 96, the maximum frequency is hinted.
• If the number of available descriptors exceeds 64, a trend counter is incremented by 100.
• If the length of the ring exceeds 32, the trend counter is incremented by 1.
• When the trend counter reaches 10000, the frequency hint is changed to the next higher frequency.

Note: The assumption is that the Rx queue size is 128, and the thresholds specified above must be adjusted accordingly based on the actual hardware Rx queue size, which is configured via the rte_eth_rx_queue_setup() function.
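
A minimal sketch of such a heuristic is shown below. The gear thresholds come straight from the list above; the macro names and the per-lcore trend counter are illustrative assumptions rather than the exact identifiers used in the sample source:

#include <stdint.h>
#include <rte_lcore.h>

enum freq_scale_hint_t {
    FREQ_CURRENT = 0,
    FREQ_HIGHER  = 1,
    FREQ_HIGHEST = 2
};

#define FREQ_GEAR1_RX_PACKET_THRESHOLD 32
#define FREQ_GEAR2_RX_PACKET_THRESHOLD 64
#define FREQ_GEAR3_RX_PACKET_THRESHOLD 96
#define FREQ_UP_TREND1_ACC             1
#define FREQ_UP_TREND2_ACC             100
#define FREQ_UP_THRESHOLD              10000

static uint32_t trend[RTE_MAX_LCORE]; /* per-lcore scale-up trend counter */

static enum freq_scale_hint_t
power_freq_scaleup_heuristic(unsigned lcore_id, uint32_t rx_ring_length)
{
    /* A nearly full ring means the core cannot keep up: go to max now. */
    if (rx_ring_length > FREQ_GEAR3_RX_PACKET_THRESHOLD)
        return FREQ_HIGHEST;

    /* A moderate backlog accumulates trend quickly, a light one slowly. */
    if (rx_ring_length > FREQ_GEAR2_RX_PACKET_THRESHOLD)
        trend[lcore_id] += FREQ_UP_TREND2_ACC;
    else if (rx_ring_length > FREQ_GEAR1_RX_PACKET_THRESHOLD)
        trend[lcore_id] += FREQ_UP_TREND1_ACC;

    /* Once the trend is confirmed, hint one frequency step up. */
    if (trend[lcore_id] > FREQ_UP_THRESHOLD) {
        trend[lcore_id] = 0;
        return FREQ_HIGHER;
    }

    return FREQ_CURRENT;
}
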

In general, a thread needs to poll packets from multiple Rx queues. Most likely, different queues have different loads, so they return different frequency hints. The algorithm evaluates all the hints and then scales up frequency in an aggressive manner, scaling up to the highest frequency as long as at least one Rx queue requires it. In this way, any negative performance impact is minimized.
On the other hand, frequency scaling down is controlled in the timer callback function. Specifically, if the sleep times of a logical core indicate that it is sleeping more than 25% of the sampling period, or if the average number of packets processed per iteration is less than expected, the frequency is decreased by one step.

C-State Heuristic Algorithm

Whenever recent rte_eth_rx_burst() polls return 5 consecutive zero packets, an idle counter begins incrementing for each successive zero poll. At the same time, the function power_idle_heuristic() is called to generate a speculative sleep duration in order to force the logical core to enter a deeper sleeping C-state. There is no way to control the C-state directly, and the CPUIdle subsystem in the OS is intelligent enough to select the C-state to enter based on the actual sleep period of the given logical core. The algorithm has the following sleeping behavior depending on the idle counter:
• If the idle count is less than 100, the counter value is used as a microsecond sleep value through rte_delay_us(), which executes pause instructions to avoid a costly context switch while still saving power.
• If the idle count is between 100 and 999, a fixed sleep interval of 100 𝜇s is used. A 100 𝜇s sleep interval allows the core to enter the C1 state while keeping a fast response time in case new traffic arrives.
• If the idle count is greater than 1000, a fixed sleep value of 1 ms is used until the next timer expiration. This allows the core to enter the C3/C6 states.

Note: The thresholds specified above need to be adjusted for different Intel processors and
traffic profiles.
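
A minimal sketch of the idle heuristic follows; the threshold and return values mirror the three cases above, while the macro names are illustrative assumptions:

#include <stdint.h>

#define SLEEP_GEAR1_THRESHOLD 100  /* idle polls before a fixed nap  */
#define SLEEP_GEAR2_THRESHOLD 1000 /* idle polls before a long sleep */
#define SHORT_SLEEP_US        100  /* 100 us is enough to reach C1   */
#define LONG_SLEEP_US         1000 /* 1 ms lets CPUIdle pick C3/C6   */

static inline uint32_t
power_idle_heuristic(uint32_t zero_rx_packet_count)
{
    /* Few empty polls: pause for as many microseconds as polls seen. */
    if (zero_rx_packet_count < SLEEP_GEAR1_THRESHOLD)
        return zero_rx_packet_count;

    /* Moderate idle streak: a fixed 100 us nap. */
    if (zero_rx_packet_count < SLEEP_GEAR2_THRESHOLD)
        return SHORT_SLEEP_US;

    /* Long idle streak: sleep 1 ms until the next timer expiration. */
    return LONG_SLEEP_US;
}

The main loop shown earlier then chooses between rte_delay_us() (a busy pause) and usleep() (a true sleep) based on the returned duration.
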

If a thread polls multiple Rx queues and different queues return different sleep duration values, the algorithm controls the sleep time in a conservative manner by sleeping for the least possible time in order to avoid a potential performance impact.

L3 Forwarding with Access Control Sample Application

The L3 Forwarding with Access Control application is a simple example of packet processing
using the DPDK. The application performs a security check on received packets. Packets that


are in the Access Control List (ACL), which is loaded during initialization, are dropped. Others
are forwarded to the correct port.

Overview

The application demonstrates the use of the ACL library in the DPDK to implement access
control and packet L3 forwarding. The application loads two types of rules at initialization:
• Route information rules, which are used for L3 forwarding
• Access Control List (ACL) rules that blacklist (or block) packets with a specific character-
istic
When packets are received from a port, the application extracts the necessary information
from the TCP/IP header of the received packet and performs a lookup in the rule database to
figure out whether the packets should be dropped (if in the ACL range) or forwarded to the desired ports. The initialization and run-time paths are similar to those of the L3 Forwarding Sample Application. However, there are significant differences between the two applications. For example, the original L3 forwarding application uses either LPM or an exact match algorithm to perform forwarding port lookup, while this application uses the ACL library to perform both ACL and route entry lookup. The following sections provide more detail.
Classification for both IPv4 and IPv6 packets is supported in this application. The application
also assumes that all the packets it processes are TCP/UDP packets and always extracts
source/destination port information from the packets.

Tuple Packet Syntax

The application implements packet classification for the IPv4/IPv6 5-tuple syntax specifically.
The 5-tuple syntax consist of a source IP address, a destination IP address, a source port, a
destination port and a protocol identifier. The fields in the 5-tuple syntax have the following
formats:
• Source IP address and destination IP address : Each is either a 32-bit field (for IPv4),
or a set of 4 32-bit fields (for IPv6) represented by a value and a mask length. For
example, an IPv4 range of 192.168.1.0 to 192.168.1.255 could be represented by a value
= [192, 168, 1, 0] and a mask length = 24.
• Source port and destination port : Each is a 16-bit field, represented by a lower start
and a higher end. For example, a range of ports 0 to 8192 could be represented by lower
= 0 and higher = 8192.
• Protocol identifier : An 8-bit field, represented by a value and a mask, that covers a
range of values. To verify that a value is in the range, use the following expression: “(VAL
& mask) == value”
The trick in representing a range with a mask and value is as follows. A range can be enumerated in binary numbers with some bits that never change and some bits that change dynamically. Set the bits that change dynamically to 0 in both the mask and the value. Set the bits that never change to 1 in the mask, and to their expected value in the value. For example, the range 6 to 7 is enumerated as 0b110 and 0b111. Bits 1-7 never change and bit 0 changes dynamically. Therefore, set bit 0 to 0 in both the mask and the value, set bits 1-7 to 1 in the mask, and set bits 1-7 of the value to 0b11. So, the mask is 0xfe and the value is 0x6.
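
The check itself is a single masked comparison; the short self-contained example below (helper name hypothetical) verifies the 6-7 range encoding:

#include <stdint.h>
#include <stdio.h>

/* Returns 1 if proto falls in the range encoded by (value, mask). */
static int
proto_in_range(uint8_t proto, uint8_t value, uint8_t mask)
{
    return (proto & mask) == value;
}

int main(void)
{
    /* Range 6-7 encoded as value 0x6, mask 0xfe (bit 0 is "don't care"). */
    printf("%d %d %d\n",
        proto_in_range(6, 0x6, 0xfe),   /* 1: protocol 6 matches   */
        proto_in_range(7, 0x6, 0xfe),   /* 1: protocol 7 matches   */
        proto_in_range(17, 0x6, 0xfe)); /* 0: protocol 17 does not */
    return 0;
}
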


Note: The library assumes that each field in the rule is in LSB or Little Endian order when cre-
ating the database. It internally converts them to MSB or Big Endian order. When performing
a lookup, the library assumes the input is in MSB or Big Endian order.

Access Rule Syntax

In this sample application, each rule is a combination of the following:


• 5-tuple field: This field has the format described in the previous section.
• priority field: A weight to measure the priority of the rules. The rule with the higher priority
will ALWAYS be returned if the specific input has multiple matches in the rule database.
Rules with lower priority will NEVER be returned in any cases.
• userdata field: A user-defined field that could be any value. It can be the forwarding port
number if the rule is a route table entry or it can be a pointer to a mapping address if
the rule is used for address mapping in the NAT application. The key point is that it is a
useful reserved field for user convenience.

ACL and Route Rules

The application needs to acquire ACL and route rules before it runs. Route rules are manda-
tory, while ACL rules are optional. To simplify the complexity of the priority field for each rule,
all ACL and route entries are assumed to be in the same file. To read data from the specified
file successfully, the application assumes the following:
• Each rule occupies a single line.
• Only the following four rule line types are valid in this application:
• ACL rule line, which starts with a leading character ‘@’
• Route rule line, which starts with a leading character ‘R’
• Comment line, which starts with a leading character ‘#’
• Empty line, which consists of a space, form-feed ('\f'), newline ('\n'), carriage return ('\r'), horizontal tab ('\t'), or vertical tab ('\v').
Other line types are considered invalid.
• Rules are organized in descending order of priority, which means rules at the head of the
file always have a higher priority than those further down in the file.
• A typical IPv4 ACL rule line should have a format as shown below:

Fig. 3.9: A typical IPv4 ACL rule


IPv4 addresses are specified in CIDR format as specified in RFC 4632. They consist of the dot
notation for the address and a prefix length separated by ‘/’. For example, 192.168.0.34/32,
where the address is 192.168.0.34 and the prefix length is 32.
Ports are specified as a range of 16-bit numbers in the format MIN:MAX, where MIN and MAX
are the inclusive minimum and maximum values of the range. The range 0:65535 represents all
possible ports in a range. When MIN and MAX are the same value, a single port is represented,
for example, 20:20.
The protocol identifier is an 8-bit value and a mask separated by ‘/’. For example: 6/0xfe
matches protocol values 6 and 7.
• Route rules start with a leading character ‘R’ and have the same format as ACL rules
except an extra field at the tail that indicates the forwarding port number.

Rules File Example

Fig. 3.10: Rules example

Each rule is explained as follows:


• Rule 1 (the first line) tells the application to drop those packets with source IP address =
[1.2.3.*], destination IP address = [192.168.0.36], protocol = [6]/[7]
• Rule 2 (the second line) is similar to Rule 1, except the source IP address is ignored.
It tells the application to forward packets with destination IP address = [192.168.0.36],
protocol = [6]/[7], destined to port 1.
• Rule 3 (the third line) tells the application to forward all packets to port 0. This is some-
thing like a default route entry.
As described earlier, the application assumes that rules are listed in descending order of priority; therefore Rule 1 has the highest priority, then Rule 2, and finally Rule 3 has the lowest priority.
Consider the arrival of the following three packets:
• Packet 1 has source IP address = [1.2.3.4], destination IP address = [192.168.0.36], and
protocol = [6]
• Packet 2 has source IP address = [1.2.4.4], destination IP address = [192.168.0.36], and
protocol = [6]
• Packet 3 has source IP address = [1.2.3.4], destination IP address = [192.168.0.36], and
protocol = [8]
Observe that:
• Packet 1 matches all of the rules
• Packet 2 matches Rule 2 and Rule 3


• Packet 3 only matches Rule 3


For priority reasons, Packet 1 matches Rule 1 and is dropped. Packet 2 matches Rule 2 and
is forwarded to port 1. Packet 3 matches Rule 3 and is forwarded to port 0.
For more details on the rule file format, please refer to rule_ipv4.db and rule_ipv6.db files
(inside <RTE_SDK>/examples/l3fwd-acl/).

Application Phases

Once the application starts, it transitions through three phases:


• Initialization Phase - Perform the following tasks:
• Parse command parameters. Check the validity of rule file(s) name(s), number of logical
cores, receive and transmit queues. Bind ports, queues and logical cores. Check ACL
search options, and so on.
• Call Environmental Abstraction Layer (EAL) and Poll Mode Driver (PMD) functions to
initialize the environment and detect possible NICs. The EAL creates several threads
and sets affinity to a specific hardware thread CPU based on the configuration specified
by the command line arguments.
• Read the rule files and format the rules into the representation that the ACL library can
recognize. Call the ACL library function to add the rules into the database and compile
them as a trie of pattern sets. Note that the application maintains separate AC contexts for IPv4 and IPv6 rules.
• Runtime Phase - Process the incoming packets from a port. Packets are processed in
three steps:
– Retrieval: Gets a packet from the receive queue. Each logical core may process
several queues for different ports. This depends on the configuration specified by
command line arguments.
– Lookup: Checks that the packet type is supported (IPv4/IPv6) and performs a 5-
tuple lookup over the corresponding AC context. If an ACL rule is matched, the packet is dropped and processing returns to step 1. If a route rule is matched, it indicates the packet is not in the ACL list and should be forwarded. If there is no match for the packet, the packet is dropped.
– Forwarding: Forwards the packet to the corresponding port.
• Final Phase - Perform the following tasks:
Calls the EAL, PMD driver and ACL library to free resources, then quits.

Compiling the Application

To compile the application:


1. Go to the sample application directory:
export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/l3fwd-acl

2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc


See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make

Running the Application

The application has a number of command line options:


./build/l3fwd-acl [EAL options] -- -p PORTMASK [-P] --config(port,queue,lcore)[,(port,queue,lcore)] --rule_ipv4 FILENAME --rule_ipv6 FILENAME [--scalar] [--enable-jumbo [--max-pkt-len PKTLEN]] [--no-numa]

where,
• -p PORTMASK: Hexadecimal bitmask of ports to configure
• -P: Sets all ports to promiscuous mode so that packets are accepted regardless of the packet's Ethernet MAC destination address. Without this option, only packets with the Ethernet MAC destination address set to the Ethernet address of the port are accepted.
• --config (port,queue,lcore)[,(port,queue,lcore)]: Determines which queues from which ports are mapped to which cores
• --rule_ipv4 FILENAME: Specifies the IPv4 ACL and route rules file
• --rule_ipv6 FILENAME: Specifies the IPv6 ACL and route rules file
• --scalar: Use a scalar function to perform rule lookup
• --enable-jumbo: Optional, enables jumbo frames
• --max-pkt-len: Optional, maximum packet length in decimal (64-9600)
• --no-numa: Optional, disables NUMA awareness
For example, consider a dual processor socket platform with 8 physical cores, where cores 0-7
and 16-23 appear on socket 0, while cores 8-15 and 24-31 appear on socket 1.
To enable L3 forwarding between two ports, assuming that both ports are in the same socket,
using two cores, cores 1 and 2, (which are in the same socket too), use the following command:
./build/l3fwd-acl -l 1,2 -n 4 -- -p 0x3 --config="(0,0,1),(1,0,2)" --rule_ipv4="./rule_ipv4.db" --rule_ipv6="./rule_ipv6.db" --scalar

In this command:
• The -l option enables cores 1 and 2
• The -p option enables ports 0 and 1
• The --config option enables one queue on each port and maps each (port,queue) pair to a specific core. The following table shows the mapping in this example:
Port Queue lcore Description
0 0 1 Map queue 0 from port 0 to lcore 1.
1 0 2 Map queue 0 from port 1 to lcore 2.
• The --rule_ipv4 option specifies the reading of the IPv4 rules set from the ./rule_ipv4.db file.
• The --rule_ipv6 option specifies the reading of the IPv6 rules set from the ./rule_ipv6.db file.
• The --scalar option specifies that rule lookup is performed with a scalar function.


Explanation

The following sections provide some explanation of the sample application code. The aspects
of port, device and CPU configuration are similar to those of the L3 Forwarding Sample Appli-
cation. The following sections describe aspects that are specific to L3 forwarding with access
control.

Parse Rules from File

As described earlier, both ACL and route rules are assumed to be saved in the same file. The
application parses the rules from the file and adds them to the database by calling the ACL li-
brary function. It ignores empty and comment lines, and parses and validates the rules it reads.
If errors are detected, the application exits with messages to identify the errors encountered.
The application needs to consider the userdata and priority fields. The ACL rules save the index
to the specific rules in the userdata field, while route rules save the forwarding port number.
In order to differentiate the two types of rules, ACL rules add a signature in the userdata field.
As for the priority field, the application assumes rules are organized in descending order of
priority. Therefore, the code only decreases the priority number with each rule it parses.

Setting Up the ACL Context

For each supported AC rule format (IPv4 5-tuple, IPv6 6-tuple), the application creates a separate context handle from the ACL library for each CPU socket on the board and adds the parsed rules into that context.
Note that for each supported rule type, the application needs to calculate the expected offset of the fields from the start of the packet. That is why only packets with a fixed IPv4/IPv6 header are supported. This allows ACL classification to be performed straight over the incoming packet buffer - no extra protocol field retrieval needs to be performed.
Subsequently, the application checks whether NUMA is enabled. If it is, the application records the socket IDs of the CPU cores involved in the task.
Finally, the application creates context handles from the ACL library, adds the rules parsed from the file into the database and builds an ACL trie. It is important to note that the application creates an independent copy of each database for each CPU socket involved in the task to reduce the time for remote memory access.
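
A minimal sketch of this setup using the ACL library API is shown below. The field definitions (ipv4_defs[]), the rule array and the sizing are illustrative assumptions, and error handling is trimmed:

#include <string.h>

#include <rte_acl.h>
#include <rte_common.h>

/* Assumed to be defined by the application: the 5-tuple field layout. */
extern struct rte_acl_field_def ipv4_defs[5];

static struct rte_acl_ctx *
setup_acl_context(int socket_id, const struct rte_acl_rule *rules,
        uint32_t num_rules)
{
    struct rte_acl_param param = {
        .name = "l3fwd-acl-ipv4",
        .socket_id = socket_id, /* one independent copy per CPU socket */
        .rule_size = RTE_ACL_RULE_SZ(RTE_DIM(ipv4_defs)),
        .max_rule_num = 100000,
    };
    struct rte_acl_config cfg;
    struct rte_acl_ctx *ctx;

    ctx = rte_acl_create(&param);
    if (ctx == NULL)
        return NULL;

    if (rte_acl_add_rules(ctx, rules, num_rules) != 0)
        return NULL;

    /* Compile the added rules into the runtime trie. */
    memset(&cfg, 0, sizeof(cfg));
    cfg.num_categories = 1;
    cfg.num_fields = RTE_DIM(ipv4_defs);
    memcpy(cfg.defs, ipv4_defs, sizeof(ipv4_defs));
    if (rte_acl_build(ctx, &cfg) != 0)
        return NULL;

    return ctx;
}
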

L3 Forwarding in a Virtualization Environment Sample Application

The L3 Forwarding in a Virtualization Environment sample application is a simple example of


packet processing using the DPDK. The application performs L3 forwarding that takes advan-
tage of Single Root I/O Virtualization (SR-IOV) features in a virtualized environment.

Overview

The application demonstrates the use of the hash and LPM libraries in the DPDK to implement
packet forwarding. The initialization and run-time paths are very similar to those of the L3


Forwarding Sample Application. The forwarding decision is taken based on information read
from the input packet.
The lookup method is either hash-based or LPM-based and is selected at compile time. When
the selected lookup method is hash-based, a hash object is used to emulate the flow classifica-
tion stage. The hash object is used in correlation with the flow table to map each input packet
to its flow at runtime.
The hash lookup key is represented by the DiffServ 5-tuple composed of the following fields
read from the input packet: Source IP Address, Destination IP Address, Protocol, Source Port
and Destination Port. The ID of the output interface for the input packet is read from the
identified flow table entry. The set of flows used by the application is statically configured and
loaded into the hash at initialization time. When the selected lookup method is LPM based, an
LPM object is used to emulate the forwarding stage for IPv4 packets. The LPM object is used
as the routing table to identify the next hop for each input packet at runtime.
The LPM lookup key is represented by the Destination IP Address field read from the input
packet. The ID of the output interface for the input packet is the next hop returned by the LPM
lookup. The set of LPM rules used by the application is statically configured and loaded into
the LPM object at the initialization time.

Note: Please refer to Virtual Function Setup Instructions for virtualized test case setup.

Compiling the Application

To compile the application:


1. Go to the sample application directory:
export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/l3fwd-vf

2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc

See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make

Note: The compiled application is written to the build subdirectory. To have the application
written to a different location, the O=/path/to/build/directory option may be specified in the make
command.

Running the Application

The application has a number of command line options:


./build/l3fwd-vf [EAL options] -- -p PORTMASK --config(port,queue,lcore)[,(port,queue,lcore)]

where,


• -p PORTMASK: Hexadecimal bitmask of ports to configure
• --config (port,queue,lcore)[,(port,queue,lcore)]: Determines which queues from which ports are mapped to which cores
• --no-numa: Optional, disables NUMA awareness
For example, consider a dual processor socket platform with 8 physical cores, where cores 0-7
and 16-23 appear on socket 0, while cores 8-15 and 24-31 appear on socket 1.
To enable L3 forwarding between two ports, assuming that both ports are in the same socket,
using two cores, cores 1 and 2, (which are in the same socket too), use the following command:
./build/l3fwd-vf -l 1,2 -n 4 -- -p 0x3 --config="(0,0,1),(1,0,2)"

In this command:
• The -l option enables cores 1 and 2
• The -p option enables ports 0 and 1
• The --config option enables one queue on each port and maps each (port,queue) pair to a specific core. The following table shows the mapping in this example:
Port Queue lcore Description
0 0 1 Map queue 0 from port 0 to lcore 1
1 0 2 Map queue 0 from port 1 to lcore 2
Refer to the DPDK Getting Started Guide for general information on running applications and
the Environment Abstraction Layer (EAL) options.

Explanation

The operation of this application is similar to that of the basic L3 Forwarding Sample Applica-
tion. See Explanation for more information.

Link Status Interrupt Sample Application

The Link Status Interrupt sample application is a simple example of packet processing using
the Data Plane Development Kit (DPDK) that demonstrates how network link status changes
for a network port can be captured and used by a DPDK application.

Overview

The Link Status Interrupt sample application registers a user space callback for the link sta-
tus interrupt of each port and performs L2 forwarding for each packet that is received on an
RX_PORT. The following operations are performed:
• RX_PORT and TX_PORT are paired with available ports one-by-one according to the
core mask
• The source MAC address is replaced by the TX_PORT MAC address
• The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID


This application can be used to demonstrate the usage of link status interrupt and its user
space callbacks and the behavior of L2 forwarding each time the link status changes.

Compiling the Application

1. Go to the example directory:


export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/link_status_interrupt

2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc

See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make

Note: The compiled application is written to the build subdirectory. To have the application
written to a different location, the O=/path/to/build/directory option may be specified on the
make command line.

Running the Application

The application requires a number of command line options:


./build/link_status_interrupt [EAL options] -- -p PORTMASK [-q NQ][-T PERIOD]

where,
• -p PORTMASK: A hexadecimal bitmask of the ports to configure
• -q NQ: A number of queues (=ports) per lcore (default is 1)
• -T PERIOD: statistics will be refreshed each PERIOD seconds (0 to disable, 10 default)
To run the application in a linuxapp environment with 4 lcores, 4 memory channels, 16 ports
and 8 RX queues per lcore, issue the command:
$ ./build/link_status_interrupt -c f -n 4 -- -q 8 -p ffff

Refer to the DPDK Getting Started Guide for general information on running applications and
the Environment Abstraction Layer (EAL) options.

Explanation

The following sections provide some explanation of the code.

Command Line Arguments

The Link Status Interrupt sample application takes specific parameters, in addition to Environ-
ment Abstraction Layer (EAL) arguments (see Section Running the Application).


Command line parsing is done in the same way as it is done in the L2 Forwarding Sample
Application. See Command Line Arguments for more information.

Mbuf Pool Initialization

Mbuf pool initialization is done in the same way as it is done in the L2 Forwarding Sample
Application. See Mbuf Pool Initialization for more information.

Driver Initialization

The main part of the code in the main() function relates to the initialization of the driver. To fully understand this code, it is recommended to study the chapters related to the Poll Mode Driver in the DPDK Programmer's Guide and the DPDK API Reference.
if (rte_eal_pci_probe() < 0)
rte_exit(EXIT_FAILURE, "Cannot probe PCI\n");

nb_ports = rte_eth_dev_count();
if (nb_ports == 0)
rte_exit(EXIT_FAILURE, "No Ethernet ports - bye\n");

/*
* Each logical core is assigned a dedicated TX queue on each port.
*/

for (portid = 0; portid < nb_ports; portid++) {


/* skip ports that are not enabled */

if ((lsi_enabled_port_mask & (1 << portid)) == 0)


continue;

/* save the destination port id */

if (nb_ports_in_mask % 2) {
lsi_dst_ports[portid] = portid_last;
lsi_dst_ports[portid_last] = portid;
}
else
portid_last = portid;

nb_ports_in_mask++;

rte_eth_dev_info_get((uint8_t) portid, &dev_info);


}

Observe that:
• rte_eal_pci_probe() parses the devices on the PCI bus and initializes recognized devices.
The next step is to configure the RX and TX queues. For each port, there is only one RX queue
(only one lcore is able to poll a given port). The number of TX queues depends on the number
of available lcores. The rte_eth_dev_configure() function is used to configure the number of
queues for a port:
ret = rte_eth_dev_configure((uint8_t) portid, 1, 1, &port_conf);
if (ret < 0)
rte_exit(EXIT_FAILURE, "Cannot configure device: err=%d, port=%u\n", ret, portid);

The global configuration is stored in a static structure:


static const struct rte_eth_conf port_conf = {
    .rxmode = {
        .split_hdr_size = 0,
        .header_split = 0,   /**< Header Split disabled */
        .hw_ip_checksum = 0, /**< IP checksum offload disabled */
        .hw_vlan_filter = 0, /**< VLAN filtering disabled */
        .hw_strip_crc = 0,   /**< CRC stripped by hardware */
    },
    .txmode = {},
    .intr_conf = {
        .lsc = 1, /**< link status interrupt feature enabled */
    },
};

Configuring lsc to 0 (the default) disables the generation of any link status change inter-
rupts in kernel space and no user space interrupt event is received. The public interface
rte_eth_link_get() accesses the NIC registers directly to update the link status. Configuring
lsc to non-zero enables the generation of link status change interrupts in kernel space when a
link status change is present and calls the user space callbacks registered by the application.
The public interface rte_eth_link_get() just reads the link status in a global structure that would
be updated in the interrupt host thread only.

Interrupt Callback Registration

The application can register one or more callbacks for a specific port and interrupt event. An example callback function is shown below.
static void
lsi_event_callback(uint8_t port_id, enum rte_eth_event_type type, void *param)
{
struct rte_eth_link link;

RTE_SET_USED(param);

printf("\n\nIn registered callback...\n");

printf("Event type: %s\n", type == RTE_ETH_EVENT_INTR_LSC ? "LSC interrupt" : "unknown even

rte_eth_link_get_nowait(port_id, &link);

if (link.link_status) {
printf("Port %d Link Up - speed %u Mbps - %s\n\n", port_id, (unsigned)link.link_speed,
(link.link_duplex == ETH_LINK_FULL_DUPLEX) ? ("full-duplex") : ("half-duplex"));
} else
printf("Port %d Link Down\n\n", port_id);
}

This function is called when a link status interrupt is present for the right port. The port_id
indicates which port the interrupt applies to. The type parameter identifies the interrupt event
type, which currently can be RTE_ETH_EVENT_INTR_LSC only, but other types can be added
in the future. The param parameter is the address of the parameter for the callback. This
function should be implemented with care since it will be called in the interrupt host thread,
which is different from the main thread of its caller.
The application registers the lsi_event_callback and a NULL parameter to the link status inter-
rupt event on each port:
rte_eth_dev_callback_register((uint8_t)portid, RTE_ETH_EVENT_INTR_LSC, lsi_event_callback, NULL);


This registration can be done only after calling the rte_eth_dev_configure() function and before
calling any other function. If lsc is initialized with 0, the callback is never called since no interrupt
event would ever be present.

RX Queue Initialization

The application uses one lcore to poll one or several ports, depending on the -q option, which
specifies the number of queues per lcore.
For example, if the user specifies -q 4, the application is able to poll four ports with one lcore.
If there are 16 ports on the target (and if the portmask argument is -p ffff), the application will
need four lcores to poll all the ports.
ret = rte_eth_rx_queue_setup((uint8_t) portid, 0, nb_rxd, SOCKET0, &rx_conf, lsi_pktmbuf_pool);
if (ret < 0)
rte_exit(EXIT_FAILURE, "rte_eth_rx_queue_setup: err=%d, port=%u\n", ret, portid);

The list of queues that must be polled for a given lcore is stored in a private structure called
struct lcore_queue_conf.
struct lcore_queue_conf {
    unsigned n_rx_port;
    unsigned rx_port_list[MAX_RX_QUEUE_PER_LCORE];
    unsigned tx_queue_id;
    struct mbuf_table tx_mbufs[LSI_MAX_PORTS];
} __rte_cache_aligned;

struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];

The n_rx_port and rx_port_list[] fields are used in the main packet processing loop (see Re-
ceive, Process and Transmit Packets).
The global configuration for the RX queues is stored in a static structure:
static const struct rte_eth_rxconf rx_conf = {
.rx_thresh = {
.pthresh = RX_PTHRESH,
.hthresh = RX_HTHRESH,
.wthresh = RX_WTHRESH,
},
};

TX Queue Initialization

Each lcore should be able to transmit on any port. For every port, a single TX queue is
initialized.
/* init one TX queue logical core on each port */

fflush(stdout);

ret = rte_eth_tx_queue_setup(portid, 0, nb_txd, rte_eth_dev_socket_id(portid), &tx_conf);


if (ret < 0)
rte_exit(EXIT_FAILURE, "rte_eth_tx_queue_setup: err=%d,port=%u\n", ret, (unsigned) portid);

The global configuration for TX queues is stored in a static structure:


static const struct rte_eth_txconf tx_conf = {
.tx_thresh = {
.pthresh = TX_PTHRESH,
.hthresh = TX_HTHRESH,


.wthresh = TX_WTHRESH,
},
.tx_free_thresh = RTE_TEST_TX_DESC_DEFAULT + 1, /* disable feature */
};

Receive, Process and Transmit Packets

In the lsi_main_loop() function, the main task is to read ingress packets from the RX queues.
This is done using the following code:
/*
* Read packet from RX queues
*/

for (i = 0; i < qconf->n_rx_port; i++) {


portid = qconf->rx_port_list[i];
nb_rx = rte_eth_rx_burst((uint8_t) portid, 0, pkts_burst, MAX_PKT_BURST);
port_statistics[portid].rx += nb_rx;

for (j = 0; j < nb_rx; j++) {


m = pkts_burst[j];
rte_prefetch0(rte_pktmbuf_mtod(m, void *));
lsi_simple_forward(m, portid);
}
}

Packets are read in a burst of size MAX_PKT_BURST. The rte_eth_rx_burst() function writes
the mbuf pointers in a local table and returns the number of available mbufs in the table.
Then, each mbuf in the table is processed by the lsi_simple_forward() function. The processing is very simple: it derives the TX port from the RX port and then replaces the source and destination MAC addresses.

Note: In the following code, the two lines for calculating the output port require some expla-
nation. If portId is even, the first line does nothing (as portid & 1 will be 0), and the second line
adds 1. If portId is odd, the first line subtracts one and the second line does nothing. Therefore,
0 goes to 1, and 1 to 0, 2 goes to 3 and 3 to 2, and so on.

static void
lsi_simple_forward(struct rte_mbuf *m, unsigned portid)
{
struct ether_hdr *eth;
void *tmp;
unsigned dst_port = lsi_dst_ports[portid];

eth = rte_pktmbuf_mtod(m, struct ether_hdr *);

/* 02:00:00:00:00:xx */

tmp = &eth->d_addr.addr_bytes[0];

*((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dst_port << 40);

/* src addr */
ether_addr_copy(&lsi_ports_eth_addr[dst_port], &eth->s_addr);

lsi_send_packet(m, dst_port);
}


Then, the packet is sent using the lsi_send_packet(m, dst_port) function. For this test applica-
tion, the processing is exactly the same for all packets arriving on the same RX port. Therefore,
it would have been possible to call the lsi_send_burst() function directly from the main loop to
send all the received packets on the same TX port using the burst-oriented send function,
which is more efficient.
However, in real-life applications (such as L3 routing), packet N is not necessarily forwarded on the same port as packet N-1. The application is implemented to illustrate this, so the same approach can be reused in a more complex application.
The lsi_send_packet() function stores the packet in a per-lcore and per-txport table. If the table
is full, the whole packets table is transmitted using the lsi_send_burst() function:
/* Send the packet on an output interface */

static int
lsi_send_packet(struct rte_mbuf *m, uint8_t port)
{
unsigned lcore_id, len;
struct lcore_queue_conf *qconf;

lcore_id = rte_lcore_id();
qconf = &lcore_queue_conf[lcore_id];
len = qconf->tx_mbufs[port].len;
qconf->tx_mbufs[port].m_table[len] = m;
len++;

/* enough pkts to be sent */

if (unlikely(len == MAX_PKT_BURST)) {
lsi_send_burst(qconf, MAX_PKT_BURST, port);
len = 0;
}
qconf->tx_mbufs[port].len = len;

return 0;
}

To ensure that no packets remain in the tables, each lcore does a draining of the TX queue
in its main loop. This technique introduces some latency when there are not many packets to
send. However, it improves performance:
cur_tsc = rte_rdtsc();

/*
* TX burst queue drain
*/

diff_tsc = cur_tsc - prev_tsc;

if (unlikely(diff_tsc > drain_tsc)) {


/* this could be optimized (use queueid instead of portid), but it is not called so often */

for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {


if (qconf->tx_mbufs[portid].len == 0)
continue;

lsi_send_burst(&lcore_queue_conf[lcore_id],
qconf->tx_mbufs[portid].len, (uint8_t) portid);
qconf->tx_mbufs[portid].len = 0;
}


/* if timer is enabled */

if (timer_period > 0) {
/* advance the timer */

timer_tsc += diff_tsc;

/* if timer has reached its timeout */

if (unlikely(timer_tsc >= (uint64_t) timer_period)) {


/* do this only on master core */

if (lcore_id == rte_get_master_lcore()) {
print_stats();

/* reset the timer */


timer_tsc = 0;
}
}
}
prev_tsc = cur_tsc;
}

Load Balancer Sample Application

The Load Balancer sample application demonstrates the concept of isolating the packet I/O
task from the application-specific workload. Depending on the performance target, a number
of logical cores (lcores) are dedicated to handle the interaction with the NIC ports (I/O lcores),
while the rest of the lcores are dedicated to performing the application processing (worker
lcores). The worker lcores are totally oblivious to the intricacies of the packet I/O activity and
use the NIC-agnostic interface provided by software rings to exchange packets with the I/O
cores.

Overview

The architecture of the Load Balancer application is presented in the following figure. For the sake of simplicity, the diagram illustrates a specific case of two I/O RX and two I/O TX lcores offloading the packet I/O overhead incurred by four NIC ports from four worker cores, with each I/O lcore handling RX/TX for two NIC ports.

I/O RX Logical Cores

Each I/O RX lcore performs packet RX from its assigned NIC RX rings and then distributes
the received packets to the worker threads. The application allows each I/O RX lcore to com-
municate with any of the worker threads, therefore each (I/O RX lcore, worker lcore) pair is
connected through a dedicated single producer - single consumer software ring.
The worker lcore to handle the current packet is determined by reading a predefined 1-byte
field from the input packet:
worker_id = packet[load_balancing_field] % n_workers
Since all the packets that are part of the same traffic flow are expected to have the same value
for the load balancing field, this scheme also ensures that all the packets that are part of the


Fig. 3.11: Load Balancer Application Architecture

same traffic flow are directed to the same worker lcore (flow affinity) in the same order they
enter the system (packet ordering).
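
A minimal sketch of this distribution step is shown below; the function and variable names are illustrative assumptions, not the sample's exact code:

#include <stdint.h>

#include <rte_mbuf.h>
#include <rte_ring.h>

/* pos_lb is the configured byte offset (--pos-lb); workers[] holds the
 * single producer - single consumer ring toward each worker lcore. */
static void
dispatch_to_worker(struct rte_mbuf *pkt, uint32_t pos_lb,
        struct rte_ring **workers, uint32_t n_workers)
{
    uint8_t *data = rte_pktmbuf_mtod(pkt, uint8_t *);
    uint32_t worker_id = data[pos_lb] % n_workers;

    /* Single-producer enqueue: this (I/O RX lcore, worker) pair owns
     * the ring exclusively, so no locking is needed. */
    if (rte_ring_sp_enqueue(workers[worker_id], pkt) != 0)
        rte_pktmbuf_free(pkt); /* ring full: drop the packet */
}
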

I/O TX Logical Cores

Each I/O lcore owns the packet TX for a predefined set of NIC ports. To enable each worker
thread to send packets to any NIC TX port, the application creates a software ring for each
(worker lcore, NIC TX port) pair, with each I/O TX core handling those software rings that are
associated with NIC ports that it handles.

Worker Logical Cores

Each worker lcore reads packets from its set of input software rings and routes them to the NIC
ports for transmission by dispatching them to output software rings. The routing logic is LPM
based, with all the worker threads sharing the same LPM rules.

Compiling the Application

The sequence of steps used to build the application is:


1. Export the required environment variables:
export RTE_SDK=<Path to the DPDK installation folder>
export RTE_TARGET=x86_64-native-linuxapp-gcc

2. Build the application executable file:


cd ${RTE_SDK}/examples/load_balancer
make

For more details on how to build the DPDK libraries and sample applications, please refer
to the DPDK Getting Started Guide.


Running the Application

To successfully run the application, the command line used to start the application has to be in
sync with the traffic flows configured on the traffic generator side.
For examples of application command lines and traffic generator flows, please refer to the
DPDK Test Report. For more details on how to set up and run the sample applications provided
with DPDK package, please refer to the DPDK Getting Started Guide.

Explanation

Application Configuration

The application run-time configuration is done through the application command line param-
eters. Any parameter that is not specified as mandatory is optional, with the default value
hard-coded in the main.h header file from the application folder.
The list of application command line parameters is listed below:
1. --rx "(PORT, QUEUE, LCORE), ...": The list of NIC RX ports and queues handled by the I/O RX lcores. This parameter also implicitly defines the list of I/O RX lcores. This is a mandatory parameter.
2. --tx "(PORT, LCORE), ...": The list of NIC TX ports handled by the I/O TX lcores. This parameter also implicitly defines the list of I/O TX lcores. This is a mandatory parameter.
3. --w "LCORE, ...": The list of the worker lcores. This is a mandatory parameter.
4. --lpm "IP / PREFIX => PORT; ...": The list of LPM rules used by the worker lcores for packet forwarding. This is a mandatory parameter.
5. --rsz "A, B, C, D": Ring sizes:
   (a) A = The size (in number of buffer descriptors) of each of the NIC RX rings read by the I/O RX lcores.
   (b) B = The size (in number of elements) of each of the software rings used by the I/O RX lcores to send packets to worker lcores.
   (c) C = The size (in number of elements) of each of the software rings used by the worker lcores to send packets to I/O TX lcores.
   (d) D = The size (in number of buffer descriptors) of each of the NIC TX rings written by I/O TX lcores.
6. --bsz "(A, B), (C, D), (E, F)": Burst sizes:
   (a) A = The I/O RX lcore read burst size from NIC RX.
   (b) B = The I/O RX lcore write burst size to the output software rings.
   (c) C = The worker lcore read burst size from the input software rings.
   (d) D = The worker lcore write burst size to the output software rings.
   (e) E = The I/O TX lcore read burst size from the input software rings.
   (f) F = The I/O TX lcore write burst size to the NIC TX.


7. --pos-lb POS: The position of the 1-byte field within the input packet used by the I/O RX lcores to identify the worker lcore for the current packet. This field needs to be within the first 64 bytes of the input packet.
The infrastructure of software rings connecting I/O lcores and worker lcores is built by the appli-
cation as a result of the application configuration provided by the user through the application
command line parameters.
A specific lcore performing the I/O RX role for a specific set of NIC ports can also perform the
I/O TX role for the same or a different set of NIC ports. A specific lcore cannot perform both
the I/O role (either RX or TX) and the worker role during the same session.
Example:
./load_balancer -c 0xf8 -n 4 -- --rx "(0,0,3),(1,0,3)" --tx "(0,3),(1,3)" --w "4,5,6,7" --lpm "1.0.0.0/24=>0; 1.0.1.0/24=>1;" --pos-lb 29

There is a single I/O lcore (lcore 3) that handles RX and TX for two NIC ports (ports 0 and 1)
that handles packets to/from four worker lcores (lcores 4, 5, 6 and 7) that are assigned worker
IDs 0 to 3 (worker ID for lcore 4 is 0, for lcore 5 is 1, for lcore 6 is 2 and for lcore 7 is 3).
Assuming that all the input packets are IPv4 packets with no VLAN label and the source IP
address of the current packet is A.B.C.D, the worker lcore for the current packet is determined
by byte D (which is byte 29). There are two LPM rules that are used by each worker lcore to
route packets to the output NIC ports.
The following table illustrates the packet flow through the system for several possible traffic
flows:
Flow #   Source IP Address   Destination IP Address   Worker ID (Worker lcore)   Output NIC Port
1        0.0.0.0             1.0.0.1                  0 (4)                      0
2        0.0.0.1             1.0.1.2                  1 (5)                      1
3        0.0.0.14            1.0.0.3                  2 (6)                      0
4        0.0.0.15            1.0.1.4                  3 (7)                      1

NUMA Support

The application has built-in performance enhancements for the NUMA case:
1. One buffer pool per each CPU socket.
2. One LPM table per each CPU socket.
3. Memory for the NIC RX or TX rings is allocated on the same socket as the lcore handling the respective ring.
In the case where multiple CPU sockets are used in the system, it is recommended to enable
at least one lcore to fulfill the I/O role for the NIC ports that are directly attached to that CPU
socket through the PCI Express* bus. It is always recommended to handle the packet I/O with
lcores from the same CPU socket as the NICs.
Depending on whether the I/O RX lcore (same CPU socket as NIC RX), the worker lcore and
the I/O TX lcore (same CPU socket as NIC TX) handling a specific input packet, are on the
same or different CPU sockets, the following run-time scenarios are possible:
1. AAA: The packet is received, processed and transmitted without going across CPU sock-
ets.


2. AAB: The packet is received and processed on socket A, but as it has to be transmitted
on a NIC port connected to socket B, the packet is sent to socket B through software
rings.
3. ABB: The packet is received on socket A, but as it has to be processed by a worker
lcore on socket B, the packet is sent to socket B through software rings. The packet is
transmitted by a NIC port connected to the same CPU socket as the worker lcore that
processed it.
4. ABC: The packet is received on socket A, it is processed by an lcore on socket B, then
it has to be transmitted out by a NIC connected to socket C. The performance price for
crossing the CPU socket boundary is paid twice for this packet.

Server-Node EFD Sample Application

This sample application demonstrates the use of the EFD library as a flow-level load balancer. For more information about the EFD library, please refer to the DPDK Programmer's Guide.
This sample application is a variant of the client-server sample application, where a specific target node is specified for each and every flow (not in a round-robin fashion as in the original load balancing sample application).

Overview

The architecture of the EFD flow-based load balancer sample application is presented in the
following figure.

Fig. 3.12: Using EFD as a Flow-Level Load Balancer

As shown in Fig. 3.12, the sample application consists of a front-end node (server) using the
EFD library to create a load-balancing table for flows, for each flow a target backend worker
node is specified. The EFD table does not store the flow key (unlike a regular hash table), and
hence, it can individually load-balance millions of flows (number of targets * maximum number
of flows fit in a flow table per target) while still fitting in CPU cache.
It should be noted that although they are referred to as nodes, the frontend server and worker
nodes are processes running on the same platform.

Front-end Server

Upon initializing, the frontend server node (process) creates a flow distributor table (based on
the EFD library) which is populated with flow information and its intended target node.
The sample application assigns a specific target node_id (process) for each of the IP destina-
tion addresses as follows:
node_id = i % num_nodes;      /* Target node id is generated */
ip_dst = rte_cpu_to_be_32(i); /* Specific ip destination address is
                               * assigned to this target node */

then the pair of <key,target> is inserted into the flow distribution table.


The main loop of the server process receives a burst of packets, then for each packet, a flow
key (IP destination address) is extracted. The flow distributor table is looked up and the target
node id is returned. Packets are then enqueued to the specified target node id.
It should be noted that the flow distributor table is not a membership test table. That is, if a key
has already been inserted, the returned target node id will be correct, but for keys that were
never inserted the table still returns a value (which may appear valid).

Backend Worker Nodes

Upon initializing, the worker node (process) creates a flow table (a regular hash table, with a
default size of 1M flows, that stores the flow key) which is populated with only the flow
information that is serviced at this node. Storing the flow key is essential to identify new keys
that have not been inserted before.
The worker node’s main loop is simply receiving packets then doing a hash table lookup. If a
match occurs then statistics are updated for flows serviced by this node. If no match is found
in the local hash table then this indicates that this is a new flow, which is dropped.

Compiling the Application

The sequence of steps used to build the application is:


1. Export the required environment variables:
export RTE_SDK=/path/to/rte_sdk
export RTE_TARGET=x86_64-native-linuxapp-gcc

2. Build the application executable file:


cd ${RTE_SDK}/examples/server_node_efd/
make

For more details on how to build the DPDK libraries and sample applications, please refer
to the DPDK Getting Started Guide.

Running the Application

The application has two binaries to be run: the front-end server and the back-end node.
The frontend server (server) has the following command line options:
./server [EAL options] -- -p PORTMASK -n NUM_NODES -f NUM_FLOWS

Where,
• -p PORTMASK: Hexadecimal bitmask of ports to configure
• -n NUM_NODES: Number of back-end nodes that will be used
• -f NUM_FLOWS: Number of flows to be added in the EFD table (1 million, by default)
The back-end node (node) has the following command line options:
./node [EAL options] -- -n NODE_ID

Where,
• -n NODE_ID: Node ID, which cannot be equal to or higher than NUM_NODES


First, the server app must be launched, with the number of nodes that will be run. Once it has
been started, the node instances can be run, with different NODE_ID. These instances have
to be run as secondary processes, with --proc-type=secondary in the EAL options, which
will attach to the primary process memory, and therefore, they can access the queues created
by the primary process to distribute packets.
To successfully run the application, the command line used to start the application has to be in
sync with the traffic flows configured on the traffic generator side.
For examples of application command lines and traffic generator flows, please refer to the
DPDK Test Report. For more details on how to set up and run the sample applications provided
with DPDK package, please refer to the DPDK Getting Started Guide for Linux and DPDK
Getting Started Guide for FreeBSD.

Explanation

As described in previous sections, there are two processes in this example.


The first process, the front-end server, creates and populates the EFD table, which is used to
distribute packets to nodes, with the number of flows specified in the command line (1 million,
by default).
static void
create_efd_table(void)
{
    uint8_t socket_id = rte_socket_id();

    /* create table */
    efd_table = rte_efd_create("flow table", num_flows * 2, sizeof(uint32_t),
            1 << socket_id, socket_id);

    if (efd_table == NULL)
        rte_exit(EXIT_FAILURE, "Problem creating the flow table\n");
}

static void
populate_efd_table(void)
{
    unsigned int i;
    int32_t ret;
    uint32_t ip_dst;
    uint8_t socket_id = rte_socket_id();
    uint64_t node_id;

    /* Add flows in table */
    for (i = 0; i < num_flows; i++) {
        node_id = i % num_nodes;

        ip_dst = rte_cpu_to_be_32(i);
        ret = rte_efd_update(efd_table, socket_id,
                (void *)&ip_dst, (efd_value_t)node_id);
        if (ret < 0)
            rte_exit(EXIT_FAILURE, "Unable to add entry %u in "
                    "EFD table\n", i);
    }

    printf("EFD table: Adding 0x%x keys\n", num_flows);
}

After initialization, packets are received from the enabled ports, and the IPv4 address from the
packets is used as a key to look up in the EFD table, which tells the node where the packet
has to be distributed.
static void
process_packets(uint32_t port_num __rte_unused, struct rte_mbuf *pkts[],
        uint16_t rx_count, unsigned int socket_id)
{
    uint16_t i;
    uint8_t node;
    efd_value_t data[EFD_BURST_MAX];
    const void *key_ptrs[EFD_BURST_MAX];

    struct ipv4_hdr *ipv4_hdr;
    uint32_t ipv4_dst_ip[EFD_BURST_MAX];

    for (i = 0; i < rx_count; i++) {
        /* Handle IPv4 header.*/
        ipv4_hdr = rte_pktmbuf_mtod_offset(pkts[i], struct ipv4_hdr *,
                sizeof(struct ether_hdr));
        ipv4_dst_ip[i] = ipv4_hdr->dst_addr;
        key_ptrs[i] = (void *)&ipv4_dst_ip[i];
    }

    rte_efd_lookup_bulk(efd_table, socket_id, rx_count,
            (const void **) key_ptrs, data);

    for (i = 0; i < rx_count; i++) {
        node = (uint8_t) ((uintptr_t)data[i]);

        if (node >= num_nodes) {
            /*
             * Node is out of range, which means that
             * flow has not been inserted
             */
            flow_dist_stats.drop++;
            rte_pktmbuf_free(pkts[i]);
        } else {
            flow_dist_stats.distributed++;
            enqueue_rx_packet(node, pkts[i]);
        }
    }

    for (i = 0; i < num_nodes; i++)
        flush_rx_queue(i);
}

The received burst of packets is placed into temporary buffers (one per node) and then
enqueued on the shared ring between the server and the node. After this, a new burst of
packets is received and the process is repeated indefinitely.
static void
flush_rx_queue(uint16_t node)
{
    uint16_t j;
    struct node *cl;

    if (cl_rx_buf[node].count == 0)
        return;

    cl = &nodes[node];
    if (rte_ring_enqueue_bulk(cl->rx_q, (void **)cl_rx_buf[node].buffer,
            cl_rx_buf[node].count) != 0){
        for (j = 0; j < cl_rx_buf[node].count; j++)
            rte_pktmbuf_free(cl_rx_buf[node].buffer[j]);
        cl->stats.rx_drop += cl_rx_buf[node].count;
    } else
        cl->stats.rx += cl_rx_buf[node].count;

    cl_rx_buf[node].count = 0;
}

The second process, the back-end node, receives the packets from the shared ring with the
server and sends them out, if they belong to the node.
At initialization, it attaches to the server process memory, to have access to the shared ring,
parameters and statistics.
rx_ring = rte_ring_lookup(get_rx_queue_name(node_id));
if (rx_ring == NULL)
rte_exit(EXIT_FAILURE, "Cannot get RX ring - "
"is server process running?\n");

mp = rte_mempool_lookup(PKTMBUF_POOL_NAME);
if (mp == NULL)
rte_exit(EXIT_FAILURE, "Cannot get mempool for mbufs\n");

mz = rte_memzone_lookup(MZ_SHARED_INFO);
if (mz == NULL)
rte_exit(EXIT_FAILURE, "Cannot get port info structure\n");
info = mz->addr;
tx_stats = &(info->tx_stats[node_id]);
filter_stats = &(info->filter_stats[node_id]);

Then, the hash table that contains the flows that will be handled by the node is created and
populated.
static struct rte_hash *
create_hash_table(const struct shared_info *info)
{
    uint32_t num_flows_node = info->num_flows / info->num_nodes;
    char name[RTE_HASH_NAMESIZE];
    struct rte_hash *h;

    /* create table */
    struct rte_hash_parameters hash_params = {
        .entries = num_flows_node * 2, /* table load = 50% */
        .key_len = sizeof(uint32_t), /* Store IPv4 dest IP address */
        .socket_id = rte_socket_id(),
        .hash_func_init_val = 0,
    };

    snprintf(name, sizeof(name), "hash_table_%d", node_id);
    hash_params.name = name;
    h = rte_hash_create(&hash_params);

    if (h == NULL)
        rte_exit(EXIT_FAILURE,
                "Problem creating the hash table for node %d\n",
                node_id);
    return h;
}

static void
populate_hash_table(const struct rte_hash *h, const struct shared_info *info)
{
    unsigned int i;
    int32_t ret;
    uint32_t ip_dst;
    uint32_t num_flows_node = 0;
    uint64_t target_node;

    /* Add flows in table */
    for (i = 0; i < info->num_flows; i++) {
        target_node = i % info->num_nodes;
        if (target_node != node_id)
            continue;

        ip_dst = rte_cpu_to_be_32(i);

        ret = rte_hash_add_key(h, (void *) &ip_dst);
        if (ret < 0)
            rte_exit(EXIT_FAILURE, "Unable to add entry %u "
                    "in hash table\n", i);
        else
            num_flows_node++;
    }

    printf("Hash table: Adding 0x%x keys\n", num_flows_node);
}

After initialization, packets are dequeued from the shared ring (from the server) and, as in the
server process, the IPv4 address from the packets is used as a key to look up in the hash table.
If there is a hit, the packet is stored in a buffer, to be eventually transmitted on one of the
enabled ports. If the key is not present, the packet is dropped, since the flow is not handled by
the node.
static inline void
handle_packets(struct rte_hash *h, struct rte_mbuf **bufs, uint16_t num_packets)
{
    struct ipv4_hdr *ipv4_hdr;
    uint32_t ipv4_dst_ip[PKT_READ_SIZE];
    const void *key_ptrs[PKT_READ_SIZE];
    unsigned int i;
    int32_t positions[PKT_READ_SIZE] = {0};

    for (i = 0; i < num_packets; i++) {
        /* Handle IPv4 header.*/
        ipv4_hdr = rte_pktmbuf_mtod_offset(bufs[i], struct ipv4_hdr *,
                sizeof(struct ether_hdr));
        ipv4_dst_ip[i] = ipv4_hdr->dst_addr;
        key_ptrs[i] = &ipv4_dst_ip[i];
    }

    /* Check if packets belongs to any flows handled by this node */
    rte_hash_lookup_bulk(h, key_ptrs, num_packets, positions);

    for (i = 0; i < num_packets; i++) {
        if (likely(positions[i] >= 0)) {
            filter_stats->passed++;
            transmit_packet(bufs[i]);
        } else {
            filter_stats->drop++;
            /* Drop packet, as flow is not handled by this node */
            rte_pktmbuf_free(bufs[i]);
        }
    }
}

Finally, note that both processes update statistics, such as transmitted, received and dropped
packets, which are shown and refreshed by the server app.
static void
do_stats_display(void)
{
    unsigned int i, j;
    const char clr[] = {27, '[', '2', 'J', '\0'};
    const char topLeft[] = {27, '[', '1', ';', '1', 'H', '\0'};
    uint64_t port_tx[RTE_MAX_ETHPORTS], port_tx_drop[RTE_MAX_ETHPORTS];
    uint64_t node_tx[MAX_NODES], node_tx_drop[MAX_NODES];

    /* to get TX stats, we need to do some summing calculations */
    memset(port_tx, 0, sizeof(port_tx));
    memset(port_tx_drop, 0, sizeof(port_tx_drop));
    memset(node_tx, 0, sizeof(node_tx));
    memset(node_tx_drop, 0, sizeof(node_tx_drop));

    for (i = 0; i < num_nodes; i++) {
        const struct tx_stats *tx = &info->tx_stats[i];

        for (j = 0; j < info->num_ports; j++) {
            const uint64_t tx_val = tx->tx[info->id[j]];
            const uint64_t drop_val = tx->tx_drop[info->id[j]];

            port_tx[j] += tx_val;
            port_tx_drop[j] += drop_val;
            node_tx[i] += tx_val;
            node_tx_drop[i] += drop_val;
        }
    }

    /* Clear screen and move to top left */
    printf("%s%s", clr, topLeft);

    printf("PORTS\n");
    printf("-----\n");
    for (i = 0; i < info->num_ports; i++)
        printf("Port %u: '%s'\t", (unsigned int)info->id[i],
                get_printable_mac_addr(info->id[i]));
    printf("\n\n");
    for (i = 0; i < info->num_ports; i++) {
        printf("Port %u - rx: %9"PRIu64"\t"
                "tx: %9"PRIu64"\n",
                (unsigned int)info->id[i], info->rx_stats.rx[i],
                port_tx[i]);
    }

    printf("\nSERVER\n");
    printf("-----\n");
    printf("distributed: %9"PRIu64", drop: %9"PRIu64"\n",
            flow_dist_stats.distributed, flow_dist_stats.drop);

    printf("\nNODES\n");
    printf("-------\n");
    for (i = 0; i < num_nodes; i++) {
        const unsigned long long rx = nodes[i].stats.rx;
        const unsigned long long rx_drop = nodes[i].stats.rx_drop;
        const struct filter_stats *filter = &info->filter_stats[i];

        printf("Node %2u - rx: %9llu, rx_drop: %9llu\n"
                "            tx: %9"PRIu64", tx_drop: %9"PRIu64"\n"
                "            filter_passed: %9"PRIu64", "
                "filter_drop: %9"PRIu64"\n",
                i, rx, rx_drop, node_tx[i], node_tx_drop[i],
                filter->passed, filter->drop);
    }

    printf("\n");
}

Multi-process Sample Application

This chapter describes the example applications for multi-processing that are included in the
DPDK.

Example Applications

Building the Sample Applications

The multi-process example applications are built in the same way as other sample applications,
and as documented in the DPDK Getting Started Guide. To build all the example applications:
1. Set RTE_SDK and go to the example directory:
export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/multi_process

2. Set the target (a default target will be used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc

See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the applications:
make

Note: If just a specific multi-process application needs to be built, the final make command
can be run just in that application’s directory, rather than at the top-level multi-process directory.

Basic Multi-process Example

The examples/simple_mp folder in the DPDK release contains a basic example application to
demonstrate how two DPDK processes can work together using queues and memory pools to
share information.

Running the Application

To run the application, start one copy of the simple_mp binary in one terminal, passing at least
two cores in the coremask, as follows:
./build/simple_mp -c 3 -n 4 --proc-type=primary

For the first DPDK process run, the proc-type flag can be omitted or set to auto, since all
DPDK processes will default to being a primary instance, meaning they have control over
the hugepage shared memory regions. The process should start successfully and display a
command prompt as follows:


$ ./build/simple_mp -c 3 -n 4 --proc-type=primary
EAL: coremask set to 3
EAL: Detected lcore 0 on socket 0
EAL: Detected lcore 1 on socket 0
EAL: Detected lcore 2 on socket 0
EAL: Detected lcore 3 on socket 0
...
EAL: Requesting 2 pages of size 1073741824
EAL: Requesting 768 pages of size 2097152
EAL: Ask a virtual area of 0x40000000 bytes
EAL: Virtual area found at 0x7ff200000000 (size = 0x40000000)
...
EAL: check igb_uio module
EAL: check module finished
EAL: Master core 0 is ready (tid=54e41820)
EAL: Core 1 is ready (tid=53b32700)

Starting core 1

simple_mp >

To run the secondary process to communicate with the primary process, again run the same
binary setting at least two cores in the coremask:
./build/simple_mp -c C -n 4 --proc-type=secondary

When running a secondary process such as that shown above, the proc-type parameter can
again be specified as auto. However, omitting the parameter altogether will cause the process
to try and start as a primary rather than secondary process.
Once the process type is specified correctly, the process starts up, displaying largely similar
status messages to the primary instance as it initializes. Once again, you will be presented
with a command prompt.
Once both processes are running, messages can be sent between them using the send com-
mand. At any stage, either process can be terminated using the quit command.
On the primary process:

EAL: Master core 10 is ready (tid=b5f89820)
EAL: Core 11 is ready (tid=84ffe700)
Starting core 11
simple_mp > send hello_secondary
simple_mp > core 11: Received 'hello_primary'
simple_mp > quit

On the secondary process:

EAL: Master core 8 is ready (tid=864a3820)
EAL: Core 9 is ready (tid=85995700)
Starting core 9
simple_mp > core 9: Received 'hello_secondary'
simple_mp > send hello_primary
simple_mp > quit

Note: If the primary instance is terminated, the secondary instance must also be shut-down
and restarted after the primary. This is necessary because the primary instance will clear and
reset the shared memory regions on startup, invalidating the secondary process’s pointers.
The secondary process can be stopped and restarted without affecting the primary process.

How the Application Works

The core of this example application is based on using two queues and a single memory pool
in shared memory. These three objects are created at startup by the primary process, since
the secondary process cannot create objects in memory as it cannot reserve memory zones,
and the secondary process then uses lookup functions to attach to these objects as it starts
up.
if (rte_eal_process_type() == RTE_PROC_PRIMARY){
    send_ring = rte_ring_create(_PRI_2_SEC, ring_size, SOCKET0, flags);
    recv_ring = rte_ring_create(_SEC_2_PRI, ring_size, SOCKET0, flags);
    message_pool = rte_mempool_create(_MSG_POOL, pool_size, string_size,
            pool_cache, priv_data_sz, NULL, NULL, NULL, NULL,
            SOCKET0, flags);
} else {
    recv_ring = rte_ring_lookup(_PRI_2_SEC);
    send_ring = rte_ring_lookup(_SEC_2_PRI);
    message_pool = rte_mempool_lookup(_MSG_POOL);
}

Note, however, that the named ring structure used as send_ring in the primary process is the
recv_ring in the secondary process.
Once the rings and memory pools are all available in both the primary and secondary pro-
cesses, the application simply dedicates two threads to sending and receiving messages re-
spectively. The receive thread simply dequeues any messages on the receive ring, prints them,
and frees the buffer space used by the messages back to the memory pool. The send thread
makes use of the command-prompt library to interactively request user input for messages to
send. Once a send command is issued by the user, a buffer is allocated from the memory pool,
filled in with the message contents, then enqueued on the appropriate rte_ring.
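
The receive side of this scheme can be sketched as follows. This is a simplified version of the
sample's receive thread; quit is an application exit flag and usleep() requires <unistd.h>:
static int
lcore_recv(__attribute__((unused)) void *arg)
{
    unsigned lcore_id = rte_lcore_id();

    printf("Starting core %u\n", lcore_id);
    while (!quit) {
        void *msg;

        /* Poll the receive ring; back off briefly when it is empty. */
        if (rte_ring_dequeue(recv_ring, &msg) < 0) {
            usleep(5);
            continue;
        }

        printf("core %u: Received '%s'\n", lcore_id, (char *)msg);
        /* Return the message buffer to the shared memory pool. */
        rte_mempool_put(message_pool, msg);
    }

    return 0;
}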

Symmetric Multi-process Example

The second example of DPDK multi-process support demonstrates how a set of processes can
run in parallel, with each process performing the same set of packet-processing operations.
(Since each process is identical in functionality to the others, we refer to this as symmetric
multi-processing, to differentiate it from asymmetric multi-processing - such as a client-server
mode of operation seen in the next example, where different processes perform different tasks,
yet co-operate to form a packet-processing system.) The following diagram shows the data-
flow through the application, using two processes.
As the diagram shows, each process reads packets from each of the network ports in use.
RSS is used to distribute incoming packets on each port to different hardware RX queues.
Each process reads a different RX queue on each port and so does not contend with any other
process for that queue access. Similarly, each process writes outgoing packets to a different
TX queue on each port.

Running the Application

As with the simple_mp example, the first instance of the symmetric_mp process must be run
as the primary instance, though with a number of other application- specific parameters also
provided after the EAL arguments. These additional parameters are:
• -p <portmask>, where portmask is a hexadecimal bitmask of what ports on the system
are to be used. For example: -p 3 to use ports 0 and 1 only.
• --num-procs <N>, where N is the total number of symmetric_mp instances that will be
run side-by-side to perform packet processing. This parameter is used to configure the
appropriate number of receive queues on each network port.
• --proc-id <n>, where n is a numeric value in the range 0 <= n < N (number of processes,
specified above). This identifies which symmetric_mp instance is being run, so that each
process can read a unique receive queue on each network port.

Fig. 3.13: Example Data Flow in a Symmetric Multi-process Application


The secondary symmetric_mp instances must also have these parameters specified, and the
first two must be the same as those passed to the primary instance, or errors result.
For example, to run a set of four symmetric_mp instances, running on lcores 1-4, all performing
level-2 forwarding of packets between ports 0 and 1, the following commands can be used
(assuming run as root):
# ./build/symmetric_mp -c 2 -n 4 --proc-type=auto -- -p 3 --num-procs=4 --proc-id=0
# ./build/symmetric_mp -c 4 -n 4 --proc-type=auto -- -p 3 --num-procs=4 --proc-id=1
# ./build/symmetric_mp -c 8 -n 4 --proc-type=auto -- -p 3 --num-procs=4 --proc-id=2
# ./build/symmetric_mp -c 10 -n 4 --proc-type=auto -- -p 3 --num-procs=4 --proc-id=3

Note: In the above example, the process type can be explicitly specified as primary or sec-
ondary, rather than auto. When using auto, the first process run creates all the memory struc-
tures needed for all processes - irrespective of whether it has a proc-id of 0, 1, 2 or 3.

Note: For the symmetric multi-process example, since all processes work in the same manner,
once the hugepage shared memory and the network ports are initialized, it is not necessary
to restart all processes if the primary instance dies. Instead, that process can be restarted
as a secondary, by explicitly setting the proc-type to secondary on the command line. (All
subsequent instances launched will also need this explicitly specified, as auto-detection will
detect no primary processes running and therefore attempt to re-initialize shared memory.)

How the Application Works

The initialization calls in both the primary and secondary instances are the same for the most
part, calling the rte_eal_init(), 1 G and 10 G driver initialization and then rte_eal_pci_probe()
functions. Thereafter, the initialization done depends on whether the process is configured as
a primary or secondary instance.
In the primary instance, a memory pool is created for the packet mbufs and the network ports
to be used are initialized - the number of RX and TX queues per port being determined by the
num-procs parameter passed on the command-line. The structures for the initialized network
ports are stored in shared memory and therefore will be accessible by the secondary process
as it initializes.
if (num_ports & 1)
    rte_exit(EXIT_FAILURE, "Application must use an even number of ports\n");

for(i = 0; i < num_ports; i++){
    if(proc_type == RTE_PROC_PRIMARY)
        if (smp_port_init(ports[i], mp, (uint16_t)num_procs) < 0)
            rte_exit(EXIT_FAILURE, "Error initializing ports\n");
}

In the secondary instance, rather than initializing the network ports, the port information ex-
ported by the primary process is used, giving the secondary process access to the hardware
and software rings for each network port. Similarly, the memory pool of mbufs is accessed by
doing a lookup for it by name:
mp = (proc_type == RTE_PROC_SECONDARY) ?
        rte_mempool_lookup(_SMP_MBUF_POOL) :
        rte_mempool_create(_SMP_MBUF_POOL, NB_MBUFS, MBUF_SIZE,
                MBUF_CACHE_SIZE, sizeof(struct rte_pktmbuf_pool_private),
                rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL,
                rte_socket_id(), 0);

Once this initialization is complete, the main loop of each process, both primary and secondary,
is exactly the same - each process reads from each port using the queue corresponding to its
proc-id parameter, and writes to the corresponding transmit queue on the output port.
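
In outline, that shared main loop looks like the following sketch (PKT_BURST, ports[] and
proc_id stand in for the application's burst size, port list and --proc-id value; the real sample
adds TX buffering and statistics):
for (;;) {
    unsigned p;

    for (p = 0; p < num_ports; p++) {
        struct rte_mbuf *bufs[PKT_BURST];
        const uint8_t rx_port = ports[p];
        const uint8_t tx_port = ports[p ^ 1]; /* ports are paired 0-1, 2-3, ... */
        uint16_t rx, tx;

        /* Each process polls only its own RX queue (proc_id)... */
        rx = rte_eth_rx_burst(rx_port, proc_id, bufs, PKT_BURST);
        if (rx == 0)
            continue;

        /* ...and writes only to its own TX queue on the paired port. */
        tx = rte_eth_tx_burst(tx_port, proc_id, bufs, rx);
        while (tx < rx)
            rte_pktmbuf_free(bufs[tx++]);
    }
}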

Client-Server Multi-process Example

The third example multi-process application included with the DPDK shows how one can use
a client-server type multi-process design to do packet processing. In this example, a single
server process performs the packet reception from the ports being used and distributes these
packets using round-robin ordering among a set of client processes, which perform the ac-
tual packet processing. In this case, the client applications just perform level-2 forwarding of
packets by sending each packet out on a different network port.
The following diagram shows the data-flow through the application, using two client processes.

Fig. 3.14: Example Data Flow in a Client-Server Symmetric Multi-process Application

Running the Application

The server process must be run initially as the primary process to set up all memory structures
for use by the clients. In addition to the EAL parameters, the application- specific parameters
are:
• -p <portmask>, where portmask is a hexadecimal bitmask of what ports on the system
are to be used. For example: -p 3 to use ports 0 and 1 only.
• -n <num-clients>, where the num-clients parameter is the number of client processes that
will process the packets received by the server application.

Note: In the server process, a single thread, the master thread, that is, the lowest numbered
lcore in the coremask, performs all packet I/O. If a coremask is specified with more than a
single lcore bit set in it, an additional lcore will be used for a thread to periodically print packet
count statistics.

Since the server application stores configuration data in shared memory, including the network
ports to be used, the only application parameter needed by a client process is its client instance
ID. Therefore, to run a server application on lcore 1 (with lcore 2 printing statistics) along with
two client processes running on lcores 3 and 4, the following commands could be used:
# ./mp_server/build/mp_server -c 6 -n 4 -- -p 3 -n 2
# ./mp_client/build/mp_client -c 8 -n 4 --proc-type=auto -- -n 0
# ./mp_client/build/mp_client -c 10 -n 4 --proc-type=auto -- -n 1

Note: If the server application dies and needs to be restarted, all client applications also need
to be restarted, as there is no support in the server application for it to run as a secondary
process. Any client processes that need restarting can be restarted without affecting the server
process.

How the Application Works

The server process performs the network port and data structure initialization much as the
symmetric multi-process application does when run as primary. One additional enhancement
in this sample application is that the server process stores its port configuration data in a
memory zone in hugepage shared memory. This eliminates the need for the client processes
to have the portmask parameter passed into them on the command line, as is done for the
symmetric multi-process application, and therefore eliminates mismatched parameters as a
potential source of errors.
In the same way that the server process is designed to be run as a primary process instance
only, the client processes are designed to be run as secondary instances only. They have
no code to attempt to create shared memory objects. Instead, handles to all needed rings
and memory pools are obtained via calls to rte_ring_lookup() and rte_mempool_lookup(). The
network ports for use by the processes are obtained by loading the network port drivers and
probing the PCI bus, which will, as in the symmetric multi-process example, automatically
get access to the network ports using the settings already configured by the primary/server
process.
Once all applications are initialized, the server operates by reading packets from each network
port in turn and distributing those packets to the client queues (software rings, one for each
client process) in round-robin order. On the client side, the packets are read from the rings in
as big of bursts as possible, then routed out to a different network port. The routing used is
very simple. All packets received on the first NIC port are transmitted back out on the second
port and vice versa. Similarly, packets are routed between the 3rd and 4th network ports and
so on. The sending of packets is done by writing the packets directly to the network ports; they
are not transferred back via the server process.
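
The round-robin distribution step in the server can be sketched as follows (client_rings[] and
num_clients are stand-ins for the sample's internal structures):
static unsigned next_client;

static void
distribute_burst(struct rte_mbuf **bufs, uint16_t nb_rx)
{
    uint16_t i;

    for (i = 0; i < nb_rx; i++) {
        struct rte_ring *q = client_rings[next_client];

        /* Drop the packet if the client's software ring is full. */
        if (rte_ring_enqueue(q, bufs[i]) < 0)
            rte_pktmbuf_free(bufs[i]);

        next_client = (next_client + 1) % num_clients;
    }
}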
In both the server and the client processes, outgoing packets are buffered before being sent, so
as to allow the sending of multiple packets in a single burst to improve efficiency. For example,
the client process will buffer packets to send, until either the buffer is full or until we receive no
further packets from the server.


Master-slave Multi-process Example

The fourth example of DPDK multi-process support demonstrates a master-slave model that
provides the capability of application recovery if a slave process crashes or meets unexpected
conditions. In addition, it also demonstrates the floating process, which can run among different
cores in contrast to the traditional way of binding a process/thread to a specific CPU core, using
the local cache mechanism of mempool structures.
This application performs the same functionality as the L2 Forwarding sample application,
therefore this chapter does not cover that part but describes functionality that is introduced in
this multi-process example only. Please refer to L2 Forwarding Sample Application (in Real
and Virtualized Environments) for more information.
Unlike previous examples where all processes are started from the command line with input
arguments, in this example, only one process is spawned from the command line and that
process creates other processes. The following section describes this in more detail.

Master-slave Process Models

The process spawned from the command line is called the master process in this document. A
process created by the master is called a slave process. The application has only one master
process, but could have multiple slave processes.
Once the master process begins to run, it tries to initialize all the resources such as memory,
CPU cores, driver, ports, and so on, as the other examples do. Thereafter, it creates slave
processes, as shown in the following figure.

Fig. 3.15: Master-slave Process Workflow

The master process calls the rte_eal_mp_remote_launch() EAL function to launch an appli-
cation function for each pinned thread through the pipe. Then, it waits to check if any slave
processes have exited. If so, the process tries to re-initialize the resources that belong to that
slave and launch them in the pinned thread entry again. The following section describes the
recovery procedures in more detail.
For each pinned thread in EAL, after reading any data from the pipe, it tries to call the function
that the application specified. In this master specified function, a fork() call creates a slave
process that performs the L2 forwarding task. Then, the function waits until the slave exits, is
killed or crashes. Thereafter, it notifies the master of this event and returns. Finally, the EAL
pinned thread waits until the new function is launched.
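
That per-lcore wrapper can be sketched as follows; slave_body() and notify_master() are
hypothetical names for the L2 forwarding task and the exit notification described above:
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static int
lcore_slave_wrapper(void *arg)
{
    pid_t pid = fork();
    int status;

    if (pid == 0) {
        /* Child: becomes the slave process running the forwarding task. */
        exit(slave_body(arg));
    }
    if (pid < 0)
        return -1;

    /* Parent (the EAL pinned thread): block until the slave exits,
     * is killed or crashes, then report the event to the master. */
    waitpid(pid, &status, 0);
    notify_master(rte_lcore_id(), status);
    return 0;
}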
After discussing the master-slave model, it is necessary to mention another issue, global and
static variables.
For multiple-thread cases, all global and static variables have only one copy and they can be
accessed by any thread if applicable. So, they can be used to sync or share data among
threads.
In the previous examples, each process has separate global and static variables in memory and
they are independent of each other. If it is necessary to share the knowledge, some communication
mechanism should be deployed, such as memzone, ring, shared memory, and so on. The
global or static variables are not a valid approach to share data among processes. For variables
in this example, on the one hand, the slave process inherits all the knowledge of these variables
after being created by the master. On the other hand, other processes cannot know if one or
more processes modify them after slave creation, since that is the nature of a multiple-process
address space. But this does not mean that these variables cannot be used to share or sync
data; it depends on the use case. The following are the possible use cases:
1. The master process starts and initializes a variable and it will never be changed after
the slave processes are created. This case is OK.
2. After the slave processes are created, the master or slave cores need to change a vari-
able, but other processes do not need to know the change. This case is also OK.
3. After the slave processes are created, the master or a slave needs to change a variable.
In the meantime, one or more other process needs to be aware of the change. In this
case, global and static variables cannot be used to share knowledge. Another communi-
cation mechanism is needed. A simple approach without lock protection can be a heap
buffer allocated by rte_malloc or mem zone.

Slave Process Recovery Mechanism

Before talking about the recovery mechanism, it is necessary to know what is needed before a
new slave instance can run if a previous one exited.
When a slave process exits, the system returns all the resources allocated for this process
automatically. However, this does not include the resources that were allocated by the DPDK.
All the hardware resources are shared among the processes, which include memzone, mem-
pool, ring, a heap buffer allocated by the rte_malloc library, and so on. If the new instance runs
and the allocated resource is not returned, either resource allocation failed or the hardware
resource is lost forever.
When a slave process runs, it may have dependencies on other processes. They could have
execution sequence orders; they could share the ring to communicate; they could share the
same port for reception and forwarding; they could use lock structures to do exclusive access
in some critical path. What happens to the dependent process(es) if the peer leaves? The
consequences vary since the dependency cases are complex. It depends on what the
processes had shared. However, it is necessary to notify the peer(s) if one slave exited. Then,
the peer(s) will be aware of that and wait until the new instance begins to run.
Therefore, to provide the capability to resume the new slave instance if the previous one exited,
it is necessary to provide several mechanisms:
1. Keep a resource list for each slave process. Before a slave process runs, the master
should prepare a resource list. After it exits, the master could either delete the allocated
resources and create new ones, or re-initialize those for use by the new instance.
2. Set up a notification mechanism for slave process exit cases. After the specific slave
leaves, the master should be notified and then help to create a new instance. This mech-
anism is provided in Section Master-slave Process Models.
3. Use a synchronization mechanism among dependent processes. The master should
have the capability to stop or kill slave processes that have a dependency on the one
that has exited. Then, after the new instance of exited slave process begins to run,
the dependency ones could resume or run from the start. The example sends a STOP
command to slave processes dependent on the exited one, then they will exit. Thereafter,
the master creates new instances for the exited slave processes.
The following diagram describes slave process recovery.

Fig. 3.16: Slave Process Recovery Process Flow

Floating Process Support

When the DPDK application runs, there is always a -c option passed in to indicate the cores
that are enabled. Then, the DPDK creates a thread for each enabled core. By doing so, it
creates a 1:1 mapping between the enabled core and each thread. The enabled core always
has an ID, therefore, each thread has a unique core ID in the DPDK execution environment.
With the ID, each thread can easily access the structures or resources exclusively belonging
to it without using function parameter passing. It can easily use the rte_lcore_id() function to
get the value in every function that is called.
For threads/processes not created in that way, either pinned to a core or not, they will not own a
unique ID and the rte_lcore_id() function will not work in the correct way. However, sometimes
these threads/processes still need the unique ID mechanism to do easy access on structures
or resources. For example, the DPDK mempool library provides a local cache mechanism
(refer to Local Cache) for fast element allocation and freeing. If using a non-unique ID or a
fake one, a race condition occurs if two or more threads/processes with the same core ID try
to use the local cache.


Therefore, the core IDs left unused by the -c coremask are used to organize a core ID
allocation array. Once the floating process is spawned, it tries to allocate a unique core ID
from the array and releases it on exit.
A natural way to spawn a floating process is to use the fork() function and allocate a unique
core ID from the unused core ID array. However, it is necessary to write new code to provide
a notification mechanism for slave exit and make sure the process recovery mechanism can
work with it.
To avoid producing redundant code, the Master-Slave process model is still used to spawn
floating processes, then cancel the affinity to specific cores. Besides that, clear the core ID as-
signed to the DPDK spawning a thread that has a 1:1 mapping with the core mask. Thereafter,
get a new core ID from the unused core ID allocation array.

Run the Application

This example has a command line similar to the L2 Forwarding sample application with a few
differences.
To run the application, start one copy of the l2fwd_fork binary in one terminal. Unlike the L2
Forwarding example, this example requires at least three cores since the master process will
wait and be accountable for slave process recovery. The command is as follows:
#./build/l2fwd_fork -c 1c -n 4 -- -p 3 -f

This example provides another -f option to specify the use of floating process. If not specified,
the example will use a pinned process to perform the L2 forwarding task.
To verify the recovery mechanism, proceed as follows: First, check the PID of the slave pro-
cesses:
#ps -fe | grep l2fwd_fork
root 5136 4843 29 11:11 pts/1 00:00:05 ./build/l2fwd_fork
root 5145 5136 98 11:11 pts/1 00:00:11 ./build/l2fwd_fork
root 5146 5136 98 11:11 pts/1 00:00:11 ./build/l2fwd_fork

Then, kill one of the slaves:


#kill -9 5145

After 1 or 2 seconds, check whether the slave has resumed:


#ps -fe | grep l2fwd_fork
root 5136 4843 3 11:11 pts/1 00:00:06 ./build/l2fwd_fork
root 5247 5136 99 11:14 pts/1 00:00:01 ./build/l2fwd_fork
root 5248 5136 99 11:14 pts/1 00:00:01 ./build/l2fwd_fork

It is also possible to monitor the traffic generator statistics to see whether the slave processes have resumed.

Explanation

As described in previous sections, not all global and static variables need to change to be
accessible in multiple processes; it depends on how they are used. In this example, the statistics
on dropped/forwarded/received packet counts need to be updated by the slave process,
and the master needs to see the update and print them out. So, it needs to allocate a heap
buffer using rte_zmalloc. In addition, if the -f option is specified, an array is needed to store
the allocated core ID for the floating process so that the master can return it after a slave has
exited accidentally.

static int
l2fwd_malloc_shared_struct(void)
{
    port_statistics = rte_zmalloc("port_stat",
            sizeof(struct l2fwd_port_statistics) * RTE_MAX_ETHPORTS, 0);

    if (port_statistics == NULL)
        return -1;

    /* allocate mapping_id array */
    if (float_proc) {
        int i;

        mapping_id = rte_malloc("mapping_id",
                sizeof(unsigned) * RTE_MAX_LCORE, 0);

        if (mapping_id == NULL)
            return -1;

        for (i = 0; i < RTE_MAX_LCORE; i++)
            mapping_id[i] = INVALID_MAPPING_ID;
    }
    return 0;
}

For each slave process, packets are received from one port and forwarded to another port
that another slave is operating on. If the other slave exits accidentally, the port it is operating
on may not work normally, so the first slave cannot forward packets to that port. There is a
dependency on the port in this case. So, the master should recognize the dependency. The
following is the code to detect this dependency:
for (portid = 0; portid < nb_ports; portid++) {
    /* skip ports that are not enabled */
    if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
        continue;

    /* Find pair ports' lcores */
    find_lcore = find_pair_lcore = 0;
    pair_port = l2fwd_dst_ports[portid];

    for (i = 0; i < RTE_MAX_LCORE; i++) {
        if (!rte_lcore_is_enabled(i))
            continue;

        for (j = 0; j < lcore_queue_conf[i].n_rx_port; j++) {
            if (lcore_queue_conf[i].rx_port_list[j] == portid) {
                lcore = i;
                find_lcore = 1;
                break;
            }

            if (lcore_queue_conf[i].rx_port_list[j] == pair_port) {
                pair_lcore = i;
                find_pair_lcore = 1;
                break;
            }
        }

        if (find_lcore && find_pair_lcore)
            break;
    }

    if (!find_lcore || !find_pair_lcore)
        rte_exit(EXIT_FAILURE, "Not find port=%d pair\n", portid);

    printf("lcore %u and %u paired\n", lcore, pair_lcore);

    lcore_resource[lcore].pair_id = pair_lcore;
    lcore_resource[pair_lcore].pair_id = lcore;
}

Before launching the slave process, it is necessary to set up the communication channel be-
tween the master and slave so that the master can notify the slave if its peer process with the
dependency exited. In addition, the master needs to register a callback function in the case
where a specific slave exited.
for (i = 0; i < RTE_MAX_LCORE; i++) {
if (lcore_resource[i].enabled) {
/* Create ring for master and slave communication */

ret = create_ms_ring(i);
if (ret != 0)
rte_exit(EXIT_FAILURE, "Create ring for lcore=%u failed",i);

if (flib_register_slave_exit_notify(i,slave_exit_cb) != 0)
rte_exit(EXIT_FAILURE, "Register master_trace_slave_exit failed");
}
}

After launching the slave process, the master waits and prints out the port statistics periodically.
If an event indicating that a slave process exited is detected, it sends the STOP command to
the peer and waits until it has also exited. Then, it tries to clean up the execution environment
and prepare new resources. Finally, the new slave instance is launched.
while (1) {
    sleep(1);
    cur_tsc = rte_rdtsc();
    diff_tsc = cur_tsc - prev_tsc;

    /* if timer is enabled */
    if (timer_period > 0) {
        /* advance the timer */
        timer_tsc += diff_tsc;

        /* if timer has reached its timeout */
        if (unlikely(timer_tsc >= (uint64_t) timer_period)) {
            print_stats();

            /* reset the timer */
            timer_tsc = 0;
        }
    }

    prev_tsc = cur_tsc;

    /* Check any slave need restart or recreate */
    rte_spinlock_lock(&res_lock);

    for (i = 0; i < RTE_MAX_LCORE; i++) {
        struct lcore_resource_struct *res = &lcore_resource[i];
        struct lcore_resource_struct *pair = &lcore_resource[res->pair_id];

        /* If find slave exited, try to reset pair */
        if (res->enabled && res->flags && pair->enabled) {
            if (!pair->flags) {
                master_sendcmd_with_ack(pair->lcore_id, CMD_STOP);
                rte_spinlock_unlock(&res_lock);
                sleep(1);
                rte_spinlock_lock(&res_lock);
                if (pair->flags)
                    continue;
            }

            if (reset_pair(res->lcore_id, pair->lcore_id) != 0)
                rte_exit(EXIT_FAILURE, "failed to reset slave");

            res->flags = 0;
            pair->flags = 0;
        }
    }
    rte_spinlock_unlock(&res_lock);
}

When the slave process is spawned and starts to run, it checks whether the floating process
option is applied. If so, it clears the affinity to a specific core and also sets the unique core
ID to 0. Then, it tries to allocate a new core ID. Since the core ID has changed, the resource
allocated by the master cannot work, so it remaps the resource to the new core ID slot.
static int
l2fwd_launch_one_lcore(__attribute__((unused)) void *dummy)
{
    unsigned lcore_id = rte_lcore_id();

    if (float_proc) {
        unsigned flcore_id;

        /* Change it to floating process, also change it's lcore_id */
        clear_cpu_affinity();

        RTE_PER_LCORE(_lcore_id) = 0;

        /* Get a lcore_id */
        if (flib_assign_lcore_id() < 0) {
            printf("flib_assign_lcore_id failed\n");
            return -1;
        }

        flcore_id = rte_lcore_id();

        /* Set mapping id, so master can return it after slave exited */
        mapping_id[lcore_id] = flcore_id;
        printf("Org lcore_id = %u, cur lcore_id = %u\n", lcore_id, flcore_id);
        remapping_slave_resource(lcore_id, flcore_id);
    }

    l2fwd_main_loop();

    /* return lcore_id before return */
    if (float_proc) {
        flib_free_lcore_id(rte_lcore_id());
        mapping_id[lcore_id] = INVALID_MAPPING_ID;
    }

    return 0;
}

QoS Metering Sample Application

The QoS meter sample application is an example that demonstrates the use of DPDK to pro-
vide QoS marking and metering, as defined by RFC 2697 for the Single Rate Three Color Marker
(srTCM) and RFC 2698 for the Two Rate Three Color Marker (trTCM) algorithms.

Overview

The application uses a single thread for reading the packets from the RX port, metering, mark-
ing them with the appropriate color (green, yellow or red) and writing them to the TX port.
A policing scheme can be applied before writing the packets to the TX port by dropping or
changing the color of the packet in a static manner depending on both the input and output
colors of the packets that are processed by the meter.
The operation mode can be selected at compile time from the following options:
• Simple forwarding
• srTCM color blind
• srTCM color aware
• trTCM color blind
• trTCM color aware
Please refer to RFC2697 and RFC2698 for details about the srTCM and trTCM configurable
parameters (CIR, CBS and EBS for srTCM; CIR, PIR, CBS and PBS for trTCM).
The color blind modes are functionally equivalent to the color-aware modes when all the
incoming packets are colored as green.

Compiling the Application

1. Go to the example directory:


export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/qos_meter

2. Set the target (a default target is used if not specified):

Note: This application is intended as a linuxapp only.

export RTE_TARGET=x86_64-native-linuxapp-gcc

3. Build the application:


make


Running the Application

The application execution command line is as below:


./qos_meter [EAL options] -- -p PORTMASK

The application is constrained to use a single core in the EAL core mask and 2 ports only in
the application port mask (the first port from the port mask is used for RX and the other port in
the port mask is used for TX).
Refer to DPDK Getting Started Guide for general information on running applications and the
Environment Abstraction Layer (EAL) options.

Explanation

Selecting one of the metering modes is done with these defines:


#define APP_MODE_FWD 0
#define APP_MODE_SRTCM_COLOR_BLIND 1
#define APP_MODE_SRTCM_COLOR_AWARE 2
#define APP_MODE_TRTCM_COLOR_BLIND 3
#define APP_MODE_TRTCM_COLOR_AWARE 4

#define APP_MODE APP_MODE_SRTCM_COLOR_BLIND

To simplify debugging (for example, by using the traffic generator RX side MAC address based
packet filtering feature), the color is defined as the LSB byte of the destination MAC address.
The traffic meter parameters are configured in the application source code with following default
values:
struct rte_meter_srtcm_params app_srtcm_params[] = {

{.cir = 1000000 * 46, .cbs = 2048, .ebs = 2048},

};

struct rte_meter_trtcm_params app_trtcm_params[] = {

{.cir = 1000000 * 46, .pir = 1500000 * 46, .cbs = 2048, .pbs = 2048},

};
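
A packet can be metered against the first srTCM parameter set with the color-blind API along
these lines. This is a sketch rather than the sample's exact loop; pkt_len is assumed to be the
IP packet length in bytes:
struct rte_meter_srtcm meter;
enum rte_meter_color color;

/* Initialize the run-time meter state from the parameters above. */
rte_meter_srtcm_config(&meter, &app_srtcm_params[0]);

/* Meter one packet; the timestamp is the current CPU cycle count. */
color = rte_meter_srtcm_color_blind_check(&meter, rte_rdtsc(), pkt_len);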

Assuming the input traffic is generated at line rate, all packets are 64 byte Ethernet frames
(IPv4 packet size of 46 bytes) and all packets are green, the expected output traffic should be
marked as shown in the following table:

Table 3.1: Output Traffic Marking

Mode          Green (Mpps)   Yellow (Mpps)   Red (Mpps)
srTCM blind   1              1               12.88
srTCM color   1              1               12.88
trTCM blind   1              0.5             13.38
trTCM color   1              0.5             13.38
FWD           14.88          0               0
To set up the policing scheme as desired, it is necessary to modify the main.h source file,
where this policy is implemented as a static structure, as follows:


int policer_table[e_RTE_METER_COLORS][e_RTE_METER_COLORS] =
{
{ GREEN, RED, RED},
{ DROP, YELLOW, RED},
{ DROP, DROP, RED}
};

Where rows indicate the input color, columns indicate the output color, and the value that is
stored in the table indicates the action to be taken for that particular case.
There are four different actions:
• GREEN: The packet’s color is changed to green.
• YELLOW: The packet’s color is changed to yellow.
• RED: The packet’s color is changed to red.
• DROP: The packet is dropped.
In this particular case:
• Every packet whose input and output colors are the same keeps its color.
• Every packet whose color has improved is dropped (this particular case cannot happen, so
these values will not be used).
• For the rest of the cases, the color is changed to red.
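
Applying the policer table to a metered packet can be sketched as follows; set_pkt_color() is
an assumed helper that writes the color into the destination MAC LSB as described earlier:
static inline int
apply_policing(struct rte_mbuf *pkt, enum rte_meter_color in_color,
        enum rte_meter_color out_color)
{
    int action = policer_table[in_color][out_color];

    if (action == DROP) {
        rte_pktmbuf_free(pkt);
        return -1; /* packet consumed */
    }

    set_pkt_color(pkt, action); /* GREEN, YELLOW or RED */
    return 0;
}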

QoS Scheduler Sample Application

The QoS sample application demonstrates the use of the DPDK to provide QoS scheduling.

Overview

The architecture of the QoS scheduler application is shown in the following figure.

Fig. 3.17: QoS Scheduler Application Architecture


There are two flavors of runtime execution for this application, with two or three threads per
packet flow configuration. The RX thread reads packets from the RX port, classifies the
packets based on the double VLAN (outer and inner) and the lower two bytes of the IP
destination address and puts them into the ring queue. The worker thread dequeues the
packets from the ring and calls the QoS scheduler enqueue/dequeue functions. If a separate
TX core is used, these are sent to the TX ring. Otherwise, they are sent directly to the TX port.
The TX thread, if present, reads from the TX ring and writes the packets to the TX port.
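
The worker step can be outlined as follows. This is a simplified sketch: everything apart from
the rte_sched calls, such as drain_rx_ring() and the buffer sizes, is an assumption:
struct rte_mbuf *mbufs[BURST_SIZE];
uint32_t nb_enq, nb_deq;

/* Packets classified by the RX thread are pushed into the scheduler. */
nb_enq = drain_rx_ring(mbufs, BURST_SIZE);
if (nb_enq > 0)
    rte_sched_port_enqueue(sched_port, mbufs, nb_enq);

/* Whatever the scheduler releases goes towards the TX ring or port. */
nb_deq = rte_sched_port_dequeue(sched_port, mbufs, BURST_SIZE / 2);
if (nb_deq > 0)
    rte_eth_tx_burst(tx_port, 0, mbufs, (uint16_t)nb_deq);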

Compiling the Application

To compile the application:


1. Go to the sample application directory:
export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/qos_sched

2. Set the target (a default target is used if not specified). For example:

Note: This application is intended as a linuxapp only.

export RTE_TARGET=x86_64-native-linuxapp-gcc

3. Build the application:


make

Note: To get statistics on the sample app using the command line interface as described in the
next section, DPDK must be compiled defining CONFIG_RTE_SCHED_COLLECT_STATS,
which can be done by changing the configuration file for the specific target to be compiled.

Running the Application

Note: In order to run the application, a total of at least 4 G of huge pages must be set up for
each of the used sockets (depending on the cores in use).

The application has a number of command line options:


./qos_sched [EAL options] -- <APP PARAMS>

Mandatory application parameters include:
• --pfc "RX PORT, TX PORT, RX LCORE, WT LCORE, TX CORE": Packet flow configura-
tion. Multiple pfc entities can be configured in the command line, having 4 or 5 items
(depending on whether the TX core is defined or not).
Optional application parameters include:
• -i: Makes the application start in interactive mode. In this mode, the application
shows a command line that can be used for obtaining statistics while scheduling is taking
place (see interactive mode below for more information).


• --mst n: Master core index (the default value is 1).
• --rsz "A, B, C": Ring sizes:
  – A = Size (in number of buffer descriptors) of each of the NIC RX rings read by the I/O RX
    lcores (the default value is 128).
  – B = Size (in number of elements) of each of the software rings used by the I/O RX lcores
    to send packets to worker lcores (the default value is 8192).
  – C = Size (in number of buffer descriptors) of each of the NIC TX rings written by worker
    lcores (the default value is 256).
• --bsz "A, B, C, D": Burst sizes:
  – A = I/O RX lcore read burst size from the NIC RX (the default value is 64).
  – B = I/O RX lcore write burst size to the output software rings, worker lcore read burst size
    from input software rings, QoS enqueue size (the default value is 64).
  – C = QoS dequeue size (the default value is 32).
  – D = Worker lcore write burst size to the NIC TX (the default value is 64).
• --msz M: Mempool size (in number of mbufs) for each pfc (default 2097152).
• --rth "A, B, C": The RX queue threshold parameters:
  – A = RX prefetch threshold (the default value is 8).
  – B = RX host threshold (the default value is 8).
  – C = RX write-back threshold (the default value is 4).
• --tth "A, B, C": TX queue threshold parameters:
  – A = TX prefetch threshold (the default value is 36).
  – B = TX host threshold (the default value is 0).
  – C = TX write-back threshold (the default value is 0).
• --cfg FILE: Profile configuration to load.
Refer to DPDK Getting Started Guide for general information on running applications and the
Environment Abstraction Layer (EAL) options.
The profile configuration file defines all the port/subport/pipe/traffic class/queue parameters
needed for the QoS scheduler configuration.
The profile file has the following format:
; port configuration
[port]

frame overhead = 24
number of subports per port = 1
number of pipes per subport = 4096
queue sizes = 64 64 64 64

; Subport configuration

[subport 0]
tb rate = 1250000000; Bytes per second
tb size = 1000000; Bytes
tc 0 rate = 1250000000; Bytes per second
tc 1 rate = 1250000000; Bytes per second
tc 2 rate = 1250000000; Bytes per second
tc 3 rate = 1250000000; Bytes per second
tc period = 10; Milliseconds
tc oversubscription period = 10; Milliseconds

pipe 0-4095 = 0; These pipes are configured with pipe profile 0

; Pipe configuration

[pipe profile 0]
tb rate = 305175; Bytes per second
tb size = 1000000; Bytes

tc 0 rate = 305175; Bytes per second
tc 1 rate = 305175; Bytes per second
tc 2 rate = 305175; Bytes per second
tc 3 rate = 305175; Bytes per second
tc period = 40; Milliseconds

tc 0 oversubscription weight = 1
tc 1 oversubscription weight = 1
tc 2 oversubscription weight = 1
tc 3 oversubscription weight = 1

tc 0 wrr weights = 1 1 1 1
tc 1 wrr weights = 1 1 1 1
tc 2 wrr weights = 1 1 1 1
tc 3 wrr weights = 1 1 1 1

; RED params per traffic class and color (Green / Yellow / Red)

[red]
tc 0 wred min = 48 40 32
tc 0 wred max = 64 64 64
tc 0 wred inv prob = 10 10 10
tc 0 wred weight = 9 9 9

tc 1 wred min = 48 40 32
tc 1 wred max = 64 64 64
tc 1 wred inv prob = 10 10 10
tc 1 wred weight = 9 9 9

tc 2 wred min = 48 40 32
tc 2 wred max = 64 64 64
tc 2 wred inv prob = 10 10 10
tc 2 wred weight = 9 9 9

tc 3 wred min = 48 40 32
tc 3 wred max = 64 64 64
tc 3 wred inv prob = 10 10 10
tc 3 wred weight = 9 9 9

Interactive mode

These are the commands that are currently working under the command line interface:
• Control Commands
  – --quit: Quits the application.
• General Statistics
  – stats app: Shows a table with in-app calculated statistics.
  – stats port X subport Y: For a specific subport, it shows the number of packets that
    went through the scheduler properly and the number of packets that were dropped.
    The same information is shown in bytes. The information is displayed in a table
    separating it in different traffic classes.
  – stats port X subport Y pipe Z: For a specific pipe, it shows the number of packets that
    went through the scheduler properly and the number of packets that were dropped.
    The same information is shown in bytes. This information is displayed in a table
    separating it in individual queues.
• Average queue size
All of these commands work the same way, averaging the number of packets throughout a
specific subset of queues.
Two parameters can be configured for this prior to calling any of these commands:
• qavg n X: n is the number of times that the calculation will take place. Bigger numbers
provide higher accuracy. The default value is 10.
• qavg period X: period is the number of microseconds that will be allowed between each
calculation. The default value is 100.
The commands that can be used for measuring average queue size are:
• qavg port X subport Y: Show average queue size per subport.
• qavg port X subport Y tc Z: Show average queue size per subport for a specific traffic
class.
• qavg port X subport Y pipe Z: Show average queue size per pipe.
• qavg port X subport Y pipe Z tc A: Show average queue size per pipe for a specific traffic
class.
• qavg port X subport Y pipe Z tc A q B: Show average queue size of a specific queue.
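For instance, a plausible interactive session (the prompt is illustrative) that raises the
averaging accuracy and then inspects one specific queue might look like:

qavg n 30
qavg period 50
qavg port 0 subport 0 pipe 100 tc 3 q 2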

Example

The following is an example command with a single packet flow configuration:


./qos_sched -c a2 -n 4 -- --pfc "3,2,5,7" --cfg ./profile.cfg

This example uses a single packet flow configuration which creates one RX thread on lcore 5
reading from port 3 and a worker thread on lcore 7 writing to port 2.
Another example with 2 packet flow configurations using different ports but sharing the same
core for QoS scheduler is given below:
./qos_sched -c c6 -n 4 -- --pfc "3,2,2,6,7" --pfc "1,0,2,6,7" --cfg ./profile.cfg

Note that independent cores for the packet flow configurations for each of the RX, WT and TX
thread are also supported, providing flexibility to balance the work.
The EAL coremask is constrained to contain the default mastercore 1 and the RX, WT and TX
cores only.

Explanation

The Port/Subport/Pipe/Traffic Class/Queue are the hierarchical entities in a typical QoS appli-
cation:
• A subport represents a predefined group of users.
• A pipe represents an individual user/subscriber.
• A traffic class is the representation of a different traffic type with specific loss rate, delay
and jitter requirements, such as voice, video or data transfers.
• A queue hosts packets from one or multiple connections of the same type belonging to
the same user.
The traffic flows that need to be configured are application dependent. This application classi-
fies based on the QinQ double VLAN tags and the IP destination address as indicated in the
following table.

Table 3.2: Entity Types

Level Name      Siblings per Parent   QoS Functional Description                         Selected By
Port            -                     Ethernet port                                      Physical port
Subport         Config (8)            Traffic shaped (token bucket)                      Outer VLAN tag
Pipe            Config (4k)           Traffic shaped (token bucket)                      Inner VLAN tag
Traffic Class   4                     TCs of the same pipe serviced in strict priority   Destination IP address (0.0.X.0)
Queue           4                     Queues of the same TC serviced in WRR              Destination IP address (0.0.0.X)
Please refer to the “QoS Scheduler” chapter in the DPDK Programmer’s Guide for more infor-
mation about these parameters.

Intel® QuickAssist Technology Sample Application

This sample application demonstrates the use of the cryptographic operations provided by
the Intel® QuickAssist Technology from within the DPDK environment. Therefore, building
and running this application requires having both the DPDK and the QuickAssist Technology
Software Library installed, as well as at least one Intel® QuickAssist Technology hardware
device present in the system.
For this sample application, there is a dependency on either of:
• Intel® Communications Chipset 8900 to 8920 Series Software for Linux* package
• Intel® Communications Chipset 8925 to 8955 Series Software for Linux* package

Overview

An overview of the application is provided in Fig. 3.18. For simplicity, only two NIC ports and
one Intel® QuickAssist Technology device are shown in this diagram, although the number of
NIC ports and Intel® QuickAssist Technology devices can be different.

Fig. 3.18: Intel® QuickAssist Technology Application Block Diagram

The application allows the configuration of the following items:


• Number of NIC ports
• Number of logical cores (lcores)
• Mapping of NIC RX queues to logical cores
Each lcore communicates with every cryptographic acceleration engine in the system through
a pair of dedicated input - output queues. Each lcore has a dedicated NIC TX queue with
every NIC port in the system. Therefore, each lcore reads packets from its NIC RX queues
and cryptographic accelerator output queues and writes packets to its NIC TX queues and
cryptographic accelerator input queues.
Each incoming packet that is read from a NIC RX queue is either directly forwarded to its des-
tination NIC TX port (forwarding path) or first sent to one of the Intel® QuickAssist Technology
devices for either encryption or decryption before being sent out on its destination NIC TX port
(cryptographic path).
The application supports IPv4 input packets only. For each input packet, the decision between
the forwarding path and the cryptographic path is taken at the classification stage based on the
value of the IP source address field read from the input packet. Assuming that the IP source
address is A.B.C.D, then if:
• D = 0: the forwarding path is selected (the packet is forwarded out directly)
• D = 1: the cryptographic path for encryption is selected (the packet is first encrypted and
then forwarded out)

• D = 2: the cryptographic path for decryption is selected (the packet is first decrypted and
then forwarded out)
For the cryptographic path cases (D = 1 or D = 2), byte C specifies the cipher algorithm and byte
B the cryptographic hash algorithm to be used for the current packet. Byte A is not used and
can be any value. The cipher and cryptographic hash algorithms supported by this application
are listed in the crypto.h header file.
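As an illustration of this classification rule, the following minimal sketch (not the sample's
actual code; the enum and function names are made up here) extracts byte D from the IPv4 source
address and picks a path:

#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_mbuf.h>
#include <rte_byteorder.h>

/* Hypothetical sketch of the classification rule described above */
enum pkt_path { PATH_FORWARD, PATH_ENCRYPT, PATH_DECRYPT, PATH_DROP };

static enum pkt_path
classify_pkt(struct rte_mbuf *m)
{
    struct ipv4_hdr *ip = rte_pktmbuf_mtod_offset(m, struct ipv4_hdr *,
            sizeof(struct ether_hdr));
    uint32_t src = rte_be_to_cpu_32(ip->src_addr);   /* A.B.C.D */
    uint8_t d = src & 0xff;                          /* path selector */

    /* bytes C (cipher) and B (hash) would be read the same way:
     * uint8_t c = (src >> 8) & 0xff, b = (src >> 16) & 0xff; */
    switch (d) {
    case 0: return PATH_FORWARD;   /* forwarded out directly */
    case 1: return PATH_ENCRYPT;   /* encrypt, then forward */
    case 2: return PATH_DECRYPT;   /* decrypt, then forward */
    default: return PATH_DROP;
    }
}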
For each input packet, the destination NIC TX port is decided at the forwarding stage (executed
after the cryptographic stage, if enabled for the packet) by looking up the RX port index in the
dst_ports[ ] array, which was initialized at startup; the output port is the adjacent enabled
port. For example, if ports 1, 3, 5 and 6 are enabled, then for input port 1 the output port will
be 3 and vice versa, and for input port 5 the output port will be 6 and vice versa.
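A hypothetical sketch of that pairing logic follows (the sample's actual dst_ports[ ]
initialization may differ in detail):

/* Pair each enabled port with the next enabled port: with ports
 * 1, 3, 5 and 6 enabled this yields 1<->3 and 5<->6. */
static void
setup_port_pairs(uint32_t portmask, uint8_t dst_ports[])
{
    int previous = -1;
    uint8_t p;

    for (p = 0; p < RTE_MAX_ETHPORTS; p++) {
        if (!(portmask & (1u << p)))
            continue;
        if (previous < 0) {
            previous = p;               /* first port of a pair */
        } else {
            dst_ports[previous] = p;    /* pair them both ways */
            dst_ports[p] = (uint8_t)previous;
            previous = -1;
        }
    }
}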
For the cryptographic path, it is the payload of the IPv4 packet that is encrypted or decrypted.

Setup

Building and running this application requires having both the DPDK package and the Quick-
Assist Technology Software Library installed, as well as at least one Intel® QuickAssist Tech-
nology hardware device present in the system.
For more details on how to build and run DPDK and Intel® QuickAssist Technology applica-
tions, please refer to the following documents:
• DPDK Getting Started Guide
• Intel® Communications Chipset 8900 to 8920 Series Software for Linux* Getting Started
Guide (440005)
• Intel® Communications Chipset 8925 to 8955 Series Software for Linux* Getting Started
Guide (523128)
For more details on the actual platforms used to validate this application, as well as perfor-
mance numbers, please refer to the Test Report, which is accessible by contacting your Intel
representative.

Building the Application

Steps to build the application:


1. Set up the following environment variables:
export RTE_SDK=<Absolute path to the DPDK installation folder>
export ICP_ROOT=<Absolute path to the Intel QAT installation folder>

2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc

Refer to the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
cd ${RTE_SDK}/examples/dpdk_qat
make


Running the Application

Intel® QuickAssist Technology Configuration Files

The Intel® QuickAssist Technology configuration files used by the application are located in
the config_files folder in the application folder. The following sets of configuration files are
included in the DPDK package:
• Stargo CRB (single CPU socket): located in the stargo folder
– dh89xxcc_qa_dev0.conf
• Shumway CRB (dual CPU socket): located in the shumway folder
– dh89xxcc_qa_dev0.conf
– dh89xxcc_qa_dev1.conf
• Coleto Creek: located in the coleto folder
– dh895xcc_qa_dev0.conf
The relevant configuration file(s) must be copied to the /etc/ directory.
Please note that any change to these configuration files requires restarting the Intel® Quick-
Assist Technology driver using the following command:
# service qat_service restart

Refer to the following documents for information on the Intel® QuickAssist Technology config-
uration files:
• Intel® Communications Chipset 8900 to 8920 Series Software Programmer’s Guide
• Intel® Communications Chipset 8925 to 8955 Series Software Programmer’s Guide
• Intel® Communications Chipset 8900 to 8920 Series Software for Linux* Getting Started
Guide.
• Intel® Communications Chipset 8925 to 8955 Series Software for Linux* Getting Started
Guide.

Traffic Generator Setup and Application Startup

The application has a number of command line options:


dpdk_qat [EAL options] -- -p PORTMASK [--no-promisc] [--config
'(port,queue,lcore)[,(port,queue,lcore)]']
where,
• -p PORTMASK: Hexadecimal bitmask of ports to configure
• --no-promisc: Disables promiscuous mode for all ports, so that only packets with the
Ethernet MAC destination address set to the Ethernet address of the port are accepted.
By default promiscuous mode is enabled so that packets are accepted regardless of the
packet's Ethernet MAC destination address.
• --config '(port,queue,lcore)[,(port,queue,lcore)]': determines which queues from which
ports are mapped to which cores.


Refer to the L3 Forwarding Sample Application for more detailed descriptions of the --config
command line option.
As an example, to run the application with two ports and two cores, which are using different
Intel® QuickAssist Technology execution engines, performing AES-CBC-128 encryption with
AES-XCBC-MAC-96 hash, the following settings can be used:
• Traffic generator source IP address: 0.9.6.1
• Command line:
./build/dpdk_qat -c 0xff -n 2 -- -p 0x3 --config '(0,0,1),(1,0,2)'

Refer to the DPDK Test Report for more examples of traffic generator setup and the application
startup command lines. If no errors are generated in response to the startup commands, the
application is running correctly.

Quota and Watermark Sample Application

The Quota and Watermark sample application is a simple example of packet processing using
the Data Plane Development Kit (DPDK) that showcases the use of a quota as the maximum
number of packets enqueued/dequeued at a time, and low and high watermarks to signal low and
high ring usage respectively.
Additionally, it shows how ring watermarks can be used to feed back congestion notifications
to data producers, by temporarily stopping the processing of overloaded rings and sending
Ethernet flow control frames.
This sample application is split in two parts:
• qw - The core quota and watermark sample application
• qwctl - A command line tool to alter quota and watermarks while qw is running

Overview

The Quota and Watermark sample application performs forwarding for each packet that is
received on a given port. The destination port is the adjacent port from the enabled port mask,
that is, if the first four ports are enabled (port mask 0xf), ports 0 and 1 forward into each other,
and ports 2 and 3 forward into each other. The MAC addresses of the forwarded Ethernet
frames are not affected.
Internally, packets are pulled from the ports by the master logical core and put on a variable
length processing pipeline, each stage of which is connected by rings, as shown in Fig.
3.19.
An adjustable quota value controls how many packets are being moved through the pipeline
per enqueue and dequeue. Adjustable watermark values associated with the rings control a
back-off mechanism that tries to prevent the pipeline from being overloaded by:
• Stopping enqueuing on rings for which the usage has crossed the high watermark thresh-
old
• Sending Ethernet pause frames


Fig. 3.19: Pipeline Overview

3.28. Quota and Watermark Sample Application 169


DPDK documentation, Release 17.05.0-rc0

• Only resuming enqueuing on a ring once its usage goes below a global low watermark
threshold
This mechanism allows congestion notifications to go up the ring pipeline and eventually lead
to an Ethernet flow control frame being sent to the source.
On top of serving as an example of quota and watermark usage, this application can be used to
benchmark ring-based processing pipeline performance using a traffic generator, as shown
in Fig. 3.20.

Fig. 3.20: Ring-based Processing Pipeline Performance Setup

Compiling the Application

1. Go to the example directory:


export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/quota_watermark

2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc

See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make


Running the Application

The core application, qw, has to be started first.
Once it is up and running, one can alter quota and watermarks while it runs using the control
application, qwctl.

Running the Core Application

The application requires a single command line option:


./qw/build/qw [EAL options] -- -p PORTMASK

where,
-p PORTMASK: A hexadecimal bitmask of the ports to configure
To run the application in a linuxapp environment with four logical cores and ports 0 and 2, issue
the following command:
./qw/build/qw -c f -n 4 -- -p 5

Refer to the DPDK Getting Started Guide for general information on running applications and
the Environment Abstraction Layer (EAL) options.

Running the Control Application

The control application requires a number of command line options:


./qwctl/build/qwctl [EAL options] --proc-type=secondary

The --proc-type=secondary option is necessary for the EAL to properly initialize the control
application to use the same huge pages as the core application and thus be able to access its
rings.
To run the application in a linuxapp environment on logical core 0, issue the following command:
./qwctl/build/qwctl -c 1 -n 4 --proc-type=secondary

Refer to the DPDK Getting Started Guide for general information on running applications and
the Environment Abstraction Layer (EAL) options.
qwctl is an interactive command line that lets the user change variables in a running instance of
qw. The help command gives a list of available commands:
$ qwctl > help

Code Overview

The following sections provide a quick guide to the application’s source code.

Core Application - qw

EAL and Drivers Setup

The EAL arguments are parsed at the beginning of the main() function:


ret = rte_eal_init(argc, argv);
if (ret < 0)
    rte_exit(EXIT_FAILURE, "Cannot initialize EAL\n");

argc -= ret;
argv += ret;

Then, a call to init_dpdk(), defined in init.c, is made to initialize the poll mode drivers:
void
init_dpdk(void)
{
    int ret;

    /* Bind the drivers to usable devices */
    ret = rte_eal_pci_probe();
    if (ret < 0)
        rte_exit(EXIT_FAILURE, "rte_eal_pci_probe(): error %d\n", ret);

    if (rte_eth_dev_count() < 2)
        rte_exit(EXIT_FAILURE, "Not enough Ethernet port available\n");
}

To fully understand this code, it is recommended to study the chapters that relate to the Poll
Mode Driver in the DPDK Getting Started Guide and the DPDK API Reference.

Shared Variables Setup

The quota and low_watermark shared variables are put into an rte_memzone using a call to
setup_shared_variables():
void
setup_shared_variables(void)
{
    const struct rte_memzone *qw_memzone;

    qw_memzone = rte_memzone_reserve(QUOTA_WATERMARK_MEMZONE_NAME,
            2 * sizeof(int), rte_socket_id(), 0);
    if (qw_memzone == NULL)
        rte_exit(EXIT_FAILURE, "%s\n", rte_strerror(rte_errno));

    quota = qw_memzone->addr;
    /* low_watermark is the second int in the memzone, right after quota */
    low_watermark = (unsigned int *) qw_memzone->addr + 1;
}

These two variables are initialized to a default value in main() and can be changed while qw is
running using the qwctl control program.

Application Arguments

The qw application only takes one argument: a port mask that specifies which ports should be
used by the application. At least two ports are needed to run the application and there should
be an even number of ports given in the port mask.
The port mask parsing is done in parse_qw_args(), defined in args.c.
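A minimal sketch of those constraints (hypothetical; the real checks live in parse_qw_args()):

/* At least two ports, and an even number of them, must be enabled */
unsigned int nb_ports = __builtin_popcount(portmask);

if (nb_ports < 2 || (nb_ports & 1))
    rte_exit(EXIT_FAILURE,
            "Invalid port mask: need an even number (>= 2) of ports\n");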


Mbuf Pool Initialization

Once the application’s arguments are parsed, an mbuf pool is created. It contains a set of mbuf
objects that are used by the driver and the application to store network packets:
/* Create a pool of mbuf to store packets */
mbuf_pool = rte_mempool_create("mbuf_pool", MBUF_PER_POOL, MBUF_SIZE, 32,
        sizeof(struct rte_pktmbuf_pool_private),
        rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL,
        rte_socket_id(), 0);

if (mbuf_pool == NULL)
    rte_panic("%s\n", rte_strerror(rte_errno));

The rte_mempool is a generic structure used to handle pools of objects. In this case, it is
necessary to create a pool that will be used by the driver, which expects to have some reserved
space in the mempool structure, sizeof(struct rte_pktmbuf_pool_private) bytes.
The number of allocated pkt mbufs is MBUF_PER_POOL, with a size of MBUF_SIZE each. A
per-lcore cache of 32 mbufs is kept. The memory is allocated on the master lcore's socket,
but it is possible to extend this code to allocate one mbuf pool per socket.
Two callback pointers are also given to the rte_mempool_create() function:
• The first callback pointer is to rte_pktmbuf_pool_init() and is used to initialize the private
data of the mempool, which is needed by the driver. This function is provided by the mbuf
API, but can be copied and extended by the developer.
• The second callback pointer given to rte_mempool_create() is the mbuf initializer.
The default is used, that is, rte_pktmbuf_init(), which is provided in the rte_mbuf library. If a
more complex application wants to extend the rte_pktmbuf structure for its own needs, a new
function derived from rte_pktmbuf_init() can be created.

Ports Configuration and Pairing

Each port in the port mask is configured and a corresponding ring is created in the master
lcore’s array of rings. This ring is the first in the pipeline and will hold the packets directly
coming from the port.
for (port_id = 0; port_id < RTE_MAX_ETHPORTS; port_id++)
    if (is_bit_set(port_id, portmask)) {
        configure_eth_port(port_id);
        init_ring(master_lcore_id, port_id);
    }

pair_ports();

The configure_eth_port() and init_ring() functions are used to configure a port and a ring
respectively and are defined in init.c. They make use of the DPDK APIs defined in rte_eth.h
and rte_ring.h.
pair_ports() builds the port_pairs[] array so that its key-value pairs are a mapping between
reception and transmission ports. It is defined in init.c.


Logical Cores Assignment

The application uses the master logical core to poll all the ports for new packets and enqueue
them on a ring associated with the port.
Each logical core except the last runs pipeline_stage() after a ring for each used port is
initialized on that core. pipeline_stage() on core X dequeues packets from core X-1's rings
and enqueues them on its own rings. See Fig. 3.21.
/* Start pipeline_stage() on all the available slave lcores but the last */
for (lcore_id = 0; lcore_id < last_lcore_id; lcore_id++) {
    if (rte_lcore_is_enabled(lcore_id) && lcore_id != master_lcore_id) {
        for (port_id = 0; port_id < RTE_MAX_ETHPORTS; port_id++)
            if (is_bit_set(port_id, portmask))
                init_ring(lcore_id, port_id);

        rte_eal_remote_launch(pipeline_stage, NULL, lcore_id);
    }
}

The last available logical core runs send_stage(), which is the last stage of the pipeline,
dequeuing packets from the last ring in the pipeline and sending them out on the destination
port set up by pair_ports().
/* Start send_stage() on the last slave core */
rte_eal_remote_launch(send_stage, NULL, last_lcore_id);

Receive, Process and Transmit Packets

In the receive_stage() function running on the master logical core, the main task is to read
ingress packets from the RX ports and enqueue them on the port’s corresponding first ring in
the pipeline. This is done using the following code:
lcore_id = rte_lcore_id();

/* Process each port round robin style */
for (port_id = 0; port_id < RTE_MAX_ETHPORTS; port_id++) {

    if (!is_bit_set(port_id, portmask))
        continue;

    ring = rings[lcore_id][port_id];

    if (ring_state[port_id] != RING_READY) {
        if (rte_ring_count(ring) > *low_watermark)
            continue;
        else
            ring_state[port_id] = RING_READY;
    }

    /* Enqueue received packets on the RX ring */
    nb_rx_pkts = rte_eth_rx_burst(port_id, 0, pkts, *quota);

    ret = rte_ring_enqueue_bulk(ring, (void *) pkts, nb_rx_pkts);
    if (ret == -EDQUOT) {
        ring_state[port_id] = RING_OVERLOADED;
        send_pause_frame(port_id, 1337);
    }
}

Fig. 3.21: Threads and Pipelines

For each port in the port mask, the corresponding ring’s pointer is fetched into ring and that
ring’s state is checked:
• If it is in the RING_READY state, *quota packets are grabbed from the port and put on
the ring. Should this operation make the ring’s usage cross its high watermark, the ring
is marked as overloaded and an Ethernet flow control frame is sent to the source.
• If it is not in the RING_READY state, this port is ignored until the ring’s usage crosses
the *low_watermark value.
The pipeline_stage() function's task is to process and move packets from the preceding pipeline
stage. This thread runs on most of the logical cores to create an arbitrarily long pipeline.
lcore_id = rte_lcore_id();
previous_lcore_id = get_previous_lcore_id(lcore_id);

for (port_id = 0; port_id < RTE_MAX_ETHPORTS; port_id++) {

    if (!is_bit_set(port_id, portmask))
        continue;

    tx = rings[lcore_id][port_id];
    rx = rings[previous_lcore_id][port_id];

    if (ring_state[port_id] != RING_READY) {
        if (rte_ring_count(tx) > *low_watermark)
            continue;
        else
            ring_state[port_id] = RING_READY;
    }

    /* Dequeue up to quota mbuf from rx */
    nb_dq_pkts = rte_ring_dequeue_burst(rx, pkts, *quota);
    if (unlikely(nb_dq_pkts < 0))
        continue;

    /* Enqueue them on tx */
    ret = rte_ring_enqueue_bulk(tx, pkts, nb_dq_pkts);
    if (ret == -EDQUOT)
        ring_state[port_id] = RING_OVERLOADED;
}

The thread’s logic works mostly like receive_stage(), except that packets are moved from ring
to ring instead of port to ring.
In this example, no actual processing is done on the packets, but pipeline_stage() is an ideal
place to perform any processing required by the application.
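As a sketch only, a hypothetical per-packet touch could be slotted between the dequeue and
enqueue calls shown above, reusing the pkts[] array and nb_dq_pkts count from that snippet:

/* Hypothetical processing inserted between dequeue and enqueue */
int i;

for (i = 0; i < nb_dq_pkts; i++) {
    struct ether_hdr *eth = rte_pktmbuf_mtod(pkts[i], struct ether_hdr *);

    /* e.g. inspect or rewrite the Ethernet header here */
    (void)eth;
}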
Finally, the send_stage() function’s task is to read packets from the last ring in a pipeline and
send them on the destination port defined in the port_pairs[] array. It is running on the last
available logical core only.
lcore_id = rte_lcore_id();
previous_lcore_id = get_previous_lcore_id(lcore_id);

for (port_id = 0; port_id < RTE_MAX_ETHPORTS; port_id++) {

    if (!is_bit_set(port_id, portmask))
        continue;

    dest_port_id = port_pairs[port_id];
    tx = rings[previous_lcore_id][port_id];

    if (rte_ring_empty(tx))
        continue;

    /* Dequeue packets from tx and send them */
    nb_dq_pkts = rte_ring_dequeue_burst(tx, (void *) tx_pkts, *quota);
    nb_tx_pkts = rte_eth_tx_burst(dest_port_id, 0, tx_pkts, nb_dq_pkts);
}

For each port in the port mask, up to *quota packets are pulled from the last ring in its pipeline
and sent on the destination port paired with the current port.

Control Application - qwctl

The qwctl application uses the rte_cmdline library to provide the user with an interactive com-
mand line that can be used to modify and inspect parameters in a running qw application.
Those parameters are the global quota and low_watermark value as well as each ring’s built-in
high watermark.

Command Definitions

The available commands are defined in commands.c.


It is advised to use the cmdline sample application user guide as a reference for everything
related to the rte_cmdline library.

Accessing Shared Variables

The setup_shared_variables() function retrieves the shared variables quota and
low_watermark from the rte_memzone previously created by qw.
static void
setup_shared_variables(void)
{
    const struct rte_memzone *qw_memzone;

    qw_memzone = rte_memzone_lookup(QUOTA_WATERMARK_MEMZONE_NAME);
    if (qw_memzone == NULL)
        rte_exit(EXIT_FAILURE, "Couldn't find memzone\n");

    quota = qw_memzone->addr;
    /* same layout as in qw: low_watermark is the second int */
    low_watermark = (unsigned int *) qw_memzone->addr + 1;
}

Timer Sample Application

The Timer sample application is a simple application that demonstrates the use of a timer in
a DPDK application. This application prints some messages from different lcores regularly,
demonstrating the use of timers.


Compiling the Application

1. Go to the example directory:


export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/timer

2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc

See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make

Running the Application

To run the example in linuxapp environment:


$ ./build/timer -c f -n 4

Refer to the DPDK Getting Started Guide for general information on running applications and
the Environment Abstraction Layer (EAL) options.

Explanation

The following sections provide some explanation of the code.

Initialization and Main Loop

In addition to EAL initialization, the timer subsystem must be initialized, by calling the
rte_timer_subsystem_init() function.
/* init EAL */
ret = rte_eal_init(argc, argv);
if (ret < 0)
    rte_panic("Cannot init EAL\n");

/* init RTE timer library */
rte_timer_subsystem_init();

After timer creation (see the next paragraph), the main loop is executed on each slave lcore
using the well-known rte_eal_remote_launch() and also on the master.
/* call lcore_mainloop() on every slave lcore */
RTE_LCORE_FOREACH_SLAVE(lcore_id) {
    rte_eal_remote_launch(lcore_mainloop, NULL, lcore_id);
}

/* call it on master lcore too */
(void) lcore_mainloop(NULL);

The main loop is very simple in this example:


while (1) {
    /*
     * Call the timer handler on each core: as we don't
     * need a very precise timer, only call
     * rte_timer_manage() every ~10ms (at 2 GHz). In a real
     * application, this will enhance performance as
     * reading the HPET timer is not efficient.
     */
    cur_tsc = rte_rdtsc();
    diff_tsc = cur_tsc - prev_tsc;

    if (diff_tsc > TIMER_RESOLUTION_CYCLES) {
        rte_timer_manage();
        prev_tsc = cur_tsc;
    }
}

As explained in the comment, it is better to use the TSC register (as it is a per-lcore register) to
check if the rte_timer_manage() function must be called or not. In this example, the resolution
of the timer is 10 milliseconds.
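As a rough check of that arithmetic, assuming the 2 GHz clock mentioned in the code comment,
the constant would be on the order of:

/* ~10 ms at 2 GHz: 2,000,000,000 cycles/s * 0.01 s = 20,000,000 cycles */
#define TIMER_RESOLUTION_CYCLES 20000000ULL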

Managing Timers

In the main() function, the two timers are initialized. This call to rte_timer_init() is necessary
before doing any other operation on the timer structure.
/* init timer structures */
rte_timer_init(&timer0);
rte_timer_init(&timer1);

Then, the two timers are configured:


• The first timer (timer0) is loaded on the master lcore and expires every second. Since the
PERIODICAL flag is provided, the timer is reloaded automatically by the timer subsystem.
The callback function is timer0_cb().
• The second timer (timer1) is loaded on the next available lcore every 333 ms. The
SINGLE flag means that the timer expires only once and must be reloaded manually if
required. The callback function is timer1_cb().
/* load timer0, every second, on master lcore, reloaded automatically */
hz = rte_get_hpet_hz();
lcore_id = rte_lcore_id();
rte_timer_reset(&timer0, hz, PERIODICAL, lcore_id, timer0_cb, NULL);

/* load timer1, every second/3, on next lcore, reloaded manually */
lcore_id = rte_get_next_lcore(lcore_id, 0, 1);
rte_timer_reset(&timer1, hz/3, SINGLE, lcore_id, timer1_cb, NULL);

The callback for the first timer (timer0) only displays a message until a global counter reaches
20 (after 20 seconds). In this case, the timer is stopped using the rte_timer_stop() function.
/* timer0 callback */
static void
timer0_cb(__attribute__((unused)) struct rte_timer *tim,
          __attribute__((unused)) void *arg)
{
    static unsigned counter = 0;
    unsigned lcore_id = rte_lcore_id();

    printf("%s() on lcore %u\n", __func__, lcore_id);

    /* this timer is automatically reloaded until we decide to
     * stop it, when counter reaches 20 */
    if ((counter++) == 20)
        rte_timer_stop(tim);
}

The callback for the second timer (timer1) displays a message and reloads the timer on the
next lcore, using the rte_timer_reset() function:
/* timer1 callback */
static void
timer1_cb(__attribute__((unused)) struct rte_timer *tim,
          __attribute__((unused)) void *arg)
{
    unsigned lcore_id = rte_lcore_id();
    uint64_t hz;

    printf("%s() on lcore %u\n", __func__, lcore_id);

    /* reload it on another lcore */
    hz = rte_get_hpet_hz();
    lcore_id = rte_get_next_lcore(lcore_id, 0, 1);
    rte_timer_reset(&timer1, hz/3, SINGLE, lcore_id, timer1_cb, NULL);
}

Packet Ordering Application

The Packet Ordering sample app shows the impact of reordering a stream of packets. It is
meant to stress the reorder library with different configurations for performance evaluation.

Overview

The application uses at least three CPU cores:
• RX core (master core) receives traffic from the NIC ports and feeds Worker cores with
traffic through SW queues.
• Worker core (slave core) basically does some light work on the packet. Currently it modifies
the output port of the packet for configurations with more than one port enabled.
• TX Core (slave core) receives traffic from Worker cores through software queues, inserts
out-of-order packets into the reorder buffer, extracts ordered packets from the reorder buffer
and sends them to the NIC ports for transmission.


Compiling the Application

1. Go to the example directory:


export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/packet_ordering

2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc

See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make

Running the Application

Refer to DPDK Getting Started Guide for general information on running applications and the
Environment Abstraction Layer (EAL) options.

Application Command Line

The application execution command line is:


./packet_ordering [EAL options] -- -p PORTMASK [--disable-reorder]

The -c EAL CPU_COREMASK option has to contain at least 3 CPU cores. The first CPU core
in the core mask is the master core and is assigned to the RX core, the last to the TX core and
the rest to Worker cores.
The PORTMASK parameter must enable either one port or an even number of ports. When more
than one port is enabled, traffic is forwarded in pairs. For example, if we enable 4 ports, traffic
goes from port 0 to 1 and from 1 to 0, and likewise for the other pair from 2 to 3 and from 3 to
2, giving the [0,1] and [2,3] pairs.
The --disable-reorder long option does, as its name implies, disable the reordering of traffic,
which should help evaluate the performance impact of reordering.
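For example, assuming the command line above, a minimal run with three cores (RX, one Worker,
TX) and two paired ports might look like:

./packet_ordering -c 0x7 -n 4 -- -p 0x3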

VMDQ and DCB Forwarding Sample Application

The VMDQ and DCB Forwarding sample application is a simple example of packet processing
using the DPDK. The application performs L2 forwarding using VMDQ and DCB to divide the
incoming traffic into queues. The traffic splitting is performed in hardware by the VMDQ and
DCB features of the Intel® 82599 and X710/XL710 Ethernet Controllers.

Overview

This sample application can be used as a starting point for developing a new application that
is based on the DPDK and uses VMDQ and DCB for traffic partitioning.
The VMDQ and DCB filters work on MAC and VLAN traffic to divide the traffic into input queues
on the basis of the Destination MAC address, VLAN ID and VLAN user priority fields. VMDQ


filters split the traffic into 16 or 32 groups based on the Destination MAC and VLAN ID. Then,
DCB places each packet into one of queues within that group, based upon the VLAN user
priority field.
All traffic is read from a single incoming port (port 0) and output on port 1, without any process-
ing being performed. With Intel® 82599 NIC, for example, the traffic is split into 128 queues
on input, where each thread of the application reads from multiple queues. When run with
8 threads, that is, with the -c FF option, each thread receives and forwards packets from 16
queues.
As supplied, the sample application configures the VMDQ feature to have 32 pools with
4 queues each, as indicated in Fig. 3.22. The Intel® 82599 10 Gigabit Ethernet Controller
NIC also supports the splitting of traffic into 16 pools of 8 queues, while the Intel® X710 or
XL710 Ethernet Controller NICs support many configurations of VMDQ pools of 4 or 8 queues
each. For simplicity, only 16 or 32 pools are supported in this sample, and the number of queues
for each VMDQ pool can be changed by setting CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM
in the config/common_* file. The nb-pools, nb-tcs and enable-rss parameters can be passed on
the command line, after the EAL parameters:
./build/vmdq_dcb [EAL options] -- -p PORTMASK --nb-pools NP --nb-tcs TC --enable-rss

where, NP can be 16 or 32, TC can be 4 or 8, rss is disabled by default.

Fig. 3.22: Packet Flow Through the VMDQ and DCB Sample Application

In Linux* user space, the application can display statistics with the number of packets received
on each queue. To have the application display the statistics, send a SIGHUP signal to the
running application process.
The VMDQ and DCB Forwarding sample application is in many ways simpler than the L2
Forwarding application (see L2 Forwarding Sample Application (in Real and Virtualized Envi-
ronments)) as it performs unidirectional L2 forwarding of packets from one port to a second
port. No command-line options are taken by this application apart from the standard EAL
command-line options.

Note: Since VMD queues are being used for VMM, this application works correctly when VT-d
is disabled in the BIOS or Linux* kernel (intel_iommu=off).

Compiling the Application

1. Go to the examples directory:


export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/vmdq_dcb

2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc

See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make


Running the Application

To run the example in a linuxapp environment:


user@target:~$ ./build/vmdq_dcb -c f -n 4 -- -p 0x3 --nb-pools 32 --nb-tcs 4

Refer to the DPDK Getting Started Guide for general information on running applications and
the Environment Abstraction Layer (EAL) options.

Explanation

The following sections provide some explanation of the code.

Initialization

The EAL, driver and PCI configuration is performed largely as in the L2 Forwarding sample
application, as is the creation of the mbuf pool. See L2 Forwarding Sample Application (in Real
and Virtualized Environments). Where this example application differs is in the configuration of
the NIC port for RX.
The VMDQ and DCB hardware feature is configured at port initialization time by setting the
appropriate values in the rte_eth_conf structure passed to the rte_eth_dev_configure() API.
Initially in the application, a default structure is provided for VMDQ and DCB configuration to
be filled in later by the application.
/* empty vmdq+dcb configuration structure. Filled in programmatically */
static const struct rte_eth_conf vmdq_dcb_conf_default = {
.rxmode = {
.mq_mode = ETH_MQ_RX_VMDQ_DCB,
.split_hdr_size = 0,
.header_split = 0, /**< Header Split disabled */
.hw_ip_checksum = 0, /**< IP checksum offload disabled */
.hw_vlan_filter = 0, /**< VLAN filtering disabled */
.jumbo_frame = 0, /**< Jumbo Frame Support disabled */
},
.txmode = {
.mq_mode = ETH_MQ_TX_VMDQ_DCB,
},
/*
* should be overridden separately in code with
* appropriate values
*/
.rx_adv_conf = {
.vmdq_dcb_conf = {
.nb_queue_pools = ETH_32_POOLS,
.enable_default_pool = 0,
.default_pool = 0,
.nb_pool_maps = 0,
.pool_map = {{0, 0},},
.dcb_tc = {0},
},
.dcb_rx_conf = {
.nb_tcs = ETH_4_TCS,
/** Traffic class each UP mapped to. */
.dcb_tc = {0},
},
.vmdq_rx_conf = {
.nb_queue_pools = ETH_32_POOLS,
.enable_default_pool = 0,

.default_pool = 0,
.nb_pool_maps = 0,
.pool_map = {{0, 0},},
},
},
.tx_adv_conf = {
.vmdq_dcb_tx_conf = {
.nb_queue_pools = ETH_32_POOLS,
.dcb_tc = {0},
},
},
};

The get_eth_conf() function fills in an rte_eth_conf structure with the appropriate values, based
on the global vlan_tags array, and dividing up the possible user priority values equally among
the individual queues (also referred to as traffic classes) within each pool. With Intel® 82599
NIC, if the number of pools is 32, then the user priority fields are allocated 2 to a queue.
If 16 pools are used, then each of the 8 user priority fields is allocated to its own queue
within the pool. With Intel® X710/XL710 NICs, if the number of tcs is 4 and the number of queues
in a pool is 8, then the user priority fields are allocated 2 to one tc; a tc then has 2 queues
mapped to it, and RSS determines the destination queue among those 2. For the VLAN IDs, each
one can be allocated to possibly multiple pools of queues, so the pools parameter in the
rte_eth_vmdq_dcb_conf structure is specified as a bitmask value. For destination MAC, each
VMDQ pool will be assigned with a MAC address. In this sample, each VMDQ pool is assigned
to the MAC like 52:54:00:12:<port_id>:<pool_id>, that is, the MAC of VMDQ pool 2 on port 1
is 52:54:00:12:01:02.
const uint16_t vlan_tags[] = {
0, 1, 2, 3, 4, 5, 6, 7,
8, 9, 10, 11, 12, 13, 14, 15,
16, 17, 18, 19, 20, 21, 22, 23,
24, 25, 26, 27, 28, 29, 30, 31
};

/* pool mac addr template, pool mac addr is like: 52 54 00 12 port# pool# */
static struct ether_addr pool_addr_template = {
.addr_bytes = {0x52, 0x54, 0x00, 0x12, 0x00, 0x00}
};

/* Builds up the correct configuration for vmdq+dcb based on the vlan tags array
* given above, and the number of traffic classes available for use. */
static inline int
get_eth_conf(struct rte_eth_conf *eth_conf)
{
struct rte_eth_vmdq_dcb_conf conf;
struct rte_eth_vmdq_rx_conf vmdq_conf;
struct rte_eth_dcb_rx_conf dcb_conf;
struct rte_eth_vmdq_dcb_tx_conf tx_conf;
uint8_t i;

conf.nb_queue_pools = (enum rte_eth_nb_pools)num_pools;


vmdq_conf.nb_queue_pools = (enum rte_eth_nb_pools)num_pools;
tx_conf.nb_queue_pools = (enum rte_eth_nb_pools)num_pools;
conf.nb_pool_maps = num_pools;
vmdq_conf.nb_pool_maps = num_pools;
conf.enable_default_pool = 0;
vmdq_conf.enable_default_pool = 0;
conf.default_pool = 0; /* set explicit value, even if not used */
vmdq_conf.default_pool = 0;

for (i = 0; i < conf.nb_pool_maps; i++) {

conf.pool_map[i].vlan_id = vlan_tags[i];
vmdq_conf.pool_map[i].vlan_id = vlan_tags[i];
conf.pool_map[i].pools = 1UL << i ;
vmdq_conf.pool_map[i].pools = 1UL << i;
}
for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++){
conf.dcb_tc[i] = i % num_tcs;
dcb_conf.dcb_tc[i] = i % num_tcs;
tx_conf.dcb_tc[i] = i % num_tcs;
}
dcb_conf.nb_tcs = (enum rte_eth_nb_tcs)num_tcs;
(void)(rte_memcpy(eth_conf, &vmdq_dcb_conf_default, sizeof(*eth_conf)));
(void)(rte_memcpy(&eth_conf->rx_adv_conf.vmdq_dcb_conf, &conf,
sizeof(conf)));
(void)(rte_memcpy(&eth_conf->rx_adv_conf.dcb_rx_conf, &dcb_conf,
sizeof(dcb_conf)));
(void)(rte_memcpy(&eth_conf->rx_adv_conf.vmdq_rx_conf, &vmdq_conf,
sizeof(vmdq_conf)));
(void)(rte_memcpy(&eth_conf->tx_adv_conf.vmdq_dcb_tx_conf, &tx_conf,
sizeof(tx_conf)));
if (rss_enable) {
eth_conf->rxmode.mq_mode= ETH_MQ_RX_VMDQ_DCB_RSS;
eth_conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP |
ETH_RSS_UDP |
ETH_RSS_TCP |
ETH_RSS_SCTP;
}
return 0;
}

......

/* Set mac for each pool. */
for (q = 0; q < num_pools; q++) {
struct ether_addr mac;
mac = pool_addr_template;
mac.addr_bytes[4] = port;
mac.addr_bytes[5] = q;
printf("Port %u vmdq pool %u set mac %02x:%02x:%02x:%02x:%02x:%02x\n",
port, q,
mac.addr_bytes[0], mac.addr_bytes[1],
mac.addr_bytes[2], mac.addr_bytes[3],
mac.addr_bytes[4], mac.addr_bytes[5]);
retval = rte_eth_dev_mac_addr_add(port, &mac,
q + vmdq_pool_base);
if (retval) {
printf("mac addr add failed at pool %d\n", q);
return retval;
}
}

Once the network port has been initialized using the correct VMDQ and DCB values, the
initialization of the port’s RX and TX hardware rings is performed similarly to that in the L2
Forwarding sample application. See L2 Forwarding Sample Application (in Real and Virtualized
Environments) for more information.

Statistics Display

When run in a linuxapp environment, the VMDQ and DCB Forwarding sample application can
display statistics showing the number of packets read from each RX queue. This is provided


by way of a signal handler for the SIGHUP signal, which simply prints to standard output the
packet counts in grid form. Each row of the output is a single pool with the columns being the
queue number within that pool.
To generate the statistics output, use the following command:
user@host$ sudo killall -HUP vmdq_dcb_app

Please note that the statistics output will appear on the terminal where the vmdq_dcb_app is
running, rather than the terminal from which the HUP signal was sent.

Vhost Sample Application

The vhost sample application demonstrates integration of the Data Plane Development Kit
(DPDK) with the Linux* KVM hypervisor by implementing the vhost-net offload API. The sam-
ple application performs simple packet switching between virtual machines based on Media
Access Control (MAC) address or Virtual Local Area Network (VLAN) tag. The splitting of
Ethernet traffic from an external switch is performed in hardware by the Virtual Machine De-
vice Queues (VMDQ) and Data Center Bridging (DCB) features of the Intel® 82599 10 Gigabit
Ethernet Controller.

Testing steps

This section shows the steps to test a typical PVP case with this vhost-switch sample,
where packets are received from the physical NIC port first and enqueued to the VM's Rx
queue. Through the guest testpmd's default forwarding mode (io forward), those packets will
be put into the Tx queue. The vhost-switch example, in turn, gets the packets and puts them
back to the same physical NIC port.

Build

Follow the Getting Started Guide for Linux for generic information about environment setup and
building DPDK from source.
In this example, you need to build DPDK both on the host and inside the guest. You also need to
build this example.
export RTE_SDK=/path/to/dpdk_source
export RTE_TARGET=x86_64-native-linuxapp-gcc

cd ${RTE_SDK}/examples/vhost
make

Start the vswitch example

./vhost-switch -c f -n 4 --socket-mem 1024 \


-- --socket-file /tmp/sock0 --client \
...

Check the Parameters section for an explanation of what those parameters mean.


Start the VM

qemu-system-x86_64 -machine accel=kvm -cpu host \


-m $mem -object memory-backend-file,id=mem,size=$mem,mem-path=/dev/hugepages,share=on \
-mem-prealloc -numa node,memdev=mem \
\
-chardev socket,id=char1,path=/tmp/sock0,server \
-netdev type=vhost-user,id=hostnet1,chardev=char1 \
-device virtio-net-pci,netdev=hostnet1,id=net1,mac=52:54:00:00:00:14 \
...

Note: For basic vhost-user support, QEMU 2.2 (or above) is required. For some specific
features, a higher version might be needed, such as QEMU 2.7 (or above) for the reconnect
feature.

Run testpmd inside guest

Make sure you have DPDK built inside the guest. Also make sure the corresponding virtio-net
PCI device is bound to a UIO driver, which could be done by:
modprobe uio_pci_generic
$RTE_SDK/usertools/dpdk-devbind.py -b=uio_pci_generic 0000:00:04.0

Then start testpmd for packet forwarding testing.


./x86_64-native-gcc/app/testpmd -c 0x3 -- -i
> start tx_first

Inject packets

While a virtio-net device is connected to vhost-switch, a VLAN tag starting at 1000 is assigned
to it. So make sure to configure your packet generator with the right MAC and VLAN tag; you
should then see the following log from the vhost-switch console, which means it is working:
VHOST_DATA: (0) mac 52:54:00:00:00:14 and vlan 1000 registered

Parameters

--socket-file path Specifies the vhost-user socket file path.
--client DPDK vhost-user will act in client mode when this option is given. In client
mode, QEMU will create the socket file; otherwise, DPDK will create it. Put simply, it is the
server that creates the socket file.
--vm2vm mode The vm2vm parameter sets the mode of packet switching between guests in
the host.
• 0 disables vm2vm, implying that a VM's packets will always go to the NIC port.
• 1 means normal MAC lookup packet routing.
• 2 means hardware mode packet forwarding between guests; it allows packets to go to the
NIC port, and the hardware L2 switch determines which guest the packet should be forwarded
to, or whether it needs to be sent externally, based on the packet's destination MAC address
and VLAN tag.


--mergeable 0|1 Set 0/1 to disable/enable the mergeable Rx feature. It's disabled by default.
--stats interval The stats parameter controls the printing of virtio-net device statistics. The
parameter specifies an interval (in seconds) at which to print statistics, with an interval of 0
seconds disabling statistics.
--rx-retry 0|1 The rx-retry option enables/disables enqueue retries when the guest's Rx queue
is full. This feature resolves a packet loss that is observed at high data rates, by allowing it to
delay and retry in the receive path. This option is enabled by default.
--rx-retry-num num The rx-retry-num option specifies the number of retries on an Rx burst; it
takes effect only when rx retry is enabled. The default value is 4.
--rx-retry-delay msec The rx-retry-delay option specifies the timeout (in microseconds) be-
tween retries on an RX burst; it takes effect only when rx retry is enabled. The default value is
15.
--dequeue-zero-copy Dequeue zero copy will be enabled when this option is given.
--vlan-strip 0|1 VLAN strip option is removed, because different NICs have different behaviors
when disabling VLAN strip. Such a feature, which heavily depends on hardware, should be
removed from this example to reduce confusion. Now, VLAN strip is enabled and cannot be
disabled.

Common Issues

• QEMU fails to allocate memory on hugetlbfs, with an error like the following:
file_ram_alloc: can't mmap RAM pages: Cannot allocate memory

When running QEMU the above error indicates that it has failed to allocate memory for
the Virtual Machine on the hugetlbfs. This is typically due to insufficient hugepages being
free to support the allocation request. The number of free hugepages can be checked as
follows:
cat /sys/kernel/mm/hugepages/hugepages-<pagesize>/nr_hugepages

The command above indicates how many hugepages are free to support QEMU's allocation
request; if more are needed, see the example after this list.
• vhost-user will not work with QEMU without the -mem-prealloc option
The current implementation works properly only when the guest memory is pre-allocated.
• vhost-user will not work with a QEMU version without shared memory mapping:
Make sure share=on QEMU option is given.
• Failed to build DPDK in VM
Make sure “-cpu host” QEMU option is given.
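Regarding the hugepage issue above, as an illustration, additional 2 MB hugepages can typically
be reserved at runtime with:

echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages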


Netmap Compatibility Sample Application

Introduction

The Netmap compatibility library provides a minimal set of APIs to give programs written
against the Netmap APIs the ability to be run, with minimal changes to their source code,
using the DPDK to perform the actual packet I/O.
Since Netmap applications use regular system calls, like open(), ioctl() and mmap() to
communicate with the Netmap kernel module performing the packet I/O, the compat_netmap
library provides a set of similar APIs to use in place of those system calls, effectively turning a
Netmap application into a DPDK application.
The provided library is currently minimal and doesn’t support all the features that Netmap
supports, but is enough to run simple applications, such as the bridge example detailed below.
Knowledge of Netmap is required to understand the rest of this section. Please refer to the
Netmap distribution for details about Netmap.

Available APIs

The library provides the following drop-in replacements for system calls usually used in Netmap
applications:
• rte_netmap_close()
• rte_netmap_ioctl()
• rte_netmap_open()
• rte_netmap_mmap()
• rte_netmap_poll()
They use the same signature as their libc counterparts, and can be used as drop-in
replacements in most cases.
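As a hedged illustration of the drop-in idea, a Netmap-style open/register sequence might look
like the following (the nmreq structure and NIOCREGIF ioctl come from the Netmap API itself;
only the rte_netmap_* names come from this library, and the exact usage is a sketch):

/* Illustrative only: rte_netmap_* calls replacing their libc counterparts */
int fd = rte_netmap_open("/dev/netmap", O_RDWR);

if (fd >= 0) {
    struct nmreq req;

    memset(&req, 0, sizeof(req));
    req.nr_version = NETMAP_API;
    snprintf(req.nr_name, sizeof(req.nr_name), "%s", "eth0");

    rte_netmap_ioctl(fd, NIOCREGIF, &req);  /* register the interface */
    /* ... map the rings with rte_netmap_mmap(),
     *     wait for packets with rte_netmap_poll() ... */
    rte_netmap_close(fd);
}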

Caveats

Given the difference between the way Netmap and the DPDK approach packet I/O, there are
caveats and limitations to be aware of when trying to use the compat_netmap library, the
most important of these are listed below. These may change as the library is updated:
• Any system call that can potentially affect file descriptors cannot be used with a descriptor
returned by the rte_netmap_open() function.
Note that:
• The rte_netmap_mmap() function merely returns the address of a DPDK memzone.
The address, length, flags, offset, and other arguments are ignored.
• The rte_netmap_poll() function only supports infinite (negative) or zero time outs.
It effectively turns calls to the poll() system call made in a Netmap application into
polling of the DPDK ports, changing the semantics of the usual POSIX defined poll.


• Not all of Netmap’s features are supported: host rings, slot flags and so on are not
supported or are simply not relevant in the DPDK model.
• The Netmap manual page states that “a device obtained through /dev/netmap also sup-
ports the ioctl supported by network devices”. This is not the case with this compatibility
layer.
• The Netmap kernel module exposes a sysfs interface to change some internal parame-
ters, such as the size of the shared memory region. This interface is not available when
using this compatibility layer.

Porting Netmap Applications

Porting Netmap applications typically involves two major steps:


• Changing the system calls to use their compat_netmap library counterparts.
• Adding further DPDK initialization code.
Since the compat_netmap functions have the same signature as the usual libc calls, the
change is trivial in most cases.
The usual DPDK initialization code involving rte_eal_init() and rte_eal_pci_probe()
has to be added to the Netmap application in the same way it is used in all other DPDK sample
applications. Please refer to the DPDK Programmer’s Guide and example source code for
details about initialization.
In addition to the regular DPDK initialization code, the ported application needs to call
initialization functions for the compat_netmap library, namely rte_netmap_init() and
rte_netmap_init_port().
These two initialization functions take compat_netmap specific data structures as parameters:
struct rte_netmap_conf and struct rte_netmap_port_conf. The structures’ fields
are Netmap related and are self-explanatory for developers familiar with Netmap. They are
defined in $RTE_SDK/examples/netmap_compat/lib/compat_netmap.h.
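A hedged sketch of that extra initialization is given below; the structures' fields are elided
and the argument list of rte_netmap_init_port() is an assumption, so treat compat_netmap.h as
the authoritative reference:

struct rte_netmap_conf conf;
struct rte_netmap_port_conf port_conf;

memset(&conf, 0, sizeof(conf));
memset(&port_conf, 0, sizeof(port_conf));
/* ... fill in the Netmap-related fields of both structures ... */

ret = rte_eal_init(argc, argv);
if (ret < 0)
    rte_exit(EXIT_FAILURE, "rte_eal_init() failed\n");

rte_eal_pci_probe();                   /* usual DPDK device probing */

rte_netmap_init(&conf);                /* library-wide initialization */
rte_netmap_init_port(0, &port_conf);   /* per-port initialization (port 0) */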
The bridge application is an example largely based on the bridge example shipped with
the Netmap distribution. It shows how a minimal Netmap application with minimal and
straightforward source code changes can be run on top of the DPDK. Please refer to
$RTE_SDK/examples/netmap_compat/bridge/bridge.c for an example of a ported
application.

Compiling the “bridge” Sample Application

1. Go to the example directory:


export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/netmap_compat

2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc

See the DPDK Getting Started Guide for Linux for possible RTE_TARGET values.
3. Build the application:
make


Running the “bridge” Sample Application

The application requires a single command line option:


./build/bridge [EAL options] -- -i INTERFACE_A [-i INTERFACE_B]

where,
• -i INTERFACE: Interface (DPDK port number) to use.
If a single -i parameter is given, the interface will send back all the traffic it receives. If
two -i parameters are given, the two interfaces form a bridge, where traffic received on
one interface is replicated and sent to the other interface.
For example, to run the application in a linuxapp environment using port 0 and 2:
./build/bridge [EAL options] -- -i 0 -i 2

Refer to the DPDK Getting Started Guide for Linux for general information on running applica-
tions and the Environment Abstraction Layer (EAL) options.
Note that unlike a traditional bridge or the l2fwd sample application, no MAC address changes
are done on the frames. Do not forget to take this into account when configuring a traffic
generator and testing this sample application.

Internet Protocol (IP) Pipeline Application

Application overview

The Internet Protocol (IP) Pipeline application is intended to be a vehicle for rapid development
of packet processing applications running on multi-core CPUs.
The application provides a library of reusable functional blocks called pipelines. These
pipelines can be seen as prefabricated blocks that can be instantiated and inter-connected
through packet queues to create complete applications (super-pipelines).
Pipelines are created and inter-connected through the application configuration file. By using
different configuration files, different applications are effectively created, therefore this appli-
cation can be seen as an application generator. The configuration of each pipeline can be
updated at run-time through the application Command Line Interface (CLI).
Main application components are:
A Library of reusable pipelines
• Each pipeline represents a functional block, e.g. flow classification, firewall, routing,
master, etc.
• Each pipeline type can be instantiated several times in the same application, with each
instance configured separately and mapped to a single CPU core. Each CPU core can
run one or several pipeline instances, which can be of the same or different types.
• Pipeline instances are inter-connected through packet queues (for packet processing)
and message queues (for run-time configuration).
• Pipelines are implemented using DPDK Packet Framework.
• More pipeline types can always be built and added to the existing pipeline types.


The Configuration file


• The configuration file defines the application structure. By using different configuration
files, different applications are created.
• All the application resources are created and configured through the application configu-
ration file: pipeline instances, buffer pools, links (i.e. network interfaces), hardware device
RX/TX queues, software queues, traffic manager devices, EAL startup arguments, etc.
• The configuration file syntax is “define by reference”, meaning that resources are defined
as they are referenced. First time a resource name is detected, it is registered with
default parameters. Optionally, the resource parameters can be further refined through a
configuration file section dedicated to that resource.
Command Line Interface (CLI)
• Global CLI commands: link configuration, etc.
• Common pipeline CLI commands: ping (keep-alive), statistics, etc.
• Pipeline type specific CLI commands: used to configure instances of specific pipeline
type. These commands are registered with the application when the pipeline type is
registered. For example, the commands for routing pipeline instances include: route
add, route delete, route list, etc.
• CLI commands can be grouped into scripts that can be invoked at initialization and at
runtime.

Design goals

Rapid development

This application enables rapid development through quick connectivity of standard components
called pipelines. These components are built using DPDK Packet Framework and encapsulate
packet processing features at different levels: ports, tables, actions, pipelines and complete
applications.
Pipeline instances are instantiated, configured and inter-connected through low complexity
configuration files loaded during application initialization. Each pipeline instance is mapped to
a single CPU core, with each CPU core able to run one or multiple pipeline instances of same
or different types. By loading a different configuration file, a different application is effectively
started.

Flexibility

Each packet processing application is typically represented as a chain of functional stages,
which is often called the functional pipeline of the application. These stages are mapped to
CPU cores to create chains of CPU cores (pipeline model), clusters of CPU cores (run-to-
completion model) or chains of clusters of CPU cores (hybrid model).
This application allows all the above programming models. By applying changes to the con-
figuration file, the application provides the flexibility to reshuffle its building blocks in different
ways until the configuration providing the best performance is identified.


Move pipelines around

The mapping of pipeline instances to CPU cores can be reshuffled through the configuration
file. One or several pipeline instances can be mapped to the same CPU core.

Fig. 3.23: Example of moving pipeline instances across different CPU cores

Move tables around

There is some degree of flexibility for moving tables from one pipeline instance to another.
Based on the configuration arguments passed to each pipeline instance in the configuration
file, specific tables can be enabled or disabled. This way, a specific table can be “moved” from
pipeline instance A to pipeline instance B by simply disabling its associated functionality for
pipeline instance A while enabling it for pipeline instance B.
Due to requirement to have simple syntax for the configuration file, moving tables across dif-
ferent pipeline instances is not as flexible as the mapping of pipeline instances to CPU cores,
or mapping actions to pipeline tables. Complete flexibility in moving tables from one pipeline to
another could be achieved through a complex pipeline description language that would detail
the structural elements of the pipeline (ports, tables and actions) and their connectivity, result-
ing in complex syntax for the configuration file, which is not acceptable. Good configuration file
readability through simple syntax is preferred.
Example: the IP routing pipeline can run the routing function only (with ARP function run by
a different pipeline instance), or it can run both the routing and ARP functions as part of the
same pipeline instance.

Fig. 3.24: Example of moving tables across different pipeline instances

Move actions around

When it makes sense, packet processing actions can be moved from one pipeline instance
to another. Based on the configuration arguments passed to each pipeline instance in the
configuration file, specific actions can be enabled or disabled. This way, a specific action can
be “moved” from pipeline instance A to pipeline instance B by simply disabling its associated
functionality for pipeline instance A while enabling it for pipeline instance B.
Example: The flow actions of accounting, traffic metering, application identification, NAT, etc
can be run as part of the flow classification pipeline instance or split across several flow ac-
tions pipeline instances, depending on the number of flow instances and their compute require-
ments.

Fig. 3.25: Example of moving actions across different tables and pipeline instances


Performance

Performance of the application is the highest priority requirement. Flexibility is not provided at
the expense of performance.
The purpose of flexibility is to provide an incremental development methodology that allows
monitoring the performance evolution:
• Apply incremental changes in the configuration (e.g. mapping of pipeline instances to
CPU cores) in order to identify the configuration providing the best performance for a
given application;
• Add more processing incrementally (e.g. by enabling more actions for specific pipeline in-
stances) until the application is feature complete while checking the performance impact
at each step.

Debug capabilities

The application provides a significant set of debug capabilities:


• Command Line Interface (CLI) support for statistics polling: pipeline instance ping (keep-
alive checks), pipeline instance statistics per input port/output port/table, link statistics,
etc;
• Logging: Turn on/off application log messages based on priority level;

Running the application

The application startup command line is:


ip_pipeline [-f CONFIG_FILE] [-s SCRIPT_FILE] -p PORT_MASK [-l LOG_LEVEL]

The application startup arguments are:


-f CONFIG_FILE
• Optional: Yes
• Default: ./config/ip_pipeline.cfg
• Argument: Path to the configuration file to be loaded by the application. Please refer to
the Configuration file syntax for details on how to write the configuration file.
-s SCRIPT_FILE
• Optional: Yes
• Default: Not present
• Argument: Path to the CLI script file to be run by the master pipeline at application
startup. No CLI script file will be run at startup if this argument is not present.
-p PORT_MASK
• Optional: No
• Default: N/A


• Argument: Hexadecimal mask of NIC port IDs to be used by the application. The first port
enabled in this mask will be referenced as LINK0 as part of the application configuration
file, the next port as LINK1, etc.
-l LOG_LEVEL
• Optional: Yes
• Default: 1 (High priority)
• Argument: Log level to determine which application messages are to be printed to stan-
dard output. Available log levels are: 0 (None), 1 (High priority), 2 (Low priority). Only
application messages whose priority is higher than or equal to the application log level
will be printed.
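As an illustration, a hypothetical invocation (the file names are placeholders, not shipped
examples) that loads a custom configuration file, runs a CLI script at startup, enables the
first two NIC ports and selects low priority logging could be:
ip_pipeline -f ./config/my_app.cfg -s ./config/my_app.sh -p 0x3 -l 2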

Application stages

Configuration

During this stage, the application configuration file is parsed and its content is loaded into the
application data structures. In case of any configuration file parse error, an error message is
displayed and the application is terminated. Please refer to the Configuration file syntax for a
description of the application configuration file format.

Configuration checking

In the absence of any parse errors, the loaded content of application data structures is checked
for overall consistency. In case of any configuration check error, an error message is displayed
and the application is terminated.

Initialization

During this stage, the application resources are initialized and the handles to access them are
saved into the application data structures. In case of any initialization error, an error message
is displayed and the application is terminated.
The typical resources to be initialized are: pipeline instances, buffer pools, links (i.e. network
interfaces), hardware device RX/TX queues, software queues, traffic management devices,
etc.

Run-time

Each CPU core runs the pipeline instances assigned to it in time sharing mode and in round
robin order:
1. Packet processing task: The pipeline run-time code is typically a packet processing task
built on top of the DPDK Packet Framework rte_pipeline library, which reads bursts of packets
from the pipeline input ports, performs table lookups and executes the identified actions
for all tables in the pipeline, with packets eventually written to pipeline output ports or
dropped.

3.34. Internet Protocol (IP) Pipeline Application 195


DPDK documentation, Release 17.05.0-rc0

2. Message handling task: Each CPU core will also periodically execute the message han-
dling code of each of the pipelines mapped to it. The pipeline message handling code is
processing the messages that are pending in the pipeline input message queues, which
are typically sent by the master CPU core for the on-the-fly pipeline configuration: check
that pipeline is still alive (ping), add/delete entries in the pipeline tables, get statistics, etc.
The frequency of executing the message handling code is usually much smaller than the
frequency of executing the packet processing work.
Please refer to the PIPELINE section for more details about the application pipeline module
encapsulation.
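The per-core dispatch described above can be summarized with the following minimal C
sketch; the loop structure and the handle_pipeline_msgs() helper are hypothetical simplifica-
tions, not the application's actual code:

#include <stdint.h>
#include <rte_pipeline.h>

#define MSG_PERIOD 1024 /* handle messages once every 1024 loop iterations */

void handle_pipeline_msgs(struct rte_pipeline *p); /* hypothetical helper */

void
core_dispatch_loop(struct rte_pipeline **pipelines, unsigned int n)
{
    uint64_t iter;
    unsigned int i;

    for (iter = 0; ; iter++) {
        /* Packet processing task: run each pipeline mapped to this core */
        for (i = 0; i < n; i++)
            rte_pipeline_run(pipelines[i]);

        /* Message handling task: executed much less frequently */
        if ((iter % MSG_PERIOD) == 0)
            for (i = 0; i < n; i++)
                handle_pipeline_msgs(pipelines[i]);
    }
}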

Configuration file syntax

Syntax overview

The syntax of the configuration file is designed to be simple, which favors readability. The
configuration file is parsed using the DPDK library librte_cfgfile, which supports simple INI file
format for configuration files.
As a result, the configuration file is split into several sections, with each section containing one
or more entries. The scope of each entry is its section, and each entry specifies a variable that
is assigned a specific value. Any text after the ; character is considered a comment and is
therefore ignored.
The following are application specific: number of sections, name of each section, number of
entries of each section, name of the variables used for each section entry, the value format
(e.g. signed/unsigned integer, string, etc) and range of each section entry variable.
Generic example of configuration file section:
[<section_name>]

<variable_name_1> = <value_1>

; ...

<variable_name_N> = <value_N>


Application resources present in the configuration file

Table 3.3: Application resource names in the configuration file


Resource type               Format                      Examples
Pipeline                    PIPELINE<ID>                PIPELINE0, PIPELINE1
Mempool                     MEMPOOL<ID>                 MEMPOOL0, MEMPOOL1
Link (network interface)    LINK<ID>                    LINK0, LINK1
Link RX queue               RXQ<LINK_ID>.<QUEUE_ID>     RXQ0.0, RXQ1.5
Link TX queue               TXQ<LINK_ID>.<QUEUE_ID>     TXQ0.0, TXQ1.5
Software queue              SWQ<ID>                     SWQ0, SWQ1
Traffic Manager             TM<LINK_ID>                 TM0, TM1
KNI (kernel NIC interface)  KNI<LINK_ID>                KNI0, KNI1
Source                      SOURCE<ID>                  SOURCE0, SOURCE1
Sink                        SINK<ID>                    SINK0, SINK1
Message queue               MSGQ<ID>                    MSGQ0, MSGQ1
                            MSGQ-REQ-PIPELINE<ID>       MSGQ-REQ-PIPELINE2
                            MSGQ-RSP-PIPELINE<ID>       MSGQ-RSP-PIPELINE2
                            MSGQ-REQ-CORE-<CORE_ID>     MSGQ-REQ-CORE-s0c1
                            MSGQ-RSP-CORE-<CORE_ID>     MSGQ-RSP-CORE-s0c1
LINK instances are created implicitly based on the PORT_MASK application startup argument.
LINK0 is the first port enabled in the PORT_MASK, LINK1 is the next one, etc. The LINK ID
is different from the DPDK PMD-level NIC port ID, which is the actual position in the bitmask
mentioned above. For example, if bit 5 is the first bit set in the bitmask, then LINK0 has the
PMD ID of 5. This mechanism creates a contiguous LINK ID space and isolates the
configuration file against changes in the board PCIe slots where NICs are plugged in.
RXQ, TXQ, TM and KNI instances have the LINK ID as part of their name. For example, RXQ2.1,
TXQ2.1 and TM2 are all associated with LINK2.

Rules to parse the configuration file

The main rules used to parse the configuration file are:


1. Application resource name determines the type of resource based on the name prefix.
Example: all software queues need to start with SWQ prefix, so SWQ0 and SWQ5 are valid
software queue names.
2. An application resource is defined by creating a configuration file section with its name.
The configuration file section allows fine tuning on any of the resource parameters. Some
resource parameters are mandatory, in which case it is required to have them specified
as part of the section, while some others are optional, in which case they get assigned
their default value when not present.
Example: section SWQ0 defines a software queue named SWQ0, whose parameters are
detailed as part of this section.
3. An application resource can also be defined by referencing it. Referencing a resource
takes place by simply using its name as part of the value assigned to a variable in any
configuration file section. In this case, the resource is registered with all its parameters


having their default values. Optionally, a section with the resource name can be added
to the configuration file to fine tune some or all of the resource parameters.
Example: in section PIPELINE3, variable pktq_in includes SWQ5 as part of its list,
which results in defining a software queue named SWQ5; when there is no SWQ5 section
present in the configuration file, SWQ5 gets registered with default parameters.
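As a minimal sketch of these rules (the values shown are illustrative, not a complete working
configuration), a pipeline referencing queues that get registered on first use could look like:

[PIPELINE0]
type = MASTER
core = 0

[PIPELINE1]
type = PASS-THROUGH
core = 1
pktq_in = RXQ0.0 SWQ0
pktq_out = TXQ0.0 SWQ1

; Optional fine tuning of SWQ0, which was already registered by PIPELINE1 above
[SWQ0]
size = 512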

PIPELINE section

Table 3.4: Configuration file PIPELINE section (1/2)

• type: Pipeline type. Defines the functionality to be executed. (Optional: NO; Range: see
“List of pipeline types”; Default value: N/A)
• core: CPU core to run the current pipeline. (Optional: YES; Range: see “CPU core
notation”; Default value: CPU socket 0, core 0, hyper-thread 0)
• pktq_in: Packet queues to serve as input ports for the current pipeline instance. The
acceptable packet queue types are: RXQ, SWQ, TM and SOURCE. The first device in this
list is used as pipeline input port 0, the second as pipeline input port 1, etc. (Optional:
YES; Range: list of input packet queue IDs; Default value: empty list)
• pktq_out: Packet queues to serve as output ports for the current pipeline instance. The
acceptable packet queue types are: TXQ, SWQ, TM and SINK. The first device in this list
is used as pipeline output port 0, the second as pipeline output port 1, etc. (Optional:
YES; Range: list of output packet queue IDs; Default value: empty list)

Table 3.5: Configuration file PIPELINE section (2/2)

• msgq_in: Input message queues. These queues contain request messages that need to
be handled by the current pipeline instance. The type and format of request messages
is defined by the pipeline type. For each pipeline instance, there is an input message
queue defined implicitly, whose name is MSGQ-REQ-<PIPELINE_ID>. This message
queue should not be mentioned as part of the msgq_in list. (Optional: YES; Range: list
of message queue IDs; Default value: empty list)
• msgq_out: Output message queues. These queues are used by the current pipeline
instance to write response messages as the result of request messages being handled.
The type and format of response messages is defined by the pipeline type. For each
pipeline instance, there is an output message queue defined implicitly, whose name is
MSGQ-RSP-<PIPELINE_ID>. This message queue should not be mentioned as part of
the msgq_out list. (Optional: YES; Range: list of message queue IDs; Default value:
empty list)
• timer_period: Time period, measured in milliseconds, for handling the input message
queues. (Optional: YES; Range: milliseconds; Default value: 1 ms)
• <any other>: Arguments to be passed to the current pipeline instance. The format of the
arguments, their type, whether each argument is optional or mandatory and its default
value (when optional) are defined by the pipeline type. The value of the arguments is
applicable to the current pipeline instance only. (Optional/Range/Default value: depend
on pipeline type)

CPU core notation

The CPU Core notation is:


<CPU core> ::= [s|S<CPU socket ID>][c|C]<CPU core ID>[h|H]

For example:
CPU socket 0, core 0, hyper-thread 0: 0, c0, s0c0

CPU socket 0, core 0, hyper-thread 1: 0h, c0h, s0c0h

CPU socket 3, core 9, hyper-thread 1: s3c9h


MEMPOOL section

Table 3.6: Configuration file MEMPOOL section


• buffer_size: Buffer size (in bytes) for the current buffer pool. (Optional: YES; Type:
uint32_t; Default value: 2048 + sizeof(struct rte_mbuf) + HEADROOM)
• pool_size: Number of buffers in the current buffer pool. (Optional: YES; Type: uint32_t;
Default value: 32K)
• cache_size: Per CPU thread cache size (in number of buffers) for the current buffer pool.
(Optional: YES; Type: uint32_t; Default value: 256)
• cpu: CPU socket ID where to allocate memory for the current buffer pool. (Optional:
YES; Type: uint32_t; Default value: 0)

LINK section

Table 3.7: Configuration file LINK section


• arp_q: NIC RX queue where ARP packets should be filtered. (Optional: YES; Range:
0 .. 127; Default value: 0 (default queue))
• tcp_syn_local_q: NIC RX queue where TCP packets with the SYN flag should be filtered.
(Optional: YES; Range: 0 .. 127; Default value: 0 (default queue))
• ip_local_q: NIC RX queue where IP packets with local destination should be filtered.
When TCP, UDP and SCTP local queues are defined, they take higher priority than this
queue. (Optional: YES; Range: 0 .. 127; Default value: 0 (default queue))
• tcp_local_q: NIC RX queue where TCP packets with local destination should be filtered.
(Optional: YES; Range: 0 .. 127; Default value: 0 (default queue))
• udp_local_q: NIC RX queue where UDP packets with local destination should be filtered.
(Optional: YES; Range: 0 .. 127; Default value: 0 (default queue))
• sctp_local_q: NIC RX queue where SCTP packets with local destination should be
filtered. (Optional: YES; Range: 0 .. 127; Default value: 0 (default queue))
• promisc: Indicates whether the current link should be started in promiscuous mode.
(Optional: YES; Range: YES/NO; Default value: YES)


RXQ section

Table 3.8: Configuration file RXQ section


• mempool: Mempool to use for buffer allocation for the current NIC RX queue. The
mempool ID has to be associated with a valid instance defined in the mempool entry of
the global section. (Optional: YES; Type: uint32_t; Default value: MEMPOOL0)
• size: NIC RX queue size (number of descriptors). (Optional: YES; Type: uint32_t;
Default value: 128)
• burst: Read burst size (number of descriptors). (Optional: YES; Type: uint32_t; Default
value: 32)

TXQ section

Table 3.9: Configuration file TXQ section


• size: NIC TX queue size (number of descriptors). (Optional: YES; Type: uint32_t,
power of 2, > 0; Default value: 512)
• burst: Write burst size (number of descriptors). (Optional: YES; Type: uint32_t, power
of 2, 0 < burst < size; Default value: 32)
• dropless: When dropless is set to NO, packets can be dropped if not enough free slots
are currently available in the queue, so the write operation to the queue is non-blocking.
When dropless is set to YES, packets cannot be dropped if not enough free slots are
currently available in the queue, so the write operation to the queue is blocking, as the
write operation is retried until enough free slots become available and all the packets are
successfully written to the queue. (Optional: YES; Range: YES/NO; Default value: NO)
• n_retries: Number of retries. Valid only when dropless is set to YES. When set to 0, it
indicates an unlimited number of retries. (Optional: YES; Type: uint32_t; Default value: 0)


SWQ section

Table 3.10: Configuration file SWQ section


• size: Queue size (number of packets). (Optional: YES; Type: uint32_t, power of 2;
Default value: 256)
• burst_read: Read burst size (number of packets). (Optional: YES; Type: uint32_t,
power of 2, 0 < burst < size; Default value: 32)
• burst_write: Write burst size (number of packets). (Optional: YES; Type: uint32_t,
power of 2, 0 < burst < size; Default value: 32)
• dropless: When dropless is set to NO, packets can be dropped if not enough free slots
are currently available in the queue, so the write operation to the queue is non-blocking.
When dropless is set to YES, packets cannot be dropped if not enough free slots are
currently available in the queue, so the write operation to the queue is blocking, as the
write operation is retried until enough free slots become available and all the packets are
successfully written to the queue. (Optional: YES; Range: YES/NO; Default value: NO)
• n_retries: Number of retries. Valid only when dropless is set to YES. When set to 0, it
indicates an unlimited number of retries. (Optional: YES; Type: uint32_t; Default value: 0)
• cpu: CPU socket ID where to allocate memory for this SWQ. (Optional: YES; Type:
uint32_t; Default value: 0)

TM section

Table 3.11: Configuration file TM section


• cfg: File name to parse for the TM configuration to be applied. The syntax of this file is
described in the examples/qos_sched DPDK application documentation. (Optional: YES;
Type: string; Default value: tm_profile)
• burst_read: Read burst size (number of packets). (Optional: YES; Type: uint32_t;
Default value: 64)
• burst_write: Write burst size (number of packets). (Optional: YES; Type: uint32_t;
Default value: 32)


KNI section

Table 3.12: Configuration file KNI section


• core: CPU core to run the KNI kernel thread. When the core config is set, the KNI kernel
thread will be bound to the particular core. When the core config is not set, the KNI
kernel thread will be scheduled by the OS. (Optional: YES; Range: see “CPU core
notation”; Default value: not set)
• mempool: Mempool to use for buffer allocation for the current KNI port. The mempool
ID has to be associated with a valid instance defined in the mempool entry of the global
section. (Optional: YES; Type: uint32_t; Default value: MEMPOOL0)
• burst_read: Read burst size (number of packets). (Optional: YES; Type: uint32_t,
power of 2, 0 < burst < size; Default value: 32)
• burst_write: Write burst size (number of packets). (Optional: YES; Type: uint32_t,
power of 2, 0 < burst < size; Default value: 32)
• dropless: When dropless is set to NO, packets can be dropped if not enough free slots
are currently available in the queue, so the write operation to the queue is non-blocking.
When dropless is set to YES, packets cannot be dropped if not enough free slots are
currently available in the queue, so the write operation to the queue is blocking, as the
write operation is retried until enough free slots become available and all the packets are
successfully written to the queue. (Optional: YES; Range: YES/NO; Default value: NO)
• n_retries: Number of retries. Valid only when dropless is set to YES. When set to 0, it
indicates an unlimited number of retries. (Optional: YES; Type: uint64_t; Default value: 0)

SOURCE section

Table 3.13: Configuration file SOURCE section


• mempool: Mempool to use for buffer allocation. (Optional: YES; Type: uint32_t; Default
value: MEMPOOL0)
• burst: Read burst size (number of packets). (Optional: YES; Type: uint32_t; Default
value: 32)


SINK section

Currently, there are no parameters to be passed to a sink device, so the SINK section is not
allowed.

MSGQ section

Table 3.14: Configuration file MSGQ section


• size: Queue size (number of packets). (Optional: YES; Type: uint32_t, != 0, power of 2;
Default value: 64)
• cpu: CPU socket ID where to allocate memory for the current queue. (Optional: YES;
Type: uint32_t; Default value: 0)

EAL section

The application generates the EAL parameters rather than reading them from the command
line.
The CPU core mask parameter is generated based on the core entry of all PIPELINE sections.
All the other EAL parameters can be set from this section of the application configuration file.
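For example, a hypothetical [EAL] section overriding a couple of EAL parameters could look
like the following; the entry names shown here are illustrative assumptions, not a verified list:

[EAL]
log_level = 0
n = 4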

Library of pipeline types

Pipeline module

A pipeline is a self-contained module that implements a packet processing function and is


typically implemented on top of the DPDK Packet Framework librte_pipeline library. The appli-
cation provides a run-time mechanism to register different pipeline types.
Depending on the required configuration, each registered pipeline type (pipeline class) is in-
stantiated one or several times, with each pipeline instance (pipeline object) assigned to one of
the available CPU cores. Each CPU core can run one or more pipeline instances, which might
be of same or different types. For more information of the CPU core threading model, please
refer to the Run-time section.

Pipeline type

Each pipeline type is made up of a back-end and a front-end. The back-end represents the
packet processing engine of the pipeline, typically implemented using the DPDK Packet Frame-
work libraries, which reads packets from the input packet queues, handles them and eventually
writes them to the output packet queues or drops them. The front-end represents the run-time
configuration interface of the pipeline, which is exposed as CLI commands. The front-end
communicates with the back-end through message queues.


Table 3.15: Pipeline back-end

• f_init (function pointer): Function to initialize the back-end of the current pipeline in-
stance. Typical work implemented by this function for the current pipeline instance:
memory allocation; parsing the pipeline type specific arguments; initializing the pipeline
input ports, output ports and tables, and interconnecting input ports to tables; setting the
message handlers.
• f_free (function pointer): Function to free the resources allocated by the back-end of the
current pipeline instance.
• f_run (function pointer): Set to NULL for pipelines implemented using the DPDK library
librte_pipeline (typical case), and to non-NULL otherwise. This mechanism is made
available to support quick integration of legacy code. This function is expected to provide
the packet processing related code to be called as part of the CPU thread dispatch loop,
so this function is not allowed to contain an infinite loop.
• f_timer (function pointer): Function to read the pipeline input message queues, handle
the request messages, create response messages and write the response queues. The
format of request and response messages is defined by each pipeline type, with the
exception of some requests which are mandatory for all pipelines (e.g. ping, statistics).
• f_track (function pointer): See section Tracking pipeline output port to physical link.

Table 3.16: Pipeline front-end

• f_init (function pointer): Function to initialize the front-end of the current pipeline in-
stance.
• f_free (function pointer): Function to free the resources allocated by the front-end of the
current pipeline instance.
• cmds (array of CLI commands): Array of CLI commands to be registered to the appli-
cation CLI for the current pipeline type. Even though the CLI is executed by a different
pipeline (typically, this is the master pipeline), from a modularity perspective it is more
efficient to keep the message client side (part of the front-end) together with the mes-
sage server side (part of the back-end).
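The back-end hooks above can be pictured as a plain table of function pointers; the following
C sketch is illustrative only (the type and field declarations are hypothetical simplifications, not
the application's exact code):

typedef void *(*pipeline_op_init)(void *params, void *arg);
typedef int (*pipeline_op_free)(void *pipeline);
typedef int (*pipeline_op_run)(void *pipeline);
typedef int (*pipeline_op_timer)(void *pipeline);

struct pipeline_backend_ops {
    pipeline_op_init f_init;   /* allocate, parse args, wire ports to tables */
    pipeline_op_free f_free;   /* release back-end resources */
    pipeline_op_run f_run;     /* NULL when librte_pipeline drives the pipeline */
    pipeline_op_timer f_timer; /* drain request queues, post responses */
};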

Tracking pipeline output port to physical link

Each pipeline instance is a standalone block that does not have visibility into the other pipeline
instances or the application-level pipeline inter-connectivity. In some cases, it is useful for a
pipeline instance to get application level information related to pipeline connectivity, such as to
identify the output link (e.g. physical NIC port) where one of its output ports connected, either
directly or indirectly by traversing other pipeline instances.
Tracking can be successful or unsuccessful. Typically, tracking for a specific pipeline instance
is successful when each one of its input ports can be mapped to a single output port, meaning
that all packets read from the current input port can only go out on a single output port. De-
pending on the pipeline type, some exceptions may be allowed: a small portion of the packets,
considered exception packets, are sent out on an output port that is pre-configured for this
purpose.
For the pass-through pipeline type, the tracking is always successful. For pipeline types such
as flow classification, firewall or routing, the tracking is only successful when the number of
output ports for the current pipeline instance is 1.
This feature is used by the IP routing pipeline for adding/removing implicit routes every time a
link is brought up/down.

Table copies

Fast table copy: the pipeline table used by the pipeline for the packet processing task, updated
through messages; its data structures are optimized for the lookup operation.
Slow table copy: used by the configuration layer, typically updated through CLI commands,
and kept in sync with the fast copy (its update triggers the fast copy update). It is required for
executing advanced table queries without impacting the packet processing task, therefore the
slow copy is typically organized using different criteria than the fast copy.
Examples:
• Flow classification: Search through current set of flows (e.g. list all flows with a specific
source IP address);
• Firewall: List rules in descending order of priority;
• Routing table: List routes sorted by prefix depth and their type (local, remote, default);
• ARP: List entries sorted per output interface.

Packet meta-data

Packet meta-data field offsets provided as argument to pipeline instances are essentially defin-
ing the data structure for the packet meta-data used by the current application use-case. It is
very useful to put it in the configuration file as a comment in order to facilitate the readability of
the configuration file.
The reason to use field offsets for defining the data structure for the packet meta-data is due
to the C language limitation of not being able to define data structures at run-time. Feature to
consider: have the configuration file parser automatically generate and print the data structure
defining the packet meta-data for the current application use-case.
Packet meta-data typically contains:
1. Pure meta-data: intermediate data per packet that is computed internally, passed be-
tween different tables of the same pipeline instance (e.g. lookup key for the ARP table
is obtained from the routing table), or between different pipeline instances (e.g. flow ID,
traffic metering color, etc);
2. Packet fields: typically, packet header fields that are read directly from the packet, or read
from the packet and saved (duplicated) as a working copy at a different location within
the packet meta-data (e.g. Diffserv 5-tuple, IP destination address, etc).
Several strategies are used to design the packet meta-data, as described in the next subsec-
tions.


Store packet meta-data in a different cache line than the packet headers

This approach is
able to support protocols with variable header length, like MPLS, where the offset of IP header
from the start of the packet (and, implicitly, the offset of the IP header in the packet buffer) is not
fixed. Since the pipelines typically require the specification of a fixed offset to the packet fields
(e.g. Diffserv 5-tuple, used by the flow classification pipeline, or the IP destination address,
used by the IP routing pipeline), the workaround is to have the packet RX pipeline copy these
fields at fixed offsets within the packet meta-data.
As this approach duplicates some of the packet fields, it requires accessing more cache lines
per packet for filling in selected packet meta-data fields (on RX), as well as flushing selected
packet meta-data fields into the packet (on TX).
Example:
; struct app_pkt_metadata {
; uint32_t ip_da;
; uint32_t hash;
; uint32_t flow_id;
; uint32_t color;
; } __attribute__((__packed__));
;

[PIPELINE1]
; Packet meta-data offsets
ip_da_offset = 0; Used by: routing
hash_offset = 4; Used by: RX, flow classification
flow_id_offset = 8; Used by: flow classification, flow actions
color_offset = 12; Used by: flow actions, routing

Overlay the packet meta-data in the same cache line with the packet headers

This approach minimizes the number of cache lines accessed per packet by storing the packet
meta-data in the same cache line as the packet headers. To enable this strategy, either some
headroom is reserved for meta-data at the beginning of the packet headers cache line (e.g. if
16 bytes are needed for meta-data, then the packet headroom can be set to 128+16 bytes, so
that NIC writes the first byte of the packet at offset 16 from the start of the first packet cache
line), or meta-data is reusing the space of some packet headers that are discarded from the
packet (e.g. input Ethernet header).
Example:
; struct app_pkt_metadata {
; uint8_t headroom[RTE_PKTMBUF_HEADROOM]; /* 128 bytes (default) */
; union {
; struct {
; struct ether_hdr ether; /* 14 bytes */
; struct qinq_hdr qinq; /* 8 bytes */
; };
; struct {
; uint32_t hash;
; uint32_t flow_id;
; uint32_t color;
; };
; };
; struct ipv4_hdr ip; /* 20 bytes */
; } __attribute__((__packed__));
;
[PIPELINE2]
; Packet meta-data offsets
qinq_offset = 142; Used by: RX, flow classification
ip_da_offset = 166; Used by: routing
hash_offset = 128; Used by: RX, flow classification
flow_id_offset = 132; Used by: flow classification, flow actions
color_offset = 136; Used by: flow actions, routing


List of pipeline types

Table 3.17: List of pipeline types provided with the application

• Pass-through (note: depending on the port type, can be used for RX, TX, IP fragmenta-
tion, IP reassembly or Traffic Management)
Table(s): pass-through (no lookup table).
Actions: 1. Pkt metadata build; 2. Flow hash; 3. Pkt checks; 4. Load balancing.
Messages: 1. Ping; 2. Stats.
• Flow classification
Table(s): exact match (Key = byte array, source: pkt metadata; Data = action dependent).
Actions: 1. Flow ID; 2. Flow stats; 3. Metering; 4. Network Address Translation (NAT).
Messages: 1. Ping; 2. Stats; 3. Flow stats; 4. Action stats; 5. Flow add/update/delete;
6. Default flow add/update/delete; 7. Action update.
• Flow actions
Table(s): array (Key = Flow ID, source: pkt metadata; Data = action dependent).
Actions: 1. Flow stats; 2. Metering; 3. Network Address Translation (NAT).
Messages: 1. Ping; 2. Stats; 3. Action stats; 4. Action update.
• Firewall
Table(s): ACL (Key = n-tuple, source: pkt headers; Data = none).
Actions: 1. Allow/Drop.
Messages: 1. Ping; 2. Stats; 3. Rule add/update/delete; 4. Default rule add/update/delete.
• IP routing
Table(s): LPM (IPv4 or IPv6, depending on pipeline type; Key = IP destination, source:
pkt metadata; Data = dependent on actions and next hop type); hash table (for ARP, only
when ARP is enabled; Key = (Port ID, next hop IP address), source: pkt meta-data; Data
= MAC address).
Actions: 1. TTL decrement and IPv4 checksum update; 2. Header encapsulation (based
on next hop type).
Messages: 1. Ping; 2. Stats; 3. Route add/update/delete; 4. Default route add/update/
delete; 5. ARP entry add/update/delete; 6. Default ARP entry add/update/delete.


Command Line Interface (CLI)

Global CLI commands

Table 3.18: Global CLI commands

• run: Run a CLI commands script file. Syntax: run <file>, where <file> is the path to a file
with CLI commands to execute.
• quit: Gracefully terminate the application. Syntax: quit.

CLI commands for link configuration

Table 3.19: List of run-time configuration commands for link configuration

• link config: Link configuration. Syntax: link <link ID> config <IP address> <depth>
• link up: Link up. Syntax: link <link ID> up
• link down: Link down. Syntax: link <link ID> down
• link ls: Link list. Syntax: link ls
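For example, a short hypothetical session (the address and depth are placeholders) that
assigns an IP address to LINK1, brings it up and lists the links could be:
link 1 config 10.0.0.1 24
link 1 up
link ls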

CLI commands common for all pipeline types

Table 3.20: CLI commands mandatory for all pipelines

• ping: Check whether a specific pipeline instance is alive. The master pipeline sends a
ping request message to the given pipeline instance and waits for a response message
back. A timeout message is displayed when the response message is not received
before the timer expires. Syntax: p <pipeline ID> ping
• stats: Display statistics for a specific pipeline input port, output port or table. Syntax:
p <pipeline ID> stats port in <port in ID>; p <pipeline ID> stats port out <port out ID>;
p <pipeline ID> stats table <table ID>
• input port enable: Enable the given input port for a specific pipeline instance. Syntax:
p <pipeline ID> port in <port ID> enable
• input port disable: Disable the given input port for a specific pipeline instance. Syntax:
p <pipeline ID> port in <port ID> disable
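For example, hypothetical keep-alive and statistics checks against pipeline instance 2 could
be:
p 2 ping
p 2 stats port in 0
p 2 stats table 0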

Pipeline type specific CLI commands

The pipeline specific CLI commands are part of the pipeline type front-end.


Test Pipeline Application

The Test Pipeline application illustrates the use of the DPDK Packet Framework tool suite. Its
purpose is to demonstrate the performance of single-table DPDK pipelines.

Overview

The application uses three CPU cores:


• Core A (“RX core”) receives traffic from the NIC ports and feeds core B with traffic through
SW queues.
• Core B (“Pipeline core”) implements a single-table DPDK pipeline whose type is se-
lectable through a specific command line parameter. Core B receives traffic from core A
through software queues, processes it according to the actions configured in the table
entries that are hit by the input packets and feeds it to core C through another set of
software queues.
• Core C (“TX core”) receives traffic from core B through software queues and sends it to
the NIC ports for transmission.

Fig. 3.26: Test Pipeline Application

Compiling the Application

1. Go to the app/test directory:


export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/app/test/test-pipeline

2. Set the target (a default target is used if not specified):


export RTE_TARGET=x86_64-native-linuxapp-gcc

3. Build the application:


make

Running the Application


Application Command Line

The application execution command line is:


./test-pipeline [EAL options] -- -p PORTMASK --TABLE_TYPE

The -c EAL CPU core mask option has to contain exactly 3 CPU cores. The first CPU core in
the core mask is assigned for core A, the second for core B and the third for core C.
The PORTMASK parameter must contain 2 or 4 ports.
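For example, a hypothetical run using CPU cores 1-3, two NIC ports and the stub table type
could be:
./test-pipeline -c 0xe -n 4 -- -p 0x3 --stub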

Table Types and Behavior

Table 3.21 describes the table types used and how they are populated.
The hash tables are pre-populated with 16 million keys. For hash tables, the following param-
eters can be selected:
• Configurable key size implementation or fixed (specialized) key size implementa-
tion (e.g. hash-8-ext or hash-spec-8-ext). The key size specialized implementations
are expected to provide better performance for 8-byte and 16-byte key sizes, while the
key-size-non-specialized implementation is expected to provide better performance for
larger key sizes;
• Key size (e.g. hash-spec-8-ext or hash-spec-16-ext). The available options are 8, 16
and 32 bytes;
• Table type (e.g. hash-spec-16-ext or hash-spec-16-lru). The available options are ext
(extendable bucket) or lru (least recently used).


Table 3.21: Table Types

1. none: Core B is not implementing a DPDK pipeline. Core B is implementing a pass-
through from its input set of software queues to its output set of software queues.
Pre-added table entries: N/A.
2. stub: Stub table. Core B is implementing the same pass-through functionality as de-
scribed for the “none” option by using the DPDK Packet Framework, with one stub table
for each input NIC port.
Pre-added table entries: N/A.
3. hash-[spec]-8-lru: LRU hash table with 8-byte key size and 16 million entries.
Pre-added table entries: 16 million entries are successfully added to the hash table with
the following key format: [4-byte index, 4 bytes of 0]. The action configured for all table
entries is “Send to output port”, with the output port index uniformly distributed over the
range of output ports. The default table rule (used in the case of a lookup miss) is to
drop the packet. At run time, core A is creating the following lookup key and storing it
into the packet meta-data for core B to use for table lookup: [destination IPv4 address,
4 bytes of 0].
4. hash-[spec]-8-ext: Extendable bucket hash table with 8-byte key size and 16 million
entries.
Pre-added table entries: same as for the hash-[spec]-8-lru table entries, above.
5. hash-[spec]-16-lru: LRU hash table with 16-byte key size and 16 million entries.
Pre-added table entries: 16 million entries are successfully added to the hash table with
the following key format: [4-byte index, 12 bytes of 0].

Input Traffic

Regardless of the table type used for the core B pipeline, the same input traffic can be used
to hit all table entries with uniform distribution, which results in uniform distribution of packets
sent out on the set of output NIC ports. The profile for input traffic is TCP/IPv4 packets with:
• destination IP address as A.B.C.D with A fixed to 0 and B, C,D random
• source IP address fixed to 0.0.0.0
• destination TCP port fixed to 0
• source TCP port fixed to 0

Distributor Sample Application

The distributor sample application is a simple example of packet distribution to cores using the
Data Plane Development Kit (DPDK).

Overview

The distributor application performs the distribution of packets that are received on an
RX_PORT to different cores. When processed by the cores, the destination port of a packet
is the port from the enabled port mask adjacent to the one on which the packet was received,
that is, if the first four ports are enabled (port mask 0xf), ports 0 and 1 RX/TX into each other,
and ports 2 and 3 RX/TX into each other.
This application can be used to benchmark performance using the traffic generator as shown
in the figure below.

Fig. 3.27: Performance Benchmarking Setup (Basic Environment)

Compiling the Application

1. Go to the sample application directory:


export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/distributor

2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc

See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make

Running the Application

1. The application has a number of command line options:


./build/distributor_app [EAL options] -- -p PORTMASK


where,
• -p PORTMASK: Hexadecimal bitmask of ports to configure
2. To run the application in a linuxapp environment with 10 lcores and 4 ports, issue the
command:
$ ./build/distributor_app -c 0x4003fe -n 4 -- -p f

3. Refer to the DPDK Getting Started Guide for general information on running applications
and the Environment Abstraction Layer (EAL) options.

Explanation

The distributor application consists of three types of threads: a receive thread (lcore_rx()), a set
of worker threads (lcore_worker()) and a transmit thread (lcore_tx()). How these threads work
together is shown in Fig. 3.28 below. The main() function launches threads of these three
types. Each thread has a while loop which will be doing processing and which is terminated
only upon SIGINT or ctrl+C. The receive and transmit threads communicate using a software
ring (rte_ring structure).
The receive thread receives the packets using rte_eth_rx_burst() and gives them to the distrib-
utor (using rte_distributor_process() API) which will be called in context of the receive thread
itself. The distributor distributes the packets to workers threads based on the tagging of the
packet - indicated by the hash field in the mbuf. For IP traffic, this field is automatically filled by
the NIC with the “usr” hash value for the packet, which works as a per-flow tag.
More than one worker thread can exist as part of the application, and these worker threads
do simple packet processing by requesting packets from the distributor, doing a simple XOR
operation on the input port mbuf field (to indicate the output port which will be used later for
packet transmission) and then finally returning the packets back to the distributor in the RX
thread.
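A minimal C sketch of that XOR step (illustrative only, not the sample's exact code) shows why
adjacent ports pair up:

#include <rte_mbuf.h>

/* Flip the least significant bit of the input port to select the output
 * port: 0 <-> 1, 2 <-> 3, and so on. */
static inline void
set_output_port(struct rte_mbuf *buf)
{
    buf->port ^= 1;
}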
Meanwhile, the receive thread will call the distributor api rte_distributor_returned_pkts() to get
the packets processed, and will enqueue them to a ring for transfer to the TX thread for trans-
mission on the output port. The transmit thread will dequeue the packets from the ring and
transmit them on the output port specified in packet mbuf.
Users who wish to terminate the running of the application have to press ctrl+C (or send SIG-
INT to the app). Upon this signal, a signal handler provided in the application will terminate all
running threads gracefully and print final statistics to the user.

Fig. 3.28: Distributor Sample Application Layout

Debug Logging Support

Debug logging is provided as part of the application; the user needs to uncomment the line
“#define DEBUG” defined at the start of the application in main.c to enable debug logs.

Statistics

Upon SIGINT (or) ctrl+C, the print_stats() function displays the count of packets processed at
the different stages in the application.


Application Initialization

Command line parsing is done in the same way as it is done in the L2 Forwarding Sample
Application. See Command Line Arguments.
Mbuf pool initialization is done in the same way as it is done in the L2 Forwarding Sample
Application. See Mbuf Pool Initialization.
Driver Initialization is done in same way as it is done in the L2 Forwarding Sample Application.
See Driver Initialization.
RX queue initialization is done in the same way as it is done in the L2 Forwarding Sample
Application. See RX Queue Initialization.
TX queue initialization is done in the same way as it is done in the L2 Forwarding Sample
Application. See TX Queue Initialization.

VM Power Management Application

Introduction

Applications running in Virtual Environments have an abstract view of the underlying hard-
ware on the Host; in particular, applications cannot see the binding of virtual to physical hard-
ware. When looking at CPU resourcing, the pinning of Virtual CPUs (vCPUs) to Host Physical
CPUs (pCPUs) is not apparent to an application and this pinning may change over time. Fur-
thermore, Operating Systems on virtual machines do not have the ability to govern their own
power policy; the Machine Specific Registers (MSRs) for enabling P-State transitions are not
exposed to Operating Systems running on Virtual Machines (VMs).
The Virtual Machine Power Management solution shows an example of how a DPDK applica-
tion can indicate its processing requirements using VM local only information (vCPU/lcore) to
a Host based Monitor which is responsible for accepting requests for frequency changes for a
vCPU, translating the vCPU to a pCPU via libvirt and effecting the change in frequency.
The solution is comprised of two high-level components:
1. Example Host Application
Using a Command Line Interface (CLI) for VM->Host communication channel manage-
ment allows adding channels to the Monitor, setting and querying the vCPU to pCPU
pinning, and inspecting and manually changing the frequency for each CPU. The CLI runs
on a single lcore while the thread responsible for managing VM requests runs on a second
lcore.
VM requests arriving on a channel for frequency changes are passed to the librte_power
ACPI cpufreq sysfs based library. The Host Application relies on both qemu-kvm and
libvirt to function.
2. librte_power for Virtual Machines
Using an alternate implementation for the librte_power API, requests for frequency
changes are forwarded to the host monitor rather than the ACPI cpufreq sysfs interface
used on the host.
The l3fwd-power application will use this implementation when deployed on a VM (see
L3 Forwarding with Power Management Sample Application).


Fig. 3.29: Highlevel Solution

Overview

VM Power Management employs qemu-kvm to provide communications channels between the
host and VMs in the form of Virtio-Serial, which appears as a paravirtualized serial device on
a VM and can be configured to use various backends on the host. For this example each
Virtio-Serial endpoint on the host is configured as an AF_UNIX file socket, supporting poll/select
and epoll for event notification. In this example each channel endpoint on the host is monitored
via epoll for EPOLLIN events. Each channel is specified as qemu-kvm arguments or as libvirt
XML for each VM, where each VM can have a number of channels up to a maximum of 64 per
VM; in this example each DPDK lcore on a VM has exclusive access to a channel.
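A minimal C sketch of that endpoint monitoring (illustrative only; error handling is simplified
and the socket path is a placeholder) could be:

#include <string.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <sys/un.h>

int
monitor_channel(const char *path)
{
    struct sockaddr_un addr;
    struct epoll_event ev;
    int fd, epfd;

    /* Connect to the AF_UNIX socket backing the Virtio-Serial endpoint */
    fd = socket(AF_UNIX, SOCK_STREAM, 0);
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        return -1;

    /* Watch the endpoint for EPOLLIN, i.e. a request arriving from the VM */
    epfd = epoll_create1(0);
    ev.events = EPOLLIN;
    ev.data.fd = fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
    return epfd; /* caller would epoll_wait() on this */
}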
To enable frequency changes from within a VM, a request via the librte_power interface is
forwarded via Virtio-Serial to the host; each request contains the vCPU and the power com-
mand (scale up/down/min/max). The API for host and guest librte_power is consistent across
environments, with the selection of VM or Host implementation determined automatically at
runtime based on the environment.
Upon receiving a request, the host translates the vCPU to a pCPU via the libvirt API before
forwarding it to the host librte_power.

Fig. 3.30: VM request to scale frequency

Performance Considerations

While the Haswell Microarchitecture allows for independent power control for each core, earlier
Microarchitectures do not offer such fine grained control. When deployed on pre-Haswell plat-
forms greater care must be taken in selecting which cores are assigned to a VM; for instance
a core will not scale down until its sibling is similarly scaled.

Configuration

BIOS

Enhanced Intel SpeedStep® Technology must be enabled in the platform BIOS if the
power management feature of DPDK is to be used. Otherwise, the sys file folder
/sys/devices/system/cpu/cpu0/cpufreq will not exist, and the CPU frequency-based power
management cannot be used. Consult the relevant BIOS documentation to determine how
these settings can be accessed.

Host Operating System

The Host OS must also have the acpi_cpufreq module installed; in some cases the intel_pstate
driver may be the default Power Management environment. To enable acpi_cpufreq and dis-
able intel_pstate, add the following to the grub Linux command line:
intel_pstate=disable


Upon rebooting, load the acpi_cpufreq module:


modprobe acpi_cpufreq

Hypervisor Channel Configuration

Virtio-Serial channels are configured via libvirt XML:


<name>{vm_name}</name>
<controller type='virtio-serial' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</controller>
<channel type='unix'>
<source mode='bind' path='/tmp/powermonitor/{vm_name}.{channel_num}'/>
<target type='virtio' name='virtio.serial.port.poweragent.{vm_channel_num}'/>
<address type='virtio-serial' controller='0' bus='0' port='{N}'/>
</channel>

Where a single controller of type virtio-serial is created; up to 32 channels can be asso-
ciated with a single controller and multiple controllers can be specified. The convention is to
use the name of the VM in the host path {vm_name} and to increment {channel_num} for each
channel; likewise the port value {N} must be incremented for each channel.
Each channel on the host will appear at the specified path; the directory /tmp/powermonitor/
must first be created and given qemu permissions:
mkdir /tmp/powermonitor/
chown qemu:qemu /tmp/powermonitor

Note that files and directories within /tmp are generally removed upon rebooting the host and
the above steps may need to be carried out after each reboot.
The serial device as it appears on a VM is configured with the target element attribute
name and must be in the form of virtio.serial.port.poweragent.{vm_channel_num}, where
vm_channel_num is typically the lcore channel to be used in DPDK VM applications.
Each channel on a VM will be present at
/dev/virtio-ports/virtio.serial.port.poweragent.{vm_channel_num}

Compiling and Running the Host Application

Compiling

1. export RTE_SDK=/path/to/rte_sdk
2. cd ${RTE_SDK}/examples/vm_power_manager
3. make

Running

The application does not have any specific command line options other than EAL:
./build/vm_power_mgr [EAL options]

The application requires exactly two cores to run, one core is dedicated to the CLI, while the
other is dedicated to the channel endpoint monitor, for example to run on cores 0 & 1 on a
system with 4 memory channels:


./build/vm_power_mgr -c 0x3 -n 4

After successful initialization the user is presented with VM Power Manager CLI:
vm_power>

Virtual Machines can now be added to the VM Power Manager:


vm_power> add_vm {vm_name}

When a {vm_name} is specified with the add_vm command a lookup is performed with libvirt
to ensure that the VM exists, {vm_name} is used as an unique identifier to associate channels
with a particular VM and for executing operations on a VM within the CLI. VMs do not have to
be running in order to add them.
A number of commands can be issued via the CLI in relation to VMs:
Remove a Virtual Machine identified by {vm_name} from the VM Power Manager.
rm_vm {vm_name}

Add communication channels for the specified VM, the virtio channels must be en-
abled in the VM configuration(qemu/libvirt) and the associated VM must be active.
{list} is a comma-separated list of channel numbers to add, using the keyword ‘all’
will attempt to add all channels for the VM:
add_channels {vm_name} {list}|all

Enable or disable the communication channels in {list} (comma-separated) for the
specified VM; alternatively {list} can be replaced with the keyword ‘all’. Disabled channels
will still receive packets on the host, however the commands they specify will be
ignored. Set status to ‘enabled’ to begin processing requests again:
set_channel_status {vm_name} {list}|all enabled|disabled

Print to the CLI the information on the specified VM, the information lists the number
of vCPUS, the pinning to pCPU(s) as a bit mask, along with any communication
channels associated with each VM, along with the status of each channel:
show_vm {vm_name}

Set the binding of Virtual CPU on VM with name {vm_name} to the Physical CPU
mask:
set_pcpu_mask {vm_name} {vcpu} {pcpu}

Set the binding of Virtual CPU on VM to the Physical CPU:


set_pcpu {vm_name} {vcpu} {pcpu}

Manual control and inspection can also be carried out in relation to CPU frequency scaling:
Get the current frequency for each core specified in the mask:
show_cpu_freq_mask {mask}

Set the current frequency for the cores specified in {core_mask} by scaling each
up/down/min/max:
set_cpu_freq {core_mask} up|down|min|max

Get the current frequency for the specified core:


show_cpu_freq {core_num}

Set the current frequency for the specified core by scaling up/down/min/max:


set_cpu_freq {core_num} up|down|min|max
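For example, a short hypothetical session for a VM named vm0 could be:
vm_power> add_vm vm0
vm_power> add_channels vm0 all
vm_power> show_vm vm0
vm_power> set_cpu_freq 3 down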

Compiling and Running the Guest Applications

For compiling and running l3fwd-power, see L3 Forwarding with Power Management Sample
Application.
A guest CLI is also provided for validating the setup.
For both l3fwd-power and guest CLI, the channels for the VM must be monitored by the host
application using the add_channels command on the host.

Compiling

1. export RTE_SDK=/path/to/rte_sdk
2. cd ${RTE_SDK}/examples/vm_power_manager/guest_cli
3. make

Running

The application does not have any specific command line options other than EAL:
./build/guest_vm_power_mgr [EAL options]

For example purposes the application uses a channel for each enabled lcore; for example, to
run on cores 0,1,2,3 on a system with 4 memory channels:
./build/guest_vm_power_mgr -c 0xf -n 4

After successful initialization the user is presented with VM Power Manager Guest CLI:
vm_power(guest)>

To change the frequency of an lcore, use the set_cpu_freq command, where {core_num} is
the lcore and channel whose frequency is changed by scaling up/down/min/max:
set_cpu_freq {core_num} up|down|min|max

TEP termination Sample Application

The TEP (Tunnel End point) termination sample application simulates a VXLAN Tunnel End-
point (VTEP) termination in DPDK, which is used to demonstrate the offload and filtering ca-
pabilities of the Intel® XL710 10/40 Gigabit Ethernet Controller for VXLAN packets. This sample
uses the basic virtio device management mechanism from the vhost example, and also uses the
us-vHost interface and the tunnel filtering mechanism to direct specified traffic to a specific VM.
In addition, this sample is also designed to show how tunneling protocols can be handled.

Background

With virtualization, overlay networks allow a network structure to be built or imposed across
physical nodes which is abstracted away from the actual underlying physical network connec-
tions. This allows network isolation, QoS, etc. to be provided on a per-client basis.


Fig. 3.31: Overlay Networking.

In a typical setup, the network overlay tunnel is terminated at the Virtual/Tunnel End Point
(VEP/TEP). The TEP is normally located at the physical host level, ideally in the software
switch. Due to processing constraints and the inevitable bottleneck that the switch becomes,
the ability to offload overlay support features becomes an important requirement. The Intel®
XL710 10/40 Gigabit Ethernet network card provides hardware filtering and offload capabilities
to support overlay network implementations such as MAC in UDP and MAC in GRE.

Sample Code Overview

The DPDK TEP termination sample code demonstrates the offload and filtering capabilities of
Intel® XL710 10/40 Gigabit Ethernet Controller for VXLAN packet.
The sample code is based on the vhost library, which was developed to allow a user space
Ethernet switch to easily integrate with vhost functionality.
The sample supports the following:
• Tunneling packet recognition.
• A configurable UDP tunneling port.
• Directing incoming traffic to the correct queue based on the tunnel filter type. The sup-
ported filter types are listed below.
– Inner MAC and VLAN and tenant ID
– Inner MAC and tenant ID, and Outer MAC
– Inner MAC and tenant ID
The tenant ID will be assigned from a static internal table based on the us-vhost device
ID. Each device will receive a unique device ID. The inner MAC will be learned from the
first packet transmitted from a device.
• Decapsulation of RX VXLAN traffic. This is a software-only operation (see the header
sketch after this list).
• Encapsulation of TX VXLAN traffic. This is a software-only operation.
• Inner IP and inner L4 checksum offload.
• TSO offload support for tunneling packets.
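
As a minimal sketch of what the software encapsulation/decapsulation operates on: a VXLAN-
encapsulated frame is an outer Ethernet/IP/UDP header, followed by an 8-byte VXLAN header,
followed by the inner (tenant) Ethernet frame. The structure below is illustrative only; DPDK
provides an equivalent definition (struct vxlan_hdr) in rte_ether.h:
    /* Illustrative: the 8-byte VXLAN header preceding the inner frame.
     * Both fields are in network byte order. */
    struct vxlan_hdr {
            uint32_t vx_flags; /* 8 bits of flags + 24 reserved bits */
            uint32_t vx_vni;   /* 24-bit VNI (tenant ID) + 8 reserved bits */
    } __attribute__((__packed__));

    /* Decapsulation amounts to stripping the outer headers, e.g.:
     * rte_pktmbuf_adj(m, sizeof(struct ether_hdr) + sizeof(struct ipv4_hdr) +
     *                    sizeof(struct udp_hdr) + sizeof(struct vxlan_hdr)); */
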
The following figure shows the framework of the TEP termination sample application based on
DPDK vhost lib.

Fig. 3.32: TEP termination Framework Overview

Supported Distributions

The example in this section has been validated with the following distributions:
• Fedora* 18
• Fedora* 19


• Fedora* 20

Compiling the Sample Code

1. Compile the vhost lib:
   To enable vhost, turn on the vhost library in the configuration file config/common_linuxapp:
   CONFIG_RTE_LIBRTE_VHOST=y

2. Go to the examples directory:


export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/tep_termination

3. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc

See the DPDK Getting Started Guide for possible RTE_TARGET values.
4. Build the application:
cd ${RTE_SDK}
make config ${RTE_TARGET}
make install ${RTE_TARGET}
cd ${RTE_SDK}/examples/tep_termination
make

Running the Sample Code

1. Go to the examples directory:


export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/tep_termination

2. Run the tep_termination sample code:


user@target:~$ ./build/app/tep_termination -c f -n 4 --huge-dir /mnt/huge -- \
                -p 0x1 --dev-basename tep-termination --nb-devices 4 \
                --udp-port 4789 --filter-type 1

Note: The huge-dir parameter instructs the DPDK to allocate its memory from the 2 MB page
hugetlbfs.

Parameters

The parameters are the same as for the vhost sample. Refer to Parameters for a detailed
explanation.
Number of Devices.
The nb-devices option specifies the number of virtIO devices. The default value is 2.
user@target:~$ ./build/app/tep_termination -c f -n 4 --huge-dir /mnt/huge -- \
                --nb-devices 2


Tunneling UDP port.
The udp-port option is used to specify the destination UDP port number for UDP tunneling
packets. The default value is 4789.
user@target:~$ ./build/app/tep_termination -c f -n 4 --huge-dir /mnt/huge -- \
                --nb-devices 2 --udp-port 4789

Filter Type.
The filter-type option is used to specify which filter type is used to filter UDP tunneling packets
into a specified queue. The default value is 1, which means the filter type of inner MAC and
tenant ID is used.
user@target:~$ ./build/app/tep_termination -c f -n 4 --huge-dir /mnt/huge -- \
                --nb-devices 2 --udp-port 4789 --filter-type 1

TX Checksum.
The tx-checksum option is used to enable or disable the inner header checksum offload. The
default value is 0, which means the checksum offload is disabled.
user@target:~$ ./build/app/tep_termination -c f -n 4 --huge-dir /mnt/huge -- \
                --nb-devices 2 --tx-checksum

TCP segment size.
The tso-segsz option specifies the TCP segment size for TSO offload for tunneling packets.
The default value is 0, which means TSO offload is disabled.
user@target:~$ ./build/app/tep_termination -c f -n 4 --huge-dir /mnt/huge -- \
                --tx-checksum --tso-segsz 800

Decapsulation option.
The decap option is used to enable or disable the decapsulation operation for received VXLAN
packets. The default value is 1.
user@target:~$ ./build/app/tep_termination -c f -n 4 --huge-dir /mnt/huge -- \
                --nb-devices 4 --udp-port 4789 --decap 1

Encapsulation option.
The encap option is used to enable or disable the encapsulation operation for transmitted
packets. The default value is 1.
user@target:~$ ./build/app/tep_termination -c f -n 4 --huge-dir /mnt/huge -- \
                --nb-devices 4 --udp-port 4789 --encap 1

Running the Virtual Machine (QEMU)

Refer to Start the VM.

Running DPDK in the Virtual Machine

Refer to Run testpmd inside guest.


Passing Traffic to the Virtual Machine Device

For a virtio-net device to receive traffic, the traffic’s Layer 2 header must include the virtio-net
device’s MAC address. The DPDK sample code behaves in a similar manner to a learning
switch in that it learns the MAC address of the virtio-net devices from the first transmitted
packet. On learning the MAC address, the DPDK vhost sample code prints a message with
the MAC address and tenant ID of the virtio-net device. For example:
DATA: (0) MAC_ADDRESS cc:bb:bb:bb:bb:bb and VNI 1000 registered

The above message indicates that device 0 has been registered with MAC address
cc:bb:bb:bb:bb:bb and VNI 1000. Any packets received on the NIC with these values are
placed on the device’s receive queue.

PTP Client Sample Application

The PTP (Precision Time Protocol) client sample application is a simple example of using the
DPDK IEEE1588 API to communicate with a PTP master clock to synchronize the time on the
NIC and, optionally, on the Linux system.
Note that PTP is a time synchronization protocol and cannot be used within DPDK as a time-
stamping mechanism. See the following for an explanation of the protocol: Precision Time
Protocol.

Limitations

The PTP sample application is intended as a simple reference implementation of a PTP client
using the DPDK IEEE1588 API. In order to keep the application simple the following assump-
tions are made:
• The first discovered master is the master for the session.
• Only L2 PTP packets are supported.
• Only the PTP v2 protocol is supported.
• Only the slave clock is implemented.

How the Application Works

Fig. 3.33: PTP Synchronization Protocol

The PTP synchronization in the sample application works as follows:
• The master sends a Sync message; the slave records the time of reception as T2.
• The master sends a Follow Up message containing T1, the time at which the Sync
message was sent.
• The slave sends a Delay Request frame to the PTP master and stores the transmit time
as T3.
• The master sends a Delay Response containing T4, the time at which it received the
Delay Request.
The adjustment for the slave can be represented as:
adj = -[(T2-T1)-(T4-T3)]/2
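
As a minimal illustration of the arithmetic (not taken from the application source), the offset
and the one-way path delay follow directly from the four timestamps:
    /* Sketch: t1..t4 are the four PTP timestamps in nanoseconds. */
    int64_t offset = ((int64_t)(t2 - t1) - (int64_t)(t4 - t3)) / 2;
    int64_t delay  = ((int64_t)(t2 - t1) + (int64_t)(t4 - t3)) / 2;
    /* adj = -offset: the slave clock is corrected by -offset. */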


If the command line parameter -T 1 is used the application also synchronizes the PTP PHC
clock with the Linux kernel clock.

Compiling the Application

To compile the application, export the path to the DPDK source tree and edit the
config/common_linuxapp configuration file to enable IEEE1588:
export RTE_SDK=/path/to/rte_sdk

# Edit common_linuxapp and set the following options:


CONFIG_RTE_LIBRTE_IEEE1588=y

Set the target, for example:


export RTE_TARGET=x86_64-native-linuxapp-gcc

See the DPDK Getting Started Guide for possible RTE_TARGET values.
Build the application as follows:
# Recompile DPDK.
make install T=$RTE_TARGET

# Compile the application.
cd ${RTE_SDK}/examples/ptpclient
make

Running the Application

To run the example in a linuxapp environment:


./build/ptpclient -c 2 -n 4 -- -p 0x1 -T 0

Refer to DPDK Getting Started Guide for general information on running applications and the
Environment Abstraction Layer (EAL) options.
• -p portmask: Hexadecimal portmask.
• -T 0: Update only the PTP slave clock.
• -T 1: Update the PTP slave clock and synchronize the Linux Kernel to the PTP clock.

Code Explanation

The following sections provide an explanation of the main components of the code.
All DPDK library functions used in the sample code are prefixed with rte_ and are explained
in detail in the DPDK API Documentation.

The Main Function

The main() function performs the initialization and calls the execution threads for each lcore.
The first task is to initialize the Environment Abstraction Layer (EAL). The argc and argv
arguments are provided to the rte_eal_init() function. The value returned is the number
of parsed arguments:


int ret = rte_eal_init(argc, argv);
if (ret < 0)
        rte_exit(EXIT_FAILURE, "Error with EAL initialization\n");

We then parse the application-specific arguments:
argc -= ret;
argv += ret;

ret = ptp_parse_args(argc, argv);
if (ret < 0)
        rte_exit(EXIT_FAILURE, "Error with PTP initialization\n");

The main() also allocates a mempool to hold the mbufs (Message Buffers) used by the ap-
plication:
mbuf_pool = rte_mempool_create("MBUF_POOL",
NUM_MBUFS * nb_ports,
MBUF_SIZE,
MBUF_CACHE_SIZE,
sizeof(struct rte_pktmbuf_pool_private),
rte_pktmbuf_pool_init, NULL,
rte_pktmbuf_init, NULL,
rte_socket_id(),
0);

Mbufs are the packet buffer structure used by DPDK. They are explained in detail in the “Mbuf
Library” section of the DPDK Programmer’s Guide.
The main() function also initializes all the ports using the user-defined port_init() function
with the portmask provided by the user:
for (portid = 0; portid < nb_ports; portid++)
if ((ptp_enabled_port_mask & (1 << portid)) != 0) {

if (port_init(portid, mbuf_pool) == 0) {
ptp_enabled_ports[ptp_enabled_port_nb] = portid;
ptp_enabled_port_nb++;
} else {
rte_exit(EXIT_FAILURE, "Cannot init port %"PRIu8 "\n",
portid);
}
}

Once the initialization is complete, the application is ready to launch a function on an lcore. In
this example lcore_main() is called on a single lcore.
lcore_main();

The lcore_main() function is explained below.

The Lcores Main

As we saw above the main() function calls an application function on the available lcores.
The main work of the application is done within the loop:
for (portid = 0; portid < ptp_enabled_port_nb; portid++) {
        portid = ptp_enabled_ports[portid];
        nb_rx = rte_eth_rx_burst(portid, 0, &m, 1);

        if (likely(nb_rx == 0))
                continue;

        if (m->ol_flags & PKT_RX_IEEE1588_PTP)
                parse_ptp_frames(portid, m);

        rte_pktmbuf_free(m);
}

Packets are received one by one on the RX ports and, if required, PTP response packets are
transmitted on the TX ports.
If the offload flags in the mbuf indicate that the packet is a PTP packet then the packet is parsed
to determine which type:
if (m->ol_flags & PKT_RX_IEEE1588_PTP)
parse_ptp_frames(portid, m);

All packets are freed explicitly using rte_pktmbuf_free().
The forwarding loop can be interrupted and the application closed using Ctrl-C.

PTP parsing

The parse_ptp_frames() function processes PTP packets, implementing slave PTP
IEEE1588 L2 functionality.
void
parse_ptp_frames(uint8_t portid, struct rte_mbuf *m) {
struct ptp_header *ptp_hdr;
struct ether_hdr *eth_hdr;
uint16_t eth_type;

eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
eth_type = rte_be_to_cpu_16(eth_hdr->ether_type);

if (eth_type == PTP_PROTOCOL) {
ptp_data.m = m;
ptp_data.portid = portid;
ptp_hdr = (struct ptp_header *)(rte_pktmbuf_mtod(m, char *)
+ sizeof(struct ether_hdr));

switch (ptp_hdr->msgtype) {
case SYNC:
parse_sync(&ptp_data);
break;
case FOLLOW_UP:
parse_fup(&ptp_data);
break;
case DELAY_RESP:
parse_drsp(&ptp_data);
print_clock_info(&ptp_data);
break;
default:
break;
}
}
}

There are 3 types of packets on the RX path which we must parse to create a minimal imple-
mentation of the PTP slave client:
• SYNC packet.


• FOLLOW UP packet.
• DELAY RESPONSE packet.
When we parse the FOLLOW UP packet we also create and send a DELAY_REQUEST
packet. Also, when we parse the DELAY RESPONSE packet, and all conditions are met, we
adjust the PTP slave clock.

Performance Thread Sample Application

The performance thread sample application is a derivative of the standard L3 forwarding appli-
cation that demonstrates different threading models.

Overview

For a general description of the L3 forwarding applications capabilities please refer to the doc-
umentation of the standard application in L3 Forwarding Sample Application.
The performance thread sample application differs from the standard L3 forwarding example
in that it divides the TX and RX processing between different threads, and makes it possible to
assign individual threads to different cores.
Three threading models are considered:
1. When there is one EAL thread per physical core.
2. When there are multiple EAL threads per physical core.
3. When there are multiple lightweight threads per EAL thread.
Since DPDK release 2.0 it is possible to launch applications using the --lcores EAL param-
eter, specifying cpu-sets for a physical core. With the performance thread sample application
it is now also possible to assign individual RX and TX functions to different cores.
As an alternative to dividing the L3 forwarding work between different EAL threads the perfor-
mance thread sample introduces the possibility to run the application threads as lightweight
threads (L-threads) within one or more EAL threads.
In order to facilitate this threading model the example includes a primitive cooperative sched-
uler (L-thread) subsystem. More details of the L-thread subsystem can be found in The L-
thread subsystem.
Note: Whilst theoretically possible, it is not anticipated that multiple L-thread schedulers would
be run on the same physical core. This mode of operation should not be expected to yield
useful performance and is considered invalid.

Compiling the Application

The application is located in the sample application folder in the performance-thread folder.
1. Go to the example applications folder
export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/performance-thread/l3fwd-thread

2. Set the target (a default target is used if not specified). For example:


export RTE_TARGET=x86_64-native-linuxapp-gcc

See the DPDK Linux Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make

Running the Application

The application has a number of command line options:


./build/l3fwd-thread [EAL options] --
-p PORTMASK [-P]
--rx(port,queue,lcore,thread)[,(port,queue,lcore,thread)]
--tx(lcore,thread)[,(lcore,thread)]
[--enable-jumbo] [--max-pkt-len PKTLEN]] [--no-numa]
[--hash-entry-num] [--ipv6] [--no-lthreads] [--stat-lcore lcore]
[--parse-ptype]

Where:
• -p PORTMASK: Hexadecimal bitmask of ports to configure.
• -P: optional, sets all ports to promiscuous mode so that packets are accepted regardless
of the packet’s Ethernet MAC destination address. Without this option, only packets
with the Ethernet MAC destination address set to the Ethernet address of the port are
accepted.
• --rx (port,queue,lcore,thread)[,(port,queue,lcore,thread)]: the list
of NIC RX ports and queues handled by the RX lcores and threads. The parameters
are explained below.
• --tx (lcore,thread)[,(lcore,thread)]: the list of TX threads identifying the
lcore the thread runs on, and the id of RX thread with which it is associated. The param-
eters are explained below.
• --enable-jumbo: optional, enables jumbo frames.
• --max-pkt-len: optional, maximum packet length in decimal (64-9600).
• --no-numa: optional, disables numa awareness.
• --hash-entry-num: optional, specifies the hash entry number in hex to be setup.
• --ipv6: optional, set it if running ipv6 packets.
• --no-lthreads: optional, disables l-thread model and uses EAL threading model. See
below.
• --stat-lcore: optional, run CPU load stats collector on the specified lcore.
• --parse-ptype: optional, set to use software to analyze packet type. Without this
option, hardware will check the packet type.
The parameters of the --rx and --tx options are:
• --rx parameters


port RX port
queue RX queue that will be read on the specified RX port
lcore Core to use for the thread
thread Thread id (continuously from 0 to N)
• --tx parameters
lcore Core to use for L3 route match and transmit
thread Id of RX thread to be associated with this TX thread
The l3fwd-thread application allows you to start packet processing in two threading models:
L-Threads (default) and EAL Threads (when the --no-lthreads parameter is used). For
consistency all parameters are used in the same way for both models.

Running with L-threads

When the L-thread model is used (default option), lcore and thread parameters in --rx/--tx
are used to affinitize threads to the selected scheduler.
For example, the following places every l-thread on different lcores:
l3fwd-thread -c ff -n 2 -- -P -p 3 \
--rx="(0,0,0,0)(1,0,1,1)" \
--tx="(2,0)(3,1)"

The following places RX l-threads on lcore 0 and TX l-threads on lcores 1 and 2:
l3fwd-thread -c ff -n 2 -- -P -p 3 \
--rx="(0,0,0,0)(1,0,0,1)" \
--tx="(1,0)(2,1)"

Running with EAL threads

When the --no-lthreads parameter is used, the L-threading model is turned off and EAL
threads are used for all processing. EAL threads are enumerated in the same way as L-
threads, but the --lcores EAL parameter is used to affinitize threads to the selected cpu-set
(scheduler). Thus it is possible to place every RX and TX thread on different lcores.
For example, the following places every EAL thread on different lcores:
l3fwd-thread -c ff -n 2 -- -P -p 3 \
--rx="(0,0,0,0)(1,0,1,1)" \
--tx="(2,0)(3,1)" \
--no-lthreads

To affinitize two or more EAL threads to one cpu-set, the EAL --lcores parameter is used.
The following places the RX EAL threads on cpu-set 0 and the TX EAL threads on cpu-set 1:
l3fwd-thread -c ff -n 2 --lcores="(0,1)@0,(2,3)@1" -- -P -p 3 \
--rx="(0,0,0,0)(1,0,1,1)" \
--tx="(2,0)(3,1)" \
--no-lthreads


Examples

For selected scenarios the command line configuration of the application for L-threads and its
corresponding EAL threads command line can be realized as follows:
1. Start every thread on different scheduler (1:1):
l3fwd-thread -c ff -n 2 -- -P -p 3 \
--rx="(0,0,0,0)(1,0,1,1)" \
--tx="(2,0)(3,1)"

EAL thread equivalent:


l3fwd-thread -c ff -n 2 -- -P -p 3 \
--rx="(0,0,0,0)(1,0,1,1)" \
--tx="(2,0)(3,1)" \
--no-lthreads

2. Start all threads on one core (N:1).


Start 4 L-threads on lcore 0:
l3fwd-thread -c ff -n 2 -- -P -p 3 \
--rx="(0,0,0,0)(1,0,0,1)" \
--tx="(0,0)(0,1)"

Start 4 EAL threads on cpu-set 0:


l3fwd-thread -c ff -n 2 --lcores="(0-3)@0" -- -P -p 3 \
--rx="(0,0,0,0)(1,0,0,1)" \
--tx="(2,0)(3,1)" \
--no-lthreads

3. Start threads on different cores (N:M).


Start 2 L-threads for RX on lcore 0, and 2 L-threads for TX on lcore 1:
l3fwd-thread -c ff -n 2 -- -P -p 3 \
--rx="(0,0,0,0)(1,0,0,1)" \
--tx="(1,0)(1,1)"

Start 2 EAL threads for RX on cpu-set 0, and 2 EAL threads for TX on cpu-set 1:
l3fwd-thread -c ff -n 2 --lcores="(0-1)@0,(2-3)@1" -- -P -p 3 \
--rx="(0,0,0,0)(1,0,1,1)" \
--tx="(2,0)(3,1)" \
--no-lthreads

Explanation

To a great extent the sample application differs little from the standard L3 forwarding appli-
cation, and readers are advised to familiarize themselves with the material covered in the L3
Forwarding Sample Application documentation before proceeding.
The following explanation is focused on the way threading is handled in the performance thread
example.

Mode of operation with EAL threads

The performance thread sample application has split the RX and TX functionality into two
different threads, and the RX and TX threads are interconnected via software rings. With
respect to these rings the RX threads are producers and the TX threads are consumers.


On initialization the TX and RX threads are started according to the command line parameters.
The RX threads poll the network interface queues and post received packets to a TX thread
via a corresponding software ring.
The TX threads poll software rings, perform the L3 forwarding hash/LPM match, and assemble
packet bursts before performing burst transmit on the network interface.
As with the standard L3 forwarding application, burst draining of residual packets is performed
periodically, with the period calculated from elapsed time using the timestamp counter.
The diagram below illustrates a case with two RX threads and three TX threads.

Mode of operation with L-threads

Like the EAL thread configuration the application has split the RX and TX functionality into
different threads, and the pairs of RX and TX threads are interconnected via software rings.
On initialization an L-thread scheduler is started on every EAL thread. On all but the master
EAL thread only a dummy L-thread is initially started. The L-thread started on the master
EAL thread then spawns other L-threads on different L-thread schedulers according to the
command line parameters.
The RX threads poll the network interface queues and post received packets to a TX thread
via the corresponding software ring.
The ring interface is augmented by means of an L-thread condition variable that enables the
TX thread to be suspended when the TX ring is empty. The RX thread signals the condition
whenever it posts to the TX ring, causing the TX thread to be resumed.
Additionally the TX L-thread spawns a worker L-thread to take care of polling the software
rings, whilst it handles burst draining of the transmit buffer.
The worker threads poll the software rings, perform L3 route lookup and assemble packet
bursts. If the TX ring is empty the worker thread suspends itself by waiting on the condition
variable associated with the ring.
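A sketch of this suspend/resume pattern, assuming an lthread condition variable created
earlier with lthread_cond_init() and an rte_ring connecting the threads (the names
rx_to_tx_ring, ring_cond and handle_packet() are illustrative):
    /* Worker L-thread: suspend while the ring is empty instead of spinning. */
    void *pkt;
    for (;;) {
            while (rte_ring_dequeue(rx_to_tx_ring, &pkt) != 0)
                    lthread_cond_wait(ring_cond, 0); /* resumed by RX signal */
            handle_packet(pkt);
    }

    /* RX L-thread: post to the ring and wake the consumer. */
    rte_ring_enqueue(rx_to_tx_ring, pkt);
    lthread_cond_signal(ring_cond);
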
Burst draining of residual packets, less than the burst size, is performed by the TX thread which
sleeps (using an L-thread sleep function) and resumes periodically to flush the TX buffer.
This design means that L-threads that have no work, can yield the CPU to other L-threads and
avoid having to constantly poll the software rings.
The diagram below illustrates a case with two RX threads and three TX functions (each com-
prising a thread that processes forwarding and a thread that periodically drains the output
buffer of residual packets).

CPU load statistics

It is possible to display statistics showing estimated CPU load on each core. The statistics
indicate the percentage of CPU time spent: processing received packets (forwarding), polling
queues/rings (waiting for work), and doing any other processing (context switch and other
overhead).


When enabled, statistics are gathered by having the application threads set and clear flags
when they enter and exit pertinent code sections. The flags are then sampled in real time by
a statistics collector thread running on another core. This thread displays the data in real time
on the console.
This feature is enabled by designating a statistics collector core, using the --stat-lcore
parameter.

The L-thread subsystem

The L-thread subsystem resides in the examples/performance-thread/common directory and
is built and linked automatically when building the l3fwd-thread example.
The subsystem provides a simple cooperative scheduler to enable arbitrary functions to run as
cooperative threads within a single EAL thread. The subsystem provides a pthread-like API
that is intended to assist in the reuse of legacy code written for POSIX pthreads.
The following sections provide some detail on the features, constraints, performance and port-
ing considerations when using L-threads.

Comparison between L-threads and POSIX pthreads

The fundamental difference between the L-thread and pthread models is the way in which
threads are scheduled. The simplest way to think about this is to consider the case of a
processor with a single CPU. To run multiple threads on a single CPU, the scheduler must fre-
quently switch between the threads, in order that each thread is able to make timely progress.
This is the basis of any multitasking operating system.
This section explores the differences between the pthread model and the L-thread model as
implemented in the provided L-thread subsystem. If needed a theoretical discussion of pre-
emptive vs cooperative multi-threading can be found in any good text on operating system
design.

Scheduling and context switching

The POSIX pthread library provides an application programming interface to create and syn-
chronize threads. Scheduling policy is determined by the host OS, and may be configurable.
The OS may use sophisticated rules to determine which thread should be run next, threads
may suspend themselves or make other threads ready, and the scheduler may employ a time
slice giving each thread a maximum time quantum after which it will be preempted in favor of
another thread that is ready to run. To complicate matters further threads may be assigned
different scheduling priorities.
By contrast the L-thread subsystem is considerably simpler. Logically the L-thread scheduler
performs the same multiplexing function for L-threads within a single pthread as the OS sched-
uler does for pthreads within an application process. The L-thread scheduler is simply the main
loop of a pthread, and insofar as the host OS is concerned it is a regular pthread just like any
other. The host OS is oblivious to the existence of, and not at all involved in, the scheduling of
L-threads.
The other and most significant difference between the two models is that L-threads are sched-
uled cooperatively. L-threads cannot preempt each other, nor can the L-thread scheduler
preempt a running L-thread (i.e. there is no time slicing). The consequence is that programs
implemented with L-threads must possess frequent rescheduling points, meaning that they
must explicitly and of their own volition return to the scheduler at frequent intervals, in order to
allow other L-threads an opportunity to proceed.
In both models switching between threads requires that the current CPU context is saved and
a new context (belonging to the next thread ready to run) is restored. With pthreads this
context switching is handled transparently and the set of CPU registers that must be preserved
between context switches is as per an interrupt handler.
An L-thread context switch is achieved by the thread itself making a function call to the L-thread
scheduler. Thus it is only necessary to preserve the callee-save registers. The caller is respon-
sible for saving and restoring any other registers it is using before a function call, and for restor-
ing them on return; this is handled by the compiler. For X86_64 on both Linux and BSD the
System V calling convention is used; this defines registers RSP, RBP, and R12-R15 as callee-
save registers (for a more detailed discussion a good reference is X86 Calling Conventions).
Taking advantage of this, and due to the absence of preemption, an L-thread context switch is
achieved with less than 20 load/store instructions.
The scheduling policy for L-threads is fixed: there is no prioritization of L-threads, all L-threads
are equal, and scheduling is based on a FIFO ready queue.
An L-thread is a struct containing the CPU context of the thread (saved on context switch)
and other useful items. The ready queue contains pointers to threads that are ready to run.
The L-thread scheduler is a simple loop that polls the ready queue, reads from it the next
thread ready to run, which it resumes by saving the current context (the current position in the
scheduler loop) and restoring the context of the next thread from its thread struct. Thus an
L-thread is always resumed at the last place it yielded.
A well behaved L-thread will call the context switch regularly (at least once in its main loop),
thus returning to the scheduler’s own main loop. Yielding inserts the current thread at the back
of the ready queue, and the process of servicing the ready queue is repeated; thus the system
runs by flipping back and forth between L-threads and the scheduler loop.
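As an illustration only (not the actual subsystem source), the shape of the scheduler loop
and of a well behaved L-thread is roughly the following; all names other than
lthread_yield() are hypothetical:
    /* Hypothetical sketch of a cooperative scheduler main loop. */
    while (!shutdown_requested) {
            struct lthread *lt = ready_queue_pop();   /* FIFO ready queue */
            if (lt != NULL)
                    ctx_switch(&sched_ctx, &lt->ctx); /* resume thread; control
                                                       * returns here on yield */
    }

    /* A cooperating L-thread yields at least once per loop iteration. */
    static void worker(void *arg)
    {
            for (;;) {
                    do_some_work(arg);   /* hypothetical */
                    lthread_yield();     /* back of ready queue */
            }
    }
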
In the case of pthreads, the preemptive scheduling, time slicing, and support for thread prioriti-
zation means that progress is normally possible for any thread that is ready to run. This comes
at the price of a relatively heavier context switch and scheduling overhead.
With L-threads the progress of any particular thread is determined by the frequency of
rescheduling opportunities in the other L-threads. This means that an errant L-thread mo-
nopolizing the CPU might cause scheduling of other threads to be stalled. Due to the lower
cost of context switching, however, voluntary rescheduling to ensure progress of other threads,
if managed sensibly, is not a prohibitive overhead, and overall performance can exceed that of
an application using pthreads.

Mutual exclusion

With pthreads preemption means that threads that share data must observe some form of
mutual exclusion protocol.
The fact that L-threads cannot preempt each other means that in many cases mutual exclusion
devices can be completely avoided.
Locking to protect shared data can be a significant bottleneck in multi-threaded applications,
so a carefully designed cooperatively scheduled program can enjoy significant performance
advantages.
So far we have considered only the simplistic case of a single core CPU; when multiple CPUs
are considered things are somewhat more complex.
First of all it is inevitable that there must be multiple L-thread schedulers, one running on each
EAL thread. So long as these schedulers remain isolated from each other the above assertions
about the potential advantages of cooperative scheduling hold true.
A configuration with isolated cooperative schedulers is less flexible than the pthread model
where threads can be affinitized to run on any CPU. With isolated schedulers scaling of appli-
cations to utilize fewer or more CPUs according to system demand is very difficult to achieve.
The L-thread subsystem makes it possible for L-threads to migrate between schedulers running
on different CPUs. Needless to say if the migration means that threads that share data end up
running on different CPUs then this will introduce the need for some kind of mutual exclusion
system.
Of course rte_ring software rings can always be used to interconnect threads running on
different cores, however to protect other kinds of shared data structures, lock free constructs
or else explicit locking will be required. This is a consideration for the application design.
In support of this extended functionality, the L-thread subsystem implements thread safe mu-
texes and condition variables.
The cost of affinitizing and of condition variable signaling is significantly lower than the equiv-
alent pthread operations, and so applications using these features will see a performance
benefit.

Thread local storage

As with applications written for pthreads an application written for L-threads can take advantage
of thread local storage, in this case local to an L-thread. An application may save and retrieve
a single pointer to application data in the L-thread struct.
For legacy and backward compatibility reasons two alternative methods are also offered. The
first is modelled directly on the pthread get/set specific APIs; the second approach is mod-
elled on the RTE_PER_LCORE macros, whereby PER_LTHREAD macros are introduced. In both
cases the storage is local to the L-thread.
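
A minimal sketch of the get/set specific style, using the lthread_key_create(),
lthread_setspecific() and lthread_getspecific() APIs declared in lthread_api.h
(my_key, the app_ctx type and alloc_app_ctx() are hypothetical):
    unsigned int my_key;

    /* Once, before use: create a key (no destructor in this sketch). */
    lthread_key_create(&my_key, NULL);

    /* Inside an L-thread: store and retrieve a per-L-thread pointer. */
    struct app_ctx *ctx = alloc_app_ctx();   /* hypothetical helper */
    lthread_setspecific(my_key, ctx);
    struct app_ctx *back = lthread_getspecific(my_key);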

Constraints and performance implications when using L-threads

API compatibility

The L-thread subsystem provides a set of functions that are logically equivalent to the cor-
responding functions offered by the POSIX pthread library, however not all pthread functions
have a corresponding L-thread equivalent, and not all features available to pthreads are imple-
mented for L-threads.
The pthread library offers considerable flexibility via programmable attributes that can be as-
sociated with threads, mutexes, and condition variables.
By contrast the L-thread subsystem has fixed functionality: the scheduler policy cannot be
varied, and L-threads cannot be prioritized. There are no variable attributes associated with
any L-thread objects. L-threads, mutexes and condition variables all have fixed functionality.
(Note: reserved parameters are included in the APIs to facilitate possible future support for
attributes).
The table below lists the pthread and equivalent L-thread APIs with notes on differences and/or
constraints. Where there is no L-thread entry in the table, then the L-thread subsystem pro-
vides no equivalent function.

Table 3.22: Pthread and equivalent L-thread APIs.

Pthread function L-thread function Notes


pthread_barrier_destroy
pthread_barrier_init
pthread_barrier_wait
pthread_cond_broadcast lthread_cond_broadcast See note 1
pthread_cond_destroy lthread_cond_destroy
pthread_cond_init lthread_cond_init
pthread_cond_signal lthread_cond_signal See note 1
pthread_cond_timedwait
pthread_cond_wait lthread_cond_wait See note 5
pthread_create lthread_create See notes 2, 3
pthread_detach lthread_detach See note 4
pthread_equal
pthread_exit lthread_exit
pthread_getspecific lthread_getspecific
pthread_getcpuclockid
pthread_join lthread_join
pthread_key_create lthread_key_create
pthread_key_delete lthread_key_delete
pthread_mutex_destroy lthread_mutex_destroy
pthread_mutex_init lthread_mutex_init
pthread_mutex_lock lthread_mutex_lock See note 6
pthread_mutex_trylock lthread_mutex_trylock See note 6
pthread_mutex_timedlock
pthread_mutex_unlock lthread_mutex_unlock
pthread_once
pthread_rwlock_destroy
pthread_rwlock_init
pthread_rwlock_rdlock
pthread_rwlock_timedrdlock
pthread_rwlock_timedwrlock
pthread_rwlock_tryrdlock
pthread_rwlock_trywrlock
pthread_rwlock_unlock
pthread_rwlock_wrlock
pthread_self lthread_current
pthread_setspecific lthread_setspecific
pthread_spin_init See note 10
pthread_spin_destroy See note 10
pthread_spin_lock See note 10
pthread_spin_trylock See note 10
pthread_spin_unlock See note 10
pthread_cancel lthread_cancel
pthread_setcancelstate
pthread_setcanceltype
pthread_testcancel
pthread_getschedparam
pthread_setschedparam
pthread_yield lthread_yield See note 7
pthread_setaffinity_np lthread_set_affinity See notes 2, 3, 8
lthread_sleep See note 9
lthread_sleep_clks See note 9

Note 1:
Neither lthread signal nor broadcast may be called concurrently by L-threads running on differ-
ent schedulers, although multiple L-threads running in the same scheduler may freely perform
signal or broadcast operations. L-threads running on the same or different schedulers may
always safely wait on a condition variable.
Note 2:
Pthread attributes may be used to affinitize a pthread with a cpu-set. The L-thread subsystem
does not support a cpu-set. An L-thread may be affinitized only with a single CPU at any time.
Note 3:
If an L-thread is intended to run on a different NUMA node than the node that creates the
thread then, when calling lthread_create() it is advantageous to specify the destination
core as a parameter of lthread_create(). See Memory allocation and NUMA awareness
for details.
Note 4:
An L-thread can only detach itself, and cannot detach other L-threads.
Note 5:
A wait operation on a pthread condition variable is always associated with and protected by
a mutex which must be owned by the thread at the time it invokes pthread_wait(). By
contrast L-thread condition variables are thread safe (for waiters) and do not use an associated
mutex. Multiple L-threads (including L-threads running on other schedulers) can safely wait on
a L-thread condition variable. As a consequence the performance of an L-thread condition
variables is typically an order of magnitude faster than its pthread counterpart.
Note 6:
Recursive locking is not supported with L-threads, attempts to take a lock recursively will be
detected and rejected.
Note 7:
lthread_yield() will save the current context, insert the current thread to the back of the
ready queue, and resume the next ready thread. Yielding increases ready queue backlog, see
Ready queue backlog for more details about the implications of this.


N.B. The context switch time, as measured from immediately before the call to
lthread_yield() to the point at which the next ready thread is resumed, can be an order of
magnitude faster than the same measurement for pthread_yield().
Note 8:
lthread_set_affinity() is similar to a yield apart from the fact that the yielding thread
is inserted into a peer ready queue of another scheduler. The peer ready queue is actually a
separate thread safe queue, which means that threads appearing in the peer ready queue can
jump any backlog in the local ready queue on the destination scheduler.
The context switch time as measured from the time just before the call to
lthread_set_affinity() to just after the same thread is resumed on the new
scheduler can be orders of magnitude faster than the same measurement for
pthread_setaffinity_np().
Note 9:
Although there is no pthread_sleep() function, lthread_sleep() and
lthread_sleep_clks() can be used wherever sleep(), usleep() or nanosleep()
might ordinarily be used. The L-thread sleep functions suspend the current thread, start an
rte_timer and resume the thread when the timer matures. The rte_timer_manage()
entry point is called on every pass of the scheduler loop. This means that the worst case jitter
on timer expiry is determined by the longest period between context switches of any running
L-threads.
In a synthetic test with many threads sleeping and resuming then the measured jitter is typically
orders of magnitude lower than the same measurement made for nanosleep().
Note 10:
Spin locks are not provided because they are problematical in a cooperative environment, see
Locks and spinlocks for a more detailed discussion on how to avoid spin locks.

Thread local storage

Of the three L-thread local storage options the simplest and most efficient is storing a single
application data pointer in the L-thread struct.
The PER_LTHREAD macros involve a run time computation to obtain the address of the variable
being saved/retrieved and also require that the accesses are de-referenced via a pointer. This
means that code that has used RTE_PER_LCORE macros being ported to L-threads might need
some slight adjustment (see Thread local storage for hints about porting code that makes use
of thread local storage).
The get/set specific APIs are consistent with their pthread counterparts both in use and in
performance.

Memory allocation and NUMA awareness

All memory allocation is from DPDK huge pages, and is NUMA aware. Each scheduler main-
tains its own caches of objects: lthreads, their stacks, TLS, mutexes and condition variables.
These caches are implemented as unbounded lock free MPSC queues. When objects are
created they are always allocated from the caches on the local core (current EAL thread).


If an L-thread has been affinitized to a different scheduler, then it can always safely free re-
sources to the caches from which they originated (because the caches are MPSC queues).
If the L-thread has been affinitized to a different NUMA node then the memory resources
associated with it may incur longer access latency.
The commonly used pattern of setting affinity on entry to a thread after it has started means
that memory allocation for both the stack and TLS will have been made from caches on the
NUMA node on which the thread’s creator is running. This has the side effect that access
latency will be sub-optimal after affinitizing.
This side effect can be mitigated to some extent (although not completely) by specifying the
destination CPU as a parameter of lthread_create(); this causes the L-thread’s stack and
TLS to be allocated when it is first scheduled on the destination scheduler, and if the destination
is on another NUMA node it results in a more optimal memory allocation.
Note that the lthread struct itself remains allocated from memory on the creating node; this is
unavoidable because an L-thread is known everywhere by the address of this struct.

Object cache sizing

The per-lcore object caches pre-allocate objects in bulk whenever a request to allocate an
object finds a cache empty. By default 100 objects are pre-allocated; this is defined by
LTHREAD_PREALLOC in the public API header file lthread_api.h. This means that the caches
constantly grow to meet system demand.
In the present implementation there is no mechanism to reduce the cache sizes if system
demand reduces. Thus the caches will remain at their maximum extent indefinitely.
A consequence of the bulk pre-allocation of objects is that every 100 (default value) additional
new object create operations results in a call to rte_malloc(). For creation of objects such
as L-threads, which trigger the allocation of even more objects (i.e. their stacks and TLS), this
can cause outliers in scheduling performance.
If this is a problem the simplest mitigation strategy is to dimension the system, by setting the
bulk object pre-allocation size to some large number that you do not expect to be exceeded.
This means the caches will be populated once only, the very first time a thread is created.

Ready queue backlog

One of the more subtle performance considerations is managing the ready queue backlog.
The fewer threads that are waiting in the ready queue then the faster any particular thread will
get serviced.
In a naive L-thread application with N L-threads simply looping and yielding, this backlog will
always be equal to the number of L-threads; thus the cost of a yield to a particular L-thread
will be N times the context switch time.
This side effect can be mitigated by arranging for threads to be suspended and wait to be
resumed, rather than polling for work by constantly yielding. Blocking on a mutex or condition
variable or even more obviously having a thread sleep if it has a low frequency workload are all
mechanisms by which a thread can be excluded from the ready queue until it really does need
to be run. This can have a significant positive impact on performance.


Initialization, shutdown and dependencies

The L-thread subsystem depends on DPDK for huge page allocation and depends on the
rte_timer subsystem. The DPDK EAL initialization and rte_timer_subsystem_init()
MUST be completed before the L-thread subsystem can be used.
Thereafter initialization of the L-thread subsystem is largely transparent to the application.
Constructor functions ensure that global variables are properly initialized. Other than global
variables each scheduler is initialized independently the first time that an L-thread is created
by a particular EAL thread.
If the schedulers are to be run as isolated and independent schedulers, with no intention that L-
threads running on different schedulers will migrate between schedulers or synchronize with L-
threads running on other schedulers, then initialization consists simply of creating an L-thread,
and then running the L-thread scheduler.
If there will be interaction between L-threads running on different schedulers, then it is impor-
tant that the starting of schedulers on different EAL threads is synchronized.
To achieve this an additional initialization step is necessary, this is simply to set the number of
schedulers by calling the API function lthread_num_schedulers_set(n), where n is the
number of EAL threads that will run L-thread schedulers. Setting the number of schedulers to
a number greater than 0 will cause all schedulers to wait until the others have started before
beginning to schedule L-threads.
The L-thread scheduler is started by calling the function lthread_run() and should be called
from the EAL thread and thus become the main loop of the EAL thread.
The function lthread_run() will not return until all threads running on the
scheduler have exited, and the scheduler has been explicitly stopped by calling
lthread_scheduler_shutdown(lcore) or lthread_scheduler_shutdown_all().
All these functions do is tell the scheduler that it can exit when there are no longer any running
L-threads; neither function forces any running L-thread to terminate. Any desired application
shutdown behavior must be designed and built into the application to ensure that L-threads
complete in a timely manner.
Important Note: It is assumed when the scheduler exits that the application is terminating
for good. The scheduler does not free resources before exiting, and running the scheduler a
subsequent time will result in undefined behavior.
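
Putting this together, a minimal start-up sketch might look like the following (error handling
omitted; initial_lthread() is a hypothetical application function, everything else is from
the DPDK EAL and L-thread APIs):
    static void initial_lthread(void *arg);  /* hypothetical app entry point */

    static int sched_main(void *arg __rte_unused)
    {
            lthread_run();   /* becomes the main loop of this EAL thread */
            return 0;
    }

    int main(int argc, char **argv)
    {
            struct lthread *lt;

            rte_eal_init(argc, argv);
            rte_timer_subsystem_init();

            /* Synchronize start-up of the schedulers (one per EAL thread). */
            lthread_num_schedulers_set(rte_lcore_count());

            /* Start a scheduler on each slave lcore... */
            rte_eal_mp_remote_launch(sched_main, NULL, SKIP_MASTER);

            /* ...create the first L-thread and run a scheduler here too. */
            lthread_create(&lt, -1, initial_lthread, NULL);
            lthread_run();
            return 0;
    }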

Porting legacy code to run on L-threads

Legacy code originally written for a pthread environment may be ported to L-threads if the
considerations about differences in scheduling policy, and constraints discussed in the previous
sections can be accommodated.
This section looks in more detail at some of the issues that may have to be resolved when
porting code.


pthread API compatibility

The first step is to establish exactly which pthread APIs the legacy application uses, and to
understand the requirements of those APIs. If there are corresponding L-thread APIs, and
where the default pthread functionality is used by the application, then, notwithstanding the
other issues discussed here, it should be feasible to run the application with L-threads. If the
legacy code modifies the default behavior using attributes then it may be necessary to make
some adjustments to eliminate those requirements.

Blocking system API calls

It is important to understand what other system services the application may be using, bearing
in mind that in a cooperatively scheduled environment a thread cannot block without stalling
the scheduler and with it all other cooperative threads. Any kind of blocking system call, for
example file or socket IO, is a potential problem, a good tool to analyze the application for this
purpose is the strace utility.
There are many strategies to resolve these kinds of issues, each with its merits. Possible
solutions include:
• Adopting a polled mode of the system API concerned (if available).
• Arranging for another core to perform the function and synchronizing with that core via
constructs that will not block the L-thread.
• Affinitizing the thread to another scheduler devoted (as a matter of policy) to handling
threads wishing to make blocking calls, and then back again when finished.

Locks and spinlocks

Locks and spinlocks are another source of blocking behavior that for the same reasons as
system calls will need to be addressed.
If the application design ensures that the contending L-threads will always run on the same
scheduler then it is probably safe to remove locks and spin locks completely.
The only exception to the above rule is if for some reason the code performs any kind of context
switch whilst holding the lock (e.g. yield, sleep, or block on a different lock or on a condition
variable). This will need to be determined before deciding to eliminate a lock.
If a lock cannot be eliminated then an L-thread mutex can be substituted for either kind of lock.
An L-thread blocking on an L-thread mutex will be suspended and will cause another ready
L-thread to be resumed, thus not blocking the scheduler. When default behavior is required, it
can be used as a direct replacement for a pthread mutex lock.
Spin locks are typically used when lock contention is likely to be rare and where the period
during which the lock may be held is relatively short. When the contending L-threads are
running on the same scheduler then an L-thread blocking on a spin lock will enter an infinite
loop stopping the scheduler completely (see Infinite loops below).
If the application design ensures that contending L-threads will always run on different sched-
ulers then it might be reasonable to leave a short spin lock that rarely experiences contention
in place.


If after all considerations it appears that a spin lock can neither be eliminated completely, nor
replaced with an L-thread mutex, nor left in place as is, then an alternative is to loop on a flag,
with a call to lthread_yield() inside the loop (n.b. if the contending L-threads might ever
run on different schedulers the flag will need to be manipulated atomically).
Spinning and yielding is the least preferred solution since it introduces ready queue backlog
(see also Ready queue backlog).
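
A sketch of this spin-and-yield pattern, using an rte_atomic32_t flag so that it remains safe
even if the contending L-threads run on different schedulers (lock_flag is hypothetical and
assumed to be initialized elsewhere with rte_atomic32_init()):
    while (rte_atomic32_test_and_set(&lock_flag) == 0)
            lthread_yield();            /* let other L-threads run */
    /* ... critical section ... */
    rte_atomic32_clear(&lock_flag);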

Sleeps and delays

Yet another kind of blocking behavior (albeit momentary) are delay functions like sleep(),
usleep(), nanosleep() etc. All will have the consequence of stalling the L-thread scheduler
and unless the delay is very short (e.g. a very short nanosleep) calls to these functions will
need to be eliminated.
The simplest mitigation strategy is to use the L-thread sleep API functions, of which two
variants exist, lthread_sleep() and lthread_sleep_clks(). These functions start an
rte_timer against the L-thread, suspend the L-thread and cause another ready L-thread to be
resumed. The suspended L-thread is resumed when the rte_timer matures.

Infinite loops

Some applications have threads with loops that contain no inherent rescheduling opportunity,
and rely solely on the OS time slicing to share the CPU. In a cooperative environment this will
stop everything dead. These kinds of loops are not hard to identify; in a debug session you will
find that the debugger is always stopping in the same loop.
The simplest solution to this kind of problem is to insert an explicit lthread_yield() or
lthread_sleep() into the loop. Another solution might be to include the function performed
by the loop into the execution path of some other loop that does in fact yield, if this is possible.
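
For example, a loop such as the following sketch gains an explicit rescheduling point
(do_work() is hypothetical):
    for (;;) {
            do_work();
            lthread_yield();   /* explicit rescheduling point */
    }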

Thread local storage

If the application uses thread local storage, the use case should be studied carefully.
In a legacy pthread application either or both the __thread prefix, or the pthread set/get
specific APIs may have been used to define storage local to a pthread.
In some applications it may be a reasonable assumption that the data could or in fact most
likely should be placed in L-thread local storage.
If the application (like many DPDK applications) has assumed a certain relationship between
a pthread and the CPU to which it is affinitized, there is a risk that thread local storage may
have been used to save some data items that are correctly logically associated with the CPU,
and others items which relate to application context for the thread. Only a good understanding
of the application will reveal such cases.
If the application requires that an L-thread be able to move between schedulers then care
should be taken to separate these kinds of data into per-lcore and per-L-thread storage. In
this way a migrating thread will bring with it the local data it needs, and pick up the new logical
core specific values from pthread local storage at its new home.


Pthread shim

A convenient way to get something working with legacy code can be to use a shim that adapts
pthread API calls to the corresponding L-thread ones. This approach will not mitigate any of
the porting considerations mentioned in the previous sections, but it will reduce the amount
of code churn that would otherwise be involved. It is a reasonable approach to evaluate
L-threads before investing effort in porting to the native L-thread APIs.

Overview

The L-thread subsystem includes an example pthread shim. This is a partial implementation
but does contain the API stubs needed to get basic applications running. There is a simple
“hello world” application that demonstrates the use of the pthread shim.
A subtlety of working with a shim is that the application will still need to make use of the
genuine pthread library functions, at the very least in order to create the EAL threads in which
the L-thread schedulers will run. This is the case with DPDK initialization and exit.
To deal with the initialization and shutdown scenarios, the shim is capable of switching its
adaptor functionality on or off. An application can control this behavior by calling the function
pt_override_set(). The default state is disabled.
The pthread shim uses the dynamic linker loader and saves the loaded addresses of the gen-
uine pthread API functions in an internal table; when the shim functionality is enabled it per-
forms the adaptor function, and when disabled it invokes the genuine pthread function.
The function pthread_exit() has additional special handling. The standard system header
file pthread.h declares pthread_exit() with __attribute__((noreturn)). This is an
optimization that is possible because the pthread is terminating, and it enables the compiler
to omit the normal handling of the stack and protection of registers, since the function is not
expected to return and in fact the thread is being destroyed. These optimizations are applied
in both the callee and the caller of the pthread_exit() function.
In our cooperative scheduling environment this behavior is inadmissible. The pthread is the
L-thread scheduler thread, and, although an L-thread is terminating, there must be a return to
the scheduler in order that the system can continue to run. Further, returning from a function
with attribute noreturn is invalid and may result in undefined behavior.
The solution is to redefine the pthread_exit function with a macro, causing it to be mapped
to a stub function in the shim that does not have the noreturn attribute. This macro is defined
in the file pthread_shim.h. The stub function is otherwise no different than any of the other
stub functions in the shim, and will switch between the real pthread_exit() function or the
lthread_exit() function as required. The only difference is the mapping to the stub by
macro substitution.
A consequence of this is that the file pthread_shim.h must be included in legacy code
wishing to make use of the shim. It also means that dynamic linkage of a pre-compiled binary
that did not include pthread_shim.h is not supported.
Given the requirements for porting legacy code outlined in Porting legacy code to run on L-
threads, most applications will require at least some minimal adjustment and recompilation to
run on L-threads, so pre-compiled binaries are unlikely to be encountered in practice.
In summary the shim approach adds some overhead but can be a useful tool to help establish
the feasibility of a code reuse project. It is also a fairly straightforward task to extend the shim
if necessary.
Note: Bearing in mind the preceding discussion about the impact of making blocking calls,
switching the shim in and out on the fly in order to invoke a pthread API that might block is
something that should typically be avoided.

Building and running the pthread shim

The shim example application is located in the performance-thread folder of the sample
applications.
To build and run the pthread shim example:
1. Go to the example applications folder
export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/performance-thread/pthread_shim

2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc

See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make

4. To run the pthread_shim example:
lthread-pthread-shim -c core_mask -n number_of_channels

L-thread Diagnostics

When debugging you must take account of the fact that the L-threads are run in a single
pthread. The current scheduler is defined by RTE_PER_LCORE(this_sched), and the cur-
rent lthread is stored at RTE_PER_LCORE(this_sched)->current_lthread. Thus on a
breakpoint in a GDB session the current lthread can be obtained by displaying the pthread
local variable per_lcore_this_sched->current_lthread.
Another useful diagnostic feature is the possibility to trace significant events in the life of an
L-thread; this feature is enabled by changing the value of LTHREAD_DIAG from 0 to 1 in the
file lthread_diag_api.h.
Tracing of events can be individually masked, and the mask may be programmed at run time.
An unmasked event results in a callback that provides information about the event. The default
callback simply prints trace information. The default mask is 0 (all events off); the mask can be
modified by calling the function lthread_diagnostic_set_mask().
It is possible to register a user callback function to implement more sophisticated diagnostic
functions. Object creation events (lthread, mutex, and condition variable) accept, and store in
the created object, a user supplied reference value returned by the callback function.
The lthread reference value is passed back in all subsequent event callbacks, and APIs are
provided to retrieve the reference value from mutexes and condition variables. This enables a
user to monitor, count, or filter for specific events on specific objects, for example to monitor for
a specific thread signaling a specific condition variable, or to monitor all timer events; the
possibilities and combinations are endless.


The callback function can be set by calling the function lthread_diagnostic_enable(),
supplying a callback function pointer and an event mask.
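The following is a hedged sketch of such a registration; the callback signature shown follows
the diag_callback typedef in lthread_diag_api.h, which should be consulted for the
authoritative form:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
#include "lthread_diag_api.h"

/* Print each unmasked event; the returned value is stored in newly
 * created objects and handed back as diag_ref in later events. */
static uint64_t
diag_cb(uint64_t time, struct lthread *lt, int diag_event,
        uint64_t diag_ref, const char *text, uint64_t p1, uint64_t p2)
{
    (void)time; (void)p1; (void)p2;
    printf("event %d (%s) lthread=%p ref=%" PRIu64 "\n",
           diag_event, text, (void *)lt, diag_ref);
    return diag_ref;
}

static void
enable_all_diagnostics(void)
{
    /* UINT64_MAX unmasks every event class. */
    lthread_diagnostic_enable(diag_cb, UINT64_MAX);
}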
Setting LTHREAD_DIAG also enables counting of statistics about cache and queue usage, and
these statistics can be displayed by calling the function lthread_diag_stats_display().
This function also performs a consistency check on the caches and queues. The function
should only be called from the master EAL thread after all slave threads have stopped and
returned to the C main program, otherwise the consistency check will fail.

IPsec Security Gateway Sample Application

The IPsec Security Gateway application is an example of a “real world” application using DPDK
cryptodev framework.

Overview

The application demonstrates the implementation of a Security Gateway (not IPsec compliant,
see the Constraints section below) using DPDK based on RFC4301, RFC4303, RFC3602 and
RFC2404.
Internet Key Exchange (IKE) is not implemented, so only manual setting of Security Policies
and Security Associations is supported.
The Security Policies (SP) are implemented as ACL rules, the Security Associations (SA) are
stored in a table and the routing is implemented using LPM.
The application classifies the ports as Protected and Unprotected. Thus, traffic received on an
Unprotected or Protected port is considered Inbound or Outbound respectively.
The Path for IPsec Inbound traffic is:
• Read packets from the port.
• Classify packets between IPv4 and ESP.
• Perform Inbound SA lookup for ESP packets based on their SPI.
• Perform Verification/Decryption.
• Remove ESP and outer IP header
• Inbound SP check using ACL of decrypted packets and any other IPv4 packets.
• Routing.
• Write packet to port.
The Path for the IPsec Outbound traffic is:
• Read packets from the port.
• Perform Outbound SP check using ACL of all IPv4 traffic.
• Perform Outbound SA lookup for packets that need IPsec protection.
• Add ESP and outer IP header.
• Perform Encryption/Digest.


• Routing.
• Write packet to port.

Constraints

• No IPv6 options headers.


• No AH mode.
• Supported algorithms: AES-CBC, AES-CTR, AES-GCM, HMAC-SHA1 and NULL.
• Each SA must be handled by a unique lcore (1 RX queue per port).
• No chained mbufs.

Compiling the Application

To compile the application:


1. Go to the sample application directory:
export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/ipsec-secgw

2. Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc

See the DPDK Getting Started Guide for possible RTE_TARGET values.
3. Build the application:
make

4. [Optional] Build the application for debugging: This option adds some extra flags, disables
compiler optimizations and is verbose:
make DEBUG=1

Running the Application

The application has a number of command line options:


./build/ipsec-secgw [EAL options] --
-p PORTMASK -P -u PORTMASK
--config (port,queue,lcore)[,(port,queue,lcore)]
--single-sa SAIDX
-f CONFIG_FILE_PATH

Where:
• -p PORTMASK: Hexadecimal bitmask of ports to configure.
• -P: optional. Sets all ports to promiscuous mode so that packets are accepted regardless
of the packet’s Ethernet MAC destination address. Without this option, only packets
with the Ethernet MAC destination address set to the Ethernet address of the port are
accepted (default is enabled).
• -u PORTMASK: hexadecimal bitmask of unprotected ports


• --config (port,queue,lcore)[,(port,queue,lcore)]: determines which queues from which
ports are mapped to which cores.
• --single-sa SAIDX: use a single SA for outbound traffic, bypassing the SP on both
Inbound and Outbound. This option is meant for debugging/performance purposes.
• -f CONFIG_FILE_PATH: the full path of text-based file containing all configuration
items for running the application (See Configuration file syntax section below). -f
CONFIG_FILE_PATH must be specified. ONLY the UNIX format configuration file is
accepted.
The mapping of lcores to port/queues is similar to other l3fwd applications.
For example, given the following command line:
./build/ipsec-secgw -l 20,21 -n 4 --socket-mem 0,2048 \
--vdev "cryptodev_null_pmd" -- -p 0xf -P -u 0x3 \
--config="(0,0,20),(1,0,20),(2,0,21),(3,0,21)" \
-f /path/to/config_file \

where each option means:


• The -l option enables cores 20 and 21.
• The -n option sets the number of memory channels to 4.
• The --socket-mem option allocates 2GB on socket 1.
• The --vdev "cryptodev_null_pmd" option creates a virtual NULL cryptodev PMD.
• The -p option enables ports 0, 1, 2 and 3.
• The -P option enables promiscuous mode.
• The -u option sets ports 0 and 1 as unprotected, leaving 2 and 3 as protected.
• The --config option enables one queue per port with the following mapping:
Port Queue lcore Description
0 0 20 Map queue 0 from port 0 to lcore 20.
1 0 20 Map queue 0 from port 1 to lcore 20.
2 0 21 Map queue 0 from port 2 to lcore 21.
3 0 21 Map queue 0 from port 3 to lcore 21.
• The -f /path/to/config_file option enables the application to read and parse the
specified configuration file, and configures the application with a given set of SP, SA and
Routing entries accordingly. The syntax of the configuration file will be explained below
in more detail. Please note the parser only accepts UNIX format text files. Other formats
such as DOS/MAC format will cause a parse error.
Refer to the DPDK Getting Started Guide for general information on running applications and
the Environment Abstraction Layer (EAL) options.
The application does a best effort to “map” crypto devices to cores, with hardware devices
having priority. Hardware devices, if present, are assigned to a core before software ones.
This means that if the application is using a single core and both hardware and software crypto
devices are detected, hardware devices will be used.
A way to force the use of virtual crypto devices is to whitelist the Ethernet devices needed,
thereby implicitly blacklisting all hardware crypto devices.


For example, something like the following command line:


./build/ipsec-secgw -l 20,21 -n 4 --socket-mem 0,2048 \
-w 81:00.0 -w 81:00.1 -w 81:00.2 -w 81:00.3 \
--vdev "cryptodev_aesni_mb_pmd" --vdev "cryptodev_null_pmd" \
-- \
-p 0xf -P -u 0x3 --config="(0,0,20),(1,0,20),(2,0,21),(3,0,21)" \
-f sample.cfg

Configurations

The following sections provide the syntax of configurations to initialize your SP, SA and Routing
tables. Configurations shall be specified in the configuration file to be passed to the application.
The file is then parsed by the application. The successful parsing will result in the appropriate
rules being applied to the tables accordingly.

Configuration File Syntax

As mentioned in the overview, the Security Policies are ACL rules. The application parses the
rules specified in the configuration file, passes them to the ACL table, and replicates them
per socket in use.
The configuration file syntax is as follows.

General rule syntax

The parser treats one line in the configuration file as one configuration item (unless the line
concatenation symbol exists). Every configuration item shall follow the syntax of either SP, SA,
or Routing rules specified below.
The configuration parser supports the following special symbols:
• Comment symbol #. Any character from this symbol to the end of line is treated as
comment and will not be parsed.
• Line concatenation symbol \. This symbol shall be placed at the end of a line to
concatenate it with the line below. Concatenation of multiple lines is supported.

SP rule syntax

The SP rule syntax is shown as follows:


sp <ip_ver> <dir> esp <action> <priority> <src_ip> <dst_ip>
<proto> <sport> <dport>

where each option means:


<ip_ver>
• IP protocol version
• Optional: No
• Available options:
– ipv4: IP protocol version 4


– ipv6: IP protocol version 6


<dir>
• The traffic direction
• Optional: No
• Available options:
– in: inbound traffic
– out: outbound traffic
<action>
• IPsec action
• Optional: No
• Available options:
– protect <SA_idx>: the specified traffic is protected by SA rule with id SA_idx
– bypass: the specified traffic is bypassed
– discard: the specified traffic is discarded
<priority>
• Rule priority
• Optional: Yes, default priority 0 will be used
• Syntax: pri <id>
<src_ip>
• The source IP address and mask
• Optional: Yes, default address 0.0.0.0 and mask of 0 will be used
• Syntax:
– src X.X.X.X/Y for IPv4
– src XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX/Y for IPv6
<dst_ip>
• The destination IP address and mask
• Optional: Yes, default address 0.0.0.0 and mask of 0 will be used
• Syntax:
– dst X.X.X.X/Y for IPv4
– dst XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX/Y for IPv6
<proto>
• The protocol start and end range
• Optional: yes, default range of 0 to 0 will be used
• Syntax: proto X:Y


<sport>
• The source port start and end range
• Optional: yes, default range of 0 to 0 will be used
• Syntax: sport X:Y
<dport>
• The destination port start and end range
• Optional: yes, default range of 0 to 0 will be used
• Syntax: dport X:Y
Example SP rules:
sp ipv4 out esp protect 105 pri 1 dst 192.168.115.0/24 sport 0:65535 \
dport 0:65535

sp ipv6 in esp bypass pri 1 dst 0000:0000:0000:0000:5555:5555:\


0000:0000/96 sport 0:65535 dport 0:65535

SA rule syntax

The successfully parsed SA rules will be stored in an array table.


The SA rule syntax is shown as follows:
sa <dir> <spi> <cipher_algo> <cipher_key> <auth_algo> <auth_key>
<mode> <src_ip> <dst_ip>

where each option means:


<dir>
• The traffic direction
• Optional: No
• Available options:
– in: inbound traffic
– out: outbound traffic
<spi>
• The SPI number
• Optional: No
• Syntax: unsigned integer number
<cipher_algo>
• Cipher algorithm
• Optional: No
• Available options:
– null: NULL algorithm
– aes-128-cbc: AES-CBC 128-bit algorithm


– aes-128-ctr : AES-CTR 128-bit algorithm


– aes-128-gcm: AES-GCM 128-bit algorithm
• Syntax: cipher_algo <your algorithm>
<cipher_key>
• Cipher key, NOT available when ‘null’ algorithm is used
• Optional: No, must follow the <cipher_algo> option
• Syntax: Hexadecimal bytes (0x0-0xFF) concatenated by the colon symbol ‘:’. The number
of bytes should be the same as the key size of the specified cipher algorithm.
For example: cipher_key A1:B2:C3:D4:A1:B2:C3:D4:A1:B2:C3:D4: A1:B2:C3:D4
<auth_algo>
• Authentication algorithm
• Optional: No
• Available options:
– null: NULL algorithm
– sha1-hmac: HMAC SHA1 algorithm
– aes-128-gcm: AES-GCM 128-bit algorithm
<auth_key>
• Authentication key, NOT available when ‘null’ or ‘aes-128-gcm’ algorithm is used.
• Optional: No, must follow the <auth_algo> option
• Syntax: Hexadecimal bytes (0x0-0xFF) concatenated by the colon symbol ‘:’. The number
of bytes should be the same as the key size of the specified authentication algorithm.
For example: auth_key A1:B2:C3:D4:A1:B2:C3:D4:A1:B2:C3:D4:A1:B2:C3:D4:
A1:B2:C3:D4
<mode>
• The operation mode
• Optional: No
• Available options:
– ipv4-tunnel: Tunnel mode for IPv4 packets
– ipv6-tunnel: Tunnel mode for IPv6 packets
– transport: transport mode
• Syntax: mode XXX
<src_ip>
• The source IP address. This option is not available when transport mode is used
• Optional: Yes, default address 0.0.0.0 will be used
• Syntax:


– src X.X.X.X for IPv4


– src XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX for IPv6
<dst_ip>
• The destination IP address. This option is not available when transport mode is used
• Optional: Yes, default address 0.0.0.0 will be used
• Syntax:
– dst X.X.X.X for IPv4
– dst XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX for IPv6
Example SA rules:
sa out 5 cipher_algo null auth_algo null mode ipv4-tunnel \
src 172.16.1.5 dst 172.16.2.5

sa out 25 cipher_algo aes-128-cbc \


cipher_key c3:c3:c3:c3:c3:c3:c3:c3:c3:c3:c3:c3:c3:c3:c3:c3 \
auth_algo sha1-hmac \
auth_key c3:c3:c3:c3:c3:c3:c3:c3:c3:c3:c3:c3:c3:c3:c3:c3:c3:c3:c3:c3 \
mode ipv6-tunnel \
src 1111:1111:1111:1111:1111:1111:1111:5555 \
dst 2222:2222:2222:2222:2222:2222:2222:5555

sa in 105 cipher_algo aes-128-gcm \


cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
auth_algo aes-128-gcm \
mode ipv4-tunnel src 172.16.2.5 dst 172.16.1.5

Routing rule syntax

The Routing rule syntax is shown as follows:


rt <ip_ver> <src_ip> <dst_ip> <port>

where each option means:


<ip_ver>
• IP protocol version
• Optional: No
• Available options:
– ipv4: IP protocol version 4
– ipv6: IP protocol version 6
<src_ip>
• The source IP address and mask
• Optional: Yes, default address 0.0.0.0 and mask of 0 will be used
• Syntax:
– src X.X.X.X/Y for IPv4
– src XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX/Y for IPv6


<dst_ip>
• The destination IP address and mask
• Optional: Yes, default address 0.0.0.0 and mask of 0 will be used
• Syntax:
– dst X.X.X.X/Y for IPv4
– dst XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX/Y for IPv6
<port>
• The traffic output port id
• Optional: yes, default output port 0 will be used
• Syntax: port X
Example Routing rules:
rt ipv4 dst 172.16.1.5/32 port 0

rt ipv6 dst 1111:1111:1111:1111:1111:1111:1111:5555/116 port 0

Figures
Fig. 3.1 Packet Flow
Fig. 3.2 Kernel NIC Application Packet Flow
Fig. 3.4 Performance Benchmark Setup (Basic Environment)
Fig. 3.5 Performance Benchmark Setup (Virtualized Environment)
Fig. 3.6 Performance Benchmark Setup (Basic Environment)
Fig. 3.7 Performance Benchmark Setup (Virtualized Environment)
Fig. 3.3 Encryption flow Through the L2 Forwarding with Crypto Application
Fig. 3.9 A typical IPv4 ACL rule
Fig. 3.10 Rules example
Fig. 3.11 Load Balancer Application Architecture
Fig. 3.13 Example Data Flow in a Symmetric Multi-process Application
Fig. 3.14 Example Data Flow in a Client-Server Symmetric Multi-process Application
Fig. 3.15 Master-slave Process Workflow
Fig. 3.16 Slave Process Recovery Process Flow
Fig. 3.17 QoS Scheduler Application Architecture
Fig. 3.18 Intel® QuickAssist Technology Application Block Diagram
Fig. 3.19 Pipeline Overview
Fig. 3.20 Ring-based Processing Pipeline Performance Setup
Fig. 3.21 Threads and Pipelines
Fig. 3.22 Packet Flow Through the VMDQ and DCB Sample Application


Fig. 3.26 Test Pipeline Application


Fig. 3.27 Performance Benchmarking Setup (Basic Environment)
Fig. 3.28 Distributor Sample Application Layout
Fig. 3.29 Highlevel Solution
Fig. 3.30 VM request to scale frequency
Fig. 3.31 Overlay Networking
Fig. 3.32 TEP termination Framework Overview
Fig. 3.33 PTP Synchronization Protocol
Fig. 3.12 Using EFD as a Flow-Level Load Balancer
Tables
Table 3.1 Output Traffic Marking
Table 3.2 Entity Types
Table 3.21 Table Types



CHAPTER 4

Programmer’s Guide

Introduction

This document provides software architecture information, development environment
information and optimization guidelines.
For programming examples and for instructions on compiling and running each sample appli-
cation, see the DPDK Sample Applications User Guide for details.
For general information on compiling and running applications, see the DPDK Getting Started
Guide.

Documentation Roadmap

The following is a list of DPDK documents in the suggested reading order:


• Release Notes : Provides release-specific information, including supported features,
limitations, fixed issues, known issues and so on. Also, provides the answers to frequently
asked questions in FAQ format.
• Getting Started Guide : Describes how to install and configure the DPDK software;
designed to get users up and running quickly with the software.
• FreeBSD* Getting Started Guide : A document describing the use of the DPDK with
FreeBSD* has been added in DPDK Release 1.6.0. Refer to this guide for installation
and configuration instructions to get started using the DPDK with FreeBSD*.
• Programmer’s Guide (this document): Describes:
– The software architecture and how to use it (through examples), specifically in a
Linux* application (linuxapp) environment
– The content of the DPDK, the build system (including the commands that can be
used in the root DPDK Makefile to build the development kit and an application) and
guidelines for porting an application
– Optimizations used in the software and those that should be considered for new
development
A glossary of terms is also provided.
• API Reference : Provides detailed information about DPDK functions, data structures
and other programming constructs.


• Sample Applications User Guide: Describes a set of sample applications. Each chap-
ter describes a sample application that showcases specific functionality and provides
instructions on how to compile, run and use the sample application.

Related Publications

The following documents provide information that is relevant to the development of applications
using the DPDK:
• Intel® 64 and IA-32 Architectures Software Developer’s Manual Volume 3A: System Pro-
gramming Guide
Part 1: Architecture Overview

Overview

This section gives a global overview of the architecture of Data Plane Development Kit (DPDK).
The main goal of the DPDK is to provide a simple, complete framework for fast packet process-
ing in data plane applications. Users may use the code to understand some of the techniques
employed, to build upon for prototyping or to add their own protocol stacks. Alternative ecosys-
tem options that use the DPDK are available.
The framework creates a set of libraries for specific environments through the creation of an
Environment Abstraction Layer (EAL), which may be specific to a mode of the Intel® architec-
ture (32-bit or 64-bit), Linux* user space compilers or a specific platform. These environments
are created through the use of make files and configuration files. Once the EAL library is cre-
ated, the user may link with the library to create their own applications. Other libraries, outside
of EAL, including the Hash, Longest Prefix Match (LPM) and rings libraries are also provided.
Sample applications are provided to help show the user how to use various features of the
DPDK.
The DPDK implements a run to completion model for packet processing, where all resources
must be allocated prior to calling Data Plane applications, running as execution units on logical
processing cores. The model does not support a scheduler and all devices are accessed by
polling. The primary reason for not using interrupts is the performance overhead imposed by
interrupt processing.
In addition to the run-to-completion model, a pipeline model may also be used by passing
packets or messages between cores via the rings. This allows work to be performed in stages
and may allow more efficient use of code on cores.

Development Environment

The DPDK project installation requires Linux and the associated toolchain, such as one or more
compilers, assembler, make utility, editor and various libraries to create the DPDK components
and libraries.
Once these libraries are created for the specific environment and architecture, they may then
be used to create the user’s data plane application.


When creating applications for the Linux user space, the glibc library is used. For DPDK
applications, two environmental variables (RTE_SDK and RTE_TARGET) must be configured
before compiling the applications. The following are examples of how the variables can be set:
export RTE_SDK=/home/user/DPDK
export RTE_TARGET=x86_64-native-linuxapp-gcc

See the DPDK Getting Started Guide for information on setting up the development environ-
ment.

Environment Abstraction Layer

The Environment Abstraction Layer (EAL) provides a generic interface that hides the environ-
ment specifics from the applications and libraries. The services provided by the EAL are:
• DPDK loading and launching
• Support for multi-process and multi-thread execution types
• Core affinity/assignment procedures
• System memory allocation/de-allocation
• Atomic/lock operations
• Time reference
• PCI bus access
• Trace and debug functions
• CPU feature identification
• Interrupt handling
• Alarm operations
• Memory management (malloc)
The EAL is fully described in Environment Abstraction Layer .

Core Components

The core components are a set of libraries that provide all the elements needed for high-
performance packet processing applications.

Fig. 4.1: Core Components Architecture

Ring Manager (librte_ring)

The ring structure provides a lockless multi-producer, multi-consumer FIFO API in a finite size
table. It has some advantages over lockless queues; easier to implement, adapted to bulk
operations and faster. A ring is used by the Memory Pool Manager (librte_mempool) and
may be used as a general communication mechanism between cores and/or execution blocks
connected together on a logical core.
This ring buffer and its usage are fully described in Ring Library .


Memory Pool Manager (librte_mempool)

The Memory Pool Manager is responsible for allocating pools of objects in memory. A pool
is identified by name and uses a ring to store free objects. It provides some other optional
services, such as a per-core object cache and an alignment helper to ensure that objects are
padded to spread them equally on all RAM channels.
This memory pool allocator is described in Mempool Library .

Network Packet Buffer Management (librte_mbuf)

The mbuf library provides the facility to create and destroy buffers that may be used by the
DPDK application to store message buffers. The message buffers are created at startup time
and stored in a mempool, using the DPDK mempool library.
This library provides an API to allocate/free mbufs, manipulate control message buffers (ctrlm-
buf) which are generic message buffers, and packet buffers (pktmbuf) which are used to carry
network packets.
Network Packet Buffer Management is described in Mbuf Library .

Timer Manager (librte_timer)

This library provides a timer service to DPDK execution units, providing the ability to execute
a function asynchronously. It can be periodic function calls, or just a one-shot call. It uses
the timer interface provided by the Environment Abstraction Layer (EAL) to get a precise time
reference and can be initiated on a per-core basis as required.
The library documentation is available in Timer Library .

Ethernet* Poll Mode Driver Architecture

The DPDK includes Poll Mode Drivers (PMDs) for 1 GbE, 10 GbE and 40GbE, and para virtu-
alized virtio Ethernet controllers which are designed to work without asynchronous, interrupt-
based signaling mechanisms.
See Poll Mode Driver .

Packet Forwarding Algorithm Support

The DPDK includes Hash (librte_hash) and Longest Prefix Match (LPM,librte_lpm) libraries to
support the corresponding packet forwarding algorithms.
See Hash Library and LPM Library for more information.

librte_net

The librte_net library is a collection of IP protocol definitions and convenience macros. It is
based on code from the FreeBSD* IP stack and contains protocol numbers (for use in IP
headers), IP-related macros, IPv4/IPv6 header structures and TCP, UDP and SCTP header
structures.


Environment Abstraction Layer

The Environment Abstraction Layer (EAL) is responsible for gaining access to low-level re-
sources such as hardware and memory space. It provides a generic interface that hides the
environment specifics from the applications and libraries. It is the responsibility of the initial-
ization routine to decide how to allocate these resources (that is, memory space, PCI devices,
timers, consoles, and so on).
Typical services expected from the EAL are:
• DPDK Loading and Launching: The DPDK and its application are linked as a single
application and must be loaded by some means.
• Core Affinity/Assignment Procedures: The EAL provides mechanisms for assigning exe-
cution units to specific cores as well as creating execution instances.
• System Memory Reservation: The EAL facilitates the reservation of different memory
zones, for example, physical memory areas for device interactions.
• PCI Address Abstraction: The EAL provides an interface to access PCI address space.
• Trace and Debug Functions: Logs, dump_stack, panic and so on.
• Utility Functions: Spinlocks and atomic counters that are not provided in libc.
• CPU Feature Identification: Determine at runtime if a particular feature, for example,
Intel® AVX is supported. Determine if the current CPU supports the feature set that the
binary was compiled for.
• Interrupt Handling: Interfaces to register/unregister callbacks to specific interrupt
sources.
• Alarm Functions: Interfaces to set/remove callbacks to be run at a specific time.

EAL in a Linux-userland Execution Environment

In a Linux user space environment, the DPDK application runs as a user-space application
using the pthread library. PCI information about devices and address space is discovered
through the /sys kernel interface and through kernel modules such as uio_pci_generic, or
igb_uio. Refer to the UIO: User-space drivers documentation in the Linux kernel. This memory
is mmap’d in the application.
The EAL performs physical memory allocation using mmap() in hugetlbfs (using huge page
sizes to increase performance). This memory is exposed to DPDK service layers such as the
Mempool Library .
At this point, the DPDK services layer will be initialized, then through pthread setaffinity calls,
each execution unit will be assigned to a specific logical core to run as a user-level thread.
The time reference is provided by the CPU Time-Stamp Counter (TSC) or by the HPET kernel
API through a mmap() call.

Initialization and Core Launching

Part of the initialization is done by the start function of glibc. A check is also performed at
initialization time to ensure that the micro architecture type chosen in the config file is supported
by the CPU. Then, the main() function is called. The core initialization and launch is done
in rte_eal_init() (see the API documentation). It consists of calls to the pthread library (more
specifically, pthread_self(), pthread_create(), and pthread_setaffinity_np()).

Fig. 4.2: EAL Initialization in a Linux Application Environment
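From the application's point of view, the sequence reduces to the following minimal sketch
(standard EAL launch APIs; the lcore_main() body is a placeholder):

#include <stdio.h>
#include <rte_eal.h>
#include <rte_launch.h>
#include <rte_lcore.h>
#include <rte_debug.h>

static int
lcore_main(void *arg)
{
    (void)arg;
    printf("hello from lcore %u\n", rte_lcore_id());
    return 0;
}

int
main(int argc, char **argv)
{
    /* Parses EAL arguments, reserves hugepage memory and creates one
     * pinned pthread per enabled lcore. */
    if (rte_eal_init(argc, argv) < 0)
        rte_panic("Cannot init EAL\n");

    /* Run lcore_main() on every lcore, including the master. */
    rte_eal_mp_remote_launch(lcore_main, NULL, CALL_MASTER);
    rte_eal_mp_wait_lcore();
    return 0;
}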

Note: Initialization of objects, such as memory zones, rings, memory pools, lpm tables and
hash tables, should be done as part of the overall application initialization on the master lcore.
The creation and initialization functions for these objects are not multi-thread safe. However,
once initialized, the objects themselves can safely be used in multiple threads simultaneously.

Multi-process Support

The Linuxapp EAL allows a multi-process as well as a multi-threaded (pthread) deployment
model. See chapter Multi-process Support for more details.

Memory Mapping Discovery and Memory Reservation

The allocation of large contiguous physical memory is done using the hugetlbfs kernel filesys-
tem. The EAL provides an API to reserve named memory zones in this contiguous memory.
The physical address of the reserved memory for that memory zone is also returned to the
user by the memory zone reservation API.

Note: Memory reservations done using the APIs provided by rte_malloc are also backed by
pages from the hugetlbfs filesystem.

Xen Dom0 support without hugetlbfs

The existing memory management implementation is based on the Linux kernel hugepage
mechanism. However, Xen Dom0 does not support hugepages, so a new Linux kernel module
rte_dom0_mm is added to workaround this limitation.
The EAL uses the IOCTL interface to notify the Linux kernel module rte_dom0_mm to allocate
memory of the specified size and to get all memory segment information from the module, and
it uses the MMAP interface to map the allocated memory. For each memory segment,
the physical addresses are contiguous within it but actual hardware addresses are contiguous
within 2MB.

PCI Access

The EAL uses the /sys/bus/pci utilities provided by the kernel to scan the content on the PCI
bus. To access PCI memory, a kernel module called uio_pci_generic provides a /dev/uioX
device file and resource files in /sys that can be mmap’d to obtain access to PCI address
space from the application. The DPDK-specific igb_uio module can also be used for this. Both
drivers use the uio kernel feature (userland driver).


Per-lcore and Shared Variables

Note: lcore refers to a logical execution unit of the processor, sometimes called a hardware
thread.

Shared variables are the default behavior. Per-lcore variables are implemented using Thread
Local Storage (TLS) to provide per-thread local storage.
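For example, a per-lcore counter can be declared with the macros from rte_per_lcore.h (a
minimal sketch):

#include <stdint.h>
#include <rte_per_lcore.h>

/* One independent copy of this counter exists per EAL thread. */
static RTE_DEFINE_PER_LCORE(uint64_t, rx_pkts);

static inline void
count_rx(void)
{
    /* Touches only the calling lcore's copy; no locking is needed. */
    RTE_PER_LCORE(rx_pkts)++;
}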

Logs

A logging API is provided by EAL. By default, in a Linux application, logs are sent to syslog and
also to the console. However, the log function can be overridden by the user to use a different
logging mechanism.

Trace and Debug Functions

There are some debug functions to dump the stack in glibc. The rte_panic() function can
voluntarily provoke a SIG_ABORT, which can trigger the generation of a core file, readable by
gdb.

CPU Feature Identification

The EAL can query the CPU at runtime (using the rte_cpu_get_feature() function) to determine
which CPU features are available.

User Space Interrupt Event

• User Space Interrupt and Alarm Handling in Host Thread


The EAL creates a host thread to poll the UIO device file descriptors to detect the interrupts.
Callbacks can be registered or unregistered by the EAL functions for a specific interrupt event
and are called in the host thread asynchronously. The EAL also allows timed callbacks to be
used in the same way as for NIC interrupts.

Note: In DPDK PMD, the only interrupts handled by the dedicated host thread are those for
link status change, i.e. link up and link down notification.

• RX Interrupt Event
The receive and transmit routines provided by each PMD are not limited to executing in polling
thread mode. To ease idle polling under tiny throughput, it is useful to pause the polling
and wait until a wake-up event happens. The RX interrupt is the first choice for such a wake-up
event, but probably won’t be the only one.
EAL provides the event APIs for this event-driven thread mode. Taking linuxapp as an example,
the implementation relies on epoll. Each thread can monitor an epoll instance in which all the
wake-up events’ file descriptors are added. The event file descriptors are created and mapped
to the interrupt vectors according to the UIO/VFIO spec. From bsdapp’s perspective, kqueue
is the alternative way, but not implemented yet.
EAL initializes the mapping between event file descriptors and interrupt vectors, while each
device initializes the mapping between interrupt vectors and queues. In this way, EAL actually
is unaware of the interrupt cause on the specific vector. The eth_dev driver takes responsibility
to program the latter mapping.

Note: Per queue RX interrupt events are only allowed in VFIO, which supports multiple MSI-X
vectors. In UIO, the RX interrupt shares the same vector with the other interrupt causes. In
this case, when the RX interrupt and LSC (link status change) interrupt are both enabled
(intr_conf.lsc == 1 && intr_conf.rxq == 1), only the former is supported.

RX interrupts are controlled/enabled/disabled by the ethdev APIs rte_eth_dev_rx_intr_*.
They return failure if the PMD does not support them yet. The intr_conf.rxq flag is used to turn
on the capability of RX interrupt per device.
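A hedged sketch of the per-queue control flow using these APIs (error handling trimmed; the
port and queue IDs are placeholders):

#include <rte_ethdev.h>

static void
wait_for_traffic(uint8_t port_id, uint16_t queue_id)
{
    /* Returns a negative value if the PMD does not support it. */
    if (rte_eth_dev_rx_intr_enable(port_id, queue_id) != 0)
        return; /* stay in pure polling mode */

    /* ... block on the epoll instance holding this queue's event fd ... */

    /* Resume busy polling once woken up. */
    rte_eth_dev_rx_intr_disable(port_id, queue_id);
}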

Blacklisting

The EAL PCI device blacklist functionality can be used to mark certain NIC ports as blacklisted,
so they are ignored by the DPDK. The ports to be blacklisted are identified using the PCIe*
description (Domain:Bus:Device.Function).

Misc Functions

Locks and atomic operations are per-architecture (i686 and x86_64).

Memory Segments and Memory Zones (memzone)

The mapping of physical memory is provided by this feature in the EAL. As physical memory
can have gaps, the memory is described in a table of descriptors, and each descriptor (called
rte_memseg ) describes a contiguous portion of memory.
On top of this, the memzone allocator’s role is to reserve contiguous portions of physical mem-
ory. These zones are identified by a unique name when the memory is reserved.
The rte_memzone descriptors are also located in the configuration structure. This structure is
accessed using rte_eal_get_configuration(). The lookup (by name) of a memory zone returns
a descriptor containing the physical address of the memory zone.
Memory zones can be reserved with specific start address alignment by supplying the align
parameter (by default, they are aligned to cache line size). The alignment value should be a
power of two and not less than the cache line size (64 bytes). Memory zones can also be
reserved from either 2 MB or 1 GB hugepages, provided that both are available on the system.
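A minimal reservation sketch using this API (the zone name and size are arbitrary examples):

#include <stdio.h>
#include <inttypes.h>
#include <rte_memzone.h>
#include <rte_lcore.h>

static void
reserve_example(void)
{
    /* Reserve 1 MB on the caller's NUMA socket; the zone can later be
     * looked up by name with rte_memzone_lookup(). */
    const struct rte_memzone *mz;

    mz = rte_memzone_reserve("example_zone", 1 << 20, rte_socket_id(), 0);
    if (mz != NULL)
        printf("virt=%p phys=0x%" PRIx64 "\n",
               mz->addr, (uint64_t)mz->phys_addr);
}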

Multiple pthread

DPDK usually pins one pthread per core to avoid the overhead of task switching. This allows
for significant performance gains, but lacks flexibility and is not always efficient.


Power management helps to improve the CPU efficiency by limiting the CPU runtime frequency.
However, alternately it is possible to utilize the idle cycles available to take advantage of the
full capability of the CPU.
By taking advantage of cgroup, the CPU utilization quota can be simply assigned. This gives
another way to improve the CPU efficiency, however, there is a prerequisite; DPDK must handle
the context switching between multiple pthreads per core.
For further flexibility, it is useful to set pthread affinity not only to a CPU but to a CPU set.

EAL pthread and lcore Affinity

The term “lcore” refers to an EAL thread, which is really a Linux/FreeBSD pthread. “EAL
pthreads” are created and managed by EAL and execute the tasks issued by remote_launch.
In each EAL pthread, there is a TLS (Thread Local Storage) called _lcore_id for unique identi-
fication. As EAL pthreads usually bind 1:1 to the physical CPU, the _lcore_id is typically equal
to the CPU ID.
When using multiple pthreads, however, the binding is no longer always 1:1 between an EAL
pthread and a specified physical CPU. The EAL pthread may have affinity to a CPU set, and
as such the _lcore_id will not be the same as the CPU ID. For this reason, there is an EAL
long option ‘–lcores’ defined to assign the CPU affinity of lcores. For a specified lcore ID or ID
group, the option allows setting the CPU set for that EAL pthread.
The format pattern: --lcores=’<lcore_set>[@cpu_set][,<lcore_set>[@cpu_set],...]’
‘lcore_set’ and ‘cpu_set’ can be a single number, range or a group.
A number is a “digit([0-9]+)”; a range is “<number>-<number>”; a group is “(<num-
ber|range>[,<number|range>,...])”.
If a ‘@cpu_set’ value is not supplied, the value of ‘cpu_set’ will default to the value of ‘lcore_set’.
For example, "--lcores='1,2@(5-7),(3-5)@(0,2),(0,6),7-8'" which means starting 9 EAL threads;
lcore 0 runs on cpuset 0x41 (cpu 0,6);
lcore 1 runs on cpuset 0x2 (cpu 1);
lcore 2 runs on cpuset 0xe0 (cpu 5,6,7);
lcore 3,4,5 runs on cpuset 0x5 (cpu 0,2);
lcore 6 runs on cpuset 0x41 (cpu 0,6);
lcore 7 runs on cpuset 0x80 (cpu 7);
lcore 8 runs on cpuset 0x100 (cpu 8).

Using this option, for each given lcore ID, the associated CPUs can be assigned. It’s also
compatible with the pattern of the corelist (‘-l’) option.

non-EAL pthread support

It is possible to use the DPDK execution context with any user pthread (aka. Non-EAL
pthreads). In a non-EAL pthread, the _lcore_id is always LCORE_ID_ANY which identifies
that it is not an EAL thread with a valid, unique, _lcore_id. Some libraries will use an alter-
native unique ID (e.g. TID), some will not be impacted at all, and some will work but with
limitations (e.g. timer and mempool libraries).
All these impacts are mentioned in Known Issues section.


Public Thread API

There are two public APIs, rte_thread_set_affinity() and
rte_thread_get_affinity(), introduced for threads. When they are used in any
pthread context, the Thread Local Storage (TLS) will be set/get.
Those TLS include _cpuset and _socket_id:
• _cpuset stores the CPUs bitmap to which the pthread is affinitized.
• _socket_id stores the NUMA node of the CPU set. If the CPUs in CPU set belong to
different NUMA node, the _socket_id will be set to SOCKET_ID_ANY.
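A brief usage sketch from a non-EAL pthread (the CPU numbers are placeholders):

#include <rte_lcore.h>

static void
pin_to_cpus_2_and_3(void)
{
    rte_cpuset_t cpuset;

    CPU_ZERO(&cpuset);
    CPU_SET(2, &cpuset);
    CPU_SET(3, &cpuset);

    /* Sets the pthread affinity and updates the _cpuset and
     * _socket_id TLS values described above. */
    rte_thread_set_affinity(&cpuset);

    /* Reads back the stored _cpuset TLS value. */
    rte_thread_get_affinity(&cpuset);
}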

Known Issues

• rte_mempool
The rte_mempool uses a per-lcore cache inside the mempool. For non-EAL pthreads,
rte_lcore_id() will not return a valid number. So for now, when rte_mempool is used
with non-EAL pthreads, the put/get operations will bypass the default mempool cache and
there is a performance penalty because of this bypass. Only user-owned external caches
can be used in a non-EAL context in conjunction with rte_mempool_generic_put()
and rte_mempool_generic_get() that accept an explicit cache parameter.
• rte_ring
rte_ring supports multi-producer enqueue and multi-consumer dequeue. However, it is
non-preemptive, this has a knock on effect of making rte_mempool non-preemptable.

Note: The “non-preemptive” constraint means:


– a pthread doing multi-producers enqueues on a given ring must not be preempted
by another pthread doing a multi-producer enqueue on the same ring.
– a pthread doing multi-consumers dequeues on a given ring must not be preempted
by another pthread doing a multi-consumer dequeue on the same ring.
Bypassing this constraint may cause the 2nd pthread to spin until the 1st one is scheduled
again. Moreover, if the 1st pthread is preempted by a context that has a higher priority,
it may even cause a deadlock.

This does not mean it cannot be used, simply, there is a need to narrow down the situation
when it is used by multi-pthread on the same core.
1. It CAN be used for any single-producer or single-consumer situation.
2. It MAY be used by multi-producer/consumer pthreads whose scheduling policies are
all SCHED_OTHER (cfs). Users SHOULD be aware of the performance penalty before
using it.
3. It MUST not be used by multi-producer/consumer pthreads, whose scheduling poli-
cies are SCHED_FIFO or SCHED_RR.
RTE_RING_PAUSE_REP_COUNT is defined for rte_ring to reduce contention. It is mainly
for case 2: a yield is issued after a number of pause repeats.


It adds a sched_yield() syscall if the thread spins for too long while waiting on the other
thread to finish its operations on the ring. This gives the preempted thread a chance to
proceed and finish with the ring enqueue/dequeue operation.
• rte_timer
Running rte_timer_manager() on a non-EAL pthread is not allowed. However, re-
setting/stopping the timer from a non-EAL pthread is allowed.
• rte_log
In non-EAL pthreads, there is no per thread loglevel and logtype, global loglevels are
used.
• misc
The debug statistics of rte_ring, rte_mempool and rte_timer are not supported in a non-
EAL pthread.

cgroup control

The following is a simple example of cgroup control usage. There are two pthreads (t0 and t1)
doing packet I/O on the same core ($cpu). We expect only 50% of the CPU to be spent on
packet I/O.
mkdir /sys/fs/cgroup/cpu/pkt_io
mkdir /sys/fs/cgroup/cpuset/pkt_io

echo $cpu > /sys/fs/cgroup/cpuset/pkt_io/cpuset.cpus

echo $t0 > /sys/fs/cgroup/cpu/pkt_io/tasks
echo $t0 > /sys/fs/cgroup/cpuset/pkt_io/tasks

echo $t1 > /sys/fs/cgroup/cpu/pkt_io/tasks
echo $t1 > /sys/fs/cgroup/cpuset/pkt_io/tasks

cd /sys/fs/cgroup/cpu/pkt_io
echo 100000 > cpu.cfs_period_us
echo 50000 > cpu.cfs_quota_us

Malloc

The EAL provides a malloc API to allocate any-sized memory.


The objective of this API is to provide malloc-like functions to allow allocation from hugepage
memory and to facilitate application porting. The DPDK API Reference manual describes the
available functions.
Typically, these kinds of allocations should not be done in data plane processing because they
are slower than pool-based allocation and make use of locks within the allocation and free
paths. However, they can be used in configuration code.
Refer to the rte_malloc() function description in the DPDK API Reference manual for more
information.


Cookies

When CONFIG_RTE_MALLOC_DEBUG is enabled, the allocated memory contains overwrite
protection fields to help identify buffer overflows.

Alignment and NUMA Constraints

The rte_malloc() takes an align argument that can be used to request a memory area that is
aligned on a multiple of this value (which must be a power of two).
On systems with NUMA support, a call to the rte_malloc() function will return memory that has
been allocated on the NUMA socket of the core which made the call. A set of APIs is also
provided, to allow memory to be explicitly allocated on a NUMA socket directly, or by allocated
on the NUMA socket where another core is located, in the case where the memory is to be
used by a logical core other than on the one doing the memory allocation.
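For example (a short sketch of the aligned and socket-aware variants):

#include <rte_malloc.h>

static void
alloc_example(void)
{
    /* 4 KB from hugepage memory on the caller's NUMA socket, aligned
     * to 64 bytes; the type string is only a debugging aid. */
    void *local = rte_malloc("cfg_buf", 4096, 64);

    /* Explicit allocation on NUMA socket 1. */
    void *remote = rte_malloc_socket("cfg_buf", 4096, 64, 1);

    rte_free(local);
    rte_free(remote);
}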

Use Cases

This API is meant to be used by an application that requires malloc-like functions at initialization
time.
For allocating/freeing data at runtime, in the fast-path of an application, the memory pool library
should be used instead.

Internal Implementation

Data Structures

There are two data structure types used internally in the malloc library:
• struct malloc_heap - used to track free space on a per-socket basis
• struct malloc_elem - the basic element of allocation and free-space tracking inside the
library.

Structure: malloc_heap

The malloc_heap structure is used to manage free space on a per-socket basis. Internally,
there is one heap structure per NUMA node, which allows us to
allocate memory to a thread based on the NUMA node on which this thread runs. While this
does not guarantee that the memory will be used on that NUMA node, it is no worse than a
scheme where the memory is always allocated on a fixed or random node.
The key fields of the heap structure and their function are described below (see also diagram
above):
• lock - the lock field is needed to synchronize access to the heap. Given that the free
space in the heap is tracked using a linked list, we need a lock to prevent two threads
manipulating the list at the same time.
• free_head - this points to the first element in the list of free nodes for this malloc heap.


Note: The malloc_heap structure does not keep track of in-use blocks of memory, since these
are never touched except when they are to be freed again - at which point the pointer to the
block is an input to the free() function.

Fig. 4.3: Example of a malloc heap and malloc elements within the malloc library

Structure: malloc_elem

The malloc_elem structure is used as a generic header structure for various blocks of memory.
It is used in three different ways - all shown in the diagram above:
1. As a header on a block of free or allocated memory - normal case
2. As a padding header inside a block of memory
3. As an end-of-memseg marker
The most important fields in the structure and how they are used are described below.

Note: If the usage of a particular field in one of the above three usages is not described, the
field can be assumed to have an undefined value in that situation, for example, for padding
headers only the “state” and “pad” fields have valid values.

• heap - this pointer is a reference back to the heap structure from which this block was
allocated. It is used for normal memory blocks when they are being freed, to add the
newly-freed block to the heap’s free-list.
• prev - this pointer points to the header element/block in the memseg immediately behind
the current one. When freeing a block, this pointer is used to reference the previous block
to check if that block is also free. If so, then the two free blocks are merged to form a
single larger block.
• next_free - this pointer is used to chain the free-list of unallocated memory blocks to-
gether. It is only used in normal memory blocks; on malloc() to find a suitable free
block to allocate and on free() to add the newly freed element to the free-list.
• state - This field can have one of three values: FREE, BUSY or PAD. The former two are
to indicate the allocation state of a normal memory block and the latter is to indicate that
the element structure is a dummy structure at the end of the start-of-block padding, i.e.
where the start of the data within a block is not at the start of the block itself, due to
alignment constraints. In that case, the pad header is used to locate the actual malloc
element header for the block. For the end-of-memseg structure, this is always a BUSY
value, which ensures that no element, on being freed, searches beyond the end of the
memseg for other blocks to merge with into a larger free area.
• pad - this holds the length of the padding present at the start of the block. In the case
of a normal block header, it is added to the address of the end of the header to give the
address of the start of the data area, i.e. the value passed back to the application on
a malloc. Within a dummy header inside the padding, this same value is stored, and is
subtracted from the address of the dummy header to yield the address of the actual block
header.
• size - the size of the data block, including the header itself. For end-of-memseg struc-
tures, this size is given as zero, though it is never actually checked. For normal blocks
which are being freed, this size value is used in place of a “next” pointer to identify the
location of the next block of memory; if that next block is FREE, the two free blocks
can be merged into one.

Memory Allocation

On EAL initialization, all memsegs are setup as part of the malloc heap. This setup involves
placing a dummy structure at the end with BUSY state, which may contain a sentinel value if
CONFIG_RTE_MALLOC_DEBUG is enabled, and a proper element header with FREE at the start
for each memseg. The FREE element is then added to the free_list for the malloc heap.
When an application makes a call to a malloc-like function, the malloc function will first index the
lcore_config structure for the calling thread, and determine the NUMA node of that thread.
The NUMA node is used to index the array of malloc_heap structures which is passed as a
parameter to the heap_alloc() function, along with the requested size, type, alignment and
boundary parameters.
The heap_alloc() function will scan the free_list of the heap, and attempt to find a free block
suitable for storing data of the requested size, with the requested alignment and boundary
constraints.
When a suitable free element has been identified, the pointer to be returned to the user is
calculated. The cache-line of memory immediately preceding this pointer is filled with a struct
malloc_elem header. Because of alignment and boundary constraints, there could be free
space at the start and/or end of the element, resulting in the following behavior:
1. Check for trailing space. If the trailing space is big enough, i.e. > 128 bytes, then the free
element is split. If it is not, then we just ignore it (wasted space).
2. Check for space at the start of the element. If the space at the start is small, i.e. <=128
bytes, then a pad header is used, and the remaining space is wasted. If, however, the
remaining space is greater, then the free element is split.
The advantage of allocating the memory from the end of the existing element is that no ad-
justment of the free list needs to take place - the existing element on the free list just has its
size pointer adjusted, and the following element has its “prev” pointer redirected to the newly
created element.

Freeing Memory

To free an area of memory, the pointer to the start of the data area is passed to the free
function. The size of the malloc_elem structure is subtracted from this pointer to get the
element header for the block. If this header is of type PAD then the pad length is further
subtracted from the pointer to get the proper element header for the entire block.
From this element header, we get pointers to the heap from which the block was allocated and
to where it must be freed, as well as the pointer to the previous element, and via the size field,
we can calculate the pointer to the next element. These next and previous elements are then
checked to see if they are also FREE, and if so, they are merged with the current element. This
means that we can never have two FREE memory blocks adjacent to one another, as they are
always merged into a single block.


Ring Library

The ring allows the management of queues. Instead of having a linked list of infinite size, the
rte_ring has the following properties:
• FIFO
• Maximum size is fixed, the pointers are stored in a table
• Lockless implementation
• Multi-consumer or single-consumer dequeue
• Multi-producer or single-producer enqueue
• Bulk dequeue - Dequeues the specified count of objects if successful; otherwise fails
• Bulk enqueue - Enqueues the specified count of objects if successful; otherwise fails
• Burst dequeue - Dequeue the maximum available objects if the specified count cannot
be fulfilled
• Burst enqueue - Enqueue the maximum available objects if the specified count cannot
be fulfilled
The advantages of this data structure over a linked list queue are as follows:
• Faster; only requires a single Compare-And-Swap instruction of sizeof(void *) instead of
several double-Compare-And-Swap instructions.
• Simpler than a full lockless queue.
• Adapted to bulk enqueue/dequeue operations. As pointers are stored in a table, a de-
queue of several objects will not produce as many cache misses as in a linked queue.
Also, a bulk dequeue of many objects does not cost more than a dequeue of a simple
object.
The disadvantages:
• Size is fixed
• Having many rings costs more in terms of memory than a linked list queue. An empty
ring contains at least N pointers.
A simplified representation of a Ring is shown in Fig. 4.4, with consumer and producer head
and tail pointers to objects stored in the data structure.

Fig. 4.4: Ring Structure

References for Ring Implementation in FreeBSD*

The following code was added in FreeBSD 8.0, and is used in some network device drivers (at
least in Intel drivers):
• bufring.h in FreeBSD
• bufring.c in FreeBSD


Lockless Ring Buffer in Linux*

The following is a link describing the Linux Lockless Ring Buffer Design.

Additional Features

Name

A ring is identified by a unique name. It is not possible to create two rings with the same name
(rte_ring_create() returns NULL if this is attempted).

Water Marking

The ring can have a high water mark (threshold). Once an enqueue operation reaches the high
water mark, the producer is notified, if the water mark is configured.
This mechanism can be used, for example, to exert a back pressure on I/O to inform the LAN
to PAUSE.

Debug

When debug is enabled (CONFIG_RTE_LIBRTE_RING_DEBUG is set), the library stores
some per-ring statistic counters about the number of enqueues/dequeues. These statistics
are per-core to avoid concurrent accesses or atomic operations.

Use Cases

Use cases for the Ring library include:


• Communication between applications in the DPDK
• Used by memory pool allocator
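A short sketch of the inter-core communication case (single-producer/single-consumer flags
are chosen for simplicity; the message pointer is a placeholder):

#include <rte_ring.h>
#include <rte_lcore.h>

static void
ring_example(void *msg)
{
    struct rte_ring *r;
    void *out;

    /* The count must be a power of two; one slot stays empty internally. */
    r = rte_ring_create("msg_ring", 1024, rte_socket_id(),
                        RING_F_SP_ENQ | RING_F_SC_DEQ);
    if (r == NULL)
        return;

    if (rte_ring_enqueue(r, msg) == 0 &&
        rte_ring_dequeue(r, &out) == 0) {
        /* out now holds msg */
    }
}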

Anatomy of a Ring Buffer

This section explains how a ring buffer operates. The ring structure is composed of two head
and tail couples; one is used by producers and one is used by the consumers. The figures of
the following sections refer to them as prod_head, prod_tail, cons_head and cons_tail.
Each figure represents a simplified state of the ring, which is a circular buffer. The content
of the function local variables is represented on the top of the figure, and the content of ring
structure is represented on the bottom of the figure.

Single Producer Enqueue

This section explains what occurs when a producer adds an object to the ring. In this example,
only the producer head and tail (prod_head and prod_tail) are modified, and there is only one
producer.
The initial state is to have a prod_head and prod_tail pointing at the same location.


Enqueue First Step

First, ring->prod_head and ring->cons_tail are copied in local variables. The prod_next lo-
cal variable points to the next element of the table, or several elements after in case of bulk
enqueue.
If there is not enough room in the ring (this is detected by checking cons_tail), it returns an
error.

Fig. 4.5: Enqueue first step

Enqueue Second Step

The second step is to modify ring->prod_head in ring structure to point to the same location
as prod_next.
A pointer to the added object is copied in the ring (obj4).

Fig. 4.6: Enqueue second step

Enqueue Last Step

Once the object is added in the ring, ring->prod_tail in the ring structure is modified to point to
the same location as ring->prod_head. The enqueue operation is finished.

Fig. 4.7: Enqueue last step

Single Consumer Dequeue

This section explains what occurs when a consumer dequeues an object from the ring. In this
example, only the consumer head and tail (cons_head and cons_tail) are modified and there
is only one consumer.
The initial state is to have a cons_head and cons_tail pointing at the same location.

Dequeue First Step

First, ring->cons_head and ring->prod_tail are copied in local variables. The cons_next local
variable points to the next element of the table, or several elements after in the case of bulk
dequeue.
If there are not enough objects in the ring (this is detected by checking prod_tail), it returns an
error.


Fig. 4.8: Dequeue first step

Dequeue Second Step

The second step is to modify ring->cons_head in the ring structure to point to the same location
as cons_next.
The pointer to the dequeued object (obj1) is copied in the pointer given by the user.

Fig. 4.9: Dequeue second step

Dequeue Last Step

Finally, ring->cons_tail in the ring structure is modified to point to the same location as ring-
>cons_head. The dequeue operation is finished.

Fig. 4.10: Dequeue last step

Multiple Producers Enqueue

This section explains what occurs when two producers concurrently add an object to the ring.
In this example, only the producer head and tail (prod_head and prod_tail) are modified.
The initial state is to have a prod_head and prod_tail pointing at the same location.

Multiple Producers Enqueue First Step

On both cores, ring->prod_head and ring->cons_tail are copied in local variables. The
prod_next local variable points to the next element of the table, or several elements after in
the case of bulk enqueue.
If there is not enough room in the ring (this is detected by checking cons_tail), it returns an
error.

Multiple Producers Enqueue Second Step

The second step is to modify ring->prod_head in the ring structure to point to the same location
as prod_next. This operation is done using a Compare And Swap (CAS) instruction, which
does the following operations atomically:
• If ring->prod_head is different to local variable prod_head, the CAS operation fails, and
the code restarts at first step.
• Otherwise, ring->prod_head is set to local prod_next, the CAS operation is successful,
and processing continues.
In the figure, the operation succeeded on core 1, and step one restarted on core 2.


Fig. 4.11: Multiple producer enqueue first step

Fig. 4.12: Multiple producer enqueue second step

Multiple Producers Enqueue Third Step

The CAS operation is retried on core 2 with success.
Core 1 updates one element of the ring (obj4), and core 2 updates another one (obj5).

Fig. 4.13: Multiple producer enqueue third step

Multiple Producers Enqueue Fourth Step

Each core now wants to update ring->prod_tail. A core can only update it if ring->prod_tail is
equal to the prod_head local variable. This is only true on core 1. The operation is finished on
core 1.

Multiple Producers Enqueue Last Step

Once ring->prod_tail is updated by core 1, core 2 is allowed to update it too. The operation is
also finished on core 2.

Modulo 32-bit Indexes

In the preceding figures, the prod_head, prod_tail, cons_head and cons_tail indexes are represented by arrows. In the actual implementation, these values are not between 0 and size(ring)-1 as might be assumed. The indexes are between 0 and 2^32-1, and we mask their value when we access the pointer table (the ring itself). 32-bit modulo also implies that operations on indexes (such as add/subtract) will automatically do 2^32 modulo if the result overflows the 32-bit number range.
The following are two examples that help to explain how indexes are used in a ring.

Note: To simplify the explanation, operations with modulo 16-bit are used instead of modulo
32-bit. In addition, the four indexes are defined as unsigned 16-bit integers, as opposed to
unsigned 32-bit integers in the more realistic case.

In the first example (Fig. 4.16), the ring contains 11000 entries.
In the second example (Fig. 4.17), the ring contains 12536 entries.

Note: For ease of understanding, we use explicit modulo 65536 operations in the above examples. In real execution, such an operation would be redundant and inefficient; the wrap-around happens automatically when the result overflows the index type.


Fig. 4.14: Multiple producer enqueue fourth step

Fig. 4.15: Multiple producer enqueue last step

The code always maintains a distance between producer and consumer between 0 and
size(ring)-1. Thanks to this property, we can do subtractions between 2 index values in a
modulo-32bit base: that’s why the overflow of the indexes is not a problem.
At any time, entries and free_entries are between 0 and size(ring)-1, even if only the first term
of subtraction has overflowed:
uint32_t entries = (prod_tail - cons_head);
uint32_t free_entries = (mask + cons_tail - prod_head);
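
The following minimal sketch, using the 16-bit indexes of the examples above (values are
illustrative), shows why the wrap-around is harmless:

#include <stdint.h>

static void
wrap_example(void)
{
    uint16_t prod_tail = 10;     /* has wrapped past 65535 */
    uint16_t cons_head = 64000;
    /* 10 - 64000 wraps modulo 65536 and still yields the correct count: */
    uint16_t entries = (uint16_t)(prod_tail - cons_head);   /* == 1546 */
    (void)entries;
}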

References

• bufring.h in FreeBSD (version 8)


• bufring.c in FreeBSD (version 8)
• Linux Lockless Ring Buffer Design

Mempool Library

A memory pool is an allocator of fixed-size objects. In the DPDK, it is identified by name and
uses a mempool handler to store free objects. The default mempool handler is ring based. It
provides some other optional services such as a per-core object cache and an alignment helper
to ensure that objects are padded to spread them equally on all DRAM or DDR3 channels.
This library is used by the Mbuf Library .

Cookies

In debug mode (CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG is enabled), cookies are added at the beginning and end of allocated blocks. The allocated objects then contain overwrite protection fields to help debugging buffer overflows.

Stats

In debug mode (CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG is enabled), statistics about get from/put in the pool are stored in the mempool structure. Statistics are per-lcore to avoid concurrent access to statistics counters.

Fig. 4.16: Modulo 32-bit indexes - Example 1


Fig. 4.17: Modulo 32-bit indexes - Example 2

Memory Alignment Constraints

Depending on hardware memory configuration, performance can be greatly improved by adding a specific padding between objects. The objective is to ensure that the beginning of each object starts on a different channel and rank in memory so that all channels are equally loaded.
This is particularly true for packet buffers when doing L3 forwarding or flow classification. Only
the first 64 bytes are accessed, so performance can be increased by spreading the start ad-
dresses of objects among the different channels.
The number of ranks on any DIMM is the number of independent sets of DRAMs that can be
accessed for the full data bit-width of the DIMM. The ranks cannot be accessed simultaneously
since they share the same data path. The physical layout of the DRAM chips on the DIMM itself
does not necessarily relate to the number of ranks.
When running an application, the EAL command line options provide the ability to add the
number of memory channels and ranks.

Note: The command line must always have the number of memory channels specified for the
processor.
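
For example, an application on a platform with four memory channels and two ranks per DIMM
might be launched as follows (the core list and application name are illustrative):

./my_app -l 0-3 -n 4 -r 2

Here -n gives the number of memory channels and -r the number of ranks; both are EAL options.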

Examples of alignment for different DIMM architectures are shown in Fig. 4.18 and Fig. 4.19.

Fig. 4.18: Two Channels and Quad-ranked DIMM Example

In this case, the assumption is that a packet is 16 blocks of 64 bytes, which does not hold in the general case.
The Intel® 5520 chipset has three channels, so in most cases, no padding is required between objects (except for objects whose size is n x 3 x 64 byte blocks).

Fig. 4.19: Three Channels and Two Dual-ranked DIMM Example

When creating a new pool, the user can specify to use this feature or not.

Local Cache

In terms of CPU usage, the cost of multiple cores accessing a memory pool’s ring of free
buffers may be high since each access requires a compare-and-set (CAS) operation. To avoid
having too many access requests to the memory pool’s ring, the memory pool allocator can
maintain a per-core cache and do bulk requests to the memory pool’s ring, via the cache with
many fewer locks on the actual memory pool structure. In this way, each core has full access to its own cache (with no locks) of free objects, and only when the cache fills does the core need to shuffle some of the free objects back to the pool's ring, or obtain more objects when the cache is empty.


While this may mean a number of buffers may sit idle on some core’s cache, the speed at
which a core can access its own cache for a specific memory pool without locks provides
performance gains.
The cache is composed of a small, per-core table of pointers and its length (used as a stack).
This internal cache can be enabled or disabled at creation of the pool.
The maximum size of the cache is static and is defined at compilation time (CON-
FIG_RTE_MEMPOOL_CACHE_MAX_SIZE).
Fig. 4.20 shows a cache in operation.

Fig. 4.20: A mempool in Memory with its Associated Ring

Alternatively to the internal default per-lcore local cache, an application can cre-
ate and manage external caches through the rte_mempool_cache_create(),
rte_mempool_cache_free() and rte_mempool_cache_flush() calls. These
user-owned caches can be explicitly passed to rte_mempool_generic_put() and
rte_mempool_generic_get(). The rte_mempool_default_cache() call returns the
default internal cache if any. In contrast to the default caches, user-owned caches can be
used by non-EAL threads too.
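
The following is a minimal sketch of a user-owned cache, assuming mp is an existing mempool and
that the 17.05 signatures apply (the trailing flags argument of the generic get/put calls was
deprecated at that point and removed in later releases):

#include <rte_mempool.h>

static void
user_cache_example(struct rte_mempool *mp)
{
    /* Cache of up to 64 objects, usable from a non-EAL thread. */
    struct rte_mempool_cache *cache;
    void *objs[16];

    cache = rte_mempool_cache_create(64, SOCKET_ID_ANY);
    if (cache == NULL)
        return;
    if (rte_mempool_generic_get(mp, objs, 16, cache, 0) == 0) {
        /* ... use the objects ... */
        rte_mempool_generic_put(mp, objs, 16, cache, 0);
    }
    /* Return any cached objects to the pool before freeing the cache. */
    rte_mempool_cache_flush(cache, mp);
    rte_mempool_cache_free(cache);
}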

Mempool Handlers

The mempool handler mechanism allows external memory subsystems, such as external hardware memory management systems and software based memory allocators, to be used with DPDK.
There are two aspects to a mempool handler.
• Adding the code for your new mempool operations (ops). This is achieved by adding a
new mempool ops code, and using the MEMPOOL_REGISTER_OPS macro.
• Using the new API to call rte_mempool_create_empty() and
rte_mempool_set_ops_byname() to create a new mempool and specifying
which ops to use.
Several different mempool handlers may be used in the same application. A new mem-
pool can be created by using the rte_mempool_create_empty() function, then using
rte_mempool_set_ops_byname() to point the mempool to the relevant mempool handler
callback (ops) structure.
Legacy applications may continue to use the old rte_mempool_create() API call, which
uses a ring based mempool handler by default. These applications will need to be modified to
use a new mempool handler.
For applications that use rte_pktmbuf_pool_create(), there is a config setting (RTE_MBUF_DEFAULT_MEMPOOL_OPS) that allows the application to make use of an alternative mempool handler.
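
A minimal creation sketch follows; the sizes are illustrative and "ring_mp_mc" is the name of
the default multi-producer/multi-consumer ring handler (a custom handler registered with
MEMPOOL_REGISTER_OPS would be selected the same way):

#include <rte_mempool.h>

static struct rte_mempool *
create_pool_with_handler(void)
{
    struct rte_mempool *mp;

    /* 8191 objects of 2048 bytes, with a 256-object per-lcore cache. */
    mp = rte_mempool_create_empty("sketch_pool", 8191, 2048,
                                  256, 0, SOCKET_ID_ANY, 0);
    if (mp == NULL)
        return NULL;
    if (rte_mempool_set_ops_byname(mp, "ring_mp_mc", NULL) != 0 ||
        rte_mempool_populate_default(mp) < 0) {
        rte_mempool_free(mp);
        return NULL;
    }
    return mp;
}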

Use Cases

All allocations that require a high level of performance should use a pool-based memory allo-
cator. Below are some examples:


• Mbuf Library
• Environment Abstraction Layer , for logging service
• Any application that needs to allocate fixed-sized objects in the data plane and that will
be continuously utilized by the system.

Mbuf Library

The mbuf library provides the ability to allocate and free buffers (mbufs) that may be used by
the DPDK application to store message buffers. The message buffers are stored in a mempool,
using the Mempool Library .
A rte_mbuf struct can carry network packet buffers or generic control buffers (indicated by the
CTRL_MBUF_FLAG). This can be extended to other types. The rte_mbuf header structure is
kept as small as possible and currently uses just two cache lines, with the most frequently used
fields being on the first of the two cache lines.

Design of Packet Buffers

For the storage of the packet data (including protocol headers), two approaches were consid-
ered:
1. Embed metadata within a single memory buffer: the structure followed by a fixed size area
for the packet data.
2. Use separate memory buffers for the metadata structure and for the packet data.
The advantage of the first method is that it only needs one operation to allocate/free the whole
memory representation of a packet. On the other hand, the second method is more flexible
and allows the complete separation of the allocation of metadata structures from the allocation
of packet data buffers.
The first method was chosen for the DPDK. The metadata contains control information such as
message type, length, offset to the start of the data and a pointer for additional mbuf structures
allowing buffer chaining.
Message buffers that are used to carry network packets can handle buffer chaining where
multiple buffers are required to hold the complete packet. This is the case for jumbo frames
that are composed of many mbufs linked together through their next field.
For a newly allocated mbuf, the area at which the data begins in the message buffer is
RTE_PKTMBUF_HEADROOM bytes after the beginning of the buffer, which is cache aligned.
Message buffers may be used to carry control information, packets, events, and so on between
different entities in the system. Message buffers may also use their buffer pointers to point to
other message buffer data sections or other structures.
Fig. 4.21 and Fig. 4.22 show some of these scenarios.

Fig. 4.21: An mbuf with One Segment

Fig. 4.22: An mbuf with Three Segments


The Buffer Manager implements a fairly standard set of buffer access functions to manipulate
network packets.

Buffers Stored in Memory Pools

The Buffer Manager uses the Mempool Library to allocate buffers. Therefore, it ensures
that the packet header is interleaved optimally across the channels and ranks for L3 pro-
cessing. An mbuf contains a field indicating the pool that it originated from. When calling
rte_ctrlmbuf_free(m) or rte_pktmbuf_free(m), the mbuf returns to its original pool.

Constructors

Packet and control mbuf constructors are provided by the API. The rte_pktmbuf_init() and rte_ctrlmbuf_init() functions initialize some fields in the mbuf structure that are not modified by the user once created (mbuf type, origin pool, buffer start address, and so on). These functions are given as callback functions to the rte_mempool_create() function at pool creation time.

Allocating and Freeing mbufs

Allocating a new mbuf requires the user to specify the mempool from which the mbuf
should be taken. For any newly-allocated mbuf, it contains one segment, with a length
of 0. The offset to data is initialized to have some bytes of headroom in the buffer
(RTE_PKTMBUF_HEADROOM).
Freeing a mbuf means returning it into its original mempool. The content of an mbuf is not
modified when it is stored in a pool (as a free mbuf). Fields initialized by the constructor do not
need to be re-initialized at mbuf allocation.
When freeing a packet mbuf that contains several segments, all of them are freed and returned
to their original mempool.
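
The following minimal sketch, with illustrative pool sizes, creates a packet mbuf pool (which
wires up the constructors internally), then allocates and frees one mbuf:

#include <rte_mbuf.h>
#include <rte_lcore.h>

static void
mbuf_alloc_free_example(void)
{
    struct rte_mempool *pool;
    struct rte_mbuf *m;

    pool = rte_pktmbuf_pool_create("sketch_mbuf_pool", 8191, 256, 0,
                                   RTE_MBUF_DEFAULT_BUF_SIZE,
                                   rte_socket_id());
    if (pool == NULL)
        return;

    m = rte_pktmbuf_alloc(pool);   /* one segment, data length 0 */
    if (m != NULL)
        rte_pktmbuf_free(m);       /* returns every segment to its pool */
}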

Manipulating mbufs

This library provides some functions for manipulating the data in a packet mbuf. For instance:
• Get the data length
• Get a pointer to the start of data
• Prepend data before the data area
• Append data after the data area
• Remove data at the beginning of the buffer (rte_pktmbuf_adj())
• Remove data at the end of the buffer (rte_pktmbuf_trim())
Refer to the DPDK API Reference for details.
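
A minimal sketch of these helpers, assuming pool is an existing packet mbuf pool:

#include <string.h>
#include <rte_mbuf.h>

static void
mbuf_manipulation_example(struct rte_mempool *pool)
{
    struct rte_mbuf *m = rte_pktmbuf_alloc(pool);
    char *p;

    if (m == NULL)
        return;
    p = rte_pktmbuf_append(m, 64);    /* reserve 64 bytes of payload */
    if (p != NULL)
        memset(p, 0, 64);
    rte_pktmbuf_prepend(m, 14);       /* make room at the front */
    rte_pktmbuf_adj(m, 14);           /* strip those 14 bytes again */
    rte_pktmbuf_trim(m, 16);          /* drop 16 bytes from the tail */
    /* Data pointer and remaining length (48 bytes at this point): */
    p = rte_pktmbuf_mtod(m, char *);
    (void)p;
    rte_pktmbuf_free(m);
}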


Meta Information

Some information is retrieved by the network driver and stored in an mbuf to make process-
ing easier. For instance, the VLAN, the RSS hash result (see Poll Mode Driver ) and a flag
indicating that the checksum was computed by hardware.
An mbuf also contains the input port (where it comes from), and the number of segment mbufs
in the chain.
For chained buffers, only the first mbuf of the chain stores this meta information.
For instance, this is the case on RX side for the IEEE1588 packet timestamp mechanism, the
VLAN tagging and the IP checksum computation.
On TX side, it is also possible for an application to delegate some processing to the hardware
if it supports it. For instance, the PKT_TX_IP_CKSUM flag allows to offload the computation
of the IPv4 checksum.
The following examples explain how to configure different TX offloads on a vxlan-encapsulated
tcp packet: out_eth/out_ip/out_udp/vxlan/in_eth/in_ip/in_tcp/payload
• calculate checksum of out_ip:
mb->l2_len = len(out_eth)
mb->l3_len = len(out_ip)
mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM
set out_ip checksum to 0 in the packet

This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM.


• calculate checksum of out_ip and out_udp:
mb->l2_len = len(out_eth)
mb->l3_len = len(out_ip)
mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM | PKT_TX_UDP_CKSUM
set out_ip checksum to 0 in the packet
set out_udp checksum to pseudo header using rte_ipv4_phdr_cksum()

This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM and DEV_TX_OFFLOAD_UDP_CKSUM.
• calculate checksum of in_ip:
mb->l2_len = len(out_eth + out_ip + out_udp + vxlan + in_eth)
mb->l3_len = len(in_ip)
mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM
set in_ip checksum to 0 in the packet

This is similar to case 1), but l2_len is different. It is supported on hardware advertising
DEV_TX_OFFLOAD_IPV4_CKSUM. Note that it can only work if outer L4 checksum is
0.
• calculate checksum of in_ip and in_tcp:
mb->l2_len = len(out_eth + out_ip + out_udp + vxlan + in_eth)
mb->l3_len = len(in_ip)
mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM
set in_ip checksum to 0 in the packet
set in_tcp checksum to pseudo header using rte_ipv4_phdr_cksum()

This is similar to case 2), but l2_len is different. It is supported on hardware advertising
DEV_TX_OFFLOAD_IPV4_CKSUM and DEV_TX_OFFLOAD_TCP_CKSUM. Note that
it can only work if outer L4 checksum is 0.


• segment inner TCP:
mb->l2_len = len(out_eth + out_ip + out_udp + vxlan + in_eth)
mb->l3_len = len(in_ip)
mb->l4_len = len(in_tcp)
mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM |
PKT_TX_TCP_SEG;
set in_ip checksum to 0 in the packet
set in_tcp checksum to pseudo header without including the IP
payload length using rte_ipv4_phdr_cksum()

This is supported on hardware advertising DEV_TX_OFFLOAD_TCP_TSO. Note that it can only work if outer L4 checksum is 0.
• calculate checksum of out_ip, in_ip, in_tcp:
mb->outer_l2_len = len(out_eth)
mb->outer_l3_len = len(out_ip)
mb->l2_len = len(out_udp + vxlan + in_eth)
mb->l3_len = len(in_ip)
mb->ol_flags |= PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IP_CKSUM | \
PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM;
set out_ip checksum to 0 in the packet
set in_ip checksum to 0 in the packet
set in_tcp checksum to pseudo header using rte_ipv4_phdr_cksum()

This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM, DEV_TX_OFFLOAD_UDP_CKSUM and DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM.
The list of flags and their precise meaning is described in the mbuf API documentation
(rte_mbuf.h). Also refer to the testpmd source code (specifically the csumonly.c file) for de-
tails.

Direct and Indirect Buffers

A direct buffer is a buffer that is completely separate and self-contained. An indirect buffer
behaves like a direct buffer but for the fact that the buffer pointer and data offset in it refer to
data in another direct buffer. This is useful in situations where packets need to be duplicated
or fragmented, since indirect buffers provide the means to reuse the same packet data across
multiple buffers.
A buffer becomes indirect when it is “attached” to a direct buffer using the rte_pktmbuf_attach()
function. Each buffer has a reference counter field and whenever an indirect buffer is attached
to the direct buffer, the reference counter on the direct buffer is incremented. Similarly, when-
ever the indirect buffer is detached, the reference counter on the direct buffer is decremented.
If the resulting reference counter is equal to 0, the direct buffer is freed since it is no longer in
use.
There are a few things to remember when dealing with indirect buffers. First of all, an indirect
buffer is never attached to another indirect buffer. Attempting to attach buffer A to indirect buffer
B that is attached to C, makes rte_pktmbuf_attach() automatically attach A to C, effectively
cloning B. Secondly, for a buffer to become indirect, its reference counter must be equal to 1,
that is, it must not be already referenced by another indirect buffer. Finally, it is not possible to
reattach an indirect buffer to the direct buffer (unless it is detached first).
While the attach/detach operations can be invoked directly via rte_pktmbuf_attach() and rte_pktmbuf_detach(), it is recommended to use the higher-level rte_pktmbuf_clone() function, which takes care of the correct initialization of an indirect


buffer and can clone buffers with multiple segments.


Since indirect buffers are not supposed to actually hold any data, the memory pool for indirect
buffers should be configured to indicate the reduced memory consumption. Examples of the
initialization of a memory pool for indirect buffers (as well as use case examples for indirect
buffers) can be found in several of the sample applications, for example, the IPv4 Multicast
sample application.
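
A minimal cloning sketch, assuming m is a direct mbuf and indirect_pool is a pool created for
clones (typically with no data room):

#include <rte_mbuf.h>

static void
clone_example(struct rte_mbuf *m, struct rte_mempool *indirect_pool)
{
    struct rte_mbuf *clone = rte_pktmbuf_clone(m, indirect_pool);

    if (clone == NULL)
        return;
    /* The reference counter of m was incremented; its data is released
     * only when the last reference, direct or indirect, is freed. */
    rte_pktmbuf_free(clone);
}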

Debug

In debug mode (CONFIG_RTE_MBUF_DEBUG is enabled), the functions of the mbuf library perform sanity checks before any operation (checking for buffer corruption, bad type, and so on).

Use Cases

All networking applications should use mbufs to transport network packets.

Poll Mode Driver

The DPDK includes 1 Gigabit, 10 Gigabit and 40 Gigabit NIC drivers, as well as para-virtualized virtio Poll Mode Drivers.
A Poll Mode Driver (PMD) consists of APIs, provided through the BSD driver running in user
space, to configure the devices and their respective queues. In addition, a PMD accesses the
RX and TX descriptors directly without any interrupts (with the exception of Link Status Change
interrupts) to quickly receive, process and deliver packets in the user’s application. This section
describes the requirements of the PMDs, their global design principles and proposes a high-
level architecture and a generic external API for the Ethernet PMDs.

Requirements and Assumptions

The DPDK environment for packet processing applications allows for two models, run-to-
completion and pipe-line:
• In the run-to-completion model, a specific port’s RX descriptor ring is polled for packets
through an API. Packets are then processed on the same core and placed on a port’s TX
descriptor ring through an API for transmission.
• In the pipe-line model, one core polls one or more port’s RX descriptor ring through
an API. Packets are received and passed to another core via a ring. The other core
continues to process the packet which then may be placed on a port’s TX descriptor ring
through an API for transmission.
In a synchronous run-to-completion model, each logical core assigned to the DPDK executes
a packet processing loop that includes the following steps:
• Retrieve input packets through the PMD receive API
• Process each received packet one at a time, up to its forwarding
• Send pending output packets through the PMD transmit API


Conversely, in an asynchronous pipe-line model, some logical cores may be dedicated to the
retrieval of received packets and other logical cores to the processing of previously received
packets. Received packets are exchanged between logical cores through rings. The loop for
packet retrieval includes the following steps:
• Retrieve input packets through the PMD receive API
• Provide received packets to processing lcores through packet queues
The loop for packet processing includes the following steps:
• Retrieve the received packet from the packet queue
• Process the received packet, up to its retransmission if forwarded
To avoid any unnecessary interrupt processing overhead, the execution environment must not
use any asynchronous notification mechanisms. Whenever needed and appropriate, asyn-
chronous communication should be introduced as much as possible through the use of rings.
Avoiding lock contention is a key issue in a multi-core environment. To address this issue,
PMDs are designed to work with per-core private resources as much as possible. For example,
a PMD maintains a separate transmit queue per-core, per-port. In the same way, every receive
queue of a port is assigned to and polled by a single logical core (lcore).
To comply with Non-Uniform Memory Access (NUMA), memory management is designed to
assign to each logical core a private buffer pool in local memory to minimize remote memory
access. The configuration of packet buffer pools should take into account the underlying physi-
cal memory architecture in terms of DIMMS, channels and ranks. The application must ensure
that appropriate parameters are given at memory pool creation time. See Mempool Library .

Design Principles

The API and architecture of the Ethernet* PMDs are designed with the following guidelines in
mind.
PMDs must help global policy-oriented decisions to be enforced at the upper application level.
Conversely, NIC PMD functions should not impede the benefits expected by upper-level global
policies, or worse prevent such policies from being applied.
For instance, both the receive and transmit functions of a PMD have a maximum number of
packets/descriptors to poll. This allows a run-to-completion processing stack to statically fix or
to dynamically adapt its overall behavior through different global loop policies, such as:
• Receive, process immediately and transmit packets one at a time in a piecemeal fashion.
• Receive as many packets as possible, then process all received packets, transmitting
them immediately.
• Receive a given maximum number of packets, process the received packets, accumulate
them and finally send all accumulated packets to transmit.
To achieve optimal performance, overall software design choices and pure software optimiza-
tion techniques must be considered and balanced against available low-level hardware-based
optimization features (CPU cache properties, bus speed, NIC PCI bandwidth, and so on). The
case of packet transmission is an example of this software/hardware tradeoff issue when opti-
mizing burst-oriented network packet processing engines. In the initial case, the PMD could ex-
port only an rte_eth_tx_one function to transmit one packet at a time on a given queue. On top
of that, one can easily build an rte_eth_tx_burst function that loops invoking the rte_eth_tx_one


function to transmit several packets at a time. However, an rte_eth_tx_burst function is effectively implemented by the PMD to minimize the driver-level transmit cost per packet through the following optimizations:
• Share among multiple packets the un-amortized cost of invoking the rte_eth_tx_one func-
tion.
• Enable the rte_eth_tx_burst function to take advantage of burst-oriented hardware fea-
tures (prefetch data in cache, use of NIC head/tail registers) to minimize the number of
CPU cycles per packet, for example by avoiding unnecessary read memory accesses
to ring transmit descriptors, or by systematically using arrays of pointers that exactly fit
cache line boundaries and sizes.
• Apply burst-oriented software optimization techniques to remove operations that would
otherwise be unavoidable, such as ring index wrap back management.
Burst-oriented functions are also introduced via the API for services that are intensively used
by the PMD. This applies in particular to buffer allocators used to populate NIC rings, which
provide functions to allocate/free several buffers at a time: for example, an mbuf_multiple_alloc function that returns an array of pointers to rte_mbuf buffers, which speeds up the receive poll function of the PMD when replenishing multiple descriptors of the receive ring.

Logical Cores, Memory and NIC Queues Relationships

The DPDK supports NUMA allowing for better performance when a processor’s logical cores
and interfaces utilize its local memory. Therefore, mbuf allocation associated with local PCIe*
interfaces should be allocated from memory pools created in the local memory. The buffers
should, if possible, remain on the local processor to obtain the best performance results and RX
and TX buffer descriptors should be populated with mbufs allocated from a mempool allocated
from local memory.
The run-to-completion model also performs better if packet or data manipulation is in local
memory instead of a remote processors memory. This is also true for the pipe-line model
provided all logical cores used are located on the same processor.
Multiple logical cores should never share receive or transmit queues for interfaces since this
would require global locks and hinder performance.

Device Identification and Configuration

Device Identification

Each NIC port is uniquely designated by its (bus/bridge, device, function) PCI identifiers as-
signed by the PCI probing/enumeration function executed at DPDK initialization. Based on
their PCI identifier, NIC ports are assigned two other identifiers:
• A port index used to designate the NIC port in all functions exported by the PMD API.
• A port name used to designate the port in console messages, for administration or de-
bugging purposes. For ease of use, the port name includes the port index.


Device Configuration

The configuration of each NIC port includes the following operations:


• Allocate PCI resources
• Reset the hardware (issue a Global Reset) to a well-known default state
• Set up the PHY and the link
• Initialize statistics counters
The PMD API must also export functions to start/stop the all-multicast feature of a port and
functions to set/unset the port in promiscuous mode.
Some hardware offload features must be individually configured at port initialization through
specific configuration parameters. This is the case for the Receive Side Scaling (RSS) and
Data Center Bridging (DCB) features for example.

On-the-Fly Configuration

All device features that can be started or stopped “on the fly” (that is, without stopping the
device) do not require the PMD API to export dedicated functions for this purpose.
All that is required is the mapping address of the device PCI registers to implement the config-
uration of these features in specific functions outside of the drivers.
For this purpose, the PMD API exports a function that provides all the information associated
with a device that can be used to set up a given device feature outside of the driver. This
includes the PCI vendor identifier, the PCI device identifier, the mapping address of the PCI
device registers, and the name of the driver.
The main advantage of this approach is that it gives complete freedom on the choice of the
API used to configure, to start, and to stop such features.
As an example, refer to the configuration of the IEEE1588 feature for the Intel® 82576 Giga-
bit Ethernet Controller and the Intel® 82599 10 Gigabit Ethernet Controller controllers in the
testpmd application.
Other features such as the L3/L4 5-Tuple packet filtering feature of a port can be configured in
the same way. Ethernet* flow control (pause frame) can be configured on the individual port.
Refer to the testpmd source code for details. Also, L4 (UDP/TCP/ SCTP) checksum offload by
the NIC can be enabled for an individual packet as long as the packet mbuf is set up correctly.
See Hardware Offload for details.

Configuration of Transmit Queues

Each transmit queue is independently configured with the following information:


• The number of descriptors of the transmit ring
• The socket identifier used to identify the appropriate DMA memory zone from which to
allocate the transmit ring in NUMA architectures
• The values of the Prefetch, Host and Write-Back threshold registers of the transmit queue


• The minimum transmit packets to free threshold (tx_free_thresh). When the number of
descriptors used to transmit packets exceeds this threshold, the network adaptor should
be checked to see if it has written back descriptors. A value of 0 can be passed during
the TX queue configuration to indicate the default value should be used. The default
value for tx_free_thresh is 32. This ensures that the PMD does not search for completed
descriptors until at least 32 have been processed by the NIC for this queue.
• The minimum RS bit threshold. The minimum number of transmit descriptors to use be-
fore setting the Report Status (RS) bit in the transmit descriptor. Note that this parameter
may only be valid for Intel 10 GbE network adapters. The RS bit is set on the last de-
scriptor used to transmit a packet if the number of descriptors used since the last RS bit
setting, up to the first descriptor used to transmit the packet, exceeds the transmit RS
bit threshold (tx_rs_thresh). In short, this parameter controls which transmit descriptors
are written back to host memory by the network adapter. A value of 0 can be passed
during the TX queue configuration to indicate that the default value should be used. The
default value for tx_rs_thresh is 32. This ensures that at least 32 descriptors are used
before the network adapter writes back the most recently used descriptor. This saves
upstream PCIe* bandwidth resulting from TX descriptor write-backs. It is important to
note that the TX Write-back threshold (TX wthresh) should be set to 0 when tx_rs_thresh
is greater than 1. Refer to the Intel® 82599 10 Gigabit Ethernet Controller Datasheet for
more details.
The following constraints must be satisfied for tx_free_thresh and tx_rs_thresh:
• tx_rs_thresh must be greater than 0.
• tx_rs_thresh must be less than the size of the ring minus 2.
• tx_rs_thresh must be less than or equal to tx_free_thresh.
• tx_free_thresh must be greater than 0.
• tx_free_thresh must be less than the size of the ring minus 3.
• For optimal performance, TX wthresh should be set to 0 when tx_rs_thresh is greater
than 1.
One descriptor in the TX ring is used as a sentinel to avoid a hardware race condition, hence
the maximum threshold constraints.
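
The following minimal sketch configures one TX queue in line with the constraints above;
port_id is assumed to refer to an already configured device and the threshold values are
illustrative, not tuned recommendations:

#include <rte_ethdev.h>

static int
tx_queue_setup_example(uint8_t port_id)
{
    struct rte_eth_txconf txconf = {
        /* wthresh must be 0 because tx_rs_thresh is greater than 1. */
        .tx_thresh = { .pthresh = 32, .hthresh = 0, .wthresh = 0 },
        .tx_rs_thresh = 32,
        .tx_free_thresh = 32,
    };

    /* 512 descriptors: both thresholds satisfy the ring-size limits. */
    return rte_eth_tx_queue_setup(port_id, 0, 512,
                                  rte_eth_dev_socket_id(port_id),
                                  &txconf);
}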

Note: When configuring for DCB operation, at port initialization, both the number of transmit
queues and the number of receive queues must be set to 128.

Hardware Offload

Depending on driver capabilities advertised by rte_eth_dev_info_get(), the PMD may support hardware offload features such as checksumming, TCP segmentation or VLAN insertion.
The support of these offload features implies the addition of dedicated status bit(s) and value field(s) into the rte_mbuf data structure, along with their appropriate handling by the receive/transmit functions exported by each PMD. The list of flags and their precise meaning is described in the mbuf API documentation and in the Mbuf Library, section “Meta Information”.


Poll Mode Driver API

Generalities

By default, all functions exported by a PMD are lock-free functions that are assumed not to be
invoked in parallel on different logical cores to work on the same target object. For instance,
a PMD receive function cannot be invoked in parallel on two logical cores to poll the same RX
queue of the same port. Of course, this function can be invoked in parallel by different logical
cores on different RX queues. It is the responsibility of the upper-level application to enforce
this rule.
If needed, parallel accesses by multiple logical cores to shared queues can be explicitly pro-
tected by dedicated inline lock-aware functions built on top of their corresponding lock-free
functions of the PMD API.

Generic Packet Representation

A packet is represented by an rte_mbuf structure, which is a generic metadata structure containing all necessary housekeeping information. This includes fields and status bits corresponding to offload hardware features, such as checksum computation of IP headers or VLAN tags.
The rte_mbuf data structure includes specific fields to represent, in a generic way, the offload
features provided by network controllers. For an input packet, most fields of the rte_mbuf
structure are filled in by the PMD receive function with the information contained in the receive
descriptor. Conversely, for output packets, most fields of rte_mbuf structures are used by the
PMD transmit function to initialize transmit descriptors.
The mbuf structure is fully described in the Mbuf Library chapter.

Ethernet Device API

The Ethernet device API exported by the Ethernet PMDs is described in the DPDK API Refer-
ence.

Extended Statistics API

The extended statistics API allows each individual PMD to expose a unique set of statistics.
Accessing these from application programs is done via two functions:
• rte_eth_xstats_get: Fills in an array of struct rte_eth_xstat with extended
statistics.
• rte_eth_xstats_get_names: Fills in an array of struct rte_eth_xstat_name
with extended statistic name lookup information.
Each struct rte_eth_xstat contains an identifier and value pair, and each
struct rte_eth_xstat_name contains a string. Each identifier within the struct
rte_eth_xstat lookup array must have a corresponding entry in the struct
rte_eth_xstat_name lookup array. Within the latter the index of the entry is the identi-
fier the string is associated with. These identifiers, as well as the number of extended statistic


exposed, must remain constant during runtime. Note that extended statistic identifiers are
driver-specific, and hence might not be the same for different ports.
A naming scheme exists for the strings exposed to clients of the API. This is to allow scraping of
the API for statistics of interest. The naming scheme uses strings split by a single underscore
_. The scheme is as follows:
• direction
• detail 1
• detail 2
• detail n
• unit
Examples of common statistics xstats strings, formatted to comply to the scheme proposed
above:
• rx_bytes
• rx_crc_errors
• tx_multicast_packets
The scheme, although quite simple, allows flexibility in presenting and reading information from the statistic strings. The following example illustrates the naming scheme: rx_packets. In this example, the string is split into two components. The first component rx indicates that the statistic is associated with the receive side of the NIC. The second component packets indicates that the unit of measure is packets.
A more complicated example: tx_size_128_to_255_packets. In this example, tx indi-
cates transmission, size is the first detail, 128 etc are more details, and packets indicates
that this is a packet counter.
Some additions in the metadata scheme are as follows:
• If the first part does not match rx or tx, the statistic does not have an affinity with either
receive or transmit.
• If the first letter of the second part is q and this q is followed by a number, this statistic is
part of a specific queue.
An example where queue numbers are used is as follows: tx_q7_bytes which indicates this
statistic applies to queue number 7, and represents the number of transmitted bytes on that
queue.
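
A minimal retrieval sketch follows, assuming port_id is a valid port; the two-step pattern
(query the count, then fill both arrays) reflects the API described above:

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <rte_ethdev.h>

static void
dump_xstats(uint8_t port_id)
{
    int n = rte_eth_xstats_get(port_id, NULL, 0);  /* query the count */
    struct rte_eth_xstat *xstats;
    struct rte_eth_xstat_name *names;
    int i;

    if (n <= 0)
        return;
    xstats = calloc(n, sizeof(*xstats));
    names = calloc(n, sizeof(*names));
    if (xstats != NULL && names != NULL &&
        rte_eth_xstats_get_names(port_id, names, n) == n &&
        rte_eth_xstats_get(port_id, xstats, n) == n) {
        /* Each value's id indexes the name lookup array. */
        for (i = 0; i < n; i++)
            printf("%s: %" PRIu64 "\n",
                   names[xstats[i].id].name, xstats[i].value);
    }
    free(xstats);
    free(names);
}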

Generic flow API (rte_flow)

Overview

This API provides a generic means to configure hardware to match specific ingress or egress
traffic, alter its fate and query related counters according to any number of user-defined rules.
It is named rte_flow after the prefix used for all its symbols, and is defined in rte_flow.h.
• Matching can be performed on packet data (protocol headers, payload) and properties
(e.g. associated physical port, virtual device function ID).


• Possible operations include dropping traffic, diverting it to specific queues, to virtual/physical device functions or ports, performing tunnel offloads, adding marks and so on.
It is slightly higher-level than the legacy filtering framework which it encompasses and super-
sedes (including all functions and filter types) in order to expose a single interface with an
unambiguous behavior that is common to all poll-mode drivers (PMDs).
Several methods to migrate existing applications are described in API migration.

Flow rule

Description

A flow rule is the combination of attributes with a matching pattern and a list of actions. Flow
rules form the basis of this API.
Flow rules can have several distinct actions (such as counting, encapsulating, decapsulating
before redirecting packets to a particular queue, etc.), instead of relying on several rules to
achieve this and having applications deal with hardware implementation details regarding their
order.
Support for different priority levels on a rule basis is provided, for example in order to force a
more specific rule to come before a more generic one for packets matched by both. However
hardware support for more than a single priority level cannot be guaranteed. When supported,
the number of available priority levels is usually low, which is why they can also be implemented
in software by PMDs (e.g. missing priority levels may be emulated by reordering rules).
In order to remain as hardware-agnostic as possible, by default all rules are considered to
have the same priority, which means that the order between overlapping rules (when a packet
is matched by several filters) is undefined.
PMDs may refuse to create overlapping rules at a given priority level when they can be detected
(e.g. if a pattern matches an existing filter).
Thus predictable results for a given priority level can only be achieved with non-overlapping
rules, using perfect matching on all protocol layers.
Flow rules can also be grouped, the flow rule priority is specific to the group they belong to. All
flow rules in a given group are thus processed either before or after another group.
Support for multiple actions per rule may be implemented internally on top of non-default hard-
ware priorities, as a result both features may not be simultaneously available to applications.
Considering that allowed pattern/actions combinations cannot be known in advance and would
result in an impractically large number of capabilities to expose, a method is provided to vali-
date a given rule from the current device configuration state.
This enables applications to check if the rule types they need are supported at initialization time,
before starting their data path. This method can be used anytime, its only requirement being
that the resources needed by a rule should exist (e.g. a target RX queue should be configured
first).
Each defined rule is associated with an opaque handle managed by the PMD, applications are
responsible for keeping it. These can be used for queries and rules management, such as
retrieving counters or other data and destroying them.


To avoid resource leaks on the PMD side, handles must be explicitly destroyed by the applica-
tion before releasing associated resources such as queues and ports.
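
Before the individual building blocks are described, the following minimal sketch may help; it
validates, creates and destroys a rule sending all ingress IPv4 traffic of a port to RX queue 1
(port_id and the queue index are assumptions):

#include <rte_flow.h>

static int
flow_example(uint8_t port_id)
{
    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        /* NULL spec/last/mask: any IPv4 packet matches. */
        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_queue queue = { .index = 1 };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_flow_error error;
    struct rte_flow *flow;

    if (rte_flow_validate(port_id, &attr, pattern, actions, &error) != 0)
        return -1;
    flow = rte_flow_create(port_id, &attr, pattern, actions, &error);
    if (flow == NULL)
        return -1;
    return rte_flow_destroy(port_id, flow, &error);
}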
The following sections cover:
• Attributes (represented by struct rte_flow_attr): properties of a flow rule such
as its direction (ingress or egress) and priority.
• Pattern item (represented by struct rte_flow_item): part of a matching pattern
that either matches specific packet data or traffic properties. It can also describe proper-
ties of the pattern itself, such as inverted matching.
• Matching pattern: traffic properties to look for, a combination of any number of items.
• Actions (represented by struct rte_flow_action): operations to perform when-
ever a packet is matched by a pattern.

Attributes

Attribute: Group

Flow rules can be grouped by assigning them a common group number. Lower values have
higher priority. Group 0 has the highest priority.
Although optional, applications are encouraged to group similar rules as much as possible
to fully take advantage of hardware capabilities (e.g. optimized matching) and work around
limitations (e.g. a single pattern type possibly allowed in a given group).
Note that support for more than a single group is not guaranteed.

Attribute: Priority

A priority level can be assigned to a flow rule. Like groups, lower values denote higher priority,
with 0 as the maximum.
A rule with priority 0 in group 8 is always matched after a rule with priority 8 in group 0.
Group and priority levels are arbitrary and up to the application, they do not need to be con-
tiguous nor start from 0, however the maximum number varies between devices and may be
affected by existing flow rules.
If a packet is matched by several rules of a given group for a given priority level, the outcome
is undefined. It can take any path, may be duplicated or even cause unrecoverable errors.
Note that support for more than a single priority level is not guaranteed.

Attribute: Traffic direction

Flow rules can apply to inbound and/or outbound traffic (ingress/egress).


Several pattern items and actions are valid and can be used in both directions. At least one
direction must be specified.
Specifying both directions at once for a given rule is not recommended but may be valid in a
few cases (e.g. shared counters).


Pattern item

Pattern items fall in two categories:


• Matching protocol headers and packet data (ANY, RAW, ETH, VLAN, IPV4, IPV6, ICMP,
UDP, TCP, SCTP, VXLAN and so on), usually associated with a specification structure.
• Matching meta-data or affecting pattern processing (END, VOID, INVERT, PF, VF, PORT
and so on), often without a specification structure.
Item specification structures are used to match specific values among protocol fields (or item
properties). Documentation describes for each item whether they are associated with one and
their type name if so.
Up to three structures of the same type can be set for a given item:
• spec: values to match (e.g. a given IPv4 address).
• last: upper bound for an inclusive range with corresponding fields in spec.
• mask: bit-mask applied to both spec and last whose purpose is to distinguish the
values to take into account and/or partially mask them out (e.g. in order to match an IPv4
address prefix).
Usage restrictions and expected behavior:
• Setting either mask or last without spec is an error.
• Field values in last which are either 0 or equal to the corresponding values in spec are
ignored; they do not generate a range. Nonzero values lower than those in spec are not
supported.
• Setting spec and optionally last without mask causes the PMD to use the default mask
defined for that item (defined as rte_flow_item_{name}_mask constants).
• Not setting any of them (assuming item type allows it) is equivalent to providing an empty
(zeroed) mask for broad (nonspecific) matching.
• mask is a simple bit-mask applied before interpreting the contents of spec and last,
which may yield unexpected results if not used carefully. For example, if for an IPv4
address field, spec provides 10.1.2.3, last provides 10.3.4.5 and mask provides
255.255.0.0, the effective range becomes 10.1.0.0 to 10.3.255.255.
Example of an item specification matching an Ethernet header:

Table 4.1: Ethernet item

Field   Subfield   Value
spec    src        00:01:02:03:04
        dst        00:2a:66:00:01
        type       0x22aa
last    (unspecified)
mask    src        00:ff:ff:ff:00
        dst        00:00:00:00:ff
        type       0x0000
Non-masked bits stand for any value (shown as ? below); Ethernet headers with the following properties are thus matched:
• src: ??:01:02:03:??


• dst: ??:??:??:??:01
• type: 0x????

Matching pattern

A pattern is formed by stacking items starting from the lowest protocol layer to match. This
stacking restriction does not apply to meta items which can be placed anywhere in the stack
without affecting the meaning of the resulting pattern.
Patterns are terminated by END items.
Examples:

Table 4.2: TCPv4 as L4

Index Item
0 Ethernet
1 IPv4
2 TCP
3 END

Table 4.3: TCPv6 in VXLAN

Index Item
0 Ethernet
1 IPv4
2 UDP
3 VXLAN
4 Ethernet
5 IPv6
6 TCP
7 END


Table 4.4: TCPv4 as L4 with meta items

Index Item
0 VOID
1 Ethernet
2 VOID
3 IPv4
4 TCP
5 VOID
6 VOID
7 END
The above example shows how meta items do not affect packet data matching items, as long
as those remain stacked properly. The resulting matching pattern is identical to “TCPv4 as L4”.

Table 4.5: UDPv6 anywhere

Index Item
0 IPv6
1 UDP
2 END
If supported by the PMD, omitting one or several protocol layers at the bottom of the stack
as in the above example (missing an Ethernet specification) enables looking up anywhere in
packets.
It is unspecified whether the payload of supported encapsulations (e.g. VXLAN payload) is
matched by such a pattern, which may apply to inner, outer or both packets.

Table 4.6: Invalid, missing L3

Index Item
0 Ethernet
1 UDP
2 END
The above pattern is invalid due to a missing L3 specification between L2 (Ethernet) and L4
(UDP). Doing so is only allowed at the bottom and at the top of the stack.

Meta item types

They match meta-data or affect pattern processing instead of matching packet data directly,
most of them do not need a specification structure. This particularity allows them to be speci-
fied anywhere in the stack without causing any side effect.

Item: END

End marker for item lists. Prevents further processing of items, thereby ending the pattern.


• Its numeric value is 0 for convenience.


• PMD support is mandatory.
• spec, last and mask are ignored.

Table 4.7: END


Field Value
spec ignored
last ignored
mask ignored

Item: VOID

Used as a placeholder for convenience. It is ignored and simply discarded by PMDs.


• PMD support is mandatory.
• spec, last and mask are ignored.

Table 4.8: VOID


Field Value
spec ignored
last ignored
mask ignored
One usage example for this type is generating rules that share a common prefix quickly without
reallocating memory, only by updating item types:

Table 4.9: TCP, UDP or ICMP as L4

Index Item
0 Ethernet
1 IPv4
2 UDP VOID VOID
3 VOID TCP VOID
4 VOID VOID ICMP
5 END

Item: INVERT

Inverted matching, i.e. process packets that do not match the pattern.
• spec, last and mask are ignored.


Table 4.10:
INVERT
Field Value
spec ignored
last ignored
mask ignored
Usage example, matching non-TCPv4 packets only:

Table 4.11:
Anything but TCPv4
Index Item
0 INVERT
1 Ethernet
2 IPv4
3 TCP
4 END

Item: PF

Matches packets addressed to the physical function of the device.


If the underlying device function differs from the one that would normally receive the matched traffic, specifying this item prevents it from reaching that device unless the flow rule contains an Action: PF. Packets are not duplicated between device instances by default.
• Likely to return an error or never match any traffic if applied to a VF device.
• Can be combined with any number of Item: VF to match both PF and VF traffic.
• spec, last and mask must not be set.

Table 4.12: PF
Field Value
spec unset
last unset
mask unset

Item: VF

Matches packets addressed to a virtual function ID of the device.


If the underlying device function differs from the one that would normally receive the matched traffic, specifying this item prevents it from reaching that device unless the flow rule contains an Action: VF. Packets are not duplicated between device instances by default.
• Likely to return an error or never match any traffic if this causes a VF device to match
traffic addressed to a different VF.
• Can be specified multiple times to match traffic addressed to several VF IDs.
• Can be combined with a PF item to match both PF and VF traffic.


• Default mask matches any VF ID.

Table 4.13: VF
Field Subfield Value
spec id destination VF ID
last id upper range value
mask id zeroed to match any VF ID

Item: PORT

Matches packets coming from the specified physical port of the underlying device.
The first PORT item overrides the physical port normally associated with the specified DPDK
input port (port_id). This item can be provided several times to match additional physical ports.
Note that physical ports are not necessarily tied to DPDK input ports (port_id) when those are
not under DPDK control. Possible values are specific to each device, they are not necessarily
indexed from zero and may not be contiguous.
As a device property, the list of allowed values as well as the value associated with a port_id
should be retrieved by other means.
• Default mask matches any port index.

Table 4.14: PORT


Field Subfield Value
spec index physical port index
last index upper range value
mask index zeroed to match any port index

Data matching item types

Most of these are basically protocol header definitions with associated bit-masks. They must
be specified (stacked) from lowest to highest protocol layer to form a matching pattern.
The following list is not exhaustive, new protocols will be added in the future.

Item: ANY

Matches any protocol in place of the current layer, a single ANY may also stand for several
protocol layers.
This is usually specified as the first pattern item when looking for a protocol anywhere in a
packet.
• Default mask stands for any number of layers.


Table 4.15: ANY


Field Subfield Value
spec num number of layers covered
last num upper range value
mask num zeroed to cover any number of layers
Example for VXLAN TCP payload matching regardless of outer L3 (IPv4 or IPv6) and L4 (UDP)
both matched by the first ANY specification, and inner L3 (IPv4 or IPv6) matched by the second
ANY specification:

Table 4.16: TCP in VXLAN with wildcards


Index Item Field Subfield Value
0 Ethernet
1 ANY spec num 2
2 VXLAN
3 Ethernet
4 ANY spec num 1
5 TCP
6 END

Item: RAW

Matches a byte string of a given length at a given offset.


Offset is either absolute (using the start of the packet) or relative to the end of the previous
matched item in the stack, in which case negative values are allowed.
If search is enabled, offset is used as the starting point. The search area can be delimited by
setting limit to a nonzero value, which is the maximum number of bytes after offset where the
pattern may start.
Matching a zero-length pattern is allowed, doing so resets the relative offset for subsequent
items.
• This type does not support ranges (last field).
• Default mask matches all fields exactly.

Table 4.17: RAW

Field   Subfield   Value
spec    relative   look for pattern after the previous item
        search     search pattern from offset (see also limit)
        reserved   reserved, must be set to zero
        offset     absolute or relative offset for pattern
        limit      search area limit for start of pattern
        length     pattern length
        pattern    byte string to look for
last    if specified, either all 0 or with the same values as spec
mask    bit-mask applied to spec values with usual behavior


Example pattern looking for several strings at various offsets of a UDP payload, using com-
bined RAW items:

Table 4.18: UDP payload matching


Index   Item       Field   Subfield   Value
0       Ethernet
1       IPv4
2       UDP
3       RAW        spec    relative   1
                           search     1
                           offset     10
                           limit      0
                           length     3
                           pattern    “foo”
4       RAW        spec    relative   1
                           search     0
                           offset     20
                           limit      0
                           length     3
                           pattern    “bar”
5       RAW        spec    relative   1
                           search     0
                           offset     -29
                           limit      0
                           length     3
                           pattern    “baz”
6       END
This translates to:
• Locate “foo” at least 10 bytes deep inside UDP payload.
• Locate “bar” after “foo” plus 20 bytes.
• Locate “baz” after “bar” minus 29 bytes.
Such a packet may be represented as follows (not to scale):
0                     >= 10 B           == 20 B
|                  |<--------->|     |<--------->|
|                  |           |     |           |
|-----|------|-----|-----|-----|-----|-----------|-----|------|
| ETH | IPv4 | UDP | ... | baz | foo | ......... | bar | .... |
|-----|------|-----|-----|-----|-----|-----------|-----|------|
                         |                             |
                         |<--------------------------->|
                                     == 29 B

Note that matching subsequent pattern items would resume after “baz”, not “bar” since match-
ing is always performed after the previous item of the stack.

Item: ETH

Matches an Ethernet header.


• dst: destination MAC.


• src: source MAC.
• type: EtherType.
• Default mask matches destination and source addresses only.

Item: VLAN

Matches an 802.1Q/ad VLAN tag.


• tpid: tag protocol identifier.
• tci: tag control information.
• Default mask matches TCI only.

Item: IPV4

Matches an IPv4 header.


Note: IPv4 options are handled by dedicated pattern items.
• hdr: IPv4 header definition (rte_ip.h).
• Default mask matches source and destination addresses only.

Item: IPV6

Matches an IPv6 header.


Note: IPv6 options are handled by dedicated pattern items.
• hdr: IPv6 header definition (rte_ip.h).
• Default mask matches source and destination addresses only.

Item: ICMP

Matches an ICMP header.


• hdr: ICMP header definition (rte_icmp.h).
• Default mask matches ICMP type and code only.

Item: UDP

Matches a UDP header.


• hdr: UDP header definition (rte_udp.h).
• Default mask matches source and destination ports only.


Item: TCP

Matches a TCP header.


• hdr: TCP header definition (rte_tcp.h).
• Default mask matches source and destination ports only.

Item: SCTP

Matches an SCTP header.


• hdr: SCTP header definition (rte_sctp.h).
• Default mask matches source and destination ports only.

Item: VXLAN

Matches a VXLAN header (RFC 7348).


• flags: normally 0x08 (I flag).
• rsvd0: reserved, normally 0x000000.
• vni: VXLAN network identifier.
• rsvd1: reserved, normally 0x00.
• Default mask matches VNI only.

Actions

Each possible action is represented by a type. Some have associated configuration structures.
Several actions can be combined in a list and assigned to a flow rule. That list is not ordered.
They fall into three categories:
• Terminating actions (such as QUEUE, DROP, RSS, PF, VF) that prevent processing
matched packets by subsequent flow rules, unless overridden with PASSTHRU.
• Non-terminating actions (PASSTHRU, DUP) that leave matched packets up for additional
processing by subsequent flow rules.
• Other non-terminating meta actions that do not affect the fate of packets (END, VOID,
MARK, FLAG, COUNT).
When several actions are combined in a flow rule, they should all have different types (e.g.
dropping a packet twice is not possible).
Only the last action of a given type is taken into account. PMDs still perform error checking on
the entire list.
Like matching patterns, action lists are terminated by END items.
Note that PASSTHRU is the only action able to override a terminating rule.
Example of action that redirects packets to queue index 10:


Table 4.19: Queue action

Field   Value
index   10
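Built with the C API, this action list could look as follows (a minimal sketch; the structure
and enum names come from rte_flow.h):

struct rte_flow_action_queue queue = { .index = 10 };
struct rte_flow_action actions[] = {
    { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
    { .type = RTE_FLOW_ACTION_TYPE_END }, /* terminates the list */
};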
Examples of action lists follow. Their order is not significant; applications must consider all
actions to be performed simultaneously:

Table 4.20: Count and drop

Index   Action
0       COUNT
1       DROP
2       END

Table 4.21: Mark, count and redirect


Index Action Field Value
0 MARK mark 0x2a
1 COUNT
2 QUEUE queue 10
3 END

Table 4.22: Redirect to queue 5


Index Action Field Value
0 DROP
1 QUEUE queue 5
2 END
In the above example, considering both actions are performed simultaneously, the end result
is that only QUEUE has any effect.

Table 4.23: Redirect to queue 3


Index Action Field Value
0 QUEUE queue 5
1 VOID
2 QUEUE queue 3
3 END
As previously described, only the last action of a given type found in the list is taken into
account. The above example also shows that VOID is ignored.


Action types

Common action types are described in this section. Like pattern item types, this list is not
exhaustive as new actions will be added in the future.

Action: END

End marker for action lists. Prevents further processing of actions, thereby ending the list.
• Its numeric value is 0 for convenience.
• PMD support is mandatory.
• No configurable properties.

Table 4.24: END

Field
no properties

Action: VOID

Used as a placeholder for convenience. It is ignored and simply discarded by PMDs.


• PMD support is mandatory.
• No configurable properties.

Table 4.25: VOID

Field
no properties

Action: PASSTHRU

Leaves packets up for additional processing by subsequent flow rules. This is the default when
a rule does not contain a terminating action, but can be specified to force a rule to become
non-terminating.
• No configurable properties.

Table 4.26: PASSTHRU

Field
no properties
Example to copy a packet to a queue and continue processing by subsequent flow rules:


Table 4.27: Copy to queue 8


Index Action Field Value
0 PASSTHRU
1 QUEUE queue 8
2 END

Action: MARK

Attaches an integer value to packets and sets PKT_RX_FDIR and PKT_RX_FDIR_ID mbuf
flags.
This value is arbitrary and application-defined. Maximum allowed value depends on the under-
lying implementation. It is returned in the hash.fdir.hi mbuf field.

Table 4.28: MARK


Field Value
id integer value to return with packets

Action: FLAG

Flags packets. Similar to Action: MARK without a specific value; only sets the PKT_RX_FDIR
mbuf flag.
• No configurable properties.

Table 4.29: FLAG

Field
no properties

Action: QUEUE

Assigns packets to a given queue index.


• Terminating by default.

Table 4.30: QUEUE


Field Value
index queue index to use

Action: DROP

Drop packets.
• No configurable properties.
• Terminating by default.


• PASSTHRU overrides this action if both are specified.

Table 4.31: DROP

Field
no properties

Action: COUNT

Enables counters for this rule.


These counters can be retrieved and reset through rte_flow_query(); see struct
rte_flow_query_count.
• Counters can be retrieved with rte_flow_query().
• No configurable properties.

Table 4.32: COUNT

Field
no properties
Query structure to retrieve and reset flow rule counters:

Table 4.33: COUNT query


Field I/O Value
reset in reset counter after query
hits_set out hits field is set
bytes_set out bytes field is set
hits out number of hits for this rule
bytes out number of bytes through this rule

Action: DUP

Duplicates packets to a given queue index.


This is normally combined with QUEUE; however, when used alone, it is actually similar to
QUEUE + PASSTHRU.
• Non-terminating by default.

Table 4.34: DUP


Field Value
index queue index to duplicate packet to

Action: RSS

Similar to QUEUE, except RSS is additionally performed on packets to spread them among
several queues according to the provided parameters.


Note: RSS hash result is stored in the hash.rss mbuf field which overlaps hash.fdir.lo.
Since Action: MARK sets the hash.fdir.hi field only, both can be requested simultane-
ously.
• Terminating by default.

Table 4.35: RSS


Field Value
rss_conf RSS parameters
num number of entries in queue[]
queue[] queue indices to use

Action: PF

Redirects packets to the physical function (PF) of the current device.


• No configurable properties.
• Terminating by default.

Table 4.36: PF
Field
no properties

Action: VF

Redirects packets to a virtual function (VF) of the current device.


Packets matched by a VF pattern item can be redirected to their original VF ID instead of the
specified one. This parameter may not be available and is not guaranteed to work properly if
the VF part is matched by a prior flow rule or if packets are not addressed to a VF in the first
place.
• Terminating by default.

Table 4.37: VF
Field Value
original use original VF ID if possible
vf VF ID to redirect packets to

Negative types

All specified pattern items (enum rte_flow_item_type) and actions (enum
rte_flow_action_type) use positive identifiers.
The negative space is reserved for dynamic types generated by PMDs during run-time. PMDs
may encounter them as a result but must not accept negative identifiers they are not aware of.
A method to generate them remains to be defined.


Planned types

Pattern item types will be added as new protocols are implemented.


Support for variable headers is planned through dedicated pattern items; for example, items
matching specific IPv4 options and IPv6 extension headers would be stacked after the
IPv4/IPv6 items.
Other action types are planned but are not defined yet. These include the ability to alter packet
data in several ways, such as performing encapsulation/decapsulation of tunnel headers.

Rules management

A rather simple API with few functions is provided to fully manage flow rules.
Each created flow rule is associated with an opaque, PMD-specific handle pointer. The appli-
cation is responsible for keeping it until the rule is destroyed.
Flow rules are represented by struct rte_flow objects.

Validation

Given that expressing a definite set of device capabilities is not practical, a dedicated function
is provided to check if a flow rule is supported and can be created.
int
rte_flow_validate(uint8_t port_id,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error);

While this function has no effect on the target device, the flow rule is validated against its
current configuration state and the returned value should be considered valid by the caller for
that state only.
The returned value is guaranteed to remain valid only as long as no successful calls to
rte_flow_create() or rte_flow_destroy() are made in the meantime and no device
parameters affecting flow rules in any way are modified, due to possible collisions or resource
limitations (although in such cases EINVAL should not be returned).
Arguments:
• port_id: port identifier of Ethernet device.
• attr: flow rule attributes.
• pattern: pattern specification (list terminated by the END pattern item).
• actions: associated actions (list terminated by the END action).
• error: perform verbose error reporting if not NULL. PMDs initialize this structure in case
of error only.
Return values:
• 0 if flow rule is valid and can be created. A negative errno value otherwise (rte_errno
is also set), the following errors are defined.
• -ENOSYS: underlying device does not support this functionality.


• -EINVAL: unknown or invalid rule specification.


• -ENOTSUP: valid but unsupported rule specification (e.g. partial bit-masks are unsup-
ported).
• -EEXIST: collision with an existing rule.
• -ENOMEM: not enough resources.
• -EBUSY: action cannot be performed due to busy device resources, may suc-
ceed if the affected queues or even the entire port are in a stopped state (see
rte_eth_dev_rx_queue_stop() and rte_eth_dev_stop()).

Creation

Creating a flow rule is similar to validating one, except the rule is actually created and a handle
returned.
struct rte_flow *
rte_flow_create(uint8_t port_id,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error);

Arguments:
• port_id: port identifier of Ethernet device.
• attr: flow rule attributes.
• pattern: pattern specification (list terminated by the END pattern item).
• actions: associated actions (list terminated by the END action).
• error: perform verbose error reporting if not NULL. PMDs initialize this structure in case
of error only.
Return values:
A valid handle in case of success, NULL otherwise and rte_errno is set to the positive
version of one of the error codes defined for rte_flow_validate().
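Putting the two functions together, a rule can be validated and then created as in the following
sketch (port index, attributes and the pattern and actions arrays are illustrative assumptions;
error handling via printf is only for demonstration):

struct rte_flow_attr attr = { .ingress = 1 }; /* ingress-only rule */
struct rte_flow_error error;
struct rte_flow *flow = NULL;

if (rte_flow_validate(0, &attr, pattern, actions, &error) == 0)
    flow = rte_flow_create(0, &attr, pattern, actions, &error);
if (flow == NULL)
    printf("flow rule rejected: %s\n",
           error.message ? error.message : "(no message)");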

Destruction

Flow rules destruction is not automatic, and a queue or a port should not be released if any
are still attached to them. Applications must take care of performing this step before releasing
resources.
int
rte_flow_destroy(uint8_t port_id,
struct rte_flow *flow,
struct rte_flow_error *error);

Failure to destroy a flow rule handle may occur when other flow rules depend on it, and de-
stroying it would result in an inconsistent state.
This function is only guaranteed to succeed if handles are destroyed in reverse order of their
creation.


Arguments:
• port_id: port identifier of Ethernet device.
• flow: flow rule handle to destroy.
• error: perform verbose error reporting if not NULL. PMDs initialize this structure in case
of error only.
Return values:
• 0 on success, a negative errno value otherwise and rte_errno is set.

Flush

Convenience function to destroy all flow rule handles associated with a port. They are released
as with successive calls to rte_flow_destroy().
int
rte_flow_flush(uint8_t port_id,
struct rte_flow_error *error);

In the unlikely event of failure, handles are still considered destroyed and no longer valid but
the port must be assumed to be in an inconsistent state.
Arguments:
• port_id: port identifier of Ethernet device.
• error: perform verbose error reporting if not NULL. PMDs initialize this structure in case
of error only.
Return values:
• 0 on success, a negative errno value otherwise and rte_errno is set.

Query

Query an existing flow rule.


This function allows retrieving flow-specific data such as counters. Data is gathered by special
actions which must be present in the flow rule definition.
int
rte_flow_query(uint8_t port_id,
struct rte_flow *flow,
enum rte_flow_action_type action,
void *data,
struct rte_flow_error *error);

Arguments:
• port_id: port identifier of Ethernet device.
• flow: flow rule handle to query.
• action: action type to query.
• data: pointer to storage for the associated query data type.
• error: perform verbose error reporting if not NULL. PMDs initialize this structure in case
of error only.


Return values:
• 0 on success, a negative errno value otherwise and rte_errno is set.
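For example, the counters enabled by a COUNT action could be retrieved and reset as in the
following sketch (flow is assumed to be a valid rule handle on port 0; PRIu64 comes from
inttypes.h):

struct rte_flow_query_count query = { .reset = 1 }; /* reset after read */
struct rte_flow_error error;

if (rte_flow_query(0, flow, RTE_FLOW_ACTION_TYPE_COUNT,
                   &query, &error) == 0 && query.hits_set)
    printf("rule matched %" PRIu64 " packets\n", query.hits);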

Verbose error reporting

The defined errno values may not be accurate enough for users or application developers
who want to investigate issues related to flow rules management. A dedicated error object is
defined for this purpose:
enum rte_flow_error_type {
RTE_FLOW_ERROR_TYPE_NONE, /**< No error. */
RTE_FLOW_ERROR_TYPE_UNSPECIFIED, /**< Cause unspecified. */
RTE_FLOW_ERROR_TYPE_HANDLE, /**< Flow rule (handle). */
RTE_FLOW_ERROR_TYPE_ATTR_GROUP, /**< Group field. */
RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, /**< Priority field. */
RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, /**< Ingress field. */
RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, /**< Egress field. */
RTE_FLOW_ERROR_TYPE_ATTR, /**< Attributes structure. */
RTE_FLOW_ERROR_TYPE_ITEM_NUM, /**< Pattern length. */
RTE_FLOW_ERROR_TYPE_ITEM, /**< Specific pattern item. */
RTE_FLOW_ERROR_TYPE_ACTION_NUM, /**< Number of actions. */
RTE_FLOW_ERROR_TYPE_ACTION, /**< Specific action. */
};

struct rte_flow_error {
enum rte_flow_error_type type; /**< Cause field and error types. */
const void *cause; /**< Object responsible for the error. */
const char *message; /**< Human-readable error message. */
};

Error type RTE_FLOW_ERROR_TYPE_NONE stands for no error, in which case the remaining
fields can be ignored. Other error types describe the type of the object pointed to by cause.
If non-NULL, cause points to the object responsible for the error. For a flow rule, this may be
a pattern item or an individual action.
If non-NULL, message provides a human-readable error message.
This object is normally allocated by applications and set by PMDs in case of error. The
message points to a constant string which does not need to be freed by the application;
however, its pointer can be considered valid only as long as its associated DPDK port remains
configured. Closing the underlying device or unloading the PMD invalidates it.

Caveats

• DPDK does not keep track of flow rules definitions or flow rule objects automatically.
Applications may keep track of the former and must keep track of the latter. PMDs may
also do it for internal needs, however this must not be relied on by applications.
• Flow rules are not maintained between successive port initializations. An application
exiting without releasing them and restarting must re-create them from scratch.
• API operations are synchronous and blocking (EAGAIN cannot be returned).
• There is no provision for reentrancy/multi-thread safety, although nothing should prevent
different devices from being configured at the same time. PMDs may protect their control
path functions accordingly.


• Stopping the data path (TX/RX) should not be necessary when managing flow rules. If
this cannot be achieved naturally or with workarounds (such as temporarily replacing the
burst function pointers), an appropriate error code must be returned (EBUSY).
• PMDs, not applications, are responsible for maintaining flow rules configuration when
stopping and restarting a port or performing other actions which may affect them. They
can only be destroyed explicitly by applications.
For devices exposing multiple ports sharing global settings affected by flow rules:
• All ports under DPDK control must behave consistently, PMDs are responsible for making
sure that existing flow rules on a port are not affected by other ports.
• Ports not under DPDK control (unaffected or handled by other applications) are user’s
responsibility. They may affect existing flow rules and cause undefined behavior. PMDs
aware of this may prevent flow rules creation altogether in such cases.

PMD interface

The PMD interface is defined in rte_flow_driver.h. It is not subject to API/ABI versioning
constraints as it is not exposed to applications and may evolve independently.
It is currently implemented on top of the legacy filtering framework through filter type
RTE_ETH_FILTER_GENERIC that accepts the single operation RTE_ETH_FILTER_GET to
return PMD-specific rte_flow callbacks wrapped inside struct rte_flow_ops.
This overhead is temporarily necessary in order to keep compatibility with the legacy filtering
framework, which should eventually disappear.
• PMD callbacks implement exactly the interface described in Rules management, except
for the port ID argument which has already been converted to a pointer to the underlying
struct rte_eth_dev.
• Public API functions do not process flow rules definitions at all before calling PMD func-
tions (no basic error checking, no validation whatsoever). They only make sure these
callbacks are non-NULL or return the ENOSYS (function not supported) error.
This interface additionally defines the following helper functions:
• rte_flow_ops_get(): get generic flow operations structure from a port.
• rte_flow_error_set(): initialize generic flow error structure.
More will be added over time.

Device compatibility

No known implementation supports all the described features.


Unsupported features or combinations are not expected to be fully emulated in software by
PMDs for performance reasons. Partially supported features may be completed in software as
long as hardware performs most of the work (such as queue redirection and packet recogni-
tion).
However PMDs are expected to do their best to satisfy application requests by working around
hardware limitations as long as doing so does not affect the behavior of existing flow rules.


The following sections provide a few examples of such cases and describe how PMDs should
handle them, they are based on limitations built into the previous APIs.

Global bit-masks

Each flow rule comes with its own, per-layer bit-masks, while hardware may support only a
single, device-wide bit-mask for a given layer type, so that two IPv4 rules cannot use different
bit-masks.
The expected behavior in this case is that PMDs automatically configure global bit-masks ac-
cording to the needs of the first flow rule created.
Subsequent rules are allowed only if their bit-masks match those, the EEXIST error code
should be returned otherwise.

Unsupported layer types

Many protocols can be simulated by crafting patterns with the Item: RAW type.
PMDs can rely on this capability to simulate support for protocols with headers not directly
recognized by hardware.

ANY pattern item

This pattern item stands for anything, which can be difficult to translate to something hardware
would understand, particularly if followed by more specific types.
Consider the following pattern:

Table 4.38: Pattern with ANY as L3

Index   Item
0       ETHER
1       ANY    num 1
2       TCP
3       END
Knowing that TCP does not make sense with something other than IPv4 and IPv6 as L3, such
a pattern may be translated to two flow rules instead:

Table 4.39: ANY replaced with IPV4

Index   Item
0       ETHER
1       IPV4 (zeroed mask)
2       TCP
3       END


Table 4.40: ANY replaced with IPV6

Index   Item
0       ETHER
1       IPV6 (zeroed mask)
2       TCP
3       END
Note that as soon as an ANY rule covers several layers, this approach may yield a large number
of hidden flow rules. It is thus suggested to only support the most common scenarios (anything
as L2 and/or L3).

Unsupported actions

• When combined with Action: QUEUE, packet counting (Action: COUNT ) and tagging
(Action: MARK or Action: FLAG) may be implemented in software as long as the target
queue is used by a single rule.
• A rule specifying both Action: DUP + Action: QUEUE may be translated to two hidden
rules combining Action: QUEUE and Action: PASSTHRU.
• When a single target queue is provided, Action: RSS can also be implemented through
Action: QUEUE.

Flow rules priority

While it would naturally make sense, flow rules cannot be assumed to be processed by hard-
ware in the same order as their creation for several reasons:
• They may be managed internally as a tree or a hash table instead of a list.
• Removing a flow rule before adding another one can either put the new rule at the end of
the list or reuse a freed entry.
• Duplication may occur when packets are matched by several rules.
For overlapping rules (particularly in order to use Action: PASSTHRU) predictable behavior is
only guaranteed by using different priority levels.
Priority levels are not necessarily implemented in hardware, or may be severely limited (e.g. a
single priority bit).
For these reasons, priority levels may be implemented purely in software by PMDs.
• For devices expecting flow rules to be added in the correct order, PMDs may destroy and
re-create existing rules after adding a new one with a higher priority.
• A configurable number of dummy or empty rules can be created at initialization time to
save high priority slots for later.
• In order to save priority levels, PMDs may evaluate whether rules are likely to collide and
adjust their priority accordingly.


Future evolutions

• A device profile selection function which could be used to force a permanent profile in-
stead of relying on its automatic configuration based on existing flow rules.
• A method to optimize rte_flow rules with specific pattern items and action types gener-
ated on the fly by PMDs. DPDK should assign negative numbers to these in order to not
collide with the existing types. See Negative types.
• Adding specific egress pattern items and actions as described in Attribute: Traffic direc-
tion.
• Optional software fallback when PMDs are unable to handle requested flow rules so
applications do not have to implement their own.

API migration

Exhaustive list of deprecated filter types (normally prefixed with RTE_ETH_FILTER_) found in
rte_eth_ctrl.h and methods to convert them to rte_flow rules.

MACVLAN to ETH → VF, PF

MACVLAN can be translated to a basic Item: ETH flow rule with a terminating Action: VF or
Action: PF .

Table 4.41: MACVLAN conversion

Pattern                       Actions
0   ETH   spec   any          VF, PF
          last   N/A
          mask   any
1   END                       END

ETHERTYPE to ETH → QUEUE, DROP

ETHERTYPE is basically an Item: ETH flow rule with a terminating Action: QUEUE or Action:
DROP.

Table 4.42: ETHERTYPE conversion

Pattern                       Actions
0   ETH   spec   any          QUEUE, DROP
          last   N/A
          mask   any
1   END                       END

FLEXIBLE to RAW → QUEUE

FLEXIBLE can be translated to one Item: RAW pattern with a terminating Action: QUEUE and
a defined priority level.


Table 4.43: FLEXIBLE conversion

Pattern                       Actions
0   RAW   spec   any          QUEUE
          last   N/A
          mask   any
1   END                       END

SYN to TCP → QUEUE

SYN is an Item: TCP rule with only the syn bit enabled and masked, and a terminating Action:
QUEUE.
Priority level can be set to simulate the high priority bit.

Table 4.44: SYN conversion

Pattern                        Actions
0   ETH    spec   unset        QUEUE
           last   unset
           mask   unset
1   IPV4   spec   unset        END
           last   unset
           mask   unset
2   TCP    spec   syn   1
           mask   syn   1
3   END

NTUPLE to IPV4, TCP, UDP → QUEUE

NTUPLE is similar to specifying an empty L2, Item: IPV4 as L3 with Item: TCP or Item: UDP
as L4 and a terminating Action: QUEUE.
A priority level can be specified as well.

Table 4.45: NTUPLE conversion

Pattern                            Actions
0   ETH        spec   unset        QUEUE
               last   unset
               mask   unset
1   IPV4       spec   any          END
               last   unset
               mask   any
2   TCP, UDP   spec   any
               last   unset
               mask   any
3   END


TUNNEL to ETH, IPV4, IPV6, VXLAN (or other) → QUEUE

TUNNEL matches common IPv4 and IPv6 L3/L4-based tunnel types.


In the following table, Item: ANY is used to cover the optional L4.

Table 4.46: TUNNEL conversion

Pattern                                          Actions
0   ETH                        spec   any        QUEUE
                               last   unset
                               mask   any
1   IPV4, IPV6                 spec   any        END
                               last   unset
                               mask   any
2   ANY                        spec   any
                               last   unset
                               mask   num 0
3   VXLAN, GENEVE, TEREDO,     spec   any
    NVGRE, GRE, ...            last   unset
                               mask   any
4   END

FDIR to most item types → QUEUE, DROP, PASSTHRU

FDIR is more complex than any other type; there are several methods to emulate its
functionality. It is summarized for the most part in the table below.
A few features are intentionally not supported:
• The ability to configure the matching input set and masks for the entire device, PMDs
should take care of it automatically according to the requested flow rules.
For example if a device supports only one bit-mask per protocol type, source/destination
IPv4 bit-masks can be made immutable by the first created rule. Subsequent IPv4 or
TCPv4 rules can only be created if they are compatible.
Note that only protocol bit-masks affected by existing flow rules are immutable, others can
be changed later. They become mutable again after the related flow rules are destroyed.
• Returning four or eight bytes of matched data when using flex bytes filtering. Although a
specific action could implement it, it conflicts with the much more useful 32 bits tagging
on devices that support it.
• Side effects on RSS processing of the entire device. Flow rules that conflict with the
current device configuration should not be allowed. Similarly, device configuration should
not be allowed when it affects existing flow rules.
• Device modes of operation. “none” is unsupported since filtering cannot be disabled as
long as a flow rule is present.
• “MAC VLAN” or “tunnel” perfect matching modes should be automatically set according
to the created flow rules.
• Signature mode of operation is not defined but could be handled through a specific item
type if needed.


Table 4.47: FDIR conversion

Pattern                                  Actions
0   ETH, RAW            spec   any       QUEUE, DROP, PASSTHRU
                        last   N/A
                        mask   any
1   IPV4, IPV6          spec   any       MARK
                        last   N/A
                        mask   any
2   TCP, UDP, SCTP      spec   any
                        last   N/A
                        mask   any
3   VF, PF (optional)   spec   any       END
                        last   N/A
                        mask   any
4   END

HASH

There is no counterpart to this filter type because it translates to a global device setting instead
of a pattern item. Device settings are automatically set according to the created flow rules.

L2_TUNNEL to VOID → VXLAN (or others)

All packets are matched. This type alters incoming packets to encapsulate them in a chosen
tunnel type, optionally redirect them to a VF as well.
The destination pool for tag based forwarding can be emulated with other flow rules using
Action: DUP.

Table 4.48: L2_TUNNEL conversion

Pattern                        Actions
0   VOID   spec   N/A          VXLAN, GENEVE, ...
           last   N/A
           mask   N/A
1   END                        VF (optional)
2                              END

Cryptography Device Library

The cryptodev library provides a Crypto device framework for management and provisioning
of hardware and software Crypto poll mode drivers, defining generic APIs which support a
number of different Crypto operations. The framework currently only supports cipher, authen-
tication, chained cipher/authentication and AEAD symmetric Crypto operations.


Design Principles

The cryptodev library follows the same basic principles as those used in DPDKs Ethernet
Device framework. The Crypto framework provides a generic Crypto device framework which
supports both physical (hardware) and virtual (software) Crypto devices as well as a generic
Crypto API which allows Crypto devices to be managed and configured and supports Crypto
operations to be provisioned on Crypto poll mode driver.

Device Management

Device Creation

Physical Crypto devices are discovered during the PCI probe/enumeration performed by the
EAL at DPDK initialization, based on their PCI device identifier; each has a unique PCI BDF
(bus/bridge, device, function). Specific physical Crypto devices, like other physical devices
in DPDK, can be white-listed or black-listed using the EAL command line options.
Virtual devices can be created by two mechanisms, either using the EAL command line options
or from within the application using an EAL API directly.
From the command line using the --vdev EAL option:
--vdev 'cryptodev_aesni_mb_pmd0,max_nb_queue_pairs=2,max_nb_sessions=1024,socket_id=0'

Or using the rte_eal_vdev_init API within the application code:


rte_eal_vdev_init("cryptodev_aesni_mb_pmd",
"max_nb_queue_pairs=2,max_nb_sessions=1024,socket_id=0")

All virtual Crypto devices support the following initialization parameters:


• max_nb_queue_pairs - maximum number of queue pairs supported by the device.
• max_nb_sessions - maximum number of sessions supported by the device
• socket_id - socket on which to allocate the device resources on.

Device Identification

Each device, whether virtual or physical, is uniquely designated by two identifiers:


• A unique device index used to designate the Crypto device in all functions exported by
the cryptodev API.
• A device name used to designate the Crypto device in console messages, for adminis-
tration or debugging purposes. For ease of use, the port name includes the port index.

Device Configuration

The configuration of each Crypto device includes the following operations:


• Allocation of resources, including hardware resources if a physical device.
• Resetting the device into a well-known default state.
• Initialization of statistics counters.


The rte_cryptodev_configure API is used to configure a Crypto device.


int rte_cryptodev_configure(uint8_t dev_id,
struct rte_cryptodev_config *config)

The rte_cryptodev_config structure is used to pass the configuration parameters. It
contains parameters for socket selection, the number of queue pairs and the session mempool
configuration.
struct rte_cryptodev_config {
int socket_id;
/**< Socket to allocate resources on */
uint16_t nb_queue_pairs;
/**< Number of queue pairs to configure on device */

struct {
uint32_t nb_objs;
uint32_t cache_size;
} session_mp;
/**< Session mempool configuration */
};
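A minimal configuration sketch, assuming device 0 on socket 0; queue pair count and session
mempool sizes are illustrative:

struct rte_cryptodev_config conf = {
    .socket_id = 0,
    .nb_queue_pairs = 2,
    .session_mp = { .nb_objs = 1024, .cache_size = 128 },
};

if (rte_cryptodev_configure(0, &conf) < 0)
    printf("failed to configure crypto device 0\n");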

Configuration of Queue Pairs

Each Crypto device's queue pairs are individually configured through the
rte_cryptodev_queue_pair_setup API. Each queue pair's resources may be allocated
on a specified socket.
int rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
const struct rte_cryptodev_qp_conf *qp_conf,
int socket_id)

struct rte_cryptodev_qp_conf {
uint32_t nb_descriptors; /**< Number of descriptors per queue pair */
};
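Continuing the sketch above, each of the two configured queue pairs could be set up as follows
(the descriptor count and socket are illustrative):

struct rte_cryptodev_qp_conf qp_conf = { .nb_descriptors = 2048 };
uint16_t qp_id;

for (qp_id = 0; qp_id < 2; qp_id++)
    if (rte_cryptodev_queue_pair_setup(0, qp_id, &qp_conf, 0) < 0)
        printf("queue pair %u setup failed\n", qp_id);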

Logical Cores, Memory and Queues Pair Relationships

The Crypto device library, like the Poll Mode Driver library, supports NUMA, performing best
when a processor's logical cores and interfaces utilize its local memory. Therefore Crypto
operations, and in the case of symmetric Crypto operations the session and the mbuf being
operated on, should be allocated from memory pools created in local memory. The buffers
should, if possible, remain on the local processor to obtain the best performance results, and
buffer descriptors should be populated with mbufs allocated from a mempool allocated from
local memory.
The run-to-completion model also performs better, especially in the case of virtual Crypto de-
vices, if the Crypto operation and session and data buffer is in local memory instead of a
remote processor’s memory. This is also true for the pipe-line model provided all logical cores
used are located on the same processor.
Multiple logical cores should never share the same queue pair for enqueuing operations or de-
queuing operations on the same Crypto device since this would require global locks and hinder
performance. It is however possible to use a different logical core to dequeue an operation on
a queue pair from the logical core on which it was enqueued. This means that the crypto burst
enqueue/dequeue APIs are a logical place to transition from one logical core to another in a
packet processing pipeline.

4.9. Cryptography Device Library 317


DPDK documentation, Release 17.05.0-rc0

Device Features and Capabilities

Crypto devices define their functionality through two mechanisms, global device features and
algorithm capabilities. Global device features identify device-wide features which are
applicable to the whole device, such as the device having hardware acceleration or supporting
symmetric Crypto operations.
The capabilities mechanism defines the individual algorithms/functions which the device sup-
ports, such as a specific symmetric Crypto cipher or authentication operation.

Device Features

Currently the following Crypto device features are defined:


• Symmetric Crypto operations
• Asymmetric Crypto operations
• Chaining of symmetric Crypto operations
• SSE accelerated SIMD vector operations
• AVX accelerated SIMD vector operations
• AVX2 accelerated SIMD vector operations
• AESNI accelerated instructions
• Hardware off-load processing

Device Operation Capabilities

Crypto capabilities which identify particular algorithm which the Crypto PMD supports are de-
fined by the operation type, the operation transform, the transform identifier and then the par-
ticulars of the transform. For the full scope of the Crypto capability see the definition of the
structure in the DPDK API Reference.
struct rte_cryptodev_capabilities;

Each Crypto poll mode driver defines its own private array of capabilities for the operations it
supports. Below is an example of the capabilities for a PMD which supports the authentication
algorithm SHA1_HMAC and the cipher algorithm AES_CBC.
static const struct rte_cryptodev_capabilities pmd_capabilities[] = {
{ /* SHA1 HMAC */
.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
.sym = {
.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
.auth = {
.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
.block_size = 64,
.key_size = {
.min = 64,
.max = 64,
.increment = 0
},
.digest_size = {
.min = 12,
.max = 12,

.increment = 0
},
.aad_size = { 0 }
}
}
},
{ /* AES CBC */
.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
.sym = {
.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
.cipher = {
.algo = RTE_CRYPTO_CIPHER_AES_CBC,
.block_size = 16,
.key_size = {
.min = 16,
.max = 32,
.increment = 8
},
.iv_size = {
.min = 16,
.max = 16,
.increment = 0
}
}
}
}
}

Capabilities Discovery

Discovering the features and capabilities of a Crypto device poll mode driver is achieved
through the rte_cryptodev_info_get function.
void rte_cryptodev_info_get(uint8_t dev_id,
struct rte_cryptodev_info *dev_info);

This allows the user to query a specific Crypto PMD and get all the device features and ca-
pabilities. The rte_cryptodev_info structure contains all the relevant information for the
device.
struct rte_cryptodev_info {
const char *driver_name;
enum rte_cryptodev_type dev_type;
struct rte_pci_device *pci_dev;

uint64_t feature_flags;

const struct rte_cryptodev_capabilities *capabilities;

unsigned max_nb_queue_pairs;

struct {
unsigned max_nb_sessions;
} sym;
};
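For example, a sketch checking whether device 0 offloads processing to hardware (the feature
flag name comes from rte_cryptodev.h):

struct rte_cryptodev_info dev_info;

rte_cryptodev_info_get(0, &dev_info);
if (dev_info.feature_flags & RTE_CRYPTODEV_FF_HW_ACCELERATED)
    printf("%s uses hardware acceleration\n", dev_info.driver_name);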

Operation Processing

Scheduling of Crypto operations on DPDK’s application data path is performed using a burst
oriented asynchronous API set. A queue pair on a Crypto device accepts a burst of Crypto
operations using enqueue burst API. On physical Crypto devices the enqueue burst API will
place the operations to be processed on the devices hardware input queue, for virtual devices
the processing of the Crypto operations is usually completed during the enqueue call to the
Crypto device. The dequeue burst API will retrieve any processed operations available from the
queue pair on the Crypto device; for physical devices this is usually directly from the device's
processed queue, and for virtual devices from an rte_ring where processed operations are
placed after being processed on the enqueue call.

Enqueue / Dequeue Burst APIs

The burst enqueue API uses a Crypto device identifier and a queue pair identifier to specify the
Crypto device queue pair to schedule the processing on. The nb_ops parameter is the number
of operations to process which are supplied in the ops array of rte_crypto_op structures.
The enqueue function returns the number of operations it actually enqueued for processing, a
return value equal to nb_ops means that all packets have been enqueued.
uint16_t rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
struct rte_crypto_op **ops, uint16_t nb_ops)

The dequeue API uses the same format as the enqueue API, but the nb_ops and ops
parameters are now used to specify the maximum number of processed operations the user
wishes to retrieve and the location in which to store them. The API call returns the actual
number of processed operations returned; this can never be larger than nb_ops.
uint16_t rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
struct rte_crypto_op **ops, uint16_t nb_ops)
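A typical usage sketch, assuming an ops array of nb_ops prepared operations, enqueues on
device 0 / queue pair 0 and then polls the same queue pair until everything submitted has been
retrieved:

uint16_t nb_enq, nb_deq = 0;

nb_enq = rte_cryptodev_enqueue_burst(0, 0, ops, nb_ops);
while (nb_deq < nb_enq)
    nb_deq += rte_cryptodev_dequeue_burst(0, 0,
            &ops[nb_deq], nb_enq - nb_deq);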

Operation Representation

A Crypto operation is represented by an rte_crypto_op structure, which is a generic metadata
container for all necessary information required for the Crypto operation to be processed on a
particular Crypto device poll mode driver.
The operation structure includes the operation type and the operation status, and a reference to
the operation specific data, which can vary in size and content depending on the operation
being provisioned. It also contains the source mempool for the operation, if it was allocated
from a mempool. Finally an opaque pointer for user specific data is provided.
If Crypto operations are allocated from a Crypto operation mempool, see next section, there is
also the ability to allocate private memory with the operation for applications purposes.
Application software is responsible for specifying all the operation specific fields in the
rte_crypto_op structure which are then used by the Crypto PMD to process the requested
operation.

Operation Management and Allocation

The cryptodev library provides an API set for managing Crypto operations which utilize the
Mempool Library to allocate operation buffers. This ensures that the crypto operation is
optimally interleaved across memory channels and ranks. A
rte_crypto_op contains a field indicating the pool that it originated from. When calling
rte_crypto_op_free(op), the operation returns to its original pool.


extern struct rte_mempool *
rte_crypto_op_pool_create(const char *name, enum rte_crypto_op_type type,
        unsigned nb_elts, unsigned cache_size, uint16_t priv_size,
        int socket_id);

During pool creation rte_crypto_op_init() is called as a constructor to initialize each
Crypto operation, which subsequently calls __rte_crypto_op_reset() to configure any
operation type specific fields based on the type parameter.
rte_crypto_op_alloc() and rte_crypto_op_bulk_alloc() are used to allo-
cate Crypto operations of a specific type from a given Crypto operation mempool.
__rte_crypto_op_reset() is called on each operation before it is returned to the user,
so the operation is always in a known good state before use by the application.
struct rte_crypto_op *rte_crypto_op_alloc(struct rte_mempool *mempool,
enum rte_crypto_op_type type)

unsigned rte_crypto_op_bulk_alloc(struct rte_mempool *mempool,
        enum rte_crypto_op_type type,
        struct rte_crypto_op **ops, uint16_t nb_ops)

rte_crypto_op_free() is called by the application to return an operation to its allocating
pool.
void rte_crypto_op_free(struct rte_crypto_op *op)
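A minimal allocation sketch (the pool name, pool sizes and burst size are illustrative):

struct rte_crypto_op *ops[32];
struct rte_mempool *op_pool;

op_pool = rte_crypto_op_pool_create("crypto_op_pool",
        RTE_CRYPTO_OP_TYPE_SYMMETRIC, 8192, 128, 0, rte_socket_id());
if (op_pool == NULL ||
    rte_crypto_op_bulk_alloc(op_pool, RTE_CRYPTO_OP_TYPE_SYMMETRIC,
            ops, 32) != 32)
    printf("crypto operation allocation failed\n");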

Symmetric Cryptography Support

The cryptodev library currently provides support for the following symmetric Crypto operations;
cipher, authentication, including chaining of these operations, as well as also supporting AEAD
operations.

Session and Session Management

Sessions are used in symmetric cryptographic processing to store the immutable data defined
in a cryptographic transform which is used in the operation processing of a packet flow.
Sessions are used to manage information such as expanded cipher keys and HMAC IPADs and
OPADs, which need to be calculated for a particular Crypto operation, but are immutable on a
packet to packet basis for a flow. Crypto sessions cache this immutable data in an optimal way
for the underlying PMD and this allows further acceleration of the offload of Crypto workloads.

The Crypto device framework provides a set of session pool management APIs for the creation
and freeing of the sessions, utilizing the Mempool Library.
The framework also provides hooks so the PMDs can pass the amount of memory required for
that PMD's private session parameters, as well as initialization functions for the configuration
of the session parameters and a freeing function so the PMD can manage the memory on
destruction of a session.
Note: Sessions created on a particular device can only be used on Crypto devices of the same
type, and if you try to use a session on a device different to that on which it was created then
the Crypto operation will fail.


rte_cryptodev_sym_session_create() is used to create a symmetric session on a
Crypto device. A symmetric transform chain is used to specify the particular operation and
its parameters. See the section below for details on transforms.
struct rte_cryptodev_sym_session * rte_cryptodev_sym_session_create(
uint8_t dev_id, struct rte_crypto_sym_xform *xform);

Note: For AEAD operations the algorithms selected for authentication and ciphering must be
aligned, e.g. AES_GCM.

Transforms and Transform Chaining

Symmetric Crypto transforms (rte_crypto_sym_xform) are the mechanism used to specify
the details of the Crypto operation. For chaining of symmetric operations such as cipher
encrypt and authentication generate, the next pointer allows transforms to be chained together.
Crypto devices which support chaining must publish the chaining of symmetric Crypto
operations feature flag.
Currently there are two transform types, cipher and authentication; to specify an AEAD
operation it is required to chain a cipher and an authentication transform together. Also it is
important to note that the order in which the transforms are passed indicates the order of the
chaining.
struct rte_crypto_sym_xform {
struct rte_crypto_sym_xform *next;
/**< next xform in chain */
enum rte_crypto_sym_xform_type type;
/**< xform type */
union {
struct rte_crypto_auth_xform auth;
/**< Authentication / hash xform */
struct rte_crypto_cipher_xform cipher;
/**< Cipher xform */
};
};

The API does not place a limit on the number of transforms that can be chained together but
this will be limited by the underlying Crypto device poll mode driver which is processing the
operation.
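As an illustration, the following sketch chains an AES-CBC cipher transform with a SHA1-HMAC
authentication transform (cipher first, so encryption is performed before digest generation)
and creates a session on device 0; the key buffers and lengths are illustrative assumptions:

uint8_t cipher_key[16], auth_key[64]; /* illustrative key storage */

struct rte_crypto_sym_xform auth_xform = {
    .type = RTE_CRYPTO_SYM_XFORM_AUTH,
    .auth = {
        .op = RTE_CRYPTO_AUTH_OP_GENERATE,
        .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
        .key = { .data = auth_key, .length = sizeof(auth_key) },
        .digest_length = 12,
    },
};
struct rte_crypto_sym_xform cipher_xform = {
    .next = &auth_xform, /* chaining order: cipher, then auth */
    .type = RTE_CRYPTO_SYM_XFORM_CIPHER,
    .cipher = {
        .op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
        .algo = RTE_CRYPTO_CIPHER_AES_CBC,
        .key = { .data = cipher_key, .length = sizeof(cipher_key) },
    },
};

struct rte_cryptodev_sym_session *session =
    rte_cryptodev_sym_session_create(0, &cipher_xform);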

Symmetric Operations

The symmetric Crypto operation structure contains all the mutable data relating to performing
symmetric cryptographic processing on a referenced mbuf data buffer. It is used for either
cipher, authentication, AEAD and chained operations.
As a minimum the symmetric operation must have a source data buffer (m_src), the session
type (session-based/less), a valid session (or transform chain if in session-less mode) and
the minimum authentication/ cipher parameters required depending on the type of operation
specified in the session or the transform chain.
struct rte_crypto_sym_op {
struct rte_mbuf *m_src;
struct rte_mbuf *m_dst;

enum rte_crypto_sym_op_sess_type type;


union {
struct rte_cryptodev_sym_session *session;
/**< Handle for the initialised session context */
struct rte_crypto_sym_xform *xform;
/**< Session-less API Crypto operation parameters */
};

struct {
struct {
uint32_t offset;
uint32_t length;
} data; /**< Data offsets and length for ciphering */

struct {
uint8_t *data;
phys_addr_t phys_addr;
uint16_t length;
} iv; /**< Initialisation vector parameters */
} cipher;

struct {
struct {
uint32_t offset;
uint32_t length;
} data; /**< Data offsets and length for authentication */

struct {
uint8_t *data;
phys_addr_t phys_addr;
uint16_t length;
} digest; /**< Digest parameters */

struct {
uint8_t *data;
phys_addr_t phys_addr;
uint16_t length;
} aad; /**< Additional authentication parameters */
} auth;
}

Asymmetric Cryptography

Asymmetric functionality is currently not supported by the cryptodev API.

Crypto Device API

The cryptodev Library API is described in the DPDK API Reference document.

Link Bonding Poll Mode Driver Library

In addition to Poll Mode Drivers (PMDs) for physical and virtual hardware, DPDK also includes
a pure-software library that allows physical PMD’s to be bonded together to create a single
logical PMD.

Fig. 4.23: Bonded PMDs


The Link Bonding PMD library (librte_pmd_bond) supports bonding of groups of rte_eth_dev
ports of the same speed and duplex to provide capabilities similar to those found in the Linux
bonding driver, allowing the aggregation of multiple (slave) NICs into a single logical interface
between a server and a switch. The new bonded PMD will then process these interfaces based
on the specified mode of operation to provide support for features such as redundant links, fault
tolerance and/or load balancing.
The librte_pmd_bond library exports a C API for the creation of bonded devices as well as the
configuration and management of the bonded device and its slave devices.

Note: The Link Bonding PMD Library is enabled by default in the build configuration files, the
library can be disabled by setting CONFIG_RTE_LIBRTE_PMD_BOND=n and recompiling the
DPDK.

Link Bonding Modes Overview

Currently the Link Bonding PMD library supports following modes of operation:
• Round-Robin (Mode 0):

Fig. 4.24: Round-Robin (Mode 0)


This mode provides load balancing and fault tolerance by transmission of packets in se-
quential order from the first available slave device through the last. Packets are bulk de-
queued from devices then serviced in a round-robin manner. This mode does not guarantee
in-order reception of packets, and downstream processing should be able to handle out-of-order
packets.

• Active Backup (Mode 1):

Fig. 4.25: Active Backup (Mode 1)


In this mode only one slave in the bond is active at any time, a different slave becomes
active if, and only if, the primary active slave fails, thereby providing fault tolerance to slave
failure. The single logical bonded interface’s MAC address is externally visible on only one
NIC (port) to avoid confusing the network switch.

• Balance XOR (Mode 2):

Fig. 4.26: Balance XOR (Mode 2)


This mode provides transmit load balancing (based on the selected transmission policy)
and fault tolerance. The default policy (layer2) uses a simple calculation based on the
packet flow source and destination MAC addresses as well as the number of active slaves
available to the bonded device to classify the packet to a specific slave to transmit on. Alter-
nate transmission policies supported are layer 2+3, this takes the IP source and destination
addresses into the calculation of the transmit slave port and the final supported policy is
layer 3+4, this uses IP source and destination addresses as well as the TCP/UDP source
and destination port.


Note: The coloring differences of the packets are used to identify different flow classification
calculated by the selected transmit policy

• Broadcast (Mode 3):

Fig. 4.27: Broadcast (Mode 3)


This mode provides fault tolerance by transmission of packets on all slave ports.

• Link Aggregation 802.3AD (Mode 4):

Fig. 4.28: Link Aggregation 802.3AD (Mode 4)


This mode provides dynamic link aggregation according to the 802.3ad specification. It
negotiates and monitors aggregation groups that share the same speed and duplex settings
using the selected balance transmit policy for balancing outgoing traffic.
The DPDK implementation of this mode places some additional requirements on the
application.
1. It needs to call rte_eth_tx_burst and rte_eth_rx_burst at intervals of less
than 100ms.
2. Calls to rte_eth_tx_burst must have a buffer size of at least 2xN, where N is
the number of slaves. This is a space required for LACP frames. Additionally LACP
packets are included in the statistics, but they are not returned to the application.

• Transmit Load Balancing (Mode 5):

Fig. 4.29: Transmit Load Balancing (Mode 5)


This mode provides an adaptive transmit load balancing. It dynamically changes the trans-
mitting slave, according to the computed load. Statistics are collected in 100ms intervals
and scheduled every 10ms.

Implementation Details

The librte_pmd_bond bonded device are compatible with the Ethernet device API exported by
the Ethernet PMDs described in the DPDK API Reference.
The Link Bonding Library supports the creation of bonded devices at application startup time
during EAL initialization using the --vdev option as well as programmatically via the C API
rte_eth_bond_create function.
Bonded devices support the dynamical addition and removal of slave devices using the
rte_eth_bond_slave_add / rte_eth_bond_slave_remove APIs.
After a slave device is added to a bonded device, the slave is stopped using rte_eth_dev_stop
and then reconfigured using rte_eth_dev_configure; the RX and TX queues are also
reconfigured using rte_eth_tx_queue_setup / rte_eth_rx_queue_setup with the
parameters used to configure the bonding device. If RSS is enabled for the bonding device,
this mode is also enabled on the new slave and configured as well.


Setting the bonding device's multi-queue mode to RSS makes it fully RSS-capable, so all
slaves are synchronized with its configuration. This mode is intended to provide an RSS
configuration on the slaves that is transparent to the client application implementation.
The bonding device stores its own version of the RSS settings, i.e. RETA, RSS hash function
and RSS key, used to set up its slaves. This allows the RSS configuration of the bonding device
to be defined as the desired configuration of the whole bond (as one unit), without referring to
any slave inside it. This is required to ensure consistency and makes the configuration more
error-proof.
The RSS hash function set for the bonding device is the maximal set of RSS hash functions
supported by all bonded slaves. The RETA size is the GCD of all the slaves' RETA sizes, so it
can be easily used as a pattern providing the expected behavior even if the slave RETA sizes
differ. If an RSS key is not set for the bonded device, it is not changed on the slaves and the
default key for each device is used.
All settings are managed through the bonding port API and are always propagated in one
direction (from bonding to slaves).

Link Status Change Interrupts / Polling

Link bonding devices support the registration of a link status change callback, using the
rte_eth_dev_callback_register API, this will be called when the status of the bond-
ing device changes. For example in the case of a bonding device which has 3 slaves, the link
status will change to up when one slave becomes active or change to down when all slaves
become inactive. There is no callback notification when a single slave changes state and the
previous conditions are not met. If a user wishes to monitor individual slaves then they must
register callbacks with that slave directly.
The link bonding library also supports devices which do not implement link status change
interrupts, this is achieved by polling the devices link status at a defined period which is
set using the rte_eth_bond_link_monitoring_set API, the default polling interval is
10ms. When a device is added as a slave to a bonding device it is determined using the
RTE_PCI_DRV_INTR_LSC flag whether the device supports interrupts or whether the link sta-
tus should be monitored by polling it.

Requirements / Limitations

The current implementation only supports devices that support the same speed and duplex to
be added as slaves to the same bonded device. The bonded device inherits these attributes
from the first active slave added to the bonded device and then all further slaves added to the
bonded device must support these parameters.
A bonding device must have a minimum of one slave before the bonding device itself can be
started.
To use a bonding device's dynamic RSS configuration feature effectively, it is also required
that all slaves be RSS-capable and support at least one common hash function. Changing the
RSS key is only possible when all slave devices support the same key size.
To prevent inconsistency on how slaves process packets, once a device is added to a bonding
device, RSS configuration should be managed through the bonding device API, and not directly
on the slave.


Like all other PMD, all functions exported by a PMD are lock-free functions that are assumed
not to be invoked in parallel on different logical cores to work on the same target object.
It should also be noted that the PMD receive function should not be invoked directly on slave
devices after they have been added to a bonded device, since packets read directly from the
slave device will no longer be available to the bonded device to read.

Configuration

Link bonding devices are created using the rte_eth_bond_create API which requires a
unique device name, the bonding mode, and the socket Id to allocate the bonding device’s
resources on. The other configurable parameters for a bonded device are its slave devices, its
primary slave, a user defined MAC address and transmission policy to use if the device is in
balance XOR mode.

Slave Devices

Bonding devices support up to a maximum of RTE_MAX_ETHPORTS slave devices of the same
speed and duplex. Ethernet devices can be added as a slave to a maximum of one bonded
device. Slave devices are reconfigured with the configuration of the bonded device on being
added to a bonded device.
The bonded device also guarantees to restore the MAC address of a slave device to its original
value upon its removal from the bonded device.

Primary Slave

The primary slave is used to define the default port to use when a bonded device is in active
backup mode. A different port will be used if, and only if, the current primary port goes
down. If the user does not specify a primary port it will default to being the first port added to
the bonded device.

MAC Address

The bonded device can be configured with a user specified MAC address; this address will be
inherited by some or all slave devices depending on the operating mode. If the device is in
active backup mode then only the primary device will have the user specified MAC; all other
slaves will retain their original MAC address. In modes 0, 2, 3 and 4 all slave devices are
configured with the bonded device's MAC address.

If a user defined MAC address is not defined then the bonded device will default to using the
primary slave's MAC address.

Balance XOR Transmit Policies

There are 3 supported transmission policies for a bonded device running in Balance XOR mode:
Layer 2, Layer 2+3, and Layer 3+4.


• Layer 2: Ethernet MAC address based balancing is the default transmission policy for
Balance XOR bonding mode. It uses a simple XOR calculation on the source MAC
address and destination MAC address of the packet and then calculates the modulus of
this value to determine the slave device to transmit the packet on.
• Layer 2 + 3: Ethernet MAC address & IP Address based balancing uses a combination of
source/destination MAC addresses and the source/destination IP addresses of the data
packet to decide which slave port the packet will be transmitted on.
• Layer 3 + 4: IP Address & UDP Port based balancing uses a combination of
source/destination IP addresses and the source/destination UDP ports of the data
packet to decide which slave port the packet will be transmitted on.
All these policies support 802.1Q VLAN Ethernet packets, as well as IPv4, IPv6 and UDP
protocols for load balancing. A sketch of the default Layer 2 policy is shown below.
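
The following is a minimal illustrative sketch of the Layer 2 policy described above; the helper
name and the exact reduction step are assumptions based on the description, not the driver's
internal code:

#include <stdint.h>
#include <rte_ether.h>

/* Illustrative Layer 2 balance XOR policy: XOR the source and destination
 * MAC address bytes, then take the result modulo the number of active
 * slaves to pick the transmit slave. */
static inline uint8_t
l2_xor_slave(const struct ether_hdr *eth, uint8_t slave_count)
{
    uint32_t hash = 0;
    int i;

    for (i = 0; i < ETHER_ADDR_LEN; i++)
        hash ^= eth->s_addr.addr_bytes[i] ^ eth->d_addr.addr_bytes[i];

    return hash % slave_count;
}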

Using Link Bonding Devices

The librte_pmd_bond library supports two modes of device creation: using the library's full C
API, or using the EAL command line to statically configure link bonding devices at application
startup. Using the EAL option it is possible to use link bonding functionality transparently
without specific knowledge of the library's API; this can be used, for example, to add bonding
functionality, such as active backup, to an existing application which has no knowledge of the
link bonding C API.

Using the Poll Mode Driver from an Application

Using the librte_pmd_bond library's API it is possible to dynamically create and manage
link bonding devices from within any application. Link bonding devices are created using the
rte_eth_bond_create API, which requires a unique device name, the link bonding mode
to initialize the device in and, finally, the socket ID on which to allocate the device's resources.
After successful creation of a bonding device it must be configured using the generic Ethernet
device configure API rte_eth_dev_configure and then the RX and TX queues which will
be used must be set up using rte_eth_tx_queue_setup / rte_eth_rx_queue_setup.

Slave devices can be dynamically added and removed from a link bonding device us-
ing the rte_eth_bond_slave_add / rte_eth_bond_slave_remove APIs, but at least
one slave device must be added to the link bonding device before it can be started using
rte_eth_dev_start.

The link status of a bonded device is dictated by that of its slaves; if all slave device links
are down, or if all slaves are removed from the link bonding device, then the link status of the
bonding device will go down.

It is also possible to configure / query the configuration of the control param-
eters of a bonded device using the provided APIs rte_eth_bond_mode_set/
get, rte_eth_bond_primary_set/get, rte_eth_bond_mac_set/reset and
rte_eth_bond_xmit_policy_set/get. A minimal setup sketch follows.
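
The following is a minimal sketch of this flow, assuming two already-probed slave ports (0 and
1), a default port configuration and an existing mbuf pool; error handling is abbreviated:

#include <rte_ethdev.h>
#include <rte_eth_bond.h>
#include <rte_lcore.h>
#include <rte_mempool.h>

/* Create an active backup bonded device, attach two slaves and start it. */
static int
setup_bonded_port(struct rte_mempool *mbuf_pool)
{
    static const struct rte_eth_conf port_conf; /* zero-initialized defaults */
    int bond_port;

    bond_port = rte_eth_bond_create("net_bond0", BONDING_MODE_ACTIVE_BACKUP,
                                    rte_socket_id());
    if (bond_port < 0)
        return -1;

    if (rte_eth_dev_configure(bond_port, 1, 1, &port_conf) != 0)
        return -1;
    if (rte_eth_rx_queue_setup(bond_port, 0, 128, rte_socket_id(), NULL,
                               mbuf_pool) != 0)
        return -1;
    if (rte_eth_tx_queue_setup(bond_port, 0, 512, rte_socket_id(), NULL) != 0)
        return -1;

    /* At least one slave must be added before the device can be started. */
    rte_eth_bond_slave_add(bond_port, 0);
    rte_eth_bond_slave_add(bond_port, 1);

    return rte_eth_dev_start(bond_port);
}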


Using Link Bonding Devices from the EAL Command Line

Link bonding devices can be created at application startup time using the --vdev EAL com-
mand line option. The device name must start with the net_bond prefix followed by numbers
or letters. The name must be unique for each device. Each device can have multiple options
arranged in a comma separated list. Multiple device definitions can be given by passing the
--vdev option multiple times.

Device names and bonding options must be separated by commas as shown below:
$RTE_TARGET/app/testpmd -c f -n 4 --vdev 'net_bond0,bond_opt0=..,bond_opt1=..' --vdev 'net_bond1,bond_opt0=..,bond_opt1=..'

Link Bonding EAL Options

There are multiple ways in which the definitions can be expressed and combined, as long as
the following rules are respected:
• A unique device name, in the format net_bondX, is provided, where X can be any
combination of numbers and/or letters, and the name is no greater than 32 characters
long.
• At least one slave device is provided for each bonded device definition.
• The operation mode of the bonded device being created is provided.
The different options are:
• mode: Integer value defining the bonding mode of the device. Currently supports modes
0,1,2,3,4,5 (round-robin, active backup, balance, broadcast, link aggregation, transmit
load balancing).
mode=2

• slave: Defines the PMD device which will be added as a slave to the bonded de-
vice. This option can be given multiple times, once for each device to be added as a
slave. Physical devices should be specified using their PCI address, in the format
domain:bus:devid.function
slave=0000:0a:00.0,slave=0000:0a:00.1

• primary: Optional parameter which defines the primary slave port; it is used in active
backup mode to select the primary slave for data TX/RX if it is available. The primary
port is also used to select the MAC address to use when it is not defined by the user.
This defaults to the first slave added to the device if it is not specified. The primary device
must be a slave of the bonded device.
primary=0000:0a:00.0

• socket_id: Optional parameter used to select the socket on a NUMA device on which the
bonded device's resources will be allocated.
socket_id=0

• mac: Optional parameter to select a MAC address for link bonding device, this overrides
the value of the primary slave device.
mac=00:1e:67:1d:fd:1d


• xmit_policy: Optional parameter which defines the transmission policy when the bonded
device is in balance mode. If not specified by the user this defaults to l2 (layer 2) forwarding;
the other transmission policies available are l23 (layer 2+3) and l34 (layer 3+4)
xmit_policy=l23

• lsc_poll_period_ms: Optional parameter which defines the polling interval in milli-
seconds at which devices which don't support lsc interrupts are checked for a change
in the device's link status
lsc_poll_period_ms=100

• up_delay: Optional parameter which adds a delay in milli-seconds to the propagation of
a device's link status changing to up; by default this parameter is zero.
up_delay=10

• down_delay: Optional parameter which adds a delay in milli-seconds to the propagation
of a device's link status changing to down; by default this parameter is zero.
down_delay=50

Examples of Usage

Create a bonded device in round robin mode with two slaves specified by their PCI address:
$RTE_TARGET/app/testpmd -c '0xf' -n 4 --vdev 'net_bond0,mode=0, slave=0000:00a:00.01,slave=0000

Create a bonded device in round robin mode with two slaves specified by their PCI address
and an overriding MAC address:
$RTE_TARGET/app/testpmd -c '0xf' -n 4 --vdev 'net_bond0,mode=0, slave=0000:00a:00.01,slave=0000

Create a bonded device in active backup mode with two slaves specified, and a primary slave
specified by their PCI addresses:
$RTE_TARGET/app/testpmd -c '0xf' -n 4 --vdev 'net_bond0,mode=1, slave=0000:00a:00.01,slave=0000

Create a bonded device in balance mode with two slaves specified by their PCI addresses,
and a transmission policy of layer 3 + 4 forwarding:
$RTE_TARGET/app/testpmd -c '0xf' -n 4 --vdev 'net_bond0,mode=2, slave=0000:00a:00.01,slave=0000

Timer Library

The Timer library provides a timer service to DPDK execution units to enable execution of
callback functions asynchronously. Features of the library are:
• Timers can be periodic (multi-shot) or single (one-shot).
• Timers can be loaded from one core and executed on another. The target lcore has to
be specified in the call to rte_timer_reset().
• Timers provide high precision (which depends on the call frequency to rte_timer_manage()
that checks timer expiration for the local core).
• If not required in the application, timers can be left disabled (by simply not calling
rte_timer_manage()) to increase performance.


The timer library uses the rte_get_timer_cycles() function that uses the High Precision Event
Timer (HPET) or the CPUs Time Stamp Counter (TSC) to provide a reliable time reference.
This library provides an interface to add, delete and restart a timer. The API is based on BSD
callout() with a few differences. Refer to the callout manual.
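
A minimal usage sketch of the API described above follows (a periodic one-second timer armed
on the calling lcore; in a real application rte_timer_manage() would typically be rate-limited
rather than called in a tight loop):

#include <rte_timer.h>
#include <rte_lcore.h>
#include <rte_cycles.h>

static struct rte_timer example_timer;

/* Callback executed roughly once per second on the chosen lcore. */
static void
timer_cb(struct rte_timer *tim, void *arg)
{
    (void)tim;
    (void)arg;
    /* periodic work goes here */
}

static void
timer_loop(void)
{
    rte_timer_subsystem_init();
    rte_timer_init(&example_timer);

    /* PERIODICAL = multi-shot; SINGLE would arm a one-shot timer.
     * The executing lcore is given explicitly, here the calling lcore. */
    rte_timer_reset(&example_timer, rte_get_timer_hz(), PERIODICAL,
                    rte_lcore_id(), timer_cb, NULL);

    for (;;)
        rte_timer_manage(); /* runs expired timers for the local core */
}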

Implementation Details

Timers are tracked on a per-lcore basis, with all pending timers for a core being maintained
in order of timer expiry in a skiplist data structure. The skiplist used has ten levels and each
entry in the table appears in each level with probability ¼^level. This means that all entries are
present in level 0, 1 in every 4 entries is present at level 1, one in every 16 at level 2 and so on
up to level 9. This means that adding and removing entries from the timer list for a core can be
done in log(n) time, up to 4^10 entries, that is, approximately 1,000,000 timers per lcore.
A timer structure contains a special field called status, which is a union of a timer state
(stopped, pending, running, config) and an owner (lcore id). Depending on the timer state,
we know if a timer is present in a list or not:
• STOPPED: no owner, not in a list
• CONFIG: owned by a core, must not be modified by another core, maybe in a list or not,
depending on previous state
• PENDING: owned by a core, present in a list
• RUNNING: owned by a core, must not be modified by another core, present in a list
Resetting or stopping a timer while it is in a CONFIG or RUNNING state is not allowed. When
modifying the state of a timer, a Compare And Swap instruction should be used to guarantee
that the status (state+owner) is modified atomically.
Inside the rte_timer_manage() function, the skiplist is used as a regular list by iterating along
the level 0 list, which contains all timer entries, until an entry which has not yet expired has
been encountered. To improve performance in the case where there are entries in the timer
list but none of those timers have yet expired, the expiry time of the first list entry is maintained
within the per-core timer list structure itself. On 64-bit platforms, this value can be checked
without the need to take a lock on the overall structure. (Since expiry times are maintained
as 64-bit values, a check on the value cannot be done on 32-bit platforms without using either
a compare-and-swap (CAS) instruction or using a lock, so this additional check is skipped in
favor of checking as normal once the lock has been taken.) On both 64-bit and 32-bit platforms,
a call to rte_timer_manage() returns without taking a lock in the case where the timer list for
the calling core is empty.

Use Cases

The timer library is used for periodic calls, such as garbage collectors, or some state machines
(ARP, bridging, and so on).

References

• callout manual - The callout facility that provides timers with a mechanism to execute a
function at a given time.


• HPET - Information about the High Precision Event Timer (HPET).

Hash Library

The DPDK provides a Hash Library for creating hash tables for fast lookup. The hash table is
a data structure optimized for searching through a set of entries that are each identified by a
unique key. For increased performance the DPDK Hash requires that all the keys have the
same number of bytes, which is set at hash creation time.

Hash API Overview

The main configuration parameters for the hash are:


• Total number of hash entries
• Size of the key in bytes
The hash also allows the configuration of some low-level implementation related parameters
such as:
• Hash function to translate the key into a bucket index
The main methods exported by the hash are:
• Add entry with key: The key is provided as input. If a new entry is successfully added to
the hash for the specified key, or there is already an entry in the hash for the specified
key, then the position of the entry is returned. If the operation was not successful, for
example due to lack of free entries in the hash, then a negative value is returned;
• Delete entry with key: The key is provided as input. If an entry with the specified key is
found in the hash, then the entry is removed from the hash and the position where the
entry was found in the hash is returned. If no entry with the specified key exists in the
hash, then a negative value is returned
• Lookup for entry with key: The key is provided as input. If an entry with the specified
key is found in the hash (lookup hit), then the position of the entry is returned, otherwise
(lookup miss) a negative value is returned.
Apart from the methods explained above, the API allows the user three more options:
• Add / lookup / delete with key and precomputed hash: Both the key and its precomputed
hash are provided as input. This allows the user to perform these operations faster, as
the hash is already computed.
• Add / lookup with key and data: A key-value pair is provided as input. This allows the
user to store not only the key, but also data which may be either an 8-byte integer or a
pointer to external data (if data size is more than 8 bytes).
• Combination of the two options above: the user can provide key, precomputed hash and
data.
Also, the API contains a method to allow the user to look up entries in bursts, achieving higher
performance than looking up individual entries, as the function prefetches next entries at the
time it is operating with the first ones, which reduces significantly the impact of the necessary
memory accesses. Notice that this method uses a pipeline of 8 entries (4 stages of 2 entries),
so it is highly recommended to use at least 8 entries per burst.
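
A minimal sketch of the basic add/lookup flow described above, assuming 16-byte keys and
8-byte data stored directly in the table (the CRC hash is just one possible choice for the
configurable hash function):

#include <stdio.h>
#include <stdint.h>
#include <rte_hash.h>
#include <rte_hash_crc.h>
#include <rte_lcore.h>

/* Create a hash table and exercise add-with-data and lookup-with-data. */
static void
hash_example(const void *key)
{
    struct rte_hash_parameters params = {
        .name = "example_hash",
        .entries = 1024,           /* total number of hash entries */
        .key_len = 16,             /* all keys must share this size */
        .hash_func = rte_hash_crc, /* configurable hash function */
        .hash_func_init_val = 0,
        .socket_id = rte_socket_id(),
    };
    struct rte_hash *h = rte_hash_create(&params);
    void *data;

    if (h == NULL)
        return;

    /* Store the key together with 8 bytes of user data. */
    rte_hash_add_key_data(h, key, (void *)(uintptr_t)42);

    /* A non-negative return value is the entry position (lookup hit). */
    if (rte_hash_lookup_data(h, key, &data) >= 0)
        printf("data: %lu\n", (unsigned long)(uintptr_t)data);

    rte_hash_free(h);
}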


The actual data associated with each key can be either managed by the user using a separate
table that mirrors the hash in terms of number of entries and position of each entry, as shown
in the Flow Classification use case described in the following sections, or stored in the hash
table itself.

The example hash tables in the L2/L3 Forwarding sample applications define which port to
forward a packet to based on a packet flow identified by the five-tuple lookup. However, this
table could also be used for more sophisticated features and provide many other functions and
actions that could be performed on the packets and flows.

Multi-process support

The hash library can be used in a multi-process environment, bearing in mind that only lookups
are thread-safe. The only function that can only be used in single-process mode is
rte_hash_set_cmp_func(), which sets up a custom compare function, which is assigned to
a function pointer (therefore, it is not supported in multi-process mode).

Implementation Details

The hash table has two main tables:


• First table is an array of entries which is further divided into buckets, with the same
number of consecutive array entries in each bucket. Each entry contains the computed
primary and secondary hashes of a given key (explained below), and an index to the
second table.
• The second table is an array of all the keys stored in the hash table and its data associ-
ated to each key.
The hash library uses the cuckoo hash method to resolve collisions. For any input key, there
are two possible buckets (primary and secondary/alternative location) where that key can be
stored in the hash, therefore only the entries within those buckets need to be examined when
the key is looked up. The lookup speed is achieved by reducing the number of entries to be
scanned from the total number of hash entries down to the number of entries in the two hash
buckets, as opposed to the basic method of linearly scanning all the entries in the array. The
hash uses a hash function (configurable) to translate the input key into a 4-byte key signature.
The bucket index is the key signature modulo the number of hash buckets.

Once the buckets are identified, the scope of the hash add, delete and lookup operations is
reduced to the entries in those buckets (it is very likely that entries are in the primary bucket).
To speed up the search logic within the bucket, each hash entry stores the 4-byte key signa-
ture together with the full key. For large key sizes, comparing the input key
against a key from the bucket can take significantly more time than comparing the 4-byte sig-
nature of the input key against the signature of a key from the bucket. Therefore, the signature
comparison is done first and the full key comparison is done only when the signatures match.
The full key comparison is still necessary, as two input keys from the same bucket can still
potentially have the same 4-byte hash signature, although this event is relatively rare for hash
functions providing good uniform distributions for the set of input keys.
Example of lookup:
First of all, the primary bucket is identified, since the entry is likely to be stored there. If the
signature is stored there, we compare its key against the one provided and return the position
where it was stored and/or the data associated to that key if there is a match. If the signature
is not in the primary bucket, the secondary bucket is looked up, where the same procedure is
carried out. If there is no match there either, the key is considered not to be in the table.
Example of addition:
Like lookup, the primary and secondary buckets are identified. If there is an empty slot in the
primary bucket, primary and secondary signatures are stored in that slot, key and data (if any)
are added to the second table and an index to the position in the second table is stored in
the slot of the first table. If there is no space in the primary bucket, one of the entries in that
bucket is pushed to its alternative location, and the key to be added is inserted in its position.
To know where the alternative bucket of the evicted entry is, the secondary signature is looked
up and the alternative bucket index is calculated from the modulo, as seen above. If there is
room in the alternative bucket, the evicted entry is stored in it. If not, the same process is
repeated (one of the entries gets pushed) until a non-full bucket is found. Notice that despite
all the entry movement in the first table, the second table is not touched, which would
otherwise greatly impact performance.

In the very unlikely event that the table enters a loop where the same entries are being evicted
indefinitely, the key is considered not able to be stored. With random keys, this method allows
the user to get around 90% table utilization, without having to drop any stored entry (LRU)
or allocate more memory (extended buckets).

Entry distribution in hash table

As mentioned above, the Cuckoo hash implementation pushes elements out of their bucket
when a new entry whose primary location coincides with their current bucket is added; the
evicted elements are moved to their alternative location. Therefore, as the user adds more
entries to the hash table, the distribution of the hash values in the buckets will change, with
most entries in their primary location and a few in their secondary location; the latter fraction
increases as the table gets busier. This information is quite useful, as performance may be
lower as more entries are evicted to their secondary location.
See the tables below showing example entry distribution as table utilization increases.

Table 4.49: Entry distribution measured with an example table with
1024 random entries using jhash algorithm

% Table used | % In Primary location | % In Secondary location
25           | 100                   | 0
50           | 96.1                  | 3.9
75           | 88.2                  | 11.8
80           | 86.3                  | 13.7
85           | 83.1                  | 16.9
90           | 77.3                  | 22.7
95.8         | 64.5                  | 35.5


Table 4.50: Entry distribution measured with an example table with
1 million random entries using jhash algorithm

% Table used | % In Primary location | % In Secondary location
50           | 96                    | 4
75           | 86.9                  | 13.1
80           | 83.9                  | 16.1
85           | 80.1                  | 19.9
90           | 74.8                  | 25.2
94.5         | 67.4                  | 32.6

Note: The last values in the tables above are the average maximum table utilization with
random keys, using the Jenkins hash function.

Use Case: Flow Classification

Flow classification is used to map each input packet to the connection/flow it belongs to. This
operation is necessary as the processing of each input packet is usually done in the context
of their connection, so the same set of operations is applied to all the packets from the same
flow.
Applications using flow classification typically have a flow table to manage, with each separate
flow having an entry associated with it in this table. The size of the flow table entry is application
specific, with typical values of 4, 16, 32 or 64 bytes.
Each application using flow classification typically has a mechanism defined to uniquely iden-
tify a flow based on a number of fields read from the input packet that make up the flow key.
One example is to use the DiffServ 5-tuple made up of the following fields of the IP and trans-
port layer packet headers: Source IP Address, Destination IP Address, Protocol, Source Port,
Destination Port.
The DPDK hash provides a generic method to implement an application specific flow classifi-
cation mechanism. Given a flow table implemented as an array, the application should create
a hash object with the same number of entries as the flow table and with the hash key size set
to the number of bytes in the selected flow key.
The flow table operations on the application side are described below (a code sketch of this
pattern follows the list):
• Add flow: Add the flow key to the hash. If the returned position is valid, use it to access the
flow entry in the flow table for adding a new flow or updating the information associated
with an existing flow. Otherwise, the flow addition failed, for example due to lack of free
entries for storing new flows.
• Delete flow: Delete the flow key from the hash. If the returned position is valid, use it to
access the flow entry in the flow table to invalidate the information associated with the
flow.
• Lookup flow: Look up the flow key in the hash. If the returned position is valid (flow
lookup hit), use the returned position to access the flow entry in the flow table. Otherwise
(flow lookup miss) there is no flow registered for the current packet.
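
A sketch of this pattern, with an illustrative 5-tuple key and a mirror flow table indexed by the
position returned by the hash (the struct and function names here are examples, not part of
the DPDK API):

#include <stdint.h>
#include <stddef.h>
#include <rte_hash.h>

/* DiffServ 5-tuple flow key; packed so it hashes byte-for-byte. */
struct ipv4_5tuple {
    uint32_t src_ip;
    uint32_t dst_ip;
    uint16_t src_port;
    uint16_t dst_port;
    uint8_t  proto;
} __attribute__((packed));

struct flow_entry {
    uint64_t pkt_count; /* application-specific per-flow state */
};

/* Mirror table: same number of entries as the hash, indexed by position. */
static struct flow_entry flow_table[1024];

static struct flow_entry *
lookup_flow(const struct rte_hash *h, const struct ipv4_5tuple *key)
{
    int32_t pos = rte_hash_lookup(h, key);

    return (pos >= 0) ? &flow_table[pos] : NULL; /* NULL on lookup miss */
}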


References

• Donald E. Knuth, The Art of Computer Programming, Volume 3: Sorting and Searching
(2nd Edition), 1998, Addison-Wesley Professional

Elastic Flow Distributor Library

Introduction

In data centers today, clustering and scheduling of distributed workloads is a very common
task. Many workloads require a deterministic partitioning of a flat key space among a cluster
of machines. When a packet enters the cluster, the ingress node will direct the packet to its
handling node. For example, data centers with disaggregated storage use storage metadata
tables to forward I/O requests to the correct back end storage cluster, stateful packet inspection
will match incoming flows to signatures in flow tables to send incoming packets to their
intended deep packet inspection (DPI) devices, and so on.
EFD is a distributor library that uses perfect hashing to determine a target/value for a given
incoming flow key. It has the following advantages: first, because it uses perfect hashing
it does not store the key itself, and hence lookup performance is not dependent on the key
size. Second, the target/value can be any arbitrary value, hence the system designer and/or
operator can better optimize service rates and inter-cluster network traffic. Third,
since the storage requirement is much smaller than a hash-based flow table (i.e. a better fit for
CPU cache), EFD can scale to millions of flow keys. Finally, with the current optimized library
implementation, performance is fully scalable with any number of CPU cores.

Flow Based Distribution

Computation Based Schemes

Flow distribution and/or load balancing can be simply done using a stateless computation, for
instance using round-robin or a simple computation based on the flow key as an input. For
example, a hash function can be used to direct a certain flow to a target based on the flow key
(e.g. h(key) mod n) where h(key) is the hash value of the flow key and n is the number of
possible targets.
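
As an illustrative sketch of this h(key) mod n scheme (using DPDK's CRC hash purely as an
example hash function):

#include <stdint.h>
#include <rte_hash_crc.h>

/* Stateless flow distribution: hash the flow key and reduce modulo the
 * number of possible targets. */
static inline uint32_t
pick_target(const void *flow_key, uint32_t key_len, uint32_t n_targets)
{
    return rte_hash_crc(flow_key, key_len, 0) % n_targets;
}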

Fig. 4.30: Load Balancing Using Front End Node

In this scheme (Fig. 4.30), the front end server/distributor/load balancer extracts the flow key
from the input packet and applies a computation to determine where this flow should be di-
rected. Intuitively, this scheme is very simple and requires no state to be kept at the front end
node, and hence, storage requirements are minimum.

Fig. 4.31: Consistent Hashing

A widely used flow distributor that belongs to the same category of computation-based
schemes is consistent hashing, shown in Fig. 4.31. Target destinations (shown in red)
are hashed into the same space as the flow keys (shown in blue), and keys are mapped to the
nearest target in a clockwise fashion. Dynamically adding and removing targets with consistent
hashing requires only K/n keys to be remapped on average, where K is the number of keys,
and n is the number of targets. In contrast, in a traditional hash-based scheme, a change in
the number of targets causes nearly all keys to be remapped.
Although computation-based schemes are simple and have minimal storage requirements,
they suffer from the drawback that the system designer/operator can't fully control which target
a specific key is assigned to, as this is dictated by the hash function. Deterministically co-locating
keys together (for example, to minimize inter-server traffic or to optimize for network traffic
conditions, target load, etc.) is simply not possible.

Flow-Table Based Schemes

When using a Flow-Table based scheme to handle flow distribution/load balancing, in contrast
with computation-based schemes, the system designer has the flexibility of assigning a given
flow to any given target. The flow table (e.g. DPDK RTE Hash Library) will simply store both
the flow key and the target value.

Fig. 4.32: Table Based Flow Distribution

As shown in Fig. 4.32, when doing a lookup, the flow-table is indexed with the hash of the flow
key and the keys (more than one is possible, because of hash collision) stored in this index
and corresponding values are retrieved. The retrieved key(s) is matched with the input flow key
and if there is a match the value (target id) is returned.
The drawback of using a hash table for flow distribution/load balancing is the storage require-
ment, since the flow table needs to store keys, signatures and target values. This doesn't allow
this scheme to scale to millions of flow keys. Large tables will usually not fit in the CPU cache,
and hence, the lookup performance is degraded because of the latency to access the main
memory.

EFD Based Scheme

EFD combines the advantages of both flow-table based and computation-based schemes.
It doesn’t require the large storage necessary for flow-table based schemes (because EFD
doesn’t store the key as explained below), and it supports any arbitrary value for any given key.

Fig. 4.33: Searching for Perfect Hash Function

The basic idea of EFD is when a given key is to be inserted, a family of hash functions is
searched until the correct hash function that maps the input key to the correct value is found, as
shown in Fig. 4.33. However, rather than explicitly storing all keys and their associated values,
EFD stores only indices of hash functions that map keys to values, and thereby consumes
much less space than conventional flow-based tables. The lookup operation is very simple,
similar to a computational-based scheme: given an input key the lookup operation is reduced
to hashing that key with the correct hash function.

Fig. 4.34: Divide and Conquer for Millions of Keys


Intuitively, finding a hash function that maps each of a large number (millions) of input keys
to the correct output value is effectively impossible. As a result, as shown in Fig. 4.34, EFD
breaks the problem into smaller pieces (divide and conquer). EFD divides the entire input key
set into many small groups. Each group consists of approximately 20-28 keys (a configurable
parameter for the library); then, for each small group, a brute force search is performed to find
a hash function that produces the correct outputs for each key in the group.
It should be mentioned that, since the online lookup table for EFD doesn't store the key itself,
the size of the EFD table is independent of the key size, and hence EFD lookup performance
is almost constant irrespective of the length of the key, which is a highly desirable feature
especially for longer keys.

In summary, EFD is a set separation data structure that supports millions of keys. It is used to
distribute a given key to an intended target. By itself EFD is not a FIB data structure with an
exact match of the input flow key.

Example of EFD Library Usage

EFD can be used along the data path of many network functions and middleboxes. As previ-
ously mentioned, it can be used as an index table for <key,value> pairs, meta-data for objects, a
flow-level load balancer, etc. Fig. 4.35 shows an example of using EFD as a flow-level load
balancer, where flows are received at a front end server before being forwarded to the target
back end server for processing. The system designer would deterministically co-locate flows
together in order to minimize cross-server interaction. (For example, flows requesting certain
webpage objects are co-located together, to minimize forwarding of common objects across
servers.)

Fig. 4.35: EFD as a Flow-Level Load Balancer

As shown in Fig. 4.35, the front end server will have an EFD table that stores, for each group,
the perfect hash index that satisfies the correct outputs. Because the table size is small
and fits in cache (since keys are not stored), it sustains a large number of flows (N*X, where N
is the maximum number of flows served by each back end server of the X possible targets).
With an input flow key, the group id is computed (for example, using the last few bits of a CRC
hash) and then the EFD table is indexed with the group id to retrieve the corresponding hash
index to use. Once the index is retrieved, the key is hashed using this hash function and the
result will be the intended correct target where this flow is supposed to be processed.
It should be noted that, as a result of EFD not matching the exact key but rather distributing
the flows to a target back end node based on the perfect hash index, a key that has not been
inserted before will be distributed to a valid target. Hence, a local table which stores the flows
served at each node is used and is matched exactly against the input key to rule out new,
never-before-seen flows.

Library API Overview

The EFD library API is created with semantics very similar to those of a hash-index or a flow
table. The application creates an EFD table for a given maximum number of flows, a function
is called to insert a flow key with a specific target value, and another function is used to retrieve
target values for a given individual flow key or a bulk of keys. A usage sketch is shown below.
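
A minimal sketch of that flow follows; the exact parameter types are as best understood from
this release's headers, so treat the signatures below as an assumption and consult rte_efd.h:

#include <rte_efd.h>
#include <rte_lcore.h>

/* Create an EFD table, bind a flow key to a target value, then look it up. */
static void
efd_example(void)
{
    struct rte_efd_table *table;
    uint32_t key = 0x12345678;   /* example 4-byte flow key */
    efd_value_t value = 3;       /* target back end id */
    efd_value_t target;

    table = rte_efd_create("efd_example",
                           1024 * 1024,          /* max number of flows */
                           sizeof(key),          /* key length in bytes */
                           1 << rte_socket_id(), /* online socket bitmask */
                           rte_socket_id());     /* offline socket */
    if (table == NULL)
        return;

    if (rte_efd_update(table, rte_socket_id(), &key, value) ==
        EFD_UPDATE_FAILED)
        return;

    /* Returns the inserted value; never-inserted keys yield a 'random'
     * value, as explained in the text. */
    target = rte_efd_lookup(table, rte_socket_id(), &key);
    (void)target;
}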


EFD Table Create

The function rte_efd_create() is used to create and return a pointer to an EFD table that
is sized to hold up to num_flows keys. The online version of the EFD table (the one that does
not store the keys and is used for lookups) will be allocated and created in the last level cache
(LLC) of the socket defined by the online_socket_bitmask, while the offline EFD table (the
one that stores the keys and is used for key inserts and for computing the perfect hashing) is
allocated and created in the LLC of the socket defined by offline_socket_bitmask. It should
be noted that for highest performance the socket id should match that where the thread is
running, i.e. the online EFD lookup table should be created on the same socket as where the
lookup thread is running.

EFD Insert and Update

The EFD function to insert a key or update a key to a new value is rte_efd_update().
This function will update an existing key to a new value (target) if the key has already been
inserted before, or will insert the <key,value> pair if this key has not been inserted before. It
will return 0 upon success. It will return EFD_UPDATE_WARN_GROUP_FULL (1) if the operation
is an insert and the last available space in the key's group was just used. It will return
EFD_UPDATE_FAILED (2) when the insertion or update has failed (either it failed to find a
suitable perfect hash or the group was full). The function will return EFD_UPDATE_NO_CHANGE
(3) if there is no change to the EFD table (i.e., the same value already exists).

Note: This function is not multi-thread safe and should only be called from one thread.

EFD Lookup

To look up a certain key in an EFD table, the function rte_efd_lookup() is used to return the
value associated with a single key. As previously mentioned, if the key has been inserted, the
correct value inserted is returned; if the key has not been inserted before, a 'random' value (based
on hashing of the key) is returned. For better performance and to decrease the overhead of
function calls per key, it is always recommended to use a bulk lookup function (simultaneous
lookup of multiple keys) instead of a single key lookup function. rte_efd_lookup_bulk()
is the bulk lookup function that looks up num_keys simultaneously stored in the key_list, and
the corresponding return values will be returned in the value_list.

Note: This function is multi-thread safe, but there should not be other threads writing in the
EFD table, unless locks are used.

EFD Delete

To delete a certain key in an EFD table, the function rte_efd_delete() can be used. The
function returns zero upon success when the key has been found and deleted. socket_id is
the parameter to use to look up the existing value, which is ideally the caller's socket id. The
previous value associated with this key will be returned in the prev_value argument.


Note: This function is not multi-thread safe and should only be called from one thread.

Library Internals

This section provides a brief high-level overview of the library internals. The intent of this
section is to explain to readers the high-level implementation of insert, lookup and group
rebalancing in the EFD library.

Insert Function Internals

As previously mentioned, EFD divides the whole set of keys into groups of a manageable
size (e.g. 28 keys) and then searches for the perfect hash that satisfies the intended target
value for each key. EFD stores two versions of the <key,value> table:
• Offline Version (in memory): Only used for the insertion/update operation, which is less
frequent than the lookup operation. In the offline version the exact keys for each group
are stored. When a new key is added, the hash function is updated so that it satisfies the
value for the new key together with all the old keys already inserted in this group.
• Online Version (in cache): Used for the frequent lookup operation. In the online version,
as previously mentioned, the keys are not stored but rather only the hash index for each
group.

Fig. 4.36: Group Assignment

Fig. 4.36 depicts the group assignment for 7 flow keys as an example. Given a flow key, a hash
function (in our implementation CRC hash) is used to get the group id. As shown in the figure,
the groups can be unbalanced. (We highlight group rebalancing further below).

Fig. 4.37: Perfect Hash Search - Assigned Keys & Target Value

Focusing on one group that has four keys, Fig. 4.37 depicts the search algorithm to find the
perfect hash function. Assuming that the target value bit for the keys is as shown in the figure,
then the online EFD table will store a 16-bit hash index and a 16-bit lookup table per group per
value bit.

Fig. 4.38: Perfect Hash Search - Satisfy Target Values

For a given keyX, a hash function (h(keyX, seed1) + index * h(keyX, seed2)) is
used to point to a certain bit index in the 16-bit lookup_table value, as shown in Fig. 4.38. The
insert function will brute force search over all possible values for the hash index until a non-
conflicting lookup_table is found.

For example, since both key3 and key7 have a target bit value of 1, it is okay if the hash functions
of both keys point to the same bit in the lookup table. A conflict will occur if a hash index is
used that maps both Key4 and Key7 to the same index in the lookup_table, as shown in Fig.
4.39, since their target value bits are not the same. Once a hash index is found that produces
a lookup_table with no contradictions, this index is stored for this group. This procedure is
repeated for each bit of the target value.


Fig. 4.39: Finding Hash Index for Conflict Free lookup_table

Lookup Function Internals

The design principle of EFD is that lookups are much more frequent than inserts, and hence,
EFD's design optimizes for lookups, which are faster and much simpler than the slower
insert procedure (inserts are slow because of the perfect hash search, as previously discussed).

Fig. 4.40: EFD Lookup Operation

Fig. 4.40 depicts the lookup operation for EFD. Given an input key, the group id is computed
(using CRC hash) and then the hash index for this group is retrieved from the EFD table. Using
the retrieved hash index, the hash function h(key, seed1) + index * h(key, seed2) is
used, which will result in an index in the lookup_table; the bit corresponding to this index will be
the target value bit. This procedure is repeated for each bit of the target value. A sketch of this
per-bit computation is shown below.
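
An illustrative sketch of this per-bit computation follows (not the library's internal code; the
helper name and the 4-bit masking for a 16-bit lookup_table are assumptions):

#include <stdint.h>

/* Recover one target value bit: the two key hashes h1 = h(key, seed1) and
 * h2 = h(key, seed2) plus the group's stored hash index select one of the
 * 16 bits of the group's lookup_table. */
static inline uint8_t
efd_value_bit(uint32_t h1, uint32_t h2, uint16_t hash_index,
              uint16_t lookup_table)
{
    uint32_t bit_pos = (h1 + (uint32_t)hash_index * h2) & 0xF;

    return (lookup_table >> bit_pos) & 1;
}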

Group Rebalancing Function Internals

When discussing EFD inserts and lookups, the discussion is simplified by assuming that a
group id is simply a result of a hash function. However, since hashing in general is not perfect
and will not always produce a uniform output, this simplified assumption will lead to unbalanced
groups, i.e., some groups will have more keys than other groups. Typically, and to minimize in-
sert time with an increasing number of keys, it is preferable that all groups have a balanced
number of keys, so that the brute force search for the perfect hash terminates with a valid hash
index. In order to achieve this target, groups are rebalanced during runtime inserts, and keys
are moved around from a busy group to a less crowded group as more keys are inserted.

Fig. 4.41: Runtime Group Rebalancing

Fig. 4.41 depicts the high level idea of group rebalancing. Given an input key, the hash result is
split into two parts: a chunk id and an 8-bit bin id. A chunk contains 64 different groups and 256
bins (i.e. any given bin can map to 4 distinct groups). When a key is inserted, the bin id is
computed, for example in Fig. 4.41 bin_id=2, and since each bin can be mapped to one of four
different groups (2 bits of storage), the four possible mappings are evaluated and the one that
will result in a balanced key distribution across these four is selected; the mapping result is
stored in these two bits.

References

1. EFD is based on collaborative research work between Intel and Carnegie Mellon
University (CMU); interested readers can refer to the paper “Scaling Up Clustered
Network Appliances with ScaleBricks”, Dong Zhou et al., SIGCOMM 2015
(http://conferences.sigcomm.org/sigcomm/2015/pdf/papers/p241.pdf) for more information.


LPM Library

The DPDK LPM library component implements the Longest Prefix Match (LPM) table search
method for 32-bit keys that is typically used to find the best route match in IP forwarding appli-
cations.

LPM API Overview

The main configuration parameter for LPM component instances is the maximum number of
rules to support. An LPM prefix is represented by a pair of parameters (32-bit key, depth), with
depth in the range of 1 to 32. An LPM rule is represented by an LPM prefix and some user
data associated with the prefix. The prefix serves as the unique identifier of the LPM rule. In
this implementation, the user data is 1-byte long and is called next hop, in correlation with its
main use of storing the ID of the next hop in a routing table entry.
The main methods exported by the LPM component are (a usage sketch follows the list):
• Add LPM rule: The LPM rule is provided as input. If there is no rule with the same prefix
present in the table, then the new rule is added to the LPM table. If a rule with the same
prefix is already present in the table, the next hop of the rule is updated. An error is
returned when there is no available rule space left.
• Delete LPM rule: The prefix of the LPM rule is provided as input. If a rule with the
specified prefix is present in the LPM table, then it is removed.
• Lookup LPM key: The 32-bit key is provided as input. The algorithm selects the rule that
represents the best match for the given key and returns the next hop of that rule. In the
case that there are multiple rules present in the LPM table that have the same 32-bit key,
the algorithm picks the rule with the highest depth as the best match rule, which means
that the rule has the highest number of most significant bits matching between the input
key and the rule key.
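
A minimal usage sketch of these methods, assuming the rte_lpm_config-based create call
of this release (see rte_lpm.h for the exact signatures):

#include <rte_ip.h>
#include <rte_lpm.h>
#include <rte_memory.h>

/* Create an LPM table, add a /16 rule and perform a longest-prefix lookup. */
static void
lpm_example(void)
{
    struct rte_lpm_config config = {
        .max_rules = 1024,
        .number_tbl8s = 256, /* default tbl8 count, see below */
        .flags = 0,
    };
    struct rte_lpm *lpm = rte_lpm_create("lpm_example", SOCKET_ID_ANY,
                                         &config);
    uint32_t next_hop;

    if (lpm == NULL)
        return;

    /* Rule: 192.168.0.0/16 -> next hop 5. */
    rte_lpm_add(lpm, IPv4(192, 168, 0, 0), 16, 5);

    /* Returns 0 on lookup hit and fills in the next hop of the best match. */
    if (rte_lpm_lookup(lpm, IPv4(192, 168, 1, 1), &next_hop) == 0)
        /* forward using next_hop (5) */;

    rte_lpm_free(lpm);
}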

Implementation Details

The current implementation uses a variation of the DIR-24-8 algorithm that trades memory
usage for improved LPM lookup speed. The algorithm allows the lookup operation to be per-
formed with typically a single memory read access. In the statistically rare case when the best
match rule has a depth bigger than 24, the lookup operation requires two memory read
accesses. Therefore, the performance of the LPM lookup operation is greatly influenced by
whether the specific memory location is present in the processor cache or not.
The main data structure is built using the following elements:
• A table with 2^24 entries.
• A number of tables (RTE_LPM_TBL8_NUM_GROUPS) with 2^8 entries.
The first table, called tbl24, is indexed using the first 24 bits of the IP address to be looked up,
while the second table(s), called tbl8, is indexed using the last 8 bits of the IP address. This
means that depending on the outcome of trying to match the IP address of an incoming packet
to the rule stored in the tbl24 we might need to continue the lookup process in the second level.
Since every entry of the tbl24 can potentially point to a tbl8, ideally, we would have 2^24 tbl8s,
which would be the same as having a single table with 2^32 entries. This is not feasible due
to resource restrictions. Instead, this approach takes advantage of the fact that rules longer
than 24 bits are very rare. By splitting the process in two different tables/levels and limiting the
number of tbl8s, we can greatly reduce memory consumption while maintaining a very good
lookup speed (one memory access, most of the times).

Fig. 4.42: Table split into different levels

An entry in tbl24 contains the following fields:


• next hop / index to the tbl8
• valid flag
• external entry flag
• depth of the rule (length)
The first field can either contain a number indicating the tbl8 in which the lookup process
should continue, or the next hop itself if the longest prefix match has already been found. The
two flags are used to determine whether the entry is valid or not and whether the search
process has finished or not, respectively. The depth or length of the rule is the number of bits
of the rule that is stored in a specific entry.
An entry in a tbl8 contains the following fields:
• next hop
• valid
• valid group
• depth


Next hop and depth contain the same information as in the tbl24. The two flags show whether
the entry and the table are valid respectively.
The other main data structure is a table containing the main information about the rules (IP
and next hop). This is a higher level table, used for different things:
• Check whether a rule already exists or not, prior to addition or deletion, without having to
actually perform a lookup.
• When deleting, to check whether there is a rule containing the one that is to be deleted.
This is important, since the main data structure will have to be updated accordingly.

Addition

When adding a rule, there are different possibilities. If the rule’s depth is exactly 24 bits, then:
• Use the rule (IP address) as an index to the tbl24.
• If the entry is invalid (i.e. it doesn’t already contain a rule) then set its next hop to its value,
the valid flag to 1 (meaning this entry is in use), and the external entry flag to 0 (meaning
the lookup process ends at this point, since this is the longest prefix that matches).
If the rule’s depth is exactly 32 bits, then:
• Use the first 24 bits of the rule as an index to the tbl24.
• If the entry is invalid (i.e. it doesn’t already contain a rule) then look for a free tbl8, set
the index to the tbl8 to this value, the valid flag to 1 (meaning this entry is in use), and the
external entry flag to 1 (meaning the lookup process must continue since the rule hasn’t
been explored completely).
If the rule’s depth is any other value, prefix expansion must be performed. This means the rule
is copied to all the entries (as long as they are not in use) which would also cause a match.
As a simple example, let’s assume the depth is 20 bits. This means that there are 2^(24 -
20) = 16 different combinations of the first 24 bits of an IP address that would cause a match.
Hence, in this case, we copy the exact same entry to every position indexed by one of these
combinations.
By doing this we ensure that during the lookup process, if a rule matching the IP address exists,
it is found in either one or two memory accesses, depending on whether we need to move to
the next table or not. Prefix expansion is one of the keys of this algorithm, since it improves the
speed dramatically by adding redundancy.

Lookup

The lookup process is much simpler and quicker. In this case:


• Use the first 24 bits of the IP address as an index to the tbl24. If the entry is not in use,
then it means we don’t have a rule matching this IP. If it is valid and the external entry
flag is set to 0, then the next hop is returned.
• If it is valid and the external entry flag is set to 1, then we use the tbl8 index to find out
the tbl8 to be checked, and the last 8 bits of the IP address as an index to this table.
Similarly, if the entry is not in use, then we don’t have a rule matching this IP address. If
it is valid then the next hop is returned.


Limitations in the Number of Rules

There are different things that limit the number of rules that can be added. The first one is the
maximum number of rules, which is a parameter passed through the API. Once this number is
reached, it is not possible to add any more rules to the routing table unless one or more are
removed.
The second reason is an intrinsic limitation of the algorithm. As explained before, to avoid high
memory consumption, the number of tbl8s is limited at compilation time (this value is by default
256). If we exhaust the tbl8s, we won't be able to add any more rules. How many of them are
necessary for a specific routing table is hard to determine in advance.
A tbl8 is consumed whenever we have a new rule with depth bigger than 24, and the first 24
bits of this rule are not the same as the first 24 bits of a rule previously added. If they are, then
the new rule will share the same tbl8 as the previous one, since the only difference between
the two rules is within the last byte.
With the default value of 256, we can have up to 256 rules longer than 24 bits that differ on
their first three bytes. Since routes longer than 24 bits are unlikely, this shouldn’t be a problem
in most setups. Even if it is, however, the number of tbl8s can be modified.

Use Case: IPv4 Forwarding

The LPM algorithm is used to implement Classless Inter-Domain Routing (CIDR) strategy used
by routers implementing IPv4 forwarding.

References

• RFC 1519: Classless Inter-Domain Routing (CIDR): an Address Assignment and
Aggregation Strategy, http://www.ietf.org/rfc/rfc1519
• Pankaj Gupta, Algorithms for Routing Lookups and Packet Classification, PhD Thesis,
Stanford University, 2000 (http://klamath.stanford.edu/~pankaj/thesis/thesis_1sided.pdf)

LPM6 Library

The LPM6 (LPM for IPv6) library component implements the Longest Prefix Match (LPM) ta-
ble search method for 128-bit keys that is typically used to find the best match route in IPv6
forwarding applications.

LPM6 API Overview

The main configuration parameters for the LPM6 library are:


• Maximum number of rules: This defines the size of the table that holds the rules, and
therefore the maximum number of rules that can be added.
• Number of tbl8s: A tbl8 is a node of the trie that the LPM6 algorithm is based on.


This parameter is related to the number of rules you can have, but there is no way to accurately
predict the number needed to hold a specific number of rules, since it strongly depends on the
depth and IP address of every rule. One tbl8 consumes 1 kilobyte of memory. As a
recommendation, 65536 tbl8s should be sufficient to store several thousand IPv6 rules, but
the number can vary depending on the case.
An LPM prefix is represented by a pair of parameters (128-bit key, depth), with depth in the
range of 1 to 128. An LPM rule is represented by an LPM prefix and some user data associated
with the prefix. The prefix serves as the unique identifier for the LPM rule. In this implementa-
tion, the user data is 1-byte long and is called “next hop”, which corresponds to its main use of
storing the ID of the next hop in a routing table entry.
The main methods exported by the LPM6 component are (a usage sketch follows the list):
• Add LPM rule: The LPM rule is provided as input. If there is no rule with the same prefix
present in the table, then the new rule is added to the LPM table. If a rule with the same
prefix is already present in the table, the next hop of the rule is updated. An error is
returned when there is no available space left.
• Delete LPM rule: The prefix of the LPM rule is provided as input. If a rule with the
specified prefix is present in the LPM table, then it is removed.
• Lookup LPM key: The 128-bit key is provided as input. The algorithm selects the rule
that represents the best match for the given key and returns the next hop of that rule. In
the case that there are multiple rules present in the LPM table that have the same 128-bit
value, the algorithm picks the rule with the highest depth as the best match rule, which
means the rule has the highest number of most significant bits matching between the
input key and the rule key.
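
A minimal usage sketch of these methods (the 1-byte next hop below matches the user data
size described in this text; see rte_lpm6.h for the exact types in your release):

#include <stdint.h>
#include <rte_lpm6.h>
#include <rte_memory.h>

/* Create an LPM6 table, add a /32 IPv6 rule and look up an address. */
static void
lpm6_example(void)
{
    struct rte_lpm6_config config = {
        .max_rules = 1024,
        .number_tbl8s = 65536, /* recommended starting point, see above */
        .flags = 0,
    };
    struct rte_lpm6 *lpm = rte_lpm6_create("lpm6_example", SOCKET_ID_ANY,
                                           &config);
    uint8_t ip[16] = { 0x20, 0x01, 0x0d, 0xb8 }; /* 2001:db8:: */
    uint8_t next_hop;

    if (lpm == NULL)
        return;

    /* Rule: 2001:db8::/32 -> next hop 7. */
    rte_lpm6_add(lpm, ip, 32, 7);

    /* Returns 0 on lookup hit and fills in the next hop of the best match. */
    if (rte_lpm6_lookup(lpm, ip, &next_hop) == 0)
        /* forward using next_hop (7) */;

    rte_lpm6_free(lpm);
}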

Implementation Details

This is a modification of the algorithm used for IPv4 (see Implementation Details). In this case,
instead of using two levels, one with a tbl24 and a second with a tbl8, 14 levels are used.
The implementation can be seen as a multi-bit trie where the stride or number of bits inspected
on each level varies from level to level. Specifically, 24 bits are inspected on the root node, and
the remaining 104 bits are inspected in groups of 8 bits. This effectively means that the trie
has 14 levels at the most, depending on the rules that are added to the table.
The algorithm allows the lookup operation to be performed with a number of memory accesses
that directly depends on the length of the rule and whether there are other rules with bigger
depths and the same key in the data structure. It can vary from 1 to 14 memory accesses, with
5 being the average value for the lengths that are most commonly used in IPv6.
The main data structure is built using the following elements:
• A table with 2^24 entries
• A number of tables, configurable by the user through the API, with 2^8 entries
The first table, called tbl24, is indexed using the first 24 bits of the IP address to be looked up,
while the rest of the tables, called tbl8s, are indexed using the rest of the bytes of the IP
address, in chunks of 8 bits. This means that depending on the outcome of trying to match
the IP address of an incoming packet to the rule stored in the tbl24 or the subsequent tbl8s we
might need to continue the lookup process in deeper levels of the tree.


Similar to the limitation presented in the algorithm for IPv4, to store every possible IPv6 rule,
we would need a table with 2^128 entries. This is not feasible due to resource restrictions.
By splitting the process in different tables/levels and limiting the number of tbl8s, we can greatly
reduce memory consumption while maintaining a very good lookup speed (one memory ac-
cess per level).

Fig. 4.43: Table split into different levels

An entry in a table contains the following fields:


• next hop / index to the tbl8
• depth of the rule (length)
• valid flag
• valid group flag
• external entry flag
The first field can either contain a number indicating the tbl8 in which the lookup process
should continue, or the next hop itself if the longest prefix match has already been found. The
depth or length of the rule is the number of bits of the rule that is stored in a specific entry.
The flags are used to determine whether the entry/table is valid or not and whether the search
process has finished or not, respectively.
Both types of tables share the same structure.
The other main data structure is a table containing the main information about the rules (IP,
next hop and depth). This is a higher level table, used for different things:
• Check whether a rule already exists or not, prior to addition or deletion, without having to
actually perform a lookup.


• When deleting, to check whether there is a rule containing the one that is to be deleted.
This is important, since the main data structure will have to be updated accordingly.

Addition

When adding a rule, there are different possibilities. If the rule’s depth is exactly 24 bits, then:
• Use the rule (IP address) as an index to the tbl24.
• If the entry is invalid (i.e. it doesn’t already contain a rule) then set its next hop to its value,
the valid flag to 1 (meaning this entry is in use), and the external entry flag to 0 (meaning
the lookup process ends at this point, since this is the longest prefix that matches).
If the rule’s depth is bigger than 24 bits but a multiple of 8, then:
• Use the first 24 bits of the rule as an index to the tbl24.
• If the entry is invalid (i.e. it doesn’t already contain a rule) then look for a free tbl8, set
the index to the tbl8 to this value, the valid flag to 1 (meaning this entry is in use), and the
external entry flag to 1 (meaning the lookup process must continue since the rule hasn’t
been explored completely).
• Use the following 8 bits of the rule as an index to the next tbl8.
• Repeat the process until the tbl8 at the right level (depending on the depth) has been
reached and fill it with the next hop, setting the external entry flag to 0.
If the rule’s depth is any other value, prefix expansion must be performed. This means the rule
is copied to all the entries (as long as they are not in use) which would also cause a match.
As a simple example, let’s assume the depth is 20 bits. This means that there are 2^(24-20)
= 16 different combinations of the first 24 bits of an IP address that would cause a match.
Hence, in this case, we copy the exact same entry to every position indexed by one of these
combinations.
By doing this we ensure that during the lookup process, if a rule matching the IP address exists,
it is found in, at the most, 14 memory accesses, depending on how many times we need to
move to the next table. Prefix expansion is one of the keys of this algorithm, since it improves
the speed dramatically by adding redundancy.
Prefix expansion can be performed at any level. So, for example, if the depth is 34 bits, it will
be performed in the third level (second tbl8-based level).

Lookup

The lookup process is much simpler and quicker. In this case:


• Use the first 24 bits of the IP address as an index to the tbl24. If the entry is not in use,
then it means we don’t have a rule matching this IP. If it is valid and the external entry
flag is set to 0, then the next hop is returned.
• If it is valid and the external entry flag is set to 1, then we use the tbl8 index to find out
the tbl8 to be checked, and the next 8 bits of the IP address as an index to this table.
Similarly, if the entry is not in use, then we don’t have a rule matching this IP address. If
it is valid then check the external entry flag for a new tbl8 to be inspected.

• Repeat the process until either we find an invalid entry (lookup miss) or a valid entry with
the external entry flag set to 0. Return the next hop in the latter case.
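
As a concrete illustration of the addition and lookup operations described above, the following
is a minimal sketch using the rte_lpm6 API. The table name, sizing values and next hop are
illustrative only, and the width of the next hop argument (uint8_t here) was extended to
uint32_t in more recent DPDK releases:

#include <stdio.h>
#include <stdint.h>
#include <rte_lpm6.h>

static int
lpm6_example(void)
{
    /* Illustrative sizing; max_rules and number_tbl8s are the two
     * limits discussed in the next section. */
    struct rte_lpm6_config config = {
        .max_rules = 1024,
        .number_tbl8s = 1 << 16, /* each tbl8 consumes 1 KB */
        .flags = 0,
    };
    struct rte_lpm6 *lpm = rte_lpm6_create("lpm6_example", 0, &config);
    if (lpm == NULL)
        return -1;

    /* 2001:db8:1::/48 -> next hop 100. Since 48 is a multiple of 8,
     * no prefix expansion is needed; the rule occupies the tbl24 plus
     * up to three tbl8s. */
    uint8_t prefix[16] = { 0x20, 0x01, 0x0d, 0xb8, 0x00, 0x01 };
    if (rte_lpm6_add(lpm, prefix, 48, 100) < 0) {
        rte_lpm6_free(lpm);
        return -1;
    }

    /* Lookup walks the tbl24 and then the tbl8 chain until an entry
     * with the external entry flag cleared is found. */
    uint8_t ip[16] = { 0x20, 0x01, 0x0d, 0xb8, 0x00, 0x01, 0xaa };
    uint8_t next_hop; /* widened to uint32_t in later DPDK releases */
    if (rte_lpm6_lookup(lpm, ip, &next_hop) == 0)
        printf("next hop: %u\n", next_hop);

    rte_lpm6_free(lpm);
    return 0;
}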

Limitations in the Number of Rules

There are different things that limit the number of rules that can be added. The first one is the
maximum number of rules, which is a parameter passed through the API. Once this number is
reached, it is not possible to add any more rules to the routing table unless one or more are
removed.
The second limitation is in the number of tbl8s available. If we exhaust tbl8s, we won’t be able
to add any more rules. How to know how many of them are necessary for a specific routing
table is hard to determine in advance.
In this algorithm, the maximum number of tbl8s a single rule can consume is 13, which is the
number of levels minus one, since the first three bytes are resolved in the tbl24. However:
• Typically, on IPv6, routes are not longer than 48 bits, which means rules usually take up
to 3 tbl8s.
As explained in the LPM for IPv4 algorithm, it is possible and very likely that several rules will
share one or more tbl8s, depending on what their first bytes are. If they share the same first 24
bits, for instance, the tbl8 at the second level will be shared. This might happen again in deeper
levels, so, effectively, two 48 bit-long rules may use the same three tbl8s if the only difference
is in their last byte.
The number of tbl8s is a parameter exposed to the user through the API in this version of the
algorithm, due to its impact on memory consumption and the number of rules that can be
added to the LPM table. One tbl8 consumes 1 kilobyte of memory.

Use Case: IPv6 Forwarding

The LPM algorithm is used to implement the Classless Inter-Domain Routing (CIDR) strategy
used by routers implementing IP forwarding.

Packet Distributor Library

The DPDK Packet Distributor library is a library designed to be used for dynamic load balancing
of traffic while supporting single packet at a time operation. When using this library, the logical
cores in use are to be considered in two roles: firstly a distributor lcore, which is responsible
for load balancing or distributing packets, and a set of worker lcores which are responsible for
receiving the packets from the distributor and operating on them. The model of operation is
shown in the diagram below.

Distributor Core Operation

The distributor core does the majority of the processing for ensuring that packets are fairly
shared among workers. The operation of the distributor is as follows:
1. Packets are passed to the distributor component by having the distributor lcore thread
call the “rte_distributor_process()” API

Fig. 4.44: Packet Distributor mode of operation

2. The worker lcores all share a single cache line with the distributor core in order to pass
messages and packets to and from the worker. The process API call will poll all the
worker cache lines to see what workers are requesting packets.
3. As workers request packets, the distributor takes packets from the set of packets passed
in and distributes them to the workers. As it does so, it examines the “tag” – stored in the
RSS hash field in the mbuf – for each packet and records what tags are being processed
by each worker.
4. If the next packet in the input set has a tag which is already being processed by a worker,
then that packet will be queued up for processing by that worker and given to it in prefer-
ence to other packets when that worker next makes a request for work. This ensures that
no two packets with the same tag are processed in parallel, and that all packets with the
same tag are processed in input order.
5. Once all input packets passed to the process API have either been distributed to workers
or been queued up for a worker which is processing a given tag, then the process API
returns to the caller.
Other functions which are available to the distributor lcore are:
• rte_distributor_returned_pkts()
• rte_distributor_flush()
• rte_distributor_clear_returns()
Of these the most important API call is “rte_distributor_returned_pkts()” which should only be
called on the lcore which also calls the process API. It returns to the caller all packets which
have finished processing by all worker cores. Within this set of returned packets, all packets
sharing the same tag will be returned in their original order.
NOTE: If worker lcores buffer up packets internally for transmission in bulk afterwards, the
packets sharing a tag will likely get out of order. Once a worker lcore requests a new packet,
the distributor assumes that it has completely finished with the previous packet and therefore
that additional packets with the same tag can safely be distributed to other workers – who may
then flush their buffered packets sooner and cause packets to get out of order.
NOTE: No packet ordering guarantees are made about packets which do not share a common
packet tag.
Using the process and returned_pkts API, the following application workflow can be used, while
allowing packet order within a packet flow – identified by a tag – to be maintained.
The flush and clear_returns API calls, mentioned previously, are likely of less use than the
process and returned_pkts APIs, and are principally provided to aid in unit testing of the li-
brary. Descriptions of these functions and their use can be found in the DPDK API Reference
document.
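
A hedged sketch of the distributor lcore workflow described above is shown below. The
distributor is assumed to have been created beforehand with rte_distributor_create()
(whose exact signature varies between releases), and the port/queue numbers and burst size
are illustrative:

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_distributor.h>

#define BURST_SIZE 32

/* Distributor lcore main loop: receive, distribute by tag, then
 * transmit the packets that all workers have finished with. */
static void
lcore_distributor(struct rte_distributor *d, uint8_t port)
{
    struct rte_mbuf *bufs[BURST_SIZE];
    struct rte_mbuf *done[BURST_SIZE];

    for (;;) {
        uint16_t nb_rx = rte_eth_rx_burst(port, 0, bufs, BURST_SIZE);

        /* The tag in the mbuf RSS hash field guarantees that no two
         * packets of the same flow are processed in parallel. */
        rte_distributor_process(d, bufs, nb_rx);

        /* Completed packets come back in per-tag order. */
        int nb_done = rte_distributor_returned_pkts(d, done, BURST_SIZE);
        if (nb_done > 0)
            rte_eth_tx_burst(port, 0, done, nb_done);
    }
}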

Worker Operation

Worker cores are the cores which do the actual manipulation of the packets distributed by the
packet distributor. Each worker calls “rte_distributor_get_pkt()” API to request a new packet
when it has finished processing the previous one. [The previous packet should be returned to
the distributor component by passing it as the final parameter to this API call.]

Fig. 4.45: Application workflow

Since it may be desirable to vary the number of worker cores depending on the traffic load, i.e.
to save power at times of lighter load, it is possible to have a worker stop processing packets
by calling “rte_distributor_return_pkt()” to indicate that it has finished the current packet and
does not want a new one.
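
A minimal sketch of such a worker loop, following the single-packet API described above,
might look as follows. The worker id scheme and the quit flag are assumptions of this
example, and later DPDK releases also offer a burst-oriented variant of these calls with
different signatures:

#include <stdbool.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_distributor.h>

static volatile bool quit_signal; /* assumed to be set elsewhere */

static int
lcore_worker(void *arg)
{
    struct rte_distributor *d = arg;
    const unsigned int id = rte_lcore_id(); /* illustrative worker id */
    struct rte_mbuf *pkt = NULL;

    while (!quit_signal) {
        /* Returns the previous packet to the distributor and requests
         * a new one in a single call. */
        pkt = rte_distributor_get_pkt(d, id, pkt);
        /* ... operate on pkt here ... */
    }
    /* Hand back the last packet without requesting another, e.g. so
     * the core can be stopped to save power. */
    rte_distributor_return_pkt(d, id, pkt);
    return 0;
}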

Reorder Library

The Reorder Library provides a mechanism for reordering mbufs based on their sequence
number.

Operation

The reorder library is essentially a buffer that reorders mbufs. The user inserts out of order
mbufs into the reorder buffer and pulls in-order mbufs from it.
At a given time, the reorder buffer contains mbufs whose sequence numbers are inside the
sequence window. The sequence window is determined by the minimum sequence number
and the number of entries that the buffer was configured to hold. For example, given a reorder
buffer with 200 entries and a minimum sequence number of 350, the sequence window has
low and high limits of 350 and 550 respectively.
When inserting mbufs, the reorder library differentiates between valid, early and late mbufs
depending on the sequence number of the inserted mbuf:
• valid: the sequence number is inside the window.
• late: the sequence number is outside the window and less than the low limit.
• early: the sequence number is outside the window and greater than the high limit.

The reorder buffer directly returns late mbufs and tries to accommodate early mbufs.

Implementation Details

The reorder library is implemented as a pair of buffers, which are referred to as the Order
buffer and the Ready buffer.
On an insert call, valid mbufs are inserted directly into the Order buffer and late mbufs are
returned to the user with an error.
In the case of early mbufs, the reorder buffer will try to move the window (incrementing the
minimum sequence number) so that the mbuf becomes a valid one. To that end, mbufs in the
Order buffer are moved into the Ready buffer. Any mbufs that have not arrived yet are ignored
and therefore will become late mbufs. This means that as long as there is room in the Ready
buffer, the window will be moved to accommodate early mbufs that would otherwise be outside
the reordering window.
For example, assume that we have a buffer of 200 entries with a minimum sequence number
of 350, and we need to insert an early mbuf with sequence number 565. That means that we
would need to move the window at least 15 positions to accommodate the mbuf. The reorder
buffer would try to move mbufs from at least the next 15 slots in the Order buffer to the Ready
buffer, as long as there is room in the Ready buffer. Any gaps in the Order buffer at that point
are skipped, and those packets will be reported as late packets when they arrive. The process
of moving packets to the Ready buffer continues beyond the minimum required until a gap, i.e.
missing mbuf, in the Order buffer is encountered.
When draining mbufs, the reorder buffer would return mbufs in the Ready buffer first and then
from the Order buffer until a gap is found (mbufs that have not arrived yet).
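
A minimal sketch of this insert/drain cycle is shown below; the window size and burst size
are illustrative, the buffer is assumed to have been created beforehand with
rte_reorder_create(), and sequence numbers are assumed to have been stamped into the
mbuf seqn field by the producer:

#include <rte_mbuf.h>
#include <rte_reorder.h>

#define WINDOW_SIZE 200 /* matches the example window above */

/* Created once at initialization, e.g.:
 * b = rte_reorder_create("reorder", rte_socket_id(), WINDOW_SIZE); */
static struct rte_reorder_buffer *b;

/* Insert out-of-order mbufs and drain whatever is now in order.
 * Late mbufs are rejected by the insert call and freed here. */
static uint16_t
reorder_burst(struct rte_mbuf **in, uint16_t n_in,
              struct rte_mbuf **out, uint16_t n_out)
{
    for (uint16_t i = 0; i < n_in; i++) {
        if (rte_reorder_insert(b, in[i]) < 0)
            rte_pktmbuf_free(in[i]);
    }
    return (uint16_t)rte_reorder_drain(b, out, n_out);
}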

Use Case: Packet Distributor

An application using the DPDK packet distributor could make use of the reorder library to
transmit packets in the same order they were received.
A basic packet distributor use case would consist of a distributor with multiple worker cores.
The processing of packets by the workers is not guaranteed to be in order, hence a reorder
buffer can be used to order as many packets as possible.
In such a scenario, the distributor assigns a sequence number to mbufs before delivering them
to the workers. As the workers finish processing the packets, the distributor inserts those mbufs
into the reorder buffer and finally transmits the drained mbufs.
NOTE: Currently the reorder buffer is not thread safe so the same thread is responsible for
inserting and draining mbufs.

IP Fragmentation and Reassembly Library

The IP Fragmentation and Reassembly Library implements IPv4 and IPv6 packet fragmenta-
tion and reassembly.

Packet fragmentation

Packet fragmentation routines divide an input packet into a number of fragments. Both the
rte_ipv4_fragment_packet() and rte_ipv6_fragment_packet() functions assume that the input
mbuf data points to the start of the IP header of the packet (i.e. the L2 header has already
been stripped). To avoid copying the actual packet’s data, a zero-copy technique is used
(rte_pktmbuf_attach).
For each fragment two new mbufs are created:
• Direct mbuf – mbuf that will contain L3 header of the new fragment.
• Indirect mbuf – mbuf that is attached to the mbuf with the original packet. Its data field
points to the start of the original packet’s data plus the fragment offset.
Then the L3 header is copied from the original mbuf into the ‘direct’ mbuf and updated to
reflect the new fragmented status. Note that for IPv4, the header checksum is not recalculated
and is set to zero.
Finally, the ‘direct’ and ‘indirect’ mbufs for each fragment are linked together via the mbuf’s
next field to compose a packet for the new fragment.
The caller has an ability to explicitly specify which mempools should be used to allocate ‘direct’
and ‘indirect’ mbufs from.
For more information about direct and indirect mbufs, refer to Direct and Indirect Buffers.
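
The following is a minimal sketch of fragmenting a single IPv4 packet with
rte_ipv4_fragment_packet(); the MTU, the maximum fragment count and the mempool
handles are assumptions of this example:

#include <rte_ip_frag.h>
#include <rte_mbuf.h>

#define MAX_FRAGS 4
#define MTU_SIZE  1500

/* m must have its data pointer at the IPv4 header (L2 already
 * stripped). Direct mbufs for the new L3 headers come from
 * direct_pool, indirect (zero-copy) mbufs from indirect_pool. */
static int
fragment_one(struct rte_mbuf *m, struct rte_mbuf *frags[MAX_FRAGS],
             struct rte_mempool *direct_pool,
             struct rte_mempool *indirect_pool)
{
    int32_t n = rte_ipv4_fragment_packet(m, frags, MAX_FRAGS, MTU_SIZE,
                                         direct_pool, indirect_pool);
    if (n < 0)
        return n; /* e.g. too many fragments or mempool exhausted */

    /* The input mbuf is not consumed by the API; drop our reference
     * now that the indirect fragments hold their own. */
    rte_pktmbuf_free(m);
    return n; /* number of fragments produced */
}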

Packet reassembly

IP Fragment Table

Fragment table maintains information about already received fragments of the packet.
Each IP packet is uniquely identified by the triple <Source IP address>, <Destination IP address>,
<ID>.
Note that all update/lookup operations on the Fragment Table are not thread safe. So if
different execution contexts (threads/processes) will access the same table simultaneously,
then some external syncing mechanism has to be provided.
Each table entry can hold information about packets consisting of up to
RTE_LIBRTE_IP_FRAG_MAX (by default: 4) fragments.
A code example that demonstrates the creation of a new Fragment table:
frag_cycles = (rte_get_tsc_hz() + MS_PER_S - 1) / MS_PER_S * max_flow_ttl;
bucket_num = max_flow_num + max_flow_num / 4;
frag_tbl = rte_ip_frag_table_create(max_flow_num, bucket_entries,
        max_flow_num, frag_cycles, socket_id);

Internally, the Fragment table is a simple hash table. The basic idea is to use two hash
functions and <bucket_entries> * associativity. This provides 2 * <bucket_entries> possible
locations in the hash table for each key. When a collision occurs and all 2 * <bucket_entries>
locations are occupied, instead of reinserting existing keys into alternative locations,
ip_frag_tbl_add() simply returns a failure.
Also, entries that reside in the table longer than <max_cycles> are considered invalid, and
could be removed/replaced by new ones.

Note that reassembly demands a lot of mbufs to be allocated. At any given time, up to (2 *
bucket_entries * RTE_LIBRTE_IP_FRAG_MAX * <maximum number of mbufs per packet>)
mbufs can be stored inside the Fragment Table waiting for remaining fragments.

Packet Reassembly

Fragmented packet processing and reassembly is done by the
rte_ipv4_frag_reassemble_packet() and rte_ipv6_frag_reassemble_packet() functions. They
either return a pointer to a valid mbuf that contains the reassembled packet, or NULL (if the
packet can’t be reassembled for some reason).
These functions are responsible for:
1. Search the Fragment Table for an entry with the packet’s <IPv4 Source Address, IPv4 Destina-
tion Address, Packet ID>.
2. If the entry is found, then check if that entry already timed-out. If yes, then free all
previously received fragments, and remove information about them from the entry.
3. If no entry with such key is found, then try to create a new one by one of two ways:
(a) Use as empty entry.
(b) Delete a timed-out entry, free the mbufs associated with it and store a new entry
with specified key in it.
4. Update the entry with new fragment information and check if a packet can be reassem-
bled (the packet’s entry contains all fragments).
(a) If yes, then, reassemble the packet, mark table’s entry as empty and return the
reassembled mbuf to the caller.
(b) If no, then return a NULL to the caller.
If at any stage of packet processing an error is encountered (e.g. a new entry cannot be
inserted into the Fragment Table, or an invalid/timed-out fragment is received), then the
function will free all fragments associated with the packet, mark the table entry as invalid
and return NULL to the caller.
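
A minimal per-packet sketch of this flow, assuming a previously created fragment table and a
death row structure for deferred frees, could look like this:

#include <rte_cycles.h>
#include <rte_ip.h>
#include <rte_ip_frag.h>
#include <rte_mbuf.h>

/* m must point at the IPv4 header. Returns the reassembled packet,
 * the original mbuf if it was not a fragment, or NULL while fragments
 * are still missing. */
static struct rte_mbuf *
reassemble_one(struct rte_ip_frag_tbl *tbl,
               struct rte_ip_frag_death_row *dr, struct rte_mbuf *m)
{
    struct ipv4_hdr *ip_hdr = rte_pktmbuf_mtod(m, struct ipv4_hdr *);

    if (!rte_ipv4_frag_pkt_is_fragmented(ip_hdr))
        return m; /* complete packet, nothing to do */

    struct rte_mbuf *mo = rte_ipv4_frag_reassemble_packet(
            tbl, dr, m, rte_rdtsc(), ip_hdr);

    /* Free mbufs of invalid/timed-out entries that accumulated in
     * the death row (second argument is a prefetch hint). */
    rte_ip_frag_free_death_row(dr, 3);

    return mo;
}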

Debug logging and Statistics Collection

The RTE_LIBRTE_IP_FRAG_TBL_STAT config macro controls statistics collection for the
Fragment Table. This macro is not enabled by default.
The RTE_LIBRTE_IP_FRAG_DEBUG controls debug logging of IP fragments processing and
reassembling. This macro is disabled by default. Note that while logging contains a lot of
detailed information, it slows down packet processing and might cause the loss of a lot of
packets.

The librte_pdump Library

The librte_pdump library provides a framework for packet capturing in DPDK. The library
performs a complete copy of the Rx and Tx mbufs to a new mempool and hence slows down
the performance of the applications, so it is recommended to use this library for debugging
purposes only.

The library provides the following APIs to initialize the packet capture framework, to enable or
disable the packet capture, and to uninitialize it:
• rte_pdump_init(): This API initializes the packet capture framework.
• rte_pdump_enable(): This API enables the packet capture on a given port and queue.
Note: The filter option in the API is a place holder for future enhancements.
• rte_pdump_enable_by_deviceid(): This API enables the packet capture on a
given device id (vdev name or pci address) and queue. Note: The filter option
in the API is a place holder for future enhancements.
• rte_pdump_disable(): This API disables the packet capture on a given port and
queue.
• rte_pdump_disable_by_deviceid(): This API disables the packet capture on a
given device id (vdev name or pci address) and queue.
• rte_pdump_uninit(): This API uninitializes the packet capture framework.
• rte_pdump_set_socket_dir(): This API sets the server and client socket paths.
Note: This API is not thread-safe.

Operation

The librte_pdump library works on a client/server model. The server is responsible for
enabling or disabling the packet capture and the clients are responsible for requesting the
enabling or disabling of the packet capture.
The packet capture framework, as part of its initialization, creates the pthread and the server
socket in the pthread. The application that calls the framework initialization will have the server
socket created, either under the path that the application has passed or under the default path
i.e. either /var/run/.dpdk for the root user or ~/.dpdk for a non-root user.
Applications that request enabling or disabling of the packet capture will have the client socket
created either under the path that the application has passed or under the default path, i.e.
either /var/run/.dpdk for the root user or ~/.dpdk for a non-root user, to send the requests to
the server. The server socket will listen for client requests for enabling or disabling the packet
capture.

Implementation Details

The library API rte_pdump_init(), initializes the packet capture framework by creating the
pthread and the server socket. The server socket in the pthread context will be listening to the
client requests to enable or disable the packet capture.
The library APIs rte_pdump_enable() and rte_pdump_enable_by_deviceid() en-
ables the packet capture. On each call to these APIs, the library creates a separate client
socket, creates the “pdump enable” request and sends the request to the server. The server
that is listening on the socket will take the request and enable the packet capture by registering
the Ethernet RX and TX callbacks for the given port or device_id and queue combinations.
Then the server will mirror the packets to the new mempool and enqueue them to the rte_ring
that clients have passed to these APIs. The server also sends the response back to the client
about the status of the request that was processed. After the response is received from the
server, the client socket is closed.

The library APIs rte_pdump_disable() and rte_pdump_disable_by_deviceid() disable
the packet capture. On each call to these APIs, the library creates a separate client
socket, creates the “pdump disable” request and sends the request to the server. The server
that is listening on the socket will take the request and disable the packet capture by removing
the Ethernet RX and TX callbacks for the given port or device_id and queue combinations.
The server also sends the response back to the client about the status of the request that was
processed. After the response is received from the server, the client socket is closed.
The library API rte_pdump_uninit(), uninitializes the packet capture framework by closing
the pthread and the server socket.
The library API rte_pdump_set_socket_dir(), sets the given path as either server socket
path or client socket path based on the type argument of the API. If the given path is NULL,
the default path will be selected, i.e. either /var/run/.dpdk for the root user or ~/.dpdk
for a non-root user. Clients also need to call this API to set their server socket path if the
server socket path is different from the default path.
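
Putting these calls together, the following is a hedged sketch of how the framework might be
driven. The ring and mempool are assumed to have been created by the client beforehand
(e.g. via rte_ring_create() and rte_pktmbuf_pool_create()), and the port/queue
numbers are illustrative:

#include <rte_mempool.h>
#include <rte_pdump.h>
#include <rte_ring.h>

/* The primary application calls rte_pdump_init(NULL) once after
 * rte_eal_init() (NULL selects the default socket path) and
 * rte_pdump_uninit() when shutting the framework down. */

/* Client side: mirror RX and TX of port 0, queue 0 into ring/mp. */
static int
capture_port0(struct rte_ring *ring, struct rte_mempool *mp)
{
    int ret = rte_pdump_enable(0 /* port */, 0 /* queue */,
                               RTE_PDUMP_FLAG_RXTX, ring, mp,
                               NULL /* filter: placeholder */);
    if (ret < 0)
        return ret;

    /* ... dequeue the mirrored mbufs from the ring and write them
     * out, e.g. to a pcap file ... */

    return rte_pdump_disable(0, 0, RTE_PDUMP_FLAG_RXTX);
}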

Use Case: Packet Capturing

The DPDK app/pdump tool is developed based on this library to capture packets in DPDK.
Users can use this as an example to develop their own packet capturing tools.

Multi-process Support

In the DPDK, multi-process support is designed to allow a group of DPDK processes to work
together in a simple transparent manner to perform packet processing, or other workloads. To
support this functionality, a number of additions have been made to the core DPDK Environ-
ment Abstraction Layer (EAL).
The EAL has been modified to allow different types of DPDK processes to be spawned, each
with different permissions on the hugepage memory used by the applications. For now, there
are two types of process specified:
• primary processes, which can initialize and which have full permissions on shared mem-
ory
• secondary processes, which cannot initialize shared memory, but can attach to pre-
initialized shared memory and create objects in it.
Standalone DPDK processes are primary processes, while secondary processes can only run
alongside a primary process or after a primary process has already configured the hugepage
shared memory for them.
To support these two process types, and other multi-process setups described later, two addi-
tional command-line parameters are available to the EAL:
• --proc-type: for specifying a given process instance as the primary or secondary
DPDK instance
• --file-prefix: to allow processes that do not want to co-operate to have different
memory regions

A number of example applications are provided that demonstrate how multiple DPDK pro-
cesses can be used together. These are more fully documented in the “Multi-process Sample
Application” chapter in the DPDK Sample Application’s User Guide.

Memory Sharing

The key element in getting a multi-process application working using the DPDK is to ensure that
memory resources are properly shared among the processes making up the multi-process ap-
plication. Once there are blocks of shared memory available that can be accessed by multiple
processes, then issues such as inter-process communication (IPC) becomes much simpler.
On application start-up in a primary or standalone process, the DPDK records to memory-
mapped files the details of the memory configuration it is using - hugepages in use, the virtual
addresses they are mapped at, the number of memory channels present, etc. When a sec-
ondary process is started, these files are read and the EAL recreates the same memory con-
figuration in the secondary process so that all memory zones are shared between processes
and all pointers to that memory are valid, and point to the same objects, in both processes.

Note: Refer to Multi-process Limitations for details of how Linux kernel Address-Space Layout
Randomization (ASLR) can affect memory sharing.

Fig. 4.46: Memory Sharing in the DPDK Multi-process Sample Application

The EAL also supports an auto-detection mode (set by the EAL --proc-type=auto flag),
whereby a DPDK process is started as a secondary instance if a primary instance is already
running.

Deployment Models

Symmetric/Peer Processes

DPDK multi-process support can be used to create a set of peer processes where each pro-
cess performs the same workload. This model is equivalent to having multiple threads each
running the same main-loop function, as is done in most of the supplied DPDK sample ap-
plications. In this model, the first of the processes spawned should be spawned using the
--proc-type=primary EAL flag, while all subsequent instances should be spawned using
the --proc-type=secondary flag.
The simple_mp and symmetric_mp sample applications demonstrate this usage model. They
are described in the “Multi-process Sample Application” chapter in the DPDK Sample Applica-
tion’s User Guide.
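
For example, assuming the sample binary path below, a pair of peer processes might be
launched as follows (the core lists are illustrative and must not overlap between the two
instances, as discussed under Multi-process Limitations):

./build/simple_mp -l 0-1 --proc-type=primary

./build/simple_mp -l 2-3 --proc-type=secondary

Alternatively, both instances can be started with --proc-type=auto, in which case whichever
instance starts second runs as the secondary process.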

Asymmetric/Non-Peer Processes

An alternative deployment model that can be used for multi-process applications is to have
a single primary process instance that acts as a load-balancer or server distributing received
packets among worker or client threads, which are run as secondary processes. In this case,
extensive use of rte_ring objects is made, which are located in shared hugepage memory.

The client_server_mp sample application shows this usage model. It is described in the “Multi-
process Sample Application” chapter in the DPDK Sample Application’s User Guide.

Running Multiple Independent DPDK Applications

In addition to the above scenarios involving multiple DPDK processes working together, it is
possible to run multiple DPDK processes side-by-side, where those processes are all work-
ing independently. Support for this usage scenario is provided using the --file-prefix
parameter to the EAL.
By default, the EAL creates hugepage files on each hugetlbfs filesystem using the rtemap_X
filename, where X is in the range 0 to the maximum number of hugepages -1. Similarly, it cre-
ates shared configuration files, memory mapped in each process, using the /var/run/.rte_config
filename, when run as root (or $HOME/.rte_config when run as a non-root user; if filesystem
and device permissions are set up to allow this). The rte part of the filenames of each of the
above is configurable using the file-prefix parameter.
In addition to specifying the file-prefix parameter, any DPDK applications that are to be run
side-by-side must explicitly limit their memory use. This is done by passing the -m flag to
each process to specify how much hugepage memory, in megabytes, each process can use
(or passing --socket-mem to specify how much hugepage memory on each socket each
process can use).

Note: Independent DPDK instances running side-by-side on a single machine cannot share
any network ports. Any network ports being used by one process should be blacklisted in every
other process.

Running Multiple Independent Groups of DPDK Applications

In the same way that it is possible to run independent DPDK applications side- by-side on a
single system, this can be trivially extended to multi-process groups of DPDK applications run-
ning side-by-side. In this case, the secondary processes must use the same --file-prefix
parameter as the primary process whose shared memory they are connecting to.

Note: All restrictions and issues with multiple independent DPDK processes running side-by-
side apply in this usage scenario also.

Multi-process Limitations

There are a number of limitations to what can be done when running DPDK multi-process
applications. Some of these are documented below:
• The multi-process feature requires that the exact same hugepage memory mappings be
present in all applications. The Linux security feature - Address-Space Layout Random-
ization (ASLR) can interfere with this mapping, so it may be necessary to disable this
feature in order to reliably run multi-process applications.

Warning: Disabling Address-Space Layout Randomization (ASLR) may have security im-
plications, so it is recommended that it be disabled only when absolutely necessary, and
only when the implications of this change have been understood.

• All DPDK processes running as a single application and using shared memory must
have distinct coremask arguments. It is not possible to have a primary and secondary
instance, or two secondary instances, using any of the same logical cores. Attempting to
do so can cause corruption of memory pool caches, among other issues.
• The delivery of interrupts, such as Ethernet* device link status interrupts, does not work
in secondary processes. All interrupts are triggered inside the primary process only.
Any application needing interrupt notification in multiple processes should provide its
own mechanism to transfer the interrupt information from the primary process to any
secondary process that needs the information.
• The use of function pointers between multiple processes running based on different com-
piled binaries is not supported, since the location of a given function in one process may
be different from its location in a second. This prevents the librte_hash library from behav-
ing properly as in a multi-threaded instance, since it uses a pointer to the hash function
internally.
To work around this issue, it is recommended that multi-process applications perform the
hash calculations by directly calling the hashing function from the code and then using the
rte_hash_add_with_hash()/rte_hash_lookup_with_hash() functions instead of the functions
which do the hashing internally, such as rte_hash_add()/rte_hash_lookup() (see the sketch
at the end of this section).
• Depending upon the hardware in use, and the number of DPDK processes used, it may
not be possible to have HPET timers available in each DPDK instance. The minimum
number of HPET comparators available to Linux* userspace can be just a single com-
parator, which means that only the first, primary DPDK process instance can open and
mmap /dev/hpet. If the number of required DPDK processes exceeds that of the number
of available HPET comparators, the TSC (which is the default timer in this release) must
be used as a time source across all processes instead of the HPET.
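
The sketch below illustrates the hashing workaround mentioned above. The key length and
jhash parameters are assumptions of this example, and the with-hash variants appear in the
rte_hash API as rte_hash_add_key_with_hash() and rte_hash_lookup_with_hash():

#include <rte_hash.h>
#include <rte_jhash.h>

#define KEY_LEN 16 /* must match the key_len the table was created with */

/* Compute the signature by calling the hash function directly in each
 * process, so the function pointer stored inside the table is never
 * dereferenced. */
static int32_t
mp_safe_add(const struct rte_hash *h, const void *key)
{
    hash_sig_t sig = rte_jhash(key, KEY_LEN, 0);
    return rte_hash_add_key_with_hash(h, key, sig);
}

static int32_t
mp_safe_lookup(const struct rte_hash *h, const void *key)
{
    hash_sig_t sig = rte_jhash(key, KEY_LEN, 0);
    return rte_hash_lookup_with_hash(h, key, sig);
}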

Kernel NIC Interface

The DPDK Kernel NIC Interface (KNI) allows userspace applications access to the Linux*
control plane.
The benefits of using the DPDK KNI are:
• Faster than existing Linux TUN/TAP interfaces (by eliminating system calls and
copy_to_user()/copy_from_user() operations).
• Allows management of DPDK ports using standard Linux net tools such as ethtool, ifcon-
fig and tcpdump.
• Allows an interface with the kernel network stack.
The components of an application using the DPDK Kernel NIC Interface are shown in Fig.
4.47.

Fig. 4.47: Components of a DPDK KNI Application

The DPDK KNI Kernel Module

The KNI kernel loadable module provides support for two types of devices:
• A Miscellaneous device (/dev/kni) that:
– Creates net devices (via ioctl calls).
– For single kernel thread mode, maintains a kernel thread context shared by all KNI
instances (simulating the RX side of the net driver).
– For multiple kernel thread mode, maintains a kernel thread context for each KNI
instance (simulating the RX side of the net driver).
• Net device:
– Net functionality provided by implementing several operations such as netdev_ops,
header_ops, ethtool_ops that are defined by struct net_device, including support for
DPDK mbufs and FIFOs.
– The interface name is provided from userspace.
– The MAC address can be the real NIC MAC address or random.

KNI Creation and Deletion

The KNI interfaces are created by a DPDK application dynamically. The interface name and
FIFO details are provided by the application through an ioctl call using the rte_kni_device_info
struct which contains:
• The interface name.
• Physical addresses of the corresponding memzones for the relevant FIFOs.
• Mbuf mempool details, both physical and virtual (to calculate the offset for mbuf pointers).
• PCI information.
• Core affinity.
Refer to rte_kni_common.h in the DPDK source code for more details.
The physical addresses will be re-mapped into the kernel address space and stored in separate
KNI contexts.
The affinity of kernel RX thread (both single and multi-threaded modes) is controlled by
force_bind and core_id config parameters.
The KNI interfaces can be deleted by a DPDK application dynamically after being created.
Furthermore, all those KNI interfaces not deleted will be deleted on the release operation of
the miscellaneous device (when the DPDK application is closed).
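
A hedged sketch of creating one KNI interface is shown below. The interface name, core
affinity, mbuf size and the two callback prototypes are illustrative, and the exact struct
fields may differ slightly between releases:

#include <string.h>
#include <stdio.h>
#include <rte_kni.h>
#include <rte_mempool.h>

/* Application-provided handlers for requests arriving from the
 * kernel side (see "Link state and MTU change" below). */
static int kni_change_mtu(uint8_t port_id, unsigned int new_mtu);
static int kni_config_network_if(uint8_t port_id, uint8_t if_up);

static struct rte_kni *
create_kni(uint8_t port_id, struct rte_mempool *pktmbuf_pool)
{
    struct rte_kni_conf conf;
    struct rte_kni_ops ops;

    memset(&conf, 0, sizeof(conf));
    snprintf(conf.name, RTE_KNI_NAMESIZE, "vEth%u", port_id);
    conf.group_id = port_id;
    conf.mbuf_size = 2048;
    conf.core_id = 4;    /* kernel RX thread affinity, enforced ... */
    conf.force_bind = 1; /* ... via the force_bind parameter */

    memset(&ops, 0, sizeof(ops));
    ops.port_id = port_id;
    ops.change_mtu = kni_change_mtu;
    ops.config_network_if = kni_config_network_if;

    /* rte_kni_init(max_kni_ifaces) must have been called once before. */
    return rte_kni_alloc(pktmbuf_pool, &conf, &ops);
}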

DPDK mbuf Flow

To minimize the amount of DPDK code running in kernel space, the mbuf mempool is managed
in userspace only. The kernel module will be aware of mbufs, but all mbuf allocation and free
operations will be handled by the DPDK application only.


Fig. 4.48 shows a typical scenario with packets sent in both directions.

Fig. 4.48: Packet Flow via mbufs in the DPDK KNI

Use Case: Ingress

On the DPDK RX side, the mbuf is allocated by the PMD in the RX thread context. This thread
will enqueue the mbuf in the rx_q FIFO. The KNI thread will poll all KNI active devices for the
rx_q. If an mbuf is dequeued, it will be converted to a sk_buff and sent to the net stack via
netif_rx(). The dequeued mbuf must be freed, so the same pointer is sent back in the free_q
FIFO.
The RX thread, in the same main loop, polls this FIFO and frees the mbuf after dequeuing it.

Use Case: Egress

For packet egress the DPDK application must first enqueue several mbufs to create an mbuf
cache on the kernel side.
The packet is received from the Linux net stack, by calling the kni_net_tx() callback. The mbuf
is dequeued (without waiting, due to the cache) and filled with data from the sk_buff. The
sk_buff is then freed and the mbuf is sent in the tx_q FIFO.
The DPDK TX thread dequeues the mbuf and sends it to the PMD (via rte_eth_tx_burst()). It
then puts the mbuf back in the cache.

Ethtool

Ethtool is a Linux-specific tool with corresponding support in the kernel where each net device
must register its own callbacks for the supported operations. The current implementation uses
the igb/ixgbe modified Linux drivers for ethtool support. Ethtool is not supported in i40e and
VMs (VF or EM devices).

Link state and MTU change

Link state and MTU change are network interface specific operations usually done via ifconfig.
The request is initiated from the kernel side (in the context of the ifconfig process) and handled
by the user space DPDK application. The application polls the request, calls the application
handler and returns the response back into the kernel space.
The application handlers can be registered upon interface creation or explicitly regis-
tered/unregistered in runtime. This provides flexibility in multiprocess scenarios (where the
KNI is created in the primary process but the callbacks are handled in the secondary one).
The constraint is that a single process can register and handle the requests.

KNI Working as a Kernel vHost Backend

vHost is a kernel module usually working as the backend of virtio (a para-virtualization driver
framework) to accelerate the traffic from the guest to the host. The DPDK Kernel NIC interface
provides the ability to hookup vHost traffic into userspace DPDK application. Together with
the DPDK PMD virtio, it significantly improves the throughput between guest and host. In the
scenario where DPDK is running as fast path in the host, kni-vhost is an efficient path for the
traffic.

Overview

vHost-net has three kinds of real backend implementations. They are: 1) tap, 2) macvtap and
3) RAW socket. The main idea behind kni-vhost is making the KNI work as a RAW socket,
attaching it as the backend instance of vHost-net. It is using the existing interface with vHost-
net, so it does not require any kernel hacking, and is fully-compatible with the kernel vhost
module. As vHost is still taking responsibility for communicating with the front-end virtio, it
naturally supports both legacy virtio-net and the DPDK PMD virtio. There is a small penalty that
comes from the non-polling mode of vhost. However, it scales throughput well when using KNI
in multi-thread mode.

Packet Flow

There is only a minor difference from the original KNI traffic flows. On transmit side, vhost
kthread calls the RAW socket’s ops sendmsg and it puts the packets into the KNI transmit
FIFO. On the receive side, the kni kthread gets packets from the KNI receive FIFO, puts them
into the queue of the raw socket, and wakes up the task in vhost kthread to begin receiving. All
the packet copying, irrespective of whether it is on the transmit or receive side, happens in the
context of vhost kthread. Every vhost-net device is exposed to a front end virtio device in the
guest.

Sample Usage

Before starting to use KNI as the backend of vhost, the CONFIG_RTE_KNI_VHOST configu-
ration option must be turned on. Otherwise, by default, KNI will not enable its backend support
capability.

Fig. 4.49: vHost-net Architecture Overview

Fig. 4.50: KNI Traffic Flow

Of course, as a prerequisite, the vhost/vhost-net kernel CONFIG should be chosen before
compiling the kernel.
1. Compile the DPDK and insert uio_pci_generic/igb_uio kernel modules as normal.
2. Insert the KNI kernel module:
insmod ./rte_kni.ko

If using KNI in multi-thread mode, use the following command line:
insmod ./rte_kni.ko kthread_mode=multiple

3. Running the KNI sample application:
examples/kni/build/app/kni -c 0xf0 -n 4 -- -p 0x3 -P --config="(0,4,6),(1,5,7)"

This command runs the kni sample application with two physical ports. Each port pins
two forwarding cores (ingress/egress) in user space.
4. Assign a raw socket to vhost-net during qemu-kvm startup. The DPDK does not provide
a script to do this since it is easy for the user to customize. The following shows the key
steps to launch qemu-kvm with kni-vhost:
#!/bin/bash
echo 1 > /sys/class/net/vEth0/sock_en
fd=`cat /sys/class/net/vEth0/sock_fd`
qemu-kvm \
-name vm1 -cpu host -m 2048 -smp 1 -hda /opt/vm-fc16.img \
-netdev tap,fd=$fd,id=hostnet1,vhost=on \
-device virtio-net-pci,netdev=hostnet1,id=net1,bus=pci.0,addr=0x4

It is simple to enable the raw socket using sysfs sock_en and to get the raw socket fd using
sock_fd under the KNI device node.
Then, use the qemu-kvm command with the -netdev option to assign the raw socket fd as
vhost’s backend.

Note: The key word tap must exist as qemu-kvm now only supports vhost with a tap backend,
so here we cheat qemu-kvm by an existing fd.

Compatibility Configure Option

There is a CONFIG_RTE_KNI_VHOST_VNET_HDR_EN configuration option in the DPDK
configuration file. By default, it is set to n, which means the virtio net header is not turned
on. The header is used to support additional features (such as csum offload, vlan offload,
generic segmentation and so on), which the kni-vhost does not yet support.
Even if the option is turned on, kni-vhost will ignore the information that the header contains.
When working with legacy virtio on the guest, it is better to turn off unsupported offload features
using ethtool -K. Otherwise, there may be problems such as an incorrect L4 checksum error.

Thread Safety of DPDK Functions

The DPDK is comprised of several libraries. Some of the functions in these libraries can be
safely called from multiple threads simultaneously, while others cannot. This section allows the
developer to take these issues into account when building their own application.
The run-time environment of the DPDK is typically a single thread per logical core. In some
cases, it is not only multi-threaded, but multi-process. Typically, it is best to avoid sharing data
structures between threads and/or processes where possible. Where this is not possible, then
the execution blocks must access the data in a thread-safe manner. Mechanisms such as
atomics or locking can be used that will allow execution blocks to operate serially. However,
this can have an effect on the performance of the application.

Fast-Path APIs

Applications operating in the data plane are performance sensitive but certain functions within
those libraries may not be safe to call from multiple threads simultaneously. The hash, LPM
and mempool libraries and RX/TX in the PMD are examples of this.
The hash and LPM libraries are, by design, thread unsafe in order to maintain performance.
However, if required the developer can add layers on top of these libraries to provide thread
safety. Locking is not needed in all situations, and in both the hash and LPM libraries, lookups
of values can be performed in parallel in multiple threads. Adding, removing or modifying
values, however, cannot be done in multiple threads without using locking when a single hash
or LPM table is accessed. Another alternative to locking would be to create multiple instances
of these tables allowing each thread its own copy.
The RX and TX of the PMD are the most critical aspects of a DPDK application and it is
recommended that no locking be used as it will impact performance. Note, however, that these
functions can safely be used from multiple threads when each thread is performing I/O on a
different NIC queue. If multiple threads are to use the same hardware queue on the same NIC
port, then locking, or some other form of mutual exclusion, is necessary.
The ring library is based on a lockless ring-buffer algorithm that maintains its original de-
sign for thread safety. Moreover, it provides high performance for either multi- or single-
consumer/producer enqueue/dequeue operations. The mempool library is based on the DPDK
lockless ring library and therefore is also multi-thread safe.

Performance Insensitive API

Outside of the performance sensitive areas described in Section 25.1, the DPDK provides a
thread-safe API for most other libraries. For example, malloc and memzone functions are safe
for use in multi-threaded and multi-process environments.
The setup and configuration of the PMD is not performance sensitive, but is not thread safe
either. It is possible that the multiple read/writes during PMD setup and configuration could be
corrupted in a multi-thread environment. Since this is not performance sensitive, the developer
can choose to add their own layer to provide thread-safe setup and configuration. It is expected
that, in most applications, the initial configuration of the network ports would be done by a
single thread at startup.

Library Initialization

It is recommended that DPDK libraries are initialized in the main thread at application startup
rather than subsequently in the forwarding threads. However, the DPDK performs checks to
ensure that libraries are only initialized once. If initialization is attempted more than once, an
error is returned.
In the multi-process case, the configuration information of shared memory will only be initialized
by the master process. Thereafter, both master and secondary processes can allocate/release
any objects of memory that finally rely on rte_malloc or memzones.

Interrupt Thread

The DPDK works almost entirely in Linux user space in polling mode. For certain infrequent
operations, such as receiving a PMD link status change notification, callbacks may be called
in an additional thread outside the main DPDK processing threads. These function callbacks
should avoid manipulating DPDK objects that are also managed by the normal DPDK threads,
and if they need to do so, it is up to the application to provide the appropriate locking or mutual
exclusion restrictions around those objects.

Quality of Service (QoS) Framework

This chapter describes the DPDK Quality of Service (QoS) framework.

Packet Pipeline with QoS Support

An example of a complex packet processing pipeline with QoS support is shown in the following
figure.

Fig. 4.51: Complex Packet Processing Pipeline with QoS Support

This pipeline can be built using reusable DPDK software libraries. The main blocks implement-
ing QoS in this pipeline are: the policer, the dropper and the scheduler. A functional description
of each block is provided in the following table.

Table 4.51: Packet Processing Pipeline Implementing QoS

# | Block                  | Functional Description
1 | Packet I/O RX & TX     | Packet reception/transmission from/to multiple NIC ports. Poll mode drivers (PMDs) for Intel 1 GbE/10 GbE NICs.
2 | Packet parser          | Identify the protocol stack of the input packet. Check the integrity of the packet headers.
3 | Flow classification    | Map the input packet to one of the known traffic flows. Exact match table lookup using configurable hash function (jhash, CRC and so on) and bucket logic to handle collisions.
4 | Policer                | Packet metering using srTCM (RFC 2697) or trTCM (RFC 2698) algorithms.
5 | Load Balancer          | Distribute the input packets to the application workers. Provide uniform load to each worker. Preserve the affinity of traffic flows to workers and the packet order within each flow.
6 | Worker threads         | Placeholders for the customer specific application workload (for example, IP stack and so on).
7 | Dropper                | Congestion management using the Random Early Detection (RED) algorithm (specified by the Sally Floyd - Van Jacobson paper) or Weighted RED (WRED). Drop packets based on the current scheduler queue load level and packet priority. When congestion is experienced, lower priority packets are dropped first.
8 | Hierarchical Scheduler | 5-level hierarchical scheduler (levels are: output port, subport, pipe, traffic class and queue) with thousands (typically 64K) of leaf nodes (queues). Implements traffic shaping (for subport and pipe levels), strict priority (for traffic class level) and Weighted Round Robin (WRR) (for queues within each pipe traffic class).
The infrastructure blocks used throughout the packet processing pipeline are listed in the fol-
lowing table.

Table 4.52: Infrastructure Blocks Used by the Packet Processing Pipeline

# | Block          | Functional Description
1 | Buffer manager | Support for global buffer pools and private per-thread buffer caches.
2 | Queue manager  | Support for message passing between pipeline blocks.
3 | Power saving   | Support for power saving during low activity periods.
The mapping of pipeline blocks to CPU cores is configurable based on the performance level
required by each specific application and the set of features enabled for each block. Some
blocks might consume more than one CPU core (with each CPU core running a different
instance of the same block on different input packets), while several other blocks could be
mapped to the same CPU core.

Hierarchical Scheduler

The hierarchical scheduler block, when present, usually sits on the TX side just before the
transmission stage. Its purpose is to prioritize the transmission of packets from different users
and different traffic classes according to the policy specified by the Service Level Agreements
(SLAs) of each network node.

Overview

The hierarchical scheduler block is similar to the traffic manager block used by network proces-
sors that typically implement per flow (or per group of flows) packet queuing and scheduling. It
typically acts like a buffer that is able to temporarily store a large number of packets just before
their transmission (enqueue operation); as the NIC TX is requesting more packets for trans-
mission, these packets are later on removed and handed over to the NIC TX with the packet
selection logic observing the predefined SLAs (dequeue operation).

Fig. 4.52: Hierarchical Scheduler Block Internal Diagram

The hierarchical scheduler is optimized for a large number of packet queues. When only a
small number of queues are needed, message passing queues should be used instead of this
block. See Worst Case Scenarios for Performance for a more detailed discussion.

Scheduling Hierarchy

The scheduling hierarchy is shown in Fig. 4.53. The first level of the hierarchy is the Ethernet
TX port 1/10/40 GbE, with subsequent hierarchy levels defined as subport, pipe, traffic class
and queue.
Typically, each subport represents a predefined group of users, while each pipe represents an
individual user/subscriber. Each traffic class is the representation of a different traffic type with
specific loss rate, delay and jitter requirements, such as voice, video or data transfers. Each
queue hosts packets from one or multiple connections of the same type belonging to the same
user.

Fig. 4.53: Scheduling Hierarchy per Port

The functionality of each hierarchical level is detailed in the following table.

Table 4.53: Port Scheduling Hierarchy

# | Level              | Siblings per Parent        | Functional Description
1 | Port               | –                          | 1. Output Ethernet port 1/10/40 GbE. 2. Multiple ports are scheduled in round robin order with all ports having equal priority.
2 | Subport            | Configurable (default: 8)  | 1. Traffic shaping using token bucket algorithm (one token bucket per subport). 2. Upper limit enforced per Traffic Class (TC) at the subport level. 3. Lower priority TCs able to reuse subport bandwidth currently unused by higher priority TCs.
3 | Pipe               | Configurable (default: 4K) | 1. Traffic shaping using the token bucket algorithm (one token bucket per pipe).
4 | Traffic Class (TC) | 4                          | 1. TCs of the same pipe handled in strict priority order. 2. Upper limit enforced per TC at the pipe level. 3. Lower priority TCs able to reuse pipe bandwidth currently unused by higher priority TCs. 4. When a subport TC is oversubscribed (configuration time event), the pipe TC upper limit is capped to a dynamically adjusted value that is shared by all the subport pipes.
5 | Queue              | 4                          | 1. Queues of the same TC are serviced using Weighted Round Robin (WRR) according to predefined weights.

Application Programming Interface (API)

Port Scheduler Configuration API

The rte_sched.h file contains configuration functions for port, subport and pipe.

Port Scheduler Enqueue API

The port scheduler enqueue API is very similar to the API of the DPDK PMD TX function.
int rte_sched_port_enqueue(struct rte_sched_port *port, struct rte_mbuf **pkts, uint32_t n_pkts);

Port Scheduler Dequeue API

The port scheduler dequeue API is very similar to the API of the DPDK PMD RX function.
int rte_sched_port_dequeue(struct rte_sched_port *port, struct rte_mbuf **pkts, uint32_t n_pkts);

Usage Example

/* File "application.c" */

#define N_PKTS_RX 64
#define N_PKTS_TX 48
#define NIC_RX_PORT 0
#define NIC_RX_QUEUE 0
#define NIC_TX_PORT 1
#define NIC_TX_QUEUE 0

struct rte_sched_port *port = NULL;


struct rte_mbuf *pkts_rx[N_PKTS_RX], *pkts_tx[N_PKTS_TX];
uint32_t n_pkts_rx, n_pkts_tx;

/* Initialization */

<initialization code>

/* Runtime */
while (1) {
/* Read packets from NIC RX queue */

n_pkts_rx = rte_eth_rx_burst(NIC_RX_PORT, NIC_RX_QUEUE, pkts_rx, N_PKTS_RX);

/* Hierarchical scheduler enqueue */

rte_sched_port_enqueue(port, pkts_rx, n_pkts_rx);

/* Hierarchical scheduler dequeue */

n_pkts_tx = rte_sched_port_dequeue(port, pkts_tx, N_PKTS_TX);

/* Write packets to NIC TX queue */

rte_eth_tx_burst(NIC_TX_PORT, NIC_TX_QUEUE, pkts_tx, n_pkts_tx);


}

Implementation

Internal Data Structures per Port

A schematic of the internal data structures is shown in Fig. 4.54, with details in Table 4.54.

Fig. 4.54: Internal Data Structures per Port

Table 4.54: Scheduler Internal Data Structures per Port

# | Data structure      | Size (bytes) | # per port          | Access type (Enq / Deq) | Description
1 | Subport table entry | 64           | # subports per port | – / Rd, Wr              | Persistent subport data (credits, etc).
2 | Pipe table entry    | 64           | # pipes per port    | – / Rd, Wr              | Persistent data for pipe, its TCs and its queues (credits, etc) that is updated during run-time. The pipe configuration parameters do not change during run-time. The same pipe configuration parameters are shared by multiple pipes, therefore they are not part of the pipe table entry.
3 | Queue table entry   | 4            | # queues per port   | Rd, Wr / Rd, Wr         | Persistent queue data (read and write pointers). The queue size is the same per TC for all queues, allowing the queue base address to be computed using a fast formula, so these two parameters are not part of the queue table entry. The queue table entries for any given pipe are stored in the same cache line.

Multicore Scaling Strategy

The multicore scaling strategy is:


1. Running different physical ports on different threads. The enqueue and dequeue of the
same port are run by the same thread.
2. Splitting the same physical port to different threads by running different sets of subports
of the same physical port (virtual ports) on different threads. Similarly, a subport can
be split into multiple subports that are each run by a different thread. The enqueue
and dequeue of the same port are run by the same thread. This is only required if, for
performance reasons, it is not possible to handle a full port with a single core.

Enqueue and Dequeue for the Same Output Port

Running enqueue and dequeue operations for the same output port from different cores is
likely to cause a significant impact on the scheduler’s performance and it is therefore not
recommended.
The port enqueue and dequeue operations share access to the following data structures:
1. Packet descriptors
2. Queue table
3. Queue storage area
4. Bitmap of active queues
The expected drop in performance is due to:
1. Need to make the queue and bitmap operations thread safe, which requires either using
locking primitives for access serialization (for example, spinlocks/ semaphores) or using
atomic primitives for lockless access (for example, Test and Set, Compare And Swap, and
so on). The impact is much higher in the former case.
2. Ping-pong of cache lines storing the shared data structures between the cache hierar-
chies of the two cores (done transparently by the MESI protocol cache coherency CPU
hardware).
Therefore, the scheduler enqueue and dequeue operations have to be run from the same
thread, which allows the queues and the bitmap operations to be non-thread safe and keeps
the scheduler data structures internal to the same core.

Performance Scaling

Scaling up the number of NIC ports simply requires a proportional increase in the number of
CPU cores to be used for traffic scheduling.

Enqueue Pipeline

The sequence of steps per packet:


1. Access the mbuf to read the data fields required to identify the destination queue for the
packet. These fields are: port, subport, traffic class and queue within traffic class, and
are typically set by the classification stage.
2. Access the queue structure to identify the write location in the queue array. If the queue
is full, then the packet is discarded.

3. Access the queue array location to store the packet (i.e. write the mbuf pointer).
It should be noted that there is a strong data dependency between these steps, as steps 2 and
3 cannot start before the results from steps 1 and 2 become available, which prevents the
processor’s out-of-order execution engine from providing any significant performance
optimizations.
Given the high rate of input packets and the large number of queues, it is expected that the data structures accessed to enqueue the current packet are not present in the L1 or L2 data cache of the current core, thus the above 3 memory accesses would result (on average) in L1 and L2 data cache misses. Three L1/L2 cache misses per packet is not acceptable for performance reasons.
The workaround is to prefetch the required data structures in advance. The prefetch operation
has an execution latency during which the processor should not attempt to access the data
structure currently under prefetch, so the processor should execute other work. The only other
work available is to execute different stages of the enqueue sequence of operations on other
input packets, thus resulting in a pipelined implementation for the enqueue operation.
Fig. 4.55 illustrates a pipelined implementation for the enqueue operation with 4 pipeline stages
and each stage executing 2 different input packets. No input packet can be part of more than
one pipeline stage at a given time.

Fig. 4.55: Prefetch Pipeline for the Hierarchical Scheduler Enqueue Operation
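
To make the pipelining concrete, the following is a minimal sketch of how such a staged enqueue loop could be organized. The helpers queue_struct_of(), slot_of() and store_pkt() are hypothetical stand-ins for the real scheduler internals; this illustrates the technique only, not the actual DPDK implementation:

#include <stddef.h>
#include <rte_prefetch.h>

struct pkt;                                         /* opaque mbuf stand-in */

/* Hypothetical helpers standing in for real scheduler internals. */
extern void *queue_struct_of(struct pkt *p);        /* stage 1 lookup */
extern void *slot_of(void *q);                      /* stage 2 lookup */
extern void  store_pkt(void *slot, struct pkt *p);  /* stage 3 store  */

static void
enqueue_pipelined(struct pkt *pkts[], size_t n)
{
    void *q[2] = {0, 0}, *slot[2] = {0, 0};
    struct pkt *st2[2] = {0, 0}, *st3[2] = {0, 0};

    for (size_t i = 0; i + 2 <= n; i += 2) {
        /* stage 3: write the mbuf pointers prefetched two rounds ago */
        if (st3[0] != NULL) {
            store_pkt(slot[0], st3[0]);
            store_pkt(slot[1], st3[1]);
        }
        /* stage 2: locate and prefetch the queue-array write slots */
        if (st2[0] != NULL) {
            slot[0] = slot_of(q[0]); rte_prefetch0(slot[0]);
            slot[1] = slot_of(q[1]); rte_prefetch0(slot[1]);
            st3[0] = st2[0]; st3[1] = st2[1];
        }
        /* stage 1: read the mbuf fields, prefetch the queue structures */
        q[0] = queue_struct_of(pkts[i]);     rte_prefetch0(q[0]);
        q[1] = queue_struct_of(pkts[i + 1]); rte_prefetch0(q[1]);
        st2[0] = pkts[i]; st2[1] = pkts[i + 1];
    }
    /* draining of the packets still in flight is omitted for brevity */
}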

The congestion management scheme implemented by the enqueue pipeline described above
is very basic: packets are enqueued until a specific queue becomes full, then all the packets
destined to the same queue are dropped until packets are consumed (by the dequeue oper-
ation). This can be improved by enabling RED/WRED as part of the enqueue pipeline which
looks at the queue occupancy and packet priority in order to yield the enqueue/drop decision for
a specific packet (as opposed to enqueuing all packets / dropping all packets indiscriminately).

Dequeue State Machine

The sequence of steps to schedule the next packet from the current pipe is:
1. Identify the next active pipe using the bitmap scan operation, prefetch pipe.
2. Read pipe data structure. Update the credits for the current pipe and its subport. Identify
the first active traffic class within the current pipe, select the next queue using WRR,
prefetch queue pointers for all the 16 queues of the current pipe.
3. Read next element from the current WRR queue and prefetch its packet descriptor.
4. Read the packet length from the packet descriptor (mbuf structure). Based on the packet
length and the available credits (of current pipe, pipe traffic class, subport and subport
traffic class), take the go/no go scheduling decision for the current packet.
To avoid the cache misses, the above data structures (pipe, queue, queue array, mbufs) are prefetched in advance of being accessed. The strategy of hiding the latency of the prefetch operations is to switch from the current pipe (in grinder A) to another pipe (in grinder B) immediately after a prefetch is issued for the current pipe. This gives enough time for the prefetch operation to complete before the execution switches back to this pipe (in grinder A).
The dequeue pipe state machine exploits the presence of data in the processor cache; therefore, it tries to send as many packets from the same pipe TC and pipe as possible (up to the available packets and credits) before moving to the next active TC of the same pipe (if any) or to another active pipe.

Fig. 4.56: Pipe Prefetch State Machine for the Hierarchical Scheduler Dequeue Operation

Timing and Synchronization

The output port is modeled as a conveyor belt of byte slots that need to be filled by the scheduler with data for transmission. For 10 GbE, there are 1.25 billion byte slots that need to be filled by the port scheduler every second. If the scheduler is not fast enough to fill the slots, provided that enough packets and credits exist, then some slots will be left unused and bandwidth will be wasted.
In principle, the hierarchical scheduler dequeue operation should be triggered by NIC TX.
Usually, once the occupancy of the NIC TX input queue drops below a predefined threshold,
the port scheduler is woken up (interrupt based or polling based, by continuously monitoring
the queue occupancy) to push more packets into the queue.

Internal Time Reference

The scheduler needs to keep track of time advancement for the credit logic, which requires credit updates based on time (for example, subport and pipe traffic shaping, traffic class upper limit enforcement, and so on).
Every time the scheduler decides to send a packet out to the NIC TX for transmission, the
scheduler will increment its internal time reference accordingly. Therefore, it is convenient
to keep the internal time reference in units of bytes, where a byte signifies the time duration
required by the physical interface to send out a byte on the transmission medium. This way,
as a packet is scheduled for transmission, the time is incremented with (n + h), where n is the
packet length in bytes and h is the number of framing overhead bytes per packet.

Internal Time Reference Re-synchronization

The scheduler needs to align its internal time reference to the pace of the port conveyor belt. The reason is to make sure that the scheduler does not feed the NIC TX with more bytes than the line rate of the physical medium in order to prevent packet drop (by the scheduler, due to the NIC TX input queue being full, or later on, internally by the NIC TX).
The scheduler reads the current time on every dequeue invocation. The CPU time stamp can be obtained by reading either the Time Stamp Counter (TSC) register or the High Precision Event Timer (HPET) register. The current CPU time stamp is converted from number of CPU clocks to number of bytes: time_bytes = time_cycles / cycles_per_byte, where cycles_per_byte is the amount of CPU cycles that is equivalent to the transmission time for one byte on the wire (e.g. for a CPU frequency of 2 GHz and a 10 GbE port, cycles_per_byte = 1.6).
The scheduler maintains an internal time reference of the NIC time. Whenever a packet is
scheduled, the NIC time is incremented with the packet length (including framing overhead).
On every dequeue invocation, the scheduler checks its internal reference of the NIC time
against the current time:
1. If NIC time is in the future (NIC time >= current time), no adjustment of NIC time is
needed. This means that scheduler is able to schedule NIC packets before the NIC
actually needs those packets, so the NIC TX is well supplied with packets;
2. If NIC time is in the past (NIC time < current time), then NIC time should be adjusted by setting it to the current time. This means that the scheduler is not able to keep up with the speed of the NIC byte conveyor belt, so NIC bandwidth is wasted due to poor packet supply to the NIC TX. A minimal sketch of this check follows below.
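
The following is a minimal sketch of this time accounting, assuming a precomputed cycles_per_byte value derived from the CPU frequency and the port line rate (illustrative code, not the library internals):

#include <stdint.h>
#include <rte_cycles.h>

static uint64_t nic_time;               /* internal NIC time, in bytes */

/* On every dequeue invocation: convert the TSC to bytes and, if the
 * internal NIC time has fallen behind, snap it to the current time. */
static void
resync_nic_time(double cycles_per_byte)
{
    uint64_t now_bytes = (uint64_t)(rte_rdtsc() / cycles_per_byte);

    if (nic_time < now_bytes)           /* NIC time in the past */
        nic_time = now_bytes;           /* bandwidth was wasted */
}

/* Whenever a packet is scheduled: advance NIC time by (n + h) bytes. */
static void
account_tx(uint32_t pkt_len, uint32_t frame_overhead)
{
    nic_time += pkt_len + frame_overhead;
}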

Scheduler Accuracy and Granularity

The scheduler round trip delay (SRTD) is the time (number of CPU cycles) between two consecutive examinations of the same pipe by the scheduler.
To keep up with the output port (that is, avoid bandwidth loss), the scheduler should be able to
schedule n packets faster than the same n packets are transmitted by NIC TX.


The scheduler needs to keep up with the rate of each individual pipe, as configured for the pipe
token bucket, assuming that no port oversubscription is taking place. This means that the size
of the pipe token bucket should be set high enough to prevent it from overflowing due to big
SRTD, as this would result in credit loss (and therefore bandwidth loss) for the pipe.

Credit Logic

Scheduling Decision

The scheduling decision to send next packet from (subport S, pipe P, traffic class TC, queue Q) is favorable (packet is sent) when all the conditions below are met:
• Pipe P of subport S is currently selected by one of the port grinders;
• Traffic class TC is the highest priority active traffic class of pipe P;
• Queue Q is the next queue selected by WRR within traffic class TC of pipe P;
• Subport S has enough credits to send the packet;
• Subport S has enough credits for traffic class TC to send the packet;
• Pipe P has enough credits to send the packet;
• Pipe P has enough credits for traffic class TC to send the packet.
If all the above conditions are met, then the packet is selected for transmission and the nec-
essary credits are subtracted from subport S, subport S traffic class TC, pipe P, pipe P traffic
class TC.
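
As an illustration, the go/no-go decision and the associated credit subtraction could be sketched as follows; the structure and field names are hypothetical stand-ins for the scheduler's internal counters:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical view of the credit counters for the selected
 * (subport S, pipe P, traffic class TC). */
struct credit_view {
    uint64_t subport_tb_credits;   /* subport token bucket */
    uint64_t subport_tc_credits;   /* subport TC upper limit */
    uint64_t pipe_tb_credits;      /* pipe token bucket */
    uint64_t pipe_tc_credits;      /* pipe TC upper limit */
};

static bool
try_schedule(struct credit_view *cv, uint32_t pkt_len, uint32_t frame_overhead)
{
    uint64_t needed = pkt_len + frame_overhead;

    if (cv->subport_tb_credits < needed || cv->subport_tc_credits < needed ||
        cv->pipe_tb_credits < needed || cv->pipe_tc_credits < needed)
        return false;                    /* no-go: not enough credits */

    cv->subport_tb_credits -= needed;    /* go: consume from all four */
    cv->subport_tc_credits -= needed;
    cv->pipe_tb_credits -= needed;
    cv->pipe_tc_credits -= needed;
    return true;
}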

Framing Overhead

As the greatest common divisor for all packet lengths is one byte, the unit of credit is selected as one byte. The number of credits required for the transmission of a packet of n bytes is equal to (n + h), where h is equal to the number of framing overhead bytes per packet.

Table 4.55: Ethernet Frame Overhead Fields

#  Packet field                    Length (bytes)  Comments
1  Preamble                        7
2  Start of Frame Delimiter (SFD)  1
3  Frame Check Sequence (FCS)      4               Considered overhead only if not included in the mbuf packet length field.
4  Inter Frame Gap (IFG)           12
5  Total                           24

Traffic Shaping

The traffic shaping for subport and pipe is implemented using a token bucket per subport / per pipe. Each token bucket is implemented using one saturated counter that keeps track of the number of available credits.

The token bucket generic parameters and operations are presented in Table 4.56 and Table 4.57.


Table 4.56: Token Bucket Generic Parameters

#  Token Bucket Parameter  Unit                Description
1  bucket_rate             Credits per second  Rate of adding credits to the bucket.
2  bucket_size             Credits             Maximum number of credits that can be stored in the bucket.

Table 4.57: Token Bucket Generic Operations

1. Initialization: the bucket is set to a predefined value, e.g. zero or half of the bucket size.

2. Credit update: credits are added to the bucket on top of existing ones, either periodically or on demand, based on the bucket_rate. Credits cannot exceed the upper limit defined by the bucket_size, so any credits to be added to the bucket while the bucket is full are dropped.

3. Credit consumption: as a result of packet scheduling, the necessary number of credits is removed from the bucket. The packet can only be sent if enough credits are in the bucket to send the full packet (packet bytes and framing overhead for the packet).

To implement the token bucket generic operations described above, the current design uses the persistent data structure presented in Table 4.58, while the implementation of the token bucket operations is described in Table 4.59.


Table 4.58: Token Bucket Persistent Data Structure

#  Token bucket field     Unit   Description
1  tb_time                Bytes  Time of the last credit update. Measured in bytes instead of seconds or CPU cycles for ease of the credit consumption operation (as the current time is also maintained in bytes). See the Internal Time Reference section for an explanation of why time is maintained in byte units.
2  tb_period              Bytes  Time period that should elapse since the last credit update in order for the bucket to be awarded tb_credits_per_period worth of credits.
3  tb_credits_per_period  Bytes  Credit allowance per tb_period.
4  tb_size                Bytes  Bucket size, i.e. upper limit for tb_credits.
5  tb_credits             Bytes  Number of credits currently in the bucket.

The bucket rate (in bytes per second) can be computed with the following formula:

    bucket_rate = (tb_credits_per_period / tb_period) * r

where r = port line rate (in bytes per second).


Table 4.59: Token Bucket Operations

1. Initialization:
   tb_credits = 0; or tb_credits = tb_size / 2;

2. Credit update. Credit update options:
   • Every time a packet is sent for a port, update the credits of all the subports and pipes of that port. Not feasible.
   • Every time a packet is sent, update the credits for the pipe and subport. Very accurate, but not needed (a lot of calculations).
   • Every time a pipe is selected (that is, picked by one of the grinders), update the credits for the pipe and its subport.
   The current implementation uses option 3. According to Section Dequeue State Machine, the pipe and subport credits are updated every time a pipe is selected by the dequeue process, before the pipe and subport credits are actually used.
   The implementation uses a tradeoff between accuracy and speed by updating the bucket credits only when at least a full tb_period has elapsed since the last update.
   • Full accuracy can be achieved by selecting the value for tb_period for which tb_credits_per_period = 1.
   • When full accuracy is not required, better performance is achieved by setting tb_credits to a larger value.
   Update operations:
   n_periods = (time - tb_time) / tb_period;
   tb_credits += n_periods * tb_credits_per_period;
   tb_credits = min(tb_credits, tb_size);
   tb_time += n_periods * tb_period;

3. Credit consumption: as a result of packet scheduling, the necessary number of credits is removed from the bucket. The packet can only be sent if enough credits are in the bucket to send the full packet (packet bytes and framing overhead for the packet).
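
The credit update operations above translate almost directly into C. The following sketch mirrors the persistent data structure of Table 4.58; it is an illustration, not the actual DPDK types:

#include <stdint.h>

struct tb {
    uint64_t tb_time;                /* time of last update, in bytes */
    uint64_t tb_period;              /* bytes between refills */
    uint64_t tb_credits_per_period;  /* credit allowance per period */
    uint64_t tb_size;                /* bucket upper limit */
    uint64_t tb_credits;             /* credits currently in the bucket */
};

static void
tb_credit_update(struct tb *b, uint64_t time)
{
    /* only whole elapsed periods are credited */
    uint64_t n_periods = (time - b->tb_time) / b->tb_period;

    b->tb_credits += n_periods * b->tb_credits_per_period;
    if (b->tb_credits > b->tb_size)  /* saturate at the bucket size */
        b->tb_credits = b->tb_size;
    b->tb_time += n_periods * b->tb_period;
}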

Traffic Classes

Implementation of Strict Priority Scheduling

Strict priority scheduling of traffic classes within the same pipe is implemented by the pipe dequeue state machine, which selects the queues in ascending order. Therefore, queues 0..3 (associated with TC 0, highest priority TC) are handled before queues 4..7 (TC 1, lower priority than TC 0), which are handled before queues 8..11 (TC 2), which are handled before queues 12..15 (TC 3, lowest priority TC).

Upper Limit Enforcement

The traffic classes at the pipe and subport levels are not traffic shaped, so there is no token bucket maintained in this context. The upper limit for the traffic classes at the subport and pipe levels is enforced by periodically refilling the subport / pipe traffic class credit counter, out of which credits are consumed every time a packet is scheduled for that subport / pipe, as described in Table 4.60 and Table 4.61.

Table 4.60: Subport/Pipe Traffic Class Upper Limit Enforcement Persistent Data Structure

#  Subport or pipe field  Unit   Description
1  tc_time                Bytes  Time of the next update (upper limit refill) for the 4 TCs of the current subport / pipe. See the Internal Time Reference section for an explanation of why time is maintained in byte units.
2  tc_period              Bytes  Time between two consecutive updates for the 4 TCs of the current subport / pipe. This is expected to be many times bigger than the typical value of the token bucket tb_period.
3  tc_credits_per_period  Bytes  Upper limit for the number of credits allowed to be consumed by the current TC during each enforcement period tc_period.
4  tc_credits             Bytes  Current upper limit for the number of credits that can be consumed by the current traffic class for the remainder of the current enforcement period.


Table 4.61: Subport/Pipe Traffic Class Upper Limit Enforcement Operations

1. Initialization:
   tc_credits = tc_credits_per_period;
   tc_time = tc_period;

2. Credit update. Update operations:
   if (time >= tc_time) {
       tc_credits = tc_credits_per_period;
       tc_time = time + tc_period;
   }

3. Credit consumption (on packet scheduling). As a result of packet scheduling, the TC limit is decreased by the necessary number of credits. The packet can only be sent if enough credits are currently available in the TC limit to send the full packet (packet bytes and framing overhead for the packet). Scheduling operations:
   pkt_credits = pkt_len + frame_overhead;
   if (tc_credits >= pkt_credits) { tc_credits -= pkt_credits; }

Weighted Round Robin (WRR)

The evolution of the WRR design solution from simple to complex is shown in Table 4.62.


Table 4.62: Weighted Round Robin (WRR)

1. All queues active, equal weights for all queues, all packets equal: byte level round robin.
   Next queue: queue #i, i = (i + 1) % n.

2. All queues active, equal weights for all queues, packets not equal: packet level round robin.
   Consuming one byte from queue #i requires consuming exactly one token for queue #i. T(i) = accumulated number of tokens previously consumed from queue #i. Every time a packet is consumed from queue #i, T(i) is updated as: T(i) += pkt_len. Next queue: the queue with the smallest T.

3. All queues active, weights not equal, packets not equal: packet level weighted round robin.
   This case can be reduced to the previous case by introducing a cost per byte that is different for each queue. Queues with lower weights have a higher cost per byte. This way, it is still meaningful to compare the consumption amongst different queues in order to select the next queue. w(i) = weight of queue #i; t(i) = tokens per byte for queue #i, so that consuming one byte from queue #i requires consuming t(i) tokens for queue #i. T(i) is then updated as: T(i) += pkt_len * t(i), and the next queue is again the one with the smallest T.
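
A minimal sketch of the smallest-T selection with the per-byte token cost described in cases 2 and 3 above (the 4-queue assumption and names are illustrative):

#include <stdint.h>

#define WRR_QUEUES 4   /* queues of the currently active traffic class */

/* T[i] = tokens previously consumed from queue i; the next queue is
 * the one with the smallest accumulated consumption. */
static uint32_t
wrr_next_queue(const uint64_t T[WRR_QUEUES])
{
    uint32_t i, best = 0;

    for (i = 1; i < WRR_QUEUES; i++)
        if (T[i] < T[best])
            best = i;
    return best;
}

/* t[i] = tokens per byte for queue i; lower weight means higher cost. */
static void
wrr_consume(uint64_t T[WRR_QUEUES], const uint32_t t[WRR_QUEUES],
            uint32_t queue, uint32_t pkt_len)
{
    T[queue] += (uint64_t)pkt_len * t[queue];
}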

Subport Traffic Class Oversubscription

Problem Statement

Oversubscription for subport traffic class X is a configuration-time event that occurs when more bandwidth is allocated for traffic class X at the level of subport member pipes than allocated for the same traffic class at the parent subport level.
The existence of the oversubscription for a specific subport and traffic class is solely the result
of pipe and subport-level configuration as opposed to being created due to dynamic evolution
of the traffic load at run-time (as congestion is).
When the overall demand for traffic class X for the current subport is low, the existence of
the oversubscription condition does not represent a problem, as demand for traffic class X is
completely satisfied for all member pipes. However, this can no longer be achieved when the
aggregated demand for traffic class X for all subport member pipes exceeds the limit configured
at the subport level.

Solution Space

Table 4.63 summarizes some of the possible approaches for handling this problem, with the third approach selected for implementation.


Table 4.63: Subport Traffic Class Oversubscription

1. Don't care: First come, first served. This approach is not fair amongst subport member pipes, as pipes that are served first will use up as much bandwidth for TC X as they need, while pipes that are served later will receive poor service due to bandwidth for TC X at the subport level being scarce.

2. Scale down all pipes: All pipes within the subport have their bandwidth limit for TC X scaled down by the same factor. This approach is not fair among subport member pipes, as the low end pipes (that is, pipes configured with low bandwidth) can potentially experience severe service degradation that might render their service unusable (if available bandwidth for these pipes drops below the minimum requirements for a workable service), while the service degradation for high end pipes might not be noticeable at all.

3. Cap the high demand pipes: Each subport member pipe receives an equal share of the bandwidth available at run-time for TC X at the subport level. Any bandwidth left unused by the low-demand pipes is redistributed in equal portions to the high-demand pipes. This way, the high-demand pipes are truncated while the low-demand pipes are not impacted.
Typically, the subport TC oversubscription feature is enabled only for the lowest priority traffic
class (TC 3), which is typically used for best effort traffic, with the management plane prevent-
ing this condition from occurring for the other (higher priority) traffic classes.
To ease implementation, it is also assumed that the upper limit for subport TC 3 is set to 100% of the subport rate, and that the upper limit for pipe TC 3 is set to 100% of the pipe rate for all subport member pipes.

Implementation Overview

The algorithm computes a watermark, which is periodically updated based on the current demand experienced by the subport member pipes, whose purpose is to limit the amount of traffic that each pipe is allowed to send for TC 3. The watermark is computed at the subport level at the beginning of each traffic class upper limit enforcement period and the same value is used by all the subport member pipes throughout the current enforcement period. Table 4.64 illustrates how the watermark computed at the subport level at the beginning of each period is propagated to all subport member pipes.
At the beginning of the current enforcement period (which coincides with the end of the pre-
vious enforcement period), the value of the watermark is adjusted based on the amount of
bandwidth allocated to TC 3 at the beginning of the previous period that was not left unused
by the subport member pipes at the end of the previous period.
If there was subport TC 3 bandwidth left unused, the value of the watermark for the current
period is increased to encourage the subport member pipes to consume more bandwidth. Oth-
erwise, the value of the watermark is decreased to enforce equality of bandwidth consumption
among subport member pipes for TC 3.
The increase or decrease in the watermark value is done in small increments, so several
enforcement periods might be required to reach the equilibrium state. This state can change
at any moment due to variations in the demand experienced by the subport member pipes for
TC 3, for example, as a result of demand increase (when the watermark needs to be lowered)
or demand decrease (when the watermark needs to be increased).
When demand is low, the watermark is set high to prevent it from impeding the subport member pipes from consuming more bandwidth. The highest value for the watermark is picked as the highest rate configured for a subport member pipe. Table 4.64 and Table 4.65 illustrate the watermark operation.


Table 4.64: Watermark Propagation from Subport Level to Member Pipes at the Beginning of Each Traffic Class Upper Limit Enforcement Period

1. Initialization:
   Subport level: subport_period_id = 0
   Pipe level: pipe_period_id = 0

2. Credit update:
   Subport level:
   if (time >= subport_tc_time) {
       subport_wm = water_mark_update();
       subport_tc_time = time + subport_tc_period;
       subport_period_id++;
   }
   Pipe level:
   if (pipe_period_id != subport_period_id) {
       pipe_ov_credits = subport_wm * pipe_weight;
       pipe_period_id = subport_period_id;
   }

3. Credit consumption (on packet scheduling):
   Pipe level:
   pkt_credits = pkt_len + frame_overhead;
   if (pipe_ov_credits >= pkt_credits) {
       pipe_ov_credits -= pkt_credits;
   }


Table 4.65: Watermark Calculation

1. Initialization:
   Subport level: wm = WM_MAX

2. Credit update:
   Subport level (water_mark_update):
   tc0_cons = subport_tc0_credits_per_period - subport_tc0_credits;
   tc1_cons = subport_tc1_credits_per_period - subport_tc1_credits;
   tc2_cons = subport_tc2_credits_per_period - subport_tc2_credits;
   tc3_cons = subport_tc3_credits_per_period - subport_tc3_credits;
   tc3_cons_max = subport_tc3_credits_per_period - (tc0_cons + tc1_cons + tc2_cons);
   if (tc3_cons > (tc3_cons_max - MTU)) {
       wm -= wm >> 7;
       if (wm < WM_MIN)
           wm = WM_MIN;
   } else {
       wm += (wm >> 7) + 1;
       if (wm > WM_MAX)
           wm = WM_MAX;
   }
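
The watermark update pseudocode above can be transcribed into C as follows. The WM_MIN, WM_MAX and MTU values here are illustrative placeholders, not the limits actually used by the library:

#include <stdint.h>

#define WM_MIN 64u
#define WM_MAX 1048576u
#define MTU    1522u

/* tc3_cons / tc3_cons_max are the values computed in Table 4.65. */
static uint32_t
water_mark_update(uint32_t wm, uint32_t tc3_cons, uint32_t tc3_cons_max)
{
    if (tc3_cons > tc3_cons_max - MTU) {
        wm -= wm >> 7;          /* demand high: enforce equality */
        if (wm < WM_MIN)
            wm = WM_MIN;
    } else {
        wm += (wm >> 7) + 1;    /* bandwidth left unused: encourage use */
        if (wm > WM_MAX)
            wm = WM_MAX;
    }
    return wm;
}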

Worst Case Scenarios for Performance

Lots of Active Queues with Not Enough Credits

The more queues the scheduler has to examine for packets and credits in order to select one
packet, the lower the performance of the scheduler is.
The scheduler maintains the bitmap of active queues, which skips the non-active queues, but
in order to detect whether a specific pipe has enough credits, the pipe has to be drilled down
using the pipe dequeue state machine, which consumes cycles regardless of the scheduling
result (no packets are produced or at least one packet is produced).
This scenario stresses the importance of the policer for scheduler performance: if the pipe does not have enough credits, its packets should be dropped as soon as possible (before they reach the hierarchical scheduler), thus rendering the pipe queues as not active, which allows the dequeue side to skip that pipe with no cycles being spent on investigating the pipe credits that would result in a "not enough credits" status.

Single Queue with 100% Line Rate

The port scheduler performance is optimized for a large number of queues. If the number of
queues is small, then the performance of the port scheduler for the same level of active traffic
is expected to be worse than the performance of a small set of message passing queues.

Dropper

The purpose of the DPDK dropper is to drop packets arriving at a packet scheduler to avoid
congestion. The dropper supports the Random Early Detection (RED), Weighted Random
Early Detection (WRED) and tail drop algorithms. Fig. 4.57 illustrates how the dropper inte-
grates with the scheduler. The DPDK currently does not support congestion management so
the dropper provides the only method for congestion avoidance.

Fig. 4.57: High-level Block Diagram of the DPDK Dropper

The dropper uses the Random Early Detection (RED) congestion avoidance algorithm as doc-
umented in the reference publication. The purpose of the RED algorithm is to monitor a packet
queue, determine the current congestion level in the queue and decide whether an arriving
packet should be enqueued or dropped. The RED algorithm uses an Exponential Weighted Moving Average (EWMA) filter to compute the average queue size, which gives an indication of the current congestion level in the queue.
For each enqueue operation, the RED algorithm compares the average queue size to minimum
and maximum thresholds. Depending on whether the average queue size is below, above or in
between these thresholds, the RED algorithm calculates the probability that an arriving packet
should be dropped and makes a random decision based on this probability.
The dropper also supports Weighted Random Early Detection (WRED) by allowing the sched-
uler to select different RED configurations for the same packet queue at run-time. In the case
of severe congestion, the dropper resorts to tail drop. This occurs when a packet queue has
reached maximum capacity and cannot store any more packets. In this situation, all arriving
packets are dropped.
The flow through the dropper is illustrated in Fig. 4.58. The RED/WRED algorithm is exercised first and tail drop second.

Fig. 4.58: Flow Through the Dropper
The use cases supported by the dropper are:
• Initialize configuration data
• Initialize run-time data
• Enqueue (make a decision to enqueue or drop an arriving packet)
• Mark empty (record the time at which a packet queue becomes empty)

The configuration use case is explained in the Configuration section, the enqueue operation is explained in the Enqueue Operation section and the mark empty operation is explained in the Queue Empty Operation section.

Configuration

A RED configuration contains the parameters given in Table 4.66.

Table 4.66: RED Configuration Parameters

Parameter                 Minimum  Maximum  Typical
Minimum Threshold         0        1022     1/4 x queue size
Maximum Threshold         1        1023     1/2 x queue size
Inverse Mark Probability  1        255      10
EWMA Filter Weight        1        12       9
The meaning of these parameters is explained in more detail in the following sections. The
format of these parameters as specified to the dropper module API corresponds to the format
used by Cisco* in their RED implementation. The minimum and maximum threshold parame-
ters are specified to the dropper module in terms of number of packets. The mark probability
parameter is specified as an inverse value, for example, an inverse mark probability parameter
value of 10 corresponds to a mark probability of 1/10 (that is, 1 in 10 packets will be dropped).
The EWMA filter weight parameter is specified as an inverse log value, for example, a filter weight parameter value of 9 corresponds to a filter weight of 1/2^9.

Enqueue Operation

In the example shown in Fig. 4.59, q (actual queue size) is the input value, avg (average queue size) and count (number of packets since the last drop) are run-time values, decision is the output value and the remaining values are configuration parameters.

Fig. 4.59: Example Data Flow Through Dropper

EWMA Filter Microblock

The purpose of the EWMA Filter microblock is to filter queue size values to smooth out transient
changes that result from “bursty” traffic. The output value is the average queue size which gives
a more stable view of the current congestion level in the queue.
The EWMA filter has one configuration parameter, filter weight, which determines how quickly
or slowly the average queue size output responds to changes in the actual queue size input.
Higher values of filter weight mean that the average queue size responds more quickly to
changes in actual queue size.

Average Queue Size Calculation when the Queue is not Empty

The definition of the EWMA filter is given in the following equation:

    avg[k+1] = (1 - wq) * avg[k] + wq * q                    (Equation 1)

Where:
• avg = average queue size
• wq = filter weight
• q = actual queue size

Note: The filter weight, wq = 1/2^n, where n is the filter weight parameter value passed to the dropper module on configuration (see the Configuration section).
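
Since wq = 1/2^n, the non-empty-queue update can be performed in fixed point with a single shift, as in the following sketch (a simplified illustration, not the dropper's actual internal representation):

#include <stdint.h>

/* avg is kept scaled by 2^n so the fractional part is not lost:
 * avg += wq * (q - avg)  <=>  avg_scaled += q - (avg_scaled >> n) */
static uint32_t
ewma_update(uint32_t avg_scaled, uint32_t q, unsigned n)
{
    return avg_scaled + q - (avg_scaled >> n);
}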


Average Queue Size Calculation when the Queue is Empty

The EWMA filter does not read time stamps and instead assumes that enqueue operations
will happen quite regularly. Special handling is required when the queue becomes empty as
the queue could be empty for a short time or a long time. When the queue becomes empty,
average queue size should decay gradually to zero instead of dropping suddenly to zero or
remaining stagnant at the last computed value. When a packet is enqueued on an empty
queue, the average queue size is computed using the following formula:

    avg = avg * (1 - wq)^m                                   (Equation 2)

Where:
• m = the number of enqueue operations that could have occurred on this queue while the queue was empty
In the dropper module, m is defined as:

    m = (time - qtime) / s

Where:
• time = current time
• qtime = time the queue became empty
• s = typical time between successive enqueue operations on this queue
The time reference is in units of bytes, where a byte signifies the time duration required by the
physical interface to send out a byte on the transmission medium (see Section Internal Time
Reference). The parameter s is defined in the dropper module as a constant with the value:
s=2^22. This corresponds to the time required by every leaf node in a hierarchy with 64K leaf
nodes to transmit one 64-byte packet onto the wire and represents the worst case scenario.
For much smaller scheduler hierarchies, it may be necessary to reduce the parameter s, which
is defined in the red header source file (rte_red.h) as:
#define RTE_RED_S

Since the time reference is in bytes, the port speed is implied in the expression: time-qtime.
The dropper does not have to be configured with the actual port speed. It adjusts automatically
to low speed and high speed links.

Implementation

A numerical method is used to compute the factor (1-wq)^m that appears in Equation 2. This method is based on the following identity:

    log2((1 - wq)^m) = m * log2(1 - wq)

This allows us to express the following:

    (1 - wq)^m = 2^(m * log2(1 - wq))

In the dropper module, a look-up table is used to compute log2(1-wq) for each value of wq
supported by the dropper module. The factor (1-wq)^m can then be obtained by multiplying
the table value by m and applying shift operations. To avoid overflow in the multiplication, the
value, m, and the look-up table values are limited to 16 bits. The total size of the look-up table
is 56 bytes. Once the factor (1-wq)^m is obtained using this method, the average queue size
can be calculated from Equation 2.

Alternative Approaches

Other methods for calculating the factor (1-wq)^m in the expression for computing average queue size when the queue is empty (Equation 2) were considered. These approaches include:
• Floating-point evaluation
• Fixed-point evaluation using a small look-up table (512B) and up to 16 multiplications
(this is the approach used in the FreeBSD* ALTQ RED implementation)
• Fixed-point evaluation using a small look-up table (512B) and 16 SSE multiplications
(SSE optimized version of the approach used in the FreeBSD* ALTQ RED implementa-
tion)
• Large look-up table (76 KB)
The method that was finally selected (described above under Implementation) outperforms all of these approaches in terms of run-time performance and memory requirements and also achieves accuracy comparable to floating-point evaluation. Table 4.67 lists the performance of each of these alternative approaches relative to the method that is used in the dropper. As can be seen, the floating-point implementation achieved the worst performance.

Table 4.67: Relative Performance of Alternative Approaches

Method                                               Relative Performance
Current dropper method (see Implementation above)    100%
Fixed-point method with small (512B) look-up table   148%
SSE method with small (512B) look-up table           114%
Large (76KB) look-up table                           118%
Floating-point                                       595%

Note: In this case, since performance is expressed as time spent executing the operation in a specific condition, a relative performance value above 100% indicates a slower method.

Drop Decision Block

The Drop Decision block:


• Compares the average queue size with the minimum and maximum thresholds
• Calculates a packet drop probability
• Makes a random decision to enqueue or drop an arriving packet
The calculation of the drop probability occurs in two stages. An initial drop probability is calcu-
lated based on the average queue size, the minimum and maximum thresholds and the mark
probability. An actual drop probability is then computed from the initial drop probability. The
actual drop probability takes the count run-time value into consideration so that the actual drop
probability increases as more packets arrive to the packet queue since the last packet was
dropped.


Initial Packet Drop Probability

The initial drop probability is calculated using the following equation:

    pb = maxp * (avg - minth) / (maxth - minth)              (Equation 3)

Where:
• maxp = mark probability
• avg = average queue size
• minth = minimum threshold
• maxth = maximum threshold
The calculation of the packet drop probability using Equation 3 is illustrated in Fig. 4.60. If the
average queue size is below the minimum threshold, an arriving packet is enqueued. If the
average queue size is at or above the maximum threshold, an arriving packet is dropped. If
the average queue size is between the minimum and maximum thresholds, a drop probability
is calculated to determine if the packet should be enqueued or dropped.

Fig. 4.60: Packet Drop Probability for a Given RED Configuration

Actual Drop Probability

If the average queue size is between the minimum and maximum thresholds, then the actual drop probability is calculated from the following equation:

    pa = pb / (2 - count * pb)                               (Equation 4)

Where:
• pb = initial drop probability (from Equation 3)
• count = number of packets that have arrived since the last drop

The constant 2 in Equation 4 is the only deviation from the drop probability formulae given in the reference document, where a value of 1 is used instead. It should be noted that the value pa computed from Equation 4 can be negative or greater than 1. If this is the case, then a value of 1 should be used instead.
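
Putting Equations 3 and 4 together, the two-stage decision could be sketched as follows. Floating point is used here for clarity only; the dropper itself works in fixed point:

#include <stdint.h>
#include <stdlib.h>

/* Returns non-zero if the arriving packet should be dropped. */
static int
red_drop(double avg, double min_th, double max_th,
         double maxp_inv, uint32_t count)
{
    double pb, pa;

    if (avg < min_th)
        return 0;                            /* enqueue */
    if (avg >= max_th)
        return 1;                            /* drop */

    /* Equation 3, with maxp expressed as an inverse value */
    pb = (avg - min_th) / ((max_th - min_th) * maxp_inv);
    /* Equation 4, with the factor 2 discussed above */
    pa = pb / (2.0 - count * pb);
    if (pa < 0.0 || pa > 1.0)
        pa = 1.0;

    return ((double)rand() / RAND_MAX) < pa; /* random decision */
}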
The initial and actual drop probabilities are shown in Fig. 4.61. The actual drop probability is shown for the case where the formula given in the reference document is used (blue curve) and also for the case where the formula implemented in the dropper module is used (red curve). The formula in the reference document results in a significantly higher drop rate compared to the mark probability configuration parameter specified by the user. The choice to deviate from the reference document is simply a design decision and one that has been taken by other RED implementations, for example, FreeBSD* ALTQ RED.

Fig. 4.61: Initial Drop Probability (pb), Actual Drop probability (pa) Computed Using a Factor
1 (Blue Curve) and a Factor 2 (Red Curve)

Queue Empty Operation

The time at which a packet queue becomes empty must be recorded and saved with the RED
run-time data so that the EWMA filter block can calculate the average queue size on the next
enqueue operation. It is the responsibility of the calling application to inform the dropper mod-
ule through the API that a queue has become empty.


Source Files Location

The source files for the DPDK dropper are located at:
• DPDK/lib/librte_sched/rte_red.h
• DPDK/lib/librte_sched/rte_red.c

Integration with the DPDK QoS Scheduler

RED functionality in the DPDK QoS scheduler is disabled by default. To enable it, use the
DPDK configuration parameter:
CONFIG_RTE_SCHED_RED=y

This parameter must be set to y. The parameter is found in the build configuration files in
the DPDK/config directory, for example, DPDK/config/common_linuxapp. RED configuration
parameters are specified in the rte_red_params structure within the rte_sched_port_params
structure that is passed to the scheduler on initialization. RED parameters are specified sep-
arately for four traffic classes and three packet colors (green, yellow and red) allowing the
scheduler to implement Weighted Random Early Detection (WRED).

Integration with the DPDK QoS Scheduler Sample Application

The DPDK QoS Scheduler Application reads a configuration file on start-up. The configuration file includes a section containing RED parameters. The format of these parameters is described in the Configuration section above. A sample RED configuration is shown below. In this example, the queue size is 64 packets.

Note: For correct operation, the same EWMA filter weight parameter (wred weight) should be
used for each packet color (green, yellow, red) in the same traffic class (tc).

; RED params per traffic class and color (Green / Yellow / Red)

[red]
tc 0 wred min = 28 22 16
tc 0 wred max = 32 32 32
tc 0 wred inv prob = 10 10 10
tc 0 wred weight = 9 9 9

tc 1 wred min = 28 22 16
tc 1 wred max = 32 32 32
tc 1 wred inv prob = 10 10 10
tc 1 wred weight = 9 9 9

tc 2 wred min = 28 22 16
tc 2 wred max = 32 32 32
tc 2 wred inv prob = 10 10 10
tc 2 wred weight = 9 9 9

tc 3 wred min = 28 22 16
tc 3 wred max = 32 32 32
tc 3 wred inv prob = 10 10 10
tc 3 wred weight = 9 9 9


With this configuration file, the RED configuration that applies to green, yellow and red packets
in traffic class 0 is shown in Table 4.68.

Table 4.68: RED Configuration Corresponding to RED Configuration File

RED Parameter       Configuration Name  Green  Yellow  Red
Minimum Threshold   tc 0 wred min       28     22      16
Maximum Threshold   tc 0 wred max       32     32      32
Mark Probability    tc 0 wred inv prob  10     10      10
EWMA Filter Weight  tc 0 wred weight    9      9       9

Application Programming Interface (API)

Enqueue API

The syntax of the enqueue API is as follows:

int rte_red_enqueue(const struct rte_red_config *red_cfg,
                    struct rte_red *red,
                    const unsigned q,
                    const uint64_t time)

The arguments passed to the enqueue API are configuration data, run-time data, the current size of the packet queue (in packets) and a value representing the current time. The time reference is in units of bytes, where a byte signifies the time duration required by the physical interface to send out a byte on the transmission medium (see the Internal Time Reference section). The dropper reuses the scheduler time stamps for performance reasons.
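
A brief usage sketch combining rte_red_config_init(), rte_red_rt_data_init() and rte_red_enqueue(); the parameter values are illustrative only:

#include <rte_red.h>

static int
red_enqueue_example(unsigned q_size_now, uint64_t time_bytes)
{
    struct rte_red_config cfg;
    struct rte_red rt;

    /* wq_log2 = 9, min_th = 28, max_th = 32, maxp_inv = 10 */
    if (rte_red_config_init(&cfg, 9, 28, 32, 10) != 0)
        return -1;
    rte_red_rt_data_init(&rt);

    /* returns 0 to enqueue, non-zero to drop */
    return rte_red_enqueue(&cfg, &rt, q_size_now, time_bytes);
}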

Empty API

The syntax of the empty API is as follows:


void rte_red_mark_queue_empty(struct rte_red *red, const uint64_t time)

The arguments passed to the empty API are run-time data and the current time in bytes.

Traffic Metering

The traffic metering component implements the Single Rate Three Color Marker (srTCM) and
Two Rate Three Color Marker (trTCM) algorithms, as defined by IETF RFC 2697 and 2698
respectively. These algorithms meter the stream of incoming packets based on the allowance
defined in advance for each traffic flow. As result, each incoming packet is tagged as green,
yellow or red based on the monitored consumption of the flow the packet belongs to.

Functional Overview

The srTCM algorithm defines two token buckets for each traffic flow, with the two buckets
sharing the same token update rate:
• Committed (C) bucket: fed with tokens at the rate defined by the Committed Information
Rate (CIR) parameter (measured in IP packet bytes per second). The size of the C bucket
is defined by the Committed Burst Size (CBS) parameter (measured in bytes);
• Excess (E) bucket: fed with tokens at the same rate as the C bucket. The size of the E
bucket is defined by the Excess Burst Size (EBS) parameter (measured in bytes).


The trTCM algorithm defines two token buckets for each traffic flow, with the two buckets being
updated with tokens at independent rates:
• Committed (C) bucket: fed with tokens at the rate defined by the Committed Information
Rate (CIR) parameter (measured in bytes of IP packet per second). The size of the C
bucket is defined by the Committed Burst Size (CBS) parameter (measured in bytes);
• Peak (P) bucket: fed with tokens at the rate defined by the Peak Information Rate (PIR)
parameter (measured in IP packet bytes per second). The size of the P bucket is defined
by the Peak Burst Size (PBS) parameter (measured in bytes).
Please refer to RFC 2697 (for srTCM) and RFC 2698 (for trTCM) for details on how tokens are
consumed from the buckets and how the packet color is determined.

Color Blind and Color Aware Modes

For both algorithms, the color blind mode is functionally equivalent to the color aware mode with the input color set as green. For the color aware mode, a packet with red input color can only get the red output color, while a packet with yellow input color can only get the yellow or red output colors.
The reason why the color blind mode is still implemented distinctly from the color aware mode is that the color blind mode can be implemented with fewer operations than the color aware mode.

Implementation Overview

For each input packet, the steps for the srTCM / trTCM algorithms are:
• Update the C and E / P token buckets. This is done by reading the current time (from
the CPU timestamp counter), identifying the amount of time since the last bucket update
and computing the associated number of tokens (according to the pre-configured bucket
rate). The number of tokens in the bucket is limited by the pre-configured bucket size;
• Identify the output color for the current packet based on the size of the IP packet and the amount of tokens currently available in the C and E / P buckets; for color aware mode only, the input color of the packet is also considered. When the output color is not red, a number of tokens equal to the length of the IP packet are subtracted from the C or E / P bucket or both, depending on the algorithm and the output color of the packet. A usage sketch follows below.
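
The following is a brief sketch of color-blind srTCM usage with the librte_meter API; the CIR/CBS/EBS values are illustrative only:

#include <rte_meter.h>
#include <rte_cycles.h>

static int
meter_setup(struct rte_meter_srtcm *m)
{
    struct rte_meter_srtcm_params params = {
        .cir = 1250000, /* committed rate: IP packet bytes per second */
        .cbs = 2048,    /* committed burst size, bytes */
        .ebs = 2048,    /* excess burst size, bytes */
    };
    return rte_meter_srtcm_config(m, &params);
}

static enum rte_meter_color
meter_packet(struct rte_meter_srtcm *m, uint32_t pkt_len)
{
    /* tag the packet green / yellow / red */
    return rte_meter_srtcm_color_blind_check(m, rte_rdtsc(), pkt_len);
}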

Power Management

The DPDK Power Management feature allows user space applications to save power by dynamically adjusting the CPU frequency or entering into different C-States.
• Adjusting the CPU frequency dynamically according to the utilization of the RX queue.
• Entering into different deeper C-States according to adaptive algorithms that speculate on brief periods of time suspending the application if no packets are received.
The interfaces for adjusting the operating CPU frequency are in the power management library. C-State control is implemented in applications according to the different use cases.


CPU Frequency Scaling

The Linux kernel provides a cpufreq module for CPU frequency scaling for each lcore. For
example, for cpuX, /sys/devices/system/cpu/cpuX/cpufreq/ has the following sys files for fre-
quency scaling:
• affected_cpus
• bios_limit
• cpuinfo_cur_freq
• cpuinfo_max_freq
• cpuinfo_min_freq
• cpuinfo_transition_latency
• related_cpus
• scaling_available_frequencies
• scaling_available_governors
• scaling_cur_freq
• scaling_driver
• scaling_governor
• scaling_max_freq
• scaling_min_freq
• scaling_setspeed
In the DPDK, scaling_governor is configured in user space. Then, a user space application can prompt the kernel by writing to scaling_setspeed to adjust the CPU frequency according to the strategies defined by the user space application.

Core-load Throttling through C-States

Core state can be altered by speculative sleeps whenever the specified lcore has nothing to do. In the DPDK, if no packet is received after polling, speculative sleeps can be triggered according to the strategies defined by the user space application.

API Overview of the Power Library

The main methods exported by the power library are for CPU frequency scaling and include the following:
• Freq up: Prompt the kernel to scale up the frequency of the specific lcore.
• Freq down: Prompt the kernel to scale down the frequency of the specific lcore.
• Freq max: Prompt the kernel to scale up the frequency of the specific lcore to the maxi-
mum.
• Freq min: Prompt the kernel to scale down the frequency of the specific lcore to the
minimum.


• Get available freqs: Read the available frequencies of the specific lcore from the sys
file.
• Freq get: Get the current frequency of the specific lcore.
• Freq set: Prompt the kernel to set the frequency for the specific lcore.
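
A small sketch of how an application might drive these methods based on the observed RX load; the thresholds and the heuristic are illustrative, not prescribed by the library:

#include <rte_power.h>

static int
power_setup(unsigned lcore_id)
{
    return rte_power_init(lcore_id);   /* set up frequency scaling */
}

static void
adapt_freq(unsigned lcore_id, unsigned rx_burst_size)
{
    if (rx_burst_size > 24)
        rte_power_freq_max(lcore_id);  /* heavy load: max frequency */
    else if (rx_burst_size > 4)
        rte_power_freq_up(lcore_id);
    else
        rte_power_freq_down(lcore_id); /* light load: scale down */
}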

Use Cases

The power management mechanism is used to save power when performing L3 forwarding.

References

• l3fwd-power: The sample application in DPDK that performs L3 forwarding with power
management.
• The “L3 Forwarding with Power Management Sample Application” chapter in the DPDK
Sample Application’s User Guide.

Packet Classification and Access Control

The DPDK provides an Access Control library that gives the ability to classify an input packet
based on a set of classification rules.
The ACL library is used to perform an N-tuple search over a set of rules with multiple categories
and find the best match (highest priority) for each category. The library API provides the
following basic operations:
• Create a new Access Control (AC) context.
• Add rules into the context.
• For all rules in the context, build the runtime structures necessary to perform packet
classification.
• Perform input packet classifications.
• Destroy an AC context and its runtime structures and free the associated memory.

Overview

Rule definition

The current implementation allows the user, for each AC context, to specify its own rule (set of fields) over which packet classification will be performed, though there are a few restrictions on the rule field layout:
• The first field in the rule definition has to be one byte long.
• All subsequent fields have to be grouped into sets of 4 consecutive bytes.
This is done mainly for performance reasons - the search function processes the first input byte as part of the flow setup and then the inner loop of the search function is unrolled to process four input bytes at a time.


To define each field inside an AC rule, the following structure is used:


struct rte_acl_field_def {
    uint8_t type;        /**< type - ACL_FIELD_TYPE. */
    uint8_t size;        /**< size of field 1, 2, 4, or 8. */
    uint8_t field_index; /**< index of field inside the rule. */
    uint8_t input_index; /**< 0-N input index. */
    uint32_t offset;     /**< offset to start of field. */
};

• type The field type is one of three choices:


– _MASK - for fields such as IP addresses that have a value and a mask defining the
number of relevant bits.
– _RANGE - for fields such as ports that have a lower and upper value for the field.
– _BITMASK - for fields such as protocol identifiers that have a value and a bit mask.
• size The size parameter defines the length of the field in bytes. Allowable values are 1,
2, 4, or 8 bytes. Note that due to the grouping of input bytes, 1 or 2 byte fields must be
defined as consecutive fields that make up 4 consecutive input bytes. Also, it is best to
define fields of 8 or more bytes as 4 byte fields so that the build processes can eliminate
fields that are all wild.
• field_index A zero-based value that represents the position of the field inside the rule; 0
to N-1 for N fields.
• input_index As mentioned above, all input fields, except the very first one, must be in
groups of 4 consecutive bytes. The input index specifies to which input group that field
belongs to.
• offset The offset field defines the offset for the field. This is the offset from the beginning
of the buffer parameter for the search.
For example, to define classification for the following IPv4 5-tuple structure:
struct ipv4_5tuple {
    uint8_t proto;
    uint32_t ip_src;
    uint32_t ip_dst;
    uint16_t port_src;
    uint16_t port_dst;
};

The following array of field definitions can be used:


struct rte_acl_field_def ipv4_defs[5] = {
    /* first input field - always one byte long. */
    {
        .type = RTE_ACL_FIELD_TYPE_BITMASK,
        .size = sizeof(uint8_t),
        .field_index = 0,
        .input_index = 0,
        .offset = offsetof(struct ipv4_5tuple, proto),
    },

    /* next input field (IPv4 source address) - 4 consecutive bytes. */
    {
        .type = RTE_ACL_FIELD_TYPE_MASK,
        .size = sizeof(uint32_t),
        .field_index = 1,
        .input_index = 1,
        .offset = offsetof(struct ipv4_5tuple, ip_src),
    },

    /* next input field (IPv4 destination address) - 4 consecutive bytes. */
    {
        .type = RTE_ACL_FIELD_TYPE_MASK,
        .size = sizeof(uint32_t),
        .field_index = 2,
        .input_index = 2,
        .offset = offsetof(struct ipv4_5tuple, ip_dst),
    },

    /*
     * Next 2 fields (src & dst ports) form 4 consecutive bytes.
     * They share the same input index.
     */
    {
        .type = RTE_ACL_FIELD_TYPE_RANGE,
        .size = sizeof(uint16_t),
        .field_index = 3,
        .input_index = 3,
        .offset = offsetof(struct ipv4_5tuple, port_src),
    },

    {
        .type = RTE_ACL_FIELD_TYPE_RANGE,
        .size = sizeof(uint16_t),
        .field_index = 4,
        .input_index = 3,
        .offset = offsetof(struct ipv4_5tuple, port_dst),
    },
};

A typical example of such an IPv4 5-tuple rule is as follows:

source addr/mask   destination addr/mask   source ports   dest ports   protocol/mask
192.168.1.0/24     192.168.2.31/32         0:65535        1234:1234    17/0xff

Any IPv4 packets with protocol ID 17 (UDP), source address 192.168.1.[0-255], destination
address 192.168.2.31, source port [0-65535] and destination port 1234 matches the above
rule.
To define classification for the IPv6 2-tuple: <protocol, IPv6 source address> over the following
IPv6 header structure:
struct ipv6_hdr {
    uint32_t vtc_flow;    /* IP version, traffic class & flow label. */
    uint16_t payload_len; /* IP packet length - includes sizeof(ip_header). */
    uint8_t proto;        /* Protocol, next header. */
    uint8_t hop_limits;   /* Hop limits. */
    uint8_t src_addr[16]; /* IP address of source host. */
    uint8_t dst_addr[16]; /* IP address of destination host(s). */
} __attribute__((__packed__));

The following array of field definitions can be used:


struct rte_acl_field_def ipv6_2tuple_defs[5] = {
    {
        .type = RTE_ACL_FIELD_TYPE_BITMASK,
        .size = sizeof(uint8_t),
        .field_index = 0,
        .input_index = 0,
        .offset = offsetof(struct ipv6_hdr, proto),
    },

    {
        .type = RTE_ACL_FIELD_TYPE_MASK,
        .size = sizeof(uint32_t),
        .field_index = 1,
        .input_index = 1,
        .offset = offsetof(struct ipv6_hdr, src_addr[0]),
    },

    {
        .type = RTE_ACL_FIELD_TYPE_MASK,
        .size = sizeof(uint32_t),
        .field_index = 2,
        .input_index = 2,
        .offset = offsetof(struct ipv6_hdr, src_addr[4]),
    },

    {
        .type = RTE_ACL_FIELD_TYPE_MASK,
        .size = sizeof(uint32_t),
        .field_index = 3,
        .input_index = 3,
        .offset = offsetof(struct ipv6_hdr, src_addr[8]),
    },

    {
        .type = RTE_ACL_FIELD_TYPE_MASK,
        .size = sizeof(uint32_t),
        .field_index = 4,
        .input_index = 4,
        .offset = offsetof(struct ipv6_hdr, src_addr[12]),
    },
};

A typical example of such an IPv6 2-tuple rule is as follows:

source addr/mask                             protocol/mask
2001:db8:1234:0000:0000:0000:0000:0000/48    6/0xff

Any IPv6 packets with protocol ID 6 (TCP), and source address inside the range
[2001:db8:1234:0000:0000:0000:0000:0000 - 2001:db8:1234:ffff:ffff:ffff:ffff:ffff] matches the
above rule.
In the following example, the last element of the search key is 8 bits long, so it is a case where the 4 consecutive bytes of an input field are not fully occupied. The structure for the classification is:

struct acl_key {
    uint8_t ip_proto;
    uint32_t ip_src;
    uint32_t ip_dst;
    uint8_t tos; /**< This is partially using a 32-bit input element */
};

The following array of field definitions can be used:


struct rte_acl_field_def ipv4_defs[4] = {
    /* first input field - always one byte long. */
    {
        .type = RTE_ACL_FIELD_TYPE_BITMASK,
        .size = sizeof(uint8_t),
        .field_index = 0,
        .input_index = 0,
        .offset = offsetof(struct acl_key, ip_proto),
    },

    /* next input field (IPv4 source address) - 4 consecutive bytes. */
    {
        .type = RTE_ACL_FIELD_TYPE_MASK,
        .size = sizeof(uint32_t),
        .field_index = 1,
        .input_index = 1,
        .offset = offsetof(struct acl_key, ip_src),
    },

    /* next input field (IPv4 destination address) - 4 consecutive bytes. */
    {
        .type = RTE_ACL_FIELD_TYPE_MASK,
        .size = sizeof(uint32_t),
        .field_index = 2,
        .input_index = 2,
        .offset = offsetof(struct acl_key, ip_dst),
    },

    /*
     * Next element of the search key (Type of Service) is indeed 1 byte long.
     * Anyway we need to allocate all the 4 consecutive bytes for it.
     */
    {
        .type = RTE_ACL_FIELD_TYPE_BITMASK,
        .size = sizeof(uint32_t), /* all the 4 consecutive bytes are allocated */
        .field_index = 3,
        .input_index = 3,
        .offset = offsetof(struct acl_key, tos),
    },
};

A typical example of such an IPv4 4-tuple rule is as follows:


source addr/mask   destination addr/mask   tos/mask   protocol/mask
192.168.1.0/24     192.168.2.31/32         1/0xff     6/0xff

Any IPv4 packets with protocol ID 6 (TCP), source address 192.168.1.[0-255], destination
address 192.168.2.31, ToS 1 matches the above rule.
When creating a set of rules, for each rule, additional information must be supplied also:
• priority: A weight to measure the priority of the rules (higher is better). If the input tuple
matches more than one rule, then the rule with the higher priority is returned. Note that
if the input tuple matches more than one rule and these rules have equal priority, it is
undefined which rule is returned as a match. It is recommended to assign a unique
priority for each rule.
• category_mask: Each rule uses a bit mask value to select the relevant category(s) for
the rule. When a lookup is performed, the result for each category is returned. This ef-
fectively provides a “parallel lookup” by enabling a single search to return multiple results
if, for example, there were four different sets of ACL rules, one for access control, one for
routing, and so on. Each set could be assigned its own category and by combining them
into a single database, one lookup returns a result for each of the four sets.
• userdata: A user-defined value. For each category, a successful match returns the
userdata field of the highest priority matched rule. When no rules match, returned value
is zero.

Note: When adding new rules into an ACL context, all fields must be in host byte order (LSB).


When the search is performed for an input tuple, all fields in that tuple must be in network byte
order (MSB).

RT memory size limit

The build phase (rte_acl_build()) creates, for a given set of rules, an internal structure for further run-time traversal. With the current implementation, it is a set of multi-bit tries (with stride == 8). Depending on the rule set, this could consume a significant amount of memory. In an attempt to conserve some space, the ACL build process tries to split the given rule-set into several non-intersecting subsets and construct a separate trie for each of them. Depending on the rule-set, it might reduce RT memory requirements but might increase classification time. There is a possibility at build-time to specify a maximum memory limit for the internal RT structures of a given AC context. It could be done via the max_size field of the rte_acl_config structure. Setting it to a value greater than zero instructs rte_acl_build() to:
• attempt to minimize the number of tries in the RT table, but
• make sure that the size of the RT table doesn't exceed the given value.
Setting it to zero makes rte_acl_build() use the default behavior: try to minimize the size of the RT structures, but don't expose any hard limit on it.
That gives the user the ability to make decisions about the performance/space trade-off. For example:
struct rte_acl_ctx *acx;
struct rte_acl_config cfg;
int ret;

/*
 * assuming that acx points to an AC context that has already been
 * created and populated with rules, and that cfg is filled properly.
 */

/* try to build the AC context, with RT structures smaller than 8MB. */
cfg.max_size = 0x800000;
ret = rte_acl_build(acx, &cfg);

/*
 * RT structures can't fit into 8MB for the given context.
 * Try to build without imposing any hard limit.
 */
if (ret == -ERANGE) {
    cfg.max_size = 0;
    ret = rte_acl_build(acx, &cfg);
}

Classification methods

After rte_acl_build() over a given AC context has finished successfully, that context can be
used to perform classification - a search for the rule with the highest priority over the input
data. There are several implementations of the classify algorithm:
• RTE_ACL_CLASSIFY_SCALAR: generic implementation, doesn't require any specific
HW support.
• RTE_ACL_CLASSIFY_SSE: vector implementation, can process up to 8 flows in parallel.
Requires SSE 4.1 support.
• RTE_ACL_CLASSIFY_AVX2: vector implementation, can process up to 16 flows in
parallel. Requires AVX2 support.
Which method to use is purely a runtime decision; there is no build-time difference. All
implementations operate over the same internal RT structures and use similar principles. The
main difference is that the vector implementations exploit IA SIMD instructions to process
several input data flows in parallel. At startup, the ACL library determines the highest classify
method available for the given platform and sets it as the default one. However, the user can
override the default classifier function for a given ACL context, or perform a particular search
using a non-default classify method. In that case it is the user's responsibility to make sure
that the given platform supports the selected classify implementation.
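
For example, the context default can be overridden with rte_acl_set_ctx_classify(), or a
specific method can be requested for a single search with rte_acl_classify_alg(). A brief
sketch (error handling abbreviated; acx, data and results are assumed to be set up as in the
examples below):

#include <rte_acl.h>

/* force all subsequent rte_acl_classify() calls on this context
 * to use the scalar method (e.g. on a platform without SSE 4.1).
 */
if (rte_acl_set_ctx_classify(acx, RTE_ACL_CLASSIFY_SCALAR) != 0) {
    /* handle error: unsupported method or invalid context. */
}

/* alternatively, request a specific method for one search only,
 * without changing the context default.
 */
ret = rte_acl_classify_alg(acx, data, results, 1, 4,
        RTE_ACL_CLASSIFY_AVX2);

In both cases, a non-zero return value indicates that the request could not be satisfied, for
example because the selected method is not supported on the current platform.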

Application Programming Interface (API) Usage

Note: For more details about the Access Control API, please refer to the DPDK API Reference.

The following example demonstrates IPv4 5-tuple classification with multiple categories in
more detail.

Classify with Multiple Categories

struct rte_acl_ctx *acx;
struct rte_acl_config cfg;
int ret;

/* define a structure for the rule with up to 5 fields. */
RTE_ACL_RULE_DEF(acl_ipv4_rule, RTE_DIM(ipv4_defs));

/* AC context creation parameters. */
struct rte_acl_param prm = {
    .name = "ACL_example",
    .socket_id = SOCKET_ID_ANY,

    /* rule size is derived from the number of fields per rule. */
    .rule_size = RTE_ACL_RULE_SZ(RTE_DIM(ipv4_defs)),

    .max_rule_num = 8, /* maximum number of rules in the AC context. */
};

struct acl_ipv4_rule acl_rules[] = {

    /* matches all packets traveling to 192.168.0.0/16, applies for categories: 0,1 */
    {
        .data = {.userdata = 1, .category_mask = 3, .priority = 1},

        /* destination IPv4 */
        .field[2] = {.value.u32 = IPv4(192,168,0,0), .mask_range.u32 = 16,},

        /* source port */
        .field[3] = {.value.u16 = 0, .mask_range.u16 = 0xffff,},

        /* destination port */
        .field[4] = {.value.u16 = 0, .mask_range.u16 = 0xffff,},
    },

    /* matches all packets traveling to 192.168.1.0/24, applies for categories: 0 */
    {
        .data = {.userdata = 2, .category_mask = 1, .priority = 2},

        /* destination IPv4 */
        .field[2] = {.value.u32 = IPv4(192,168,1,0), .mask_range.u32 = 24,},

        /* source port */
        .field[3] = {.value.u16 = 0, .mask_range.u16 = 0xffff,},

        /* destination port */
        .field[4] = {.value.u16 = 0, .mask_range.u16 = 0xffff,},
    },

    /* matches all packets traveling from 10.1.1.1, applies for categories: 1 */
    {
        .data = {.userdata = 3, .category_mask = 2, .priority = 3},

        /* source IPv4 */
        .field[1] = {.value.u32 = IPv4(10,1,1,1), .mask_range.u32 = 32,},

        /* source port */
        .field[3] = {.value.u16 = 0, .mask_range.u16 = 0xffff,},

        /* destination port */
        .field[4] = {.value.u16 = 0, .mask_range.u16 = 0xffff,},
    },
};

/* create an empty AC context */
if ((acx = rte_acl_create(&prm)) == NULL) {
    /* handle context create failure. */
}

/* add rules to the context */
ret = rte_acl_add_rules(acx, acl_rules, RTE_DIM(acl_rules));
if (ret != 0) {
    /* handle error at adding ACL rules. */
}

/* prepare AC build config. */
memset(&cfg, 0, sizeof (cfg)); /* zero optional fields such as max_size. */
cfg.num_categories = 2;
cfg.num_fields = RTE_DIM(ipv4_defs);

memcpy(cfg.defs, ipv4_defs, sizeof (ipv4_defs));

/* build the runtime structures for added rules, with 2 categories. */
ret = rte_acl_build(acx, &cfg);
if (ret != 0) {
    /* handle error at building runtime structures for ACL context. */
}

For a tuple with source IP address: 10.1.1.1 and destination IP address: 192.168.1.15, once
the following lines are executed:
uint32_t results[4]; /* make classify for 4 categories. */

rte_acl_classify(acx, data, results, 1, 4);

then the results[] array contains:


results[4] = {2, 3, 0, 0};

• For category 0, both rules 1 and 2 match, but rule 2 has higher priority, therefore results[0]
contains the userdata for rule 2.
• For category 1, both rules 1 and 3 match, but rule 3 has higher priority, therefore results[1]
contains the userdata for rule 3.
• For categories 2 and 3, there are no matches, so results[2] and results[3] contain zero,
which indicates that no matches were found for those categories.
For a tuple with source IP address: 192.168.1.1 and destination IP address: 192.168.2.11,
once the following lines are executed:
uint32_t results[4]; /* make classify by 4 categories. */

rte_acl_classify(acx, data, results, 1, 4);

the results[] array contains:


results[4] = {1, 1, 0, 0};

• For categories 0 and 1, only rule 1 matches.


• For categories 2 and 3, there are no matches.
For a tuple with source IP address: 10.1.1.1 and destination IP address: 201.212.111.12, once
the following lines are executed:
uint32_t results[4]; /* make classify by 4 categories. */
rte_acl_classify(acx, data, results, 1, 4);

the results[] array contains:


results[4] = {0, 3, 0, 0};

• For category 1, only rule 3 matches.


• For categories 0, 2 and 3, there are no matches.
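
As the Note above states, the input tuples passed to rte_acl_classify() must have their fields
in network byte order. A minimal sketch of how such an input could be prepared (illustrative
only; it assumes the 4-field acl_key layout from the beginning of this section, the IPv4() macro
from rte_ip.h and the standard htonl() conversion):

#include <netinet/in.h>
#include <arpa/inet.h>
#include <rte_ip.h>
#include <rte_acl.h>

/* illustrative sketch: build one search key in network byte order.
 * Rule values were supplied in host byte order, but the search
 * input must be MSB first.
 */
struct acl_key key = {
    .ip_proto = IPPROTO_TCP,
    .ip_src = htonl(IPv4(10, 1, 1, 1)),
    .ip_dst = htonl(IPv4(192, 168, 1, 15)),
    .tos = 1,
};

const uint8_t *data[1] = { (const uint8_t *)&key };
uint32_t results[4];

rte_acl_classify(acx, data, results, 1, 4);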

Packet Framework

Design Objectives

The main design objectives for the DPDK Packet Framework are:
• Provide a standard methodology to build complex packet processing pipelines. Provide
reusable and extensible templates for the commonly used pipeline functional blocks;
• Provide the capability to switch between pure software and hardware-accelerated
implementations for the same pipeline functional block;
• Provide the best trade-off between flexibility and performance. Hardcoded pipelines
usually provide the best performance, but are not flexible, while flexible frameworks are
easier to develop, but usually deliver lower performance;
• Provide a framework that is logically similar to OpenFlow.

Overview

Packet processing applications are frequently structured as pipelines of multiple stages, with
the logic of each stage glued around a lookup table. For each incoming packet, the table
defines the set of actions to be applied to the packet, as well as the next stage to send the
packet to.
The DPDK Packet Framework minimizes the development effort required to build packet
processing pipelines by defining a standard methodology for pipeline development, as well as
providing libraries of reusable templates for the commonly used pipeline blocks.
The pipeline is constructed by connecting the set of input ports with the set of output ports
through the set of tables in a tree-like topology. As a result of the lookup operation for the
current packet in the current table, one of the table entries (on lookup hit) or the default table
entry (on lookup miss) provides the set of actions to be applied on the current packet, as well
as the next hop for the packet, which can be either another table, an output port or packet
drop.
An example of a packet processing pipeline is presented in Fig. 4.62:

Fig. 4.62: Example of Packet Processing Pipeline where Input Ports 0 and 1 are Connected
with Output Ports 0, 1 and 2 through Tables 0 and 1

Port Library Design

Port Types

Table 4.69 is a non-exhaustive list of ports that can be implemented with the Packet Framework.


Table 4.69: Port Types

1. SW ring: SW circular buffer used for message passing between the application threads.
Uses the DPDK rte_ring primitive. Expected to be the most commonly used type of port.
2. HW ring: Queue of buffer descriptors used to interact with NIC, switch or accelerator
ports. For NIC ports, it uses the DPDK rte_eth_rx_queue or rte_eth_tx_queue primitives.
3. IP reassembly: Input packets are either IP fragments or complete IP datagrams. Output
packets are complete IP datagrams.
4. IP fragmentation: Input packets are jumbo (IP datagrams with length bigger than MTU)
or non-jumbo packets. Output packets are non-jumbo packets.
5. Traffic manager: Traffic manager attached to a specific NIC output port, performing
congestion management and hierarchical scheduling according to pre-defined SLAs.
6. KNI: Send/receive packets to/from Linux kernel space.
7. Source: Input port used as packet generator. Similar to Linux kernel /dev/zero character
device.
8. Sink: Output port used to drop all input packets. Similar to Linux kernel /dev/null
character device.

Port Interface

Each port is unidirectional, i.e. either an input port or an output port. Each input/output port is
required to implement an abstract interface that defines the initialization and run-time operation
of the port. The port abstract interface is described in Table 4.70.

Table 4.70: Port Abstract Interface

1. Create: Create the low-level port object (e.g. queue). Can internally allocate memory.
2. Free: Free the resources (e.g. memory) used by the low-level port object.
3. RX: Read a burst of input packets. Non-blocking operation. Only defined for input ports.
4. TX: Write a burst of packets. Non-blocking operation. Only defined for output ports.
5. Flush: Flush the output buffer. Only defined for output ports.
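
In C, such an abstract interface is naturally expressed as a structure of function pointers. The
sketch below is illustrative only; the authoritative definitions are the rte_port_in_ops and
rte_port_out_ops structures in rte_port.h, whose exact names and signatures differ in detail:

#include <stdint.h>
#include <rte_mbuf.h>

/* simplified sketch of an input port interface, loosely modeled on
 * librte_port; not the exact DPDK definition.
 */
struct port_in_ops {
    /* Create: build the low-level port object; may allocate memory. */
    void *(*f_create)(void *params, int socket_id);
    /* Free: release the resources used by the low-level port object. */
    int (*f_free)(void *port);
    /* RX: read a burst of up to n_pkts packets; non-blocking. */
    int (*f_rx)(void *port, struct rte_mbuf **pkts, uint32_t n_pkts);
};

/* simplified sketch of an output port interface. */
struct port_out_ops {
    void *(*f_create)(void *params, int socket_id);
    int (*f_free)(void *port);
    /* TX: write a burst of packets; non-blocking. */
    int (*f_tx_bulk)(void *port, struct rte_mbuf **pkts, uint64_t pkts_mask);
    /* Flush: send out any packets buffered internally by the port. */
    int (*f_flush)(void *port);
};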

Table Library Design

Table Types

Table 4.71 is a non-exhaustive list of types of tables that can be implemented with the Packet
Framework.


Table 4.71: Table Types

1. Hash table: The lookup key is n-tuple based. Typically, the lookup key is hashed to
produce a signature that is used to identify a bucket of entries where the lookup key is
searched next. The signature associated with the lookup key of each input packet is
either read from the packet descriptor (pre-computed signature) or computed at table
lookup time. The table lookup, add entry and delete entry operations, as well as any other
pipeline block that pre-computes the signature, all have to use the same hashing algorithm
to generate the signature. Typically used to implement flow classification tables, ARP
caches, routing tables for tunnelling protocols, etc.
2. Longest Prefix Match (LPM): The lookup key is the IP address. Each table entry has an
associated IP prefix (IP and depth). The table lookup operation selects the IP prefix that
is matched by the lookup key; in case of multiple matches, the entry with the longest
prefix depth wins. Typically used to implement IP routing tables.
3. Access Control List (ACL): The lookup key is a 7-tuple of two VLAN/MPLS labels, IP
destination address, IP source address, L4 protocol, L4 destination port and L4 source
port. Each table entry has an associated ACL and priority. The ACL contains bit masks
for the VLAN/MPLS labels, an IP prefix for the IP destination address, an IP prefix for the
IP source address, the L4 protocol and bit mask, the L4 destination port and bit mask,
and the L4 source port and bit mask. The table lookup operation selects the ACL that is
matched by the lookup key; in case of multiple matches, the entry with the highest priority
wins.

Table Interface

Each table is required to implement an abstract interface that defines the initialization and
run-time operation of the table. The table abstract interface is described in Table 4.72.

Table 4.72: Table Abstract Interface

1. Create: Create the low-level data structures of the lookup table. Can internally allocate
memory.
2. Free: Free up all the resources used by the lookup table.
3. Add entry: Add a new entry to the lookup table.
4. Delete entry: Delete a specific entry from the lookup table.
5. Lookup: Look up a burst of input packets and return a bit mask specifying the result of
the lookup operation for each packet: a set bit signifies a lookup hit for the corresponding
packet, while a cleared bit signifies a lookup miss. For each lookup hit packet, the lookup
operation also returns a pointer to the table entry that was hit, which contains the actions
to be applied on the packet and any associated metadata. For each lookup miss packet,
the actions to be applied on the packet and any associated metadata are specified by the
default table entry preconfigured for lookup miss.
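
The burst-oriented lookup maps naturally onto a 64-bit mask based C signature. The following
is a simplified sketch, loosely modeled on the lookup operation of the rte_table_ops structure
in rte_table.h (the exact DPDK prototype may differ): bit i of pkts_mask selects packet pkts[i]
for lookup; on return, bit i of *lookup_hit_mask is set on a hit and entries[i] points to the
matched table entry.

#include <stdint.h>
#include <rte_mbuf.h>

/* simplified sketch of the burst lookup operation; illustrative,
 * not the exact DPDK prototype.
 */
typedef int (*table_op_lookup)(void *table,
    struct rte_mbuf **pkts,
    uint64_t pkts_mask,
    uint64_t *lookup_hit_mask,
    void **entries);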

Hash Table Design

Hash Table Overview

Hash tables are important because the key lookup operation is optimized for speed: instead of
having to linearly search the lookup key through all the keys in the table, the search is limited
to only the keys stored in a single table bucket.
Associative Arrays
An associative array is a function that can be specified as a set of (key, value) pairs, with each
key from the possible set of input keys present at most once. For a given associative array, the

4.26. Packet Framework 417


DPDK documentation, Release 17.05.0-rc0

possible operations are:
1. add (key, value): When no value is currently associated with key, the (key, value)
association is created. When key is already associated with a value value0, the association
(key, value0) is removed and the association (key, value) is created;
2. delete key: When no value is currently associated with key, this operation has no effect.
When key is already associated with a value, the association (key, value) is removed;
3. lookup key: When no value is currently associated with key, this operation returns a void
value (lookup miss). When key is associated with value, this operation returns value. The
(key, value) association is not changed.
The matching criterion used to compare the input key against the keys in the associative array
is exact match, as the key size (number of bytes) and the key value (array of bytes) have to
match exactly for the two keys under comparison.
Hash Function
A hash function deterministically maps data of variable length (key) to data of fixed size (hash
value or key signature). Typically, the size of the key is bigger than the size of the key signature.
The hash function basically compresses a long key into a short signature. Several keys can
share the same signature (collisions).
High quality hash functions have uniform distribution. For large number of keys, when dividing
the space of signature values into a fixed number of equal intervals (buckets), it is desirable
to have the key signatures evenly distributed across these intervals (uniform distribution), as
opposed to most of the signatures going into only a few of the intervals and the rest of the
intervals being largely unused (non-uniform distribution).
Hash Table
A hash table is an associative array that uses a hash function for its operation. The reason for
using a hash function is to optimize the performance of the lookup operation by minimizing the
number of table keys that have to be compared against the input key.
Instead of storing the (key, value) pairs in a single list, the hash table maintains multiple lists
(buckets). For any given key, there is a single bucket where that key might exist, and this bucket
is uniquely identified based on the key signature. Once the key signature is computed and the
hash table bucket identified, the key is either located in this bucket or it is not present in the
hash table at all, so the key search can be narrowed down from the full set of keys currently in
the table to just the set of keys currently in the identified table bucket.
The performance of the hash table lookup operation is greatly improved, provided that the table
keys are evenly distributed among the hash table buckets, which can be achieved by using a
hash function with uniform distribution. The rule to map a key to its bucket can simply be to
use the key signature (modulo the number of table buckets) as the table bucket ID:
bucket_id = f_hash(key) % n_buckets;
By selecting the number of buckets to be a power of two, the modulo operator can be replaced
by a bitwise AND logical operation:
bucket_id = f_hash(key) & (n_buckets - 1);
Considering n_bits as the number of bits set in bucket_mask = n_buckets - 1, this means that
all the keys that end up in the same hash table bucket have the lower n_bits of their signature
identical. In order to reduce the number of keys in the same bucket (collisions), the number of
hash table buckets needs to be increased.
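
As a brief self-contained illustration (not DPDK code; the hash function below is a toy), the
two bucket selection variants produce the same result whenever n_buckets is a power of two:

#include <stdint.h>
#include <stdio.h>

/* toy hash for illustration only; real tables use jhash, CRC hash, etc. */
static uint64_t f_hash(uint64_t key)
{
    return key * 0x9E3779B97F4A7C15ULL; /* multiplicative hashing constant */
}

int main(void)
{
    const uint64_t n_buckets = 1024; /* power of two */
    uint64_t key = 0xC0A80101;       /* e.g. IPv4 address 192.168.1.1 */

    uint64_t bucket_mod = f_hash(key) % n_buckets;
    uint64_t bucket_and = f_hash(key) & (n_buckets - 1);

    /* identical results, but the AND variant avoids the division. */
    printf("%llu %llu\n", (unsigned long long)bucket_mod,
        (unsigned long long)bucket_and);
    return 0;
}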


In packet processing context, the sequence of operations involved in hash table operations is
described in Fig. 4.63:

Fig. 4.63: Sequence of Steps for Hash Table Operations in a Packet Processing Context

Hash Table Use Cases

Flow Classification
Description: The flow classification is executed at least once for each input packet. This
operation maps each incoming packet against one of the known traffic flows in the flow
database that typically contains millions of flows.
Hash table name: Flow classification table
Number of keys: Millions
Key format: n-tuple of packet fields that uniquely identify a traffic flow/connection. Example:
DiffServ 5-tuple of (Source IP address, Destination IP address, L4 protocol, L4 protocol source
port, L4 protocol destination port). For IPv4 protocol and L4 protocols like TCP, UDP or SCTP,
the size of the DiffServ 5-tuple is 13 bytes, while for IPv6 it is 37 bytes.
Key value (key data): actions and action meta-data describing the processing to be applied
to the packets of the current flow. The size of the data associated with each traffic flow can
vary from 8 bytes to kilobytes.
Address Resolution Protocol (ARP)
Description: Once a route has been identified for an IP packet (so the output interface and
the IP address of the next hop station are known), the MAC address of the next hop station is
needed in order to send this packet onto the next leg of the journey towards its destination (as
identified by its destination IP address). The MAC address of the next hop station becomes
the destination MAC address of the outgoing Ethernet frame.
Hash table name: ARP table
Number of keys: Thousands
Key format: The pair of (Output interface, Next Hop IP address), which is typically 5 bytes for
IPv4 and 17 bytes for IPv6.
Key value (key data): MAC address of the next hop station (6 bytes).

Hash Table Types

Table 4.73 lists the hash table configuration parameters shared by all different hash table types.


Table 4.73: Configuration Parameters Common for All Hash Table Types

1. Key size: Measured as number of bytes. All keys have the same size.
2. Key value (key data) size: Measured as number of bytes.
3. Number of buckets: Needs to be a power of two.
4. Maximum number of keys: Needs to be a power of two.
5. Hash function: Examples: jhash, CRC hash, etc.
6. Hash function seed: Parameter to be passed to the hash function.
7. Key offset: Offset of the lookup key byte array within the packet meta-data stored in the
packet buffer.
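
Expressed as a C structure, these common parameters might look as follows. This is an
illustrative sketch only; the actual hash table parameter structures in librte_table differ in
naming and detail:

#include <stdint.h>

/* illustrative sketch of the common hash table parameters from
 * Table 4.73; not the exact DPDK structure.
 */
struct hash_table_params {
    uint32_t key_size;   /* bytes; all keys have the same size */
    uint32_t entry_size; /* key value (key data) size, in bytes */
    uint32_t n_buckets;  /* must be a power of two */
    uint32_t n_keys;     /* maximum number of keys, a power of two */

    /* hash function (e.g. jhash or a CRC-based hash) and its seed. */
    uint64_t (*f_hash)(void *key, uint32_t key_size, uint64_t seed);
    uint64_t seed;

    /* offset of the lookup key within the packet meta-data. */
    uint32_t key_offset;
};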

Bucket Full Problem

On initialization, each hash table bucket is allocated space for exactly 4 keys. As keys are
added to the table, it can happen that a given bucket already has 4 keys when a new key has
to be added to this bucket. The possible options are:
1. Least Recently Used (LRU) Hash Table. One of the existing keys in the bucket is
deleted and the new key is added in its place. The number of keys in each bucket never
grows bigger than 4. The logic to pick the key to be dropped from the bucket is LRU. The
hash table lookup operation maintains the order in which the keys in the same bucket are
hit, so every time a key is hit, it becomes the new Most Recently Used (MRU) key, i.e.
the last candidate for drop. When a key is added to the bucket, it also becomes the new
MRU key. When a key needs to be picked and dropped, the first candidate for drop, i.e.
the current LRU key, is always picked. The LRU logic requires maintaining specific data
structures per bucket (a brief sketch follows Table 4.74).
2. Extendable Bucket Hash Table. The bucket is extended with space for 4 more keys.
This is done by allocating additional memory at table initialization time, which is used to
create a pool of free keys (the size of this pool is configurable and always a multiple of 4).
On key add operation, the allocation of a group of 4 keys only happens successfully within
the limit of free keys, otherwise the key add operation fails. On key delete operation, a
group of 4 keys is freed back to the pool of free keys when the key to be deleted is the
only key that was used within its group of 4 keys at that time. On key lookup operation,
if the current bucket is in extended state and a match is not found in the first group of 4
keys, the search continues beyond the first group of 4 keys, potentially until all keys in
this bucket are examined. The extendable bucket logic requires maintaining specific data
structures per table and per each bucket.

Table 4.74: Configuration Parameters Specific to Extendable Bucket Hash Table

1. Number of additional keys: Needs to be a power of two, at least equal to 4.
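
As a brief sketch of the LRU logic from option 1 above (illustrative only, not the DPDK
implementation), each bucket can keep its 4 key slots ordered from least to most recently
used:

#include <stdint.h>

/* illustrative per-bucket LRU state: slot indices ordered from
 * LRU (order[0]) to MRU (order[3]); not the DPDK implementation.
 */
struct bucket_lru {
    uint8_t order[4];
};

/* on lookup hit or key add, slot 's' becomes the new MRU. */
static void bucket_touch(struct bucket_lru *b, uint8_t s)
{
    int i;

    /* find the current position of slot 's' ... */
    for (i = 0; i < 4 && b->order[i] != s; i++)
        ;
    /* ... shift the more recently used slots down ... */
    for (; i < 3; i++)
        b->order[i] = b->order[i + 1];
    /* ... and place 's' at the MRU position. */
    b->order[3] = s;
}

/* when the bucket is full, the drop candidate is the current LRU. */
static uint8_t bucket_victim(const struct bucket_lru *b)
{
    return b->order[0];
}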

Signature Computation

The possible options for key signature computation are:
1. Pre-computed key signature. The key lookup operation is split between two CPU cores.
The first CPU core (typically the CPU core that performs packet RX) extracts the key
from the input packet, computes the key signature and saves both the key and the key
signature in the packet buffer as packet meta-data. The second CPU core reads both
the key and the key signature from the packet meta-data and performs the bucket search
step of the key lookup operation.
2. Key signature computed on lookup (“do-sig” version). The same CPU core reads the
key from the packet meta-data, uses it to compute the key signature and also performs
the bucket search step of the key lookup operation.

Table 4.75: Configuration Parameters Specific to Pre-computed Key Signature Hash Table

1. Signature offset: Offset of the pre-computed key signature within the packet meta-data.
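
For the pre-computed variant, the lookup core simply reads the signature back from the packet
meta-data area at the configured offset. A minimal sketch (illustrative only; the names here
are assumptions, and DPDK provides its own accessor macros for the packet meta-data area):

#include <stdint.h>

/* illustrative only: the RX core stores the signature at a
 * configurable byte offset inside the packet meta-data area,
 * and the lookup core reads it back from the same offset.
 */
static inline void
sig_store(uint8_t *pkt_metadata, uint32_t sig_offset, uint32_t sig)
{
    *(uint32_t *)(pkt_metadata + sig_offset) = sig;
}

static inline uint32_t
sig_load(const uint8_t *pkt_metadata, uint32_t sig_offset)
{
    return *(const uint32_t *)(pkt_metadata + sig_offset);
}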

Key Size Optimized Hash Tables

For specific key sizes, the data structures and algorithm of the key lookup operation can be
specially handcrafted for further performance improvements, so the following options are
possible:
1. Implementation supporting configurable key size.
2. Implementation supporting a single key size. Typical key sizes are 8 bytes and 16
bytes.

Bucket Search Logic for Configurable Key Size Hash Tables

The performance of the bucket search logic is one of the main factors influencing the
performance of the key lookup operation. The data structures and algorithm are designed to
make the best use of Intel CPU architecture resources like: cache memory space, cache
memory bandwidth, external memory bandwidth, multiple execution units working in parallel,
out of order instruction execution, special CPU instructions, etc.
The bucket search logic handles multiple input packets in parallel. It is built as a pipeline of
several stages (3 or 4), with each pipeline stage handling two different packets from the burst
of input packets. On each pipeline iteration, the packets are pushed to the next pipeline stage:
for the 4-stage pipeline, two packets (that just completed stage 3) exit the pipeline, two packets
(that just completed stage 2) are now executing stage 3, two packets (that just completed stage
1) are now executing stage 2, two packets (that just completed stage 0) are now executing
stage 1 and two packets (next two packets to read from the burst of input packets) are entering
the pipeline to execute stage 0. The pipeline iterations continue until all packets from the burst
of input packets execute the last stage of the pipeline.
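
The steady-state iteration pattern can be sketched as follows (illustrative pseudocode;
stage0() .. stage3() stand for the per-stage bucket search steps, each ending with a prefetch
of what the next stage needs, and the pipeline fill and drain iterations at the start and end of
the burst are omitted for brevity):

/* steady-state loop of the 4-stage pipeline: each iteration advances
 * four pairs of packets by one stage. Fill/drain code is omitted.
 */
for (i = 0; i + 8 <= n_pkts; i += 2) {
    stage0(pkts[i + 6], pkts[i + 7]); /* two packets enter the pipeline */
    stage1(pkts[i + 4], pkts[i + 5]);
    stage2(pkts[i + 2], pkts[i + 3]);
    stage3(pkts[i], pkts[i + 1]);     /* two packets exit the pipeline */
}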
The bucket search logic is broken into pipeline stages at the boundary of the next memory
access. Each pipeline stage uses data structures that are stored (with high probability) in
the L1 or L2 cache memory of the current CPU core and breaks just before the next memory
access required by the algorithm. The current pipeline stage finalizes by prefetching the data
structures required by the next pipeline stage, so given enough time for the prefetch to
complete, when the next pipeline stage eventually gets executed for the same packets, it will
read the data structures it needs from L1 or L2 cache memory and thus avoid the significant
penalty incurred by an L2 or L3 cache memory miss.
By prefetching the data structures required by the next pipeline stage in advance (before they
are used) and switching to executing another pipeline stage for different packets, the number
of L2 or L3 cache memory misses is greatly reduced, which is one of the main reasons for the
improved performance. This is because the cost of an L2/L3 cache memory miss on memory
read accesses is high: due to data dependencies between instructions, the CPU execution
units usually have to stall until the read operation is completed from L3 cache memory or
external DRAM memory. By using prefetch instructions, the latency of memory read accesses
is hidden, provided that the prefetch is performed early enough before the respective data
structure is actually used.
By splitting the processing into several stages that are executed on different packets (the
packets from the input burst are interlaced), enough work is created to allow the prefetch
instructions to complete successfully (before the prefetched data structures are actually
accessed) and also the data dependency between instructions is loosened. For example, for
the 4-stage pipeline, stage 0 is executed on packets 0 and 1 and then, before the same packets
0 and 1 are used (i.e. before stage 1 is executed on packets 0 and 1), different packets are
used: packets 2 and 3 (executing stage 1), packets 4 and 5 (executing stage 2) and packets 6
and 7 (executing stage 3). By executing useful work while the data structures are brought into
the L1 or L2 cache memory, the latency of the read memory accesses is hidden. By increasing
the gap between two consecut