Learning VPP: Syncing IPsec SA Counters to Linux XFRM via NEWAE

When VPP handles IPsec dataplane traffic, it maintains its own per-SA byte and packet counters internally. The Linux kernel’s XFRM subsystem, however, never sees those packets and has no way to update its own accounting. The result: ip -s xfrm state reports zero or stale counters, and any Linux-side tooling that depends on XFRM accounting — monitoring dashboards, external rekey decisions based on traffic volume, soft/hard lifetime tracking — sees no activity at all.

In this post I describe how the XFRM plugin for VPP now periodically pushes VPP’s SA counters into the Linux kernel using XFRM_MSG_NEWAE netlink messages, closing this visibility gap.

Background: XFRM Accounting Events

The Linux XFRM subsystem provides a dedicated netlink message type for updating SA accounting data: XFRM_MSG_NEWAE. Originally designed for HA migration of SA state between machines, NEWAE allows user space to inject lifetime counters directly into kernel XFRM states.

The message structure follows the pattern:

nlmsghdr : xfrm_aevent_id : optional TLVs

The xfrm_aevent_id header identifies the target SA by the combination of SPI, protocol, address family, destination/source addresses, and reqid. The TLV we care about is XFRMA_LTIME_VAL, which carries an xfrm_lifetime_cur structure with byte and packet counters.

One important kernel semantic: XFRM_MSG_NEWAE with NLM_F_REPLACE performs assignment, not addition. The kernel’s xfrm_update_ae_params() does x->curlft.bytes = ltime->bytes — a straight overwrite. This means we must always send cumulative totals, not deltas.
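
A toy model makes the overwrite semantics concrete. The toy_newae() helper below is illustrative, not the real kernel function; it only mimics the assignment in xfrm_update_ae_params():

```c
#include <assert.h>

/* Illustrative stand-in for the kernel's xfrm_update_ae_params():
 * the incoming lifetime value is assigned, never added. */
typedef struct { unsigned long long bytes; } toy_sa_t;

static void toy_newae (toy_sa_t *sa, unsigned long long bytes)
{
  sa->bytes = bytes; /* straight overwrite, as in the kernel */
}
```

Sending a delta after an earlier sync would simply erase the history, which is why the plugin always sends running totals.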

The Problem With VPP Counter Resets

The XFRM plugin already had a check_for_expiry() function that monitors VPP’s per-SA counters to detect when soft or hard byte/packet lifetime limits are reached. When a soft limit is hit, the plugin sends an XFRM_MSG_EXPIRE to the kernel (triggering strongSwan to initiate rekeying) and then zeroes the VPP counter so the same threshold can fire again for the next lifetime window.

This counter zeroing creates a challenge for NEWAE sync: if we simply read VPP’s current counter and push it to the kernel, we’d lose all the traffic that was counted before the last reset. We need a mechanism to preserve the running total across these resets.

Design Overview

The solution consists of four components:

  1. Per-SA sync state — a set of fields embedded in the existing sa_life_limits_t structure that tracks cumulative counters and last-synced values
  2. Counter accumulation — a hook in check_for_expiry() that snapshots the current counter into a running total before zeroing
  3. NEWAE message builder — constructs and sends the raw netlink message with the correct SA identification and lifetime TLV
  4. Periodic sync loop — a time-gated call in the existing VPP process node that pushes updated counters at a configurable interval

Here is how these components interact:

                VPP Dataplane
                     |
            SA byte/packet counters
                     |
        +------------+-------------+
        |                          |
  check_for_expiry()       lcp_xfrm_sync_counters()
  (every 2s wake)          (every N seconds)
        |                          |
  On soft/hard expiry:      For each SA:
  1. accumulate into        1. total = cumulative + live counter
     cumulative             2. skip if unchanged
  2. zero VPP counter       3. send XFRM_MSG_NEWAE
  3. send EXPIRE               with total
                            4. record last_synced

Both functions execute in the same VPP process node on the main thread, so there are no race conditions between accumulation and sync.

Per-SA Sync State

Each SA already has a sa_life_limits_t structure that stores soft/hard byte and packet thresholds. The counter sync state is embedded directly into this structure:

typedef struct sa_counter_sync
{
  u64 cumulative_bytes;      /* running total, survives VPP counter zeroing */
  u64 cumulative_packets;
  u64 last_synced_bytes;     /* value at last NEWAE send */
  u64 last_synced_packets;
} sa_counter_sync_t;

typedef struct sa_life_limits
{
  u64 soft_byte_limit;
  u64 hard_byte_limit;
  u64 soft_packet_limit;
  u64 hard_packet_limit;
  u32 sa_id;
  u32 reqid;
  int tun_sw_if_idx;
  u8 sa_in_tunnel;

  sa_counter_sync_t sync;    /* XFRM counter sync state */
} sa_life_limits_t;

The cumulative_* fields hold the running total of all traffic counted before each VPP counter reset. The last_synced_* fields record what was last pushed via NEWAE, allowing us to suppress no-op updates for idle SAs.

Counter Accumulation

The key integration point is in check_for_expiry(). Before the existing vlib_zero_combined_counter() call that resets the VPP counter after sending an expire message, we fold the current counter snapshot into the cumulative totals:

if (rv)
  {
    lcp_xfrm_counter_sync_accumulate (&life->sync, &count);
    vlib_zero_combined_counter (&ipsec_sa_counters, sa->stat_index);
  }

Where the accumulate function simply adds the current snapshot:

static inline void
lcp_xfrm_counter_sync_accumulate (sa_counter_sync_t *sync,
                                  vlib_counter_t *count)
{
  sync->cumulative_bytes += count->bytes;
  sync->cumulative_packets += count->packets;
}

This ensures no traffic is lost across counter resets. To verify correctness, consider this sequence:

  1. Traffic flows, VPP counter reaches {1000 bytes, 50 packets}
  2. Soft expiry fires: accumulate saves {1000, 50} into cumulative, then VPP counter is zeroed
  3. More traffic arrives, VPP counter reaches {200, 10}
  4. Sync computes: total = cumulative{1000, 50} + live{200, 10} = {1200, 60} — correct
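
The sequence above can be checked mechanically. The sketch below mirrors the accumulate helper with types simplified from VPP's vlib_counter_t; it is illustrative, not the plugin source:

```c
#include <assert.h>

/* Simplified mirrors of the plugin's types, for illustration only. */
typedef struct { unsigned long long bytes, packets; } counter_t;
typedef struct { unsigned long long cumulative_bytes, cumulative_packets; } sync_t;

/* Fold the live counter snapshot into the running total, exactly as
 * lcp_xfrm_counter_sync_accumulate() does before the counter is zeroed. */
static void accumulate (sync_t *s, const counter_t *c)
{
  s->cumulative_bytes += c->bytes;
  s->cumulative_packets += c->packets;
}
```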

Building the NEWAE Message

The lcp_xfrm_build_newae_msg() function constructs the raw netlink message. Here is the wire format:

Offset  Size  Field
------  ----  -----
0       16    nlmsghdr (type=XFRM_MSG_NEWAE, flags=NLM_F_REQUEST|NLM_F_REPLACE)
16      48    xfrm_aevent_id (sa_id: daddr+spi+family+proto, saddr, reqid,
                              flags=XFRM_AE_LVAL)
64      4     nlattr (nla_len=36, nla_type=XFRMA_LTIME_VAL)
68      32    xfrm_lifetime_cur (bytes, packets, add_time=0, use_time=0)
------
Total: 100 bytes

The SA is identified by the same tuple the kernel uses: SPI, protocol (ESP/AH), address family, destination address, source address, and reqid. The XFRM_AE_LVAL flag in ae_id->flags tells the kernel this message carries lifetime values. The NLM_F_REPLACE flag in the netlink header instructs the kernel to overwrite existing counter values rather than treating this as a new SA event.

The message is sent unicast to the kernel (nl_groups=0) through the same XFRM netlink socket the plugin already uses for expire messages.

One implementation detail worth noting: the message is built as a raw byte buffer rather than using struct composition. This avoids compiler alignment padding between struct nlattr (4 bytes) and struct xfrm_lifetime_cur (which contains u64 fields) that would break the NLA wire format expected by the kernel.
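
As a sketch of that raw-buffer approach, assuming the Linux UAPI headers: field population is simplified here (the real builder also fills the addresses, SPI, protocol, and reqid from the SA), but the serialization pattern is the point.

```c
#include <string.h>
#include <assert.h>
#include <linux/netlink.h>
#include <linux/xfrm.h>

/* Hypothetical sketch: serialize header, aevent id, attribute and
 * lifetime back-to-back with memcpy, so no compiler padding appears
 * between the 4-byte nlattr and the u64-aligned xfrm_lifetime_cur. */
static int
build_newae (unsigned char *buf, const struct xfrm_aevent_id *ae_id,
             unsigned long long bytes, unsigned long long packets)
{
  struct nlmsghdr hdr = { 0 };
  struct nlattr attr = { 0 };
  struct xfrm_lifetime_cur lft = { 0 };
  size_t off = 0;

  hdr.nlmsg_type = XFRM_MSG_NEWAE;
  hdr.nlmsg_flags = NLM_F_REQUEST | NLM_F_REPLACE;
  hdr.nlmsg_len = NLMSG_HDRLEN + sizeof (*ae_id) + sizeof (attr) + sizeof (lft);

  attr.nla_type = XFRMA_LTIME_VAL;
  attr.nla_len = sizeof (attr) + sizeof (lft);

  lft.bytes = bytes;
  lft.packets = packets;

  memcpy (buf + off, &hdr, sizeof (hdr));     off += sizeof (hdr);
  memcpy (buf + off, ae_id, sizeof (*ae_id)); off += sizeof (*ae_id);
  memcpy (buf + off, &attr, sizeof (attr));   off += sizeof (attr);
  memcpy (buf + off, &lft, sizeof (lft));     off += sizeof (lft);

  return (int) off; /* 100 bytes with the layout above */
}
```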

The Sync Loop

The sync function iterates all SAs in VPP’s SA pool, computes the total counter (cumulative + live), and sends NEWAE only for SAs whose counters have changed since the last push:

static void
lcp_xfrm_sync_counters (void)
{
  sa_life_limits_t *life;
  vlib_counter_t count;
  ipsec_sa_t *sa;
  ipsec_main_t *im = &ipsec_main;

  pool_foreach (sa, im->sa_pool)
    {
      /* look up our per-SA lifetime/sync tracking */
      life = ...;

      vlib_get_combined_counter (&ipsec_sa_counters, sa->stat_index, &count);

      u64 total_bytes = life->sync.cumulative_bytes + count.bytes;
      u64 total_packets = life->sync.cumulative_packets + count.packets;

      /* skip idle SAs */
      if (total_bytes == life->sync.last_synced_bytes &&
          total_packets == life->sync.last_synced_packets)
        continue;

      if (lcp_xfrm_build_newae_msg (sa, life, total_bytes, total_packets))
        {
          life->sync.last_synced_bytes = total_bytes;
          life->sync.last_synced_packets = total_packets;
        }
    }
}

The idle-SA check is important: without it, every sync cycle would send NEWAE messages for all SAs, including those with no traffic. With hundreds of SAs, this would generate unnecessary netlink traffic.

Scheduling

Rather than creating a new VPP process node, the counter sync piggybacks on the existing ipsec_xfrm_expire_process that already wakes every 2 seconds to check for SA lifetime expiry. A separate timer gates the NEWAE sends at a configurable interval:

uword
ipsec_xfrm_expire_process (vlib_main_t *vm, ...)
{
  f64 last_sync_time = 0;

  while (1)
    {
      vlib_process_wait_for_event_or_clock (vm, 2);
      vlib_process_get_events (vm, NULL);
      check_for_expiry ();

      if (nm->counter_sync_interval_s > 0)
        {
          f64 now = vlib_time_now (vm);
          if ((now - last_sync_time) >= (f64) nm->counter_sync_interval_s)
            {
              lcp_xfrm_sync_counters ();
              last_sync_time = now;
            }
        }
    }
}

The default sync interval is 10 seconds. Since the process node wakes every 2 seconds anyway, the actual sync granularity is within 2 seconds of the configured interval. This is a control-plane-only operation running in the VPP main thread — it has zero impact on dataplane packet processing performance.
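
The interaction between the 2-second wake and the sync gate can be sketched numerically. This is a standalone simulation, not plugin code: with a 10-second interval the syncs land exactly on wakeup boundaries, while an interval that is not a multiple of the wake period rounds up to the next wakeup.

```c
#include <assert.h>

/* Simulate the process node: wake every wake_period seconds, sync when
 * at least `interval` seconds have passed since the last sync. Returns
 * the number of syncs performed over `duration` seconds. */
static int
simulate_syncs (double wake_period, double interval, double duration)
{
  double last_sync = 0;
  int syncs = 0;
  double now;

  for (now = wake_period; now <= duration; now += wake_period)
    if (now - last_sync >= interval)
      {
        syncs++;
        last_sync = now;
      }
  return syncs;
}
```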

Configuration

The sync interval is configurable via VPP’s startup configuration:

linux-cp {
  ...
  counter-sync-interval 10
}

Setting the value to 0 disables counter synchronization entirely. The default is 10 seconds, which provides a reasonable balance between counter freshness and netlink message overhead.

Bonus Fix: Memory Leak in send_nl_msg()

While implementing NEWAE support, a pre-existing memory leak was discovered in send_nl_msg(): the nlmsg object allocated by nlmsg_alloc_simple() was never freed after sending. Since send_nl_msg() is called for every expire message and now for every NEWAE message, this leak would grow over time proportional to SA activity. The fix is a single nlmsg_free(nlmsg) call after nl_sendmsg().

Verification

After deploying the change, counter sync can be verified in several ways:

Check XFRM state counters:

watch -n1 ip -s xfrm state

The byte and packet counters should now update every sync interval instead of showing zeros.

Monitor NEWAE events in real time:

ip xfrm monitor

You should see accounting events appearing at the configured interval for SAs with active traffic.

Verify idle SA suppression: SAs with no traffic since the last sync should not generate NEWAE messages.

Test with counter-sync-interval 0: Disabling the feature should stop all NEWAE messages while leaving expire functionality unaffected.

Summary

Aspect                  Detail
------                  ------
Message type            XFRM_MSG_NEWAE with NLM_F_REPLACE
Sync interval           Configurable, default 10 seconds
Counter semantics       Cumulative totals (kernel does assignment, not addition)
Counter reset handling  Accumulation before each VPP counter zero
Idle SA optimization    Skip NEWAE when counters unchanged
Performance impact      Zero dataplane impact (control-plane process node only)
Files modified          3 files, ~150 lines added

The counter sync feature closes an important observability gap in VPP-based IPsec deployments. Linux tools and monitoring systems that rely on XFRM accounting data now see accurate, up-to-date counters, and external lifetime management decisions based on traffic volume work correctly.

Simplify VPP CLI with vppctl-ai for Natural Language Communication


Interacting with FD.io VPP traditionally requires familiarity with its extensive command-line interface (vppctl). Although vppctl is powerful, its syntax can be a barrier when exploring or automating tasks in high-performance packet processing environments.

To make this interaction more intuitive, I developed vppctl-ai — a natural language assistant that translates plain English into accurate VPP CLI commands. The project is actively hosted on GitHub: https://github.com/garyachy/vppctl-ai.

Background: VPP and CLI Interaction

VPP (Vector Packet Processing) is a data-plane framework designed for high throughput and low latency. It finds use in SDN, NFV, and carrier-grade platforms where performance is critical. While VPP provides a rich set of CLI commands via vppctl, users often need to remember exact syntax, arguments, and context for each operation — from interface configuration to ACL and routing rules.

What vppctl-ai Solves

vppctl-ai addresses usability pain points by allowing users to describe what they want in simple language. Instead of manually browsing documentation or constructing commands, you can ask the agent questions such as:

vpp-ai> list all interfaces with IPv4 addresses
Extracted command: show interface addr
Execute 'show interface addr'? [Y/n]:

This approach reduces errors, accelerates experimentation, and is especially useful for those exploring VPP automation or integrating VPP into larger tooling ecosystems.

At its core, vppctl-ai maintains a validated command database covering hundreds of VPP CLI commands. Before execution, parsed intents are mapped to safe and exact vppctl invocations.

Why This Matters

Network automation has shifted toward intent-based descriptions in higher-level orchestration layers and SDN controllers. Tools like vppctl-ai bring this paradigm closer to the dataplane itself — bridging human intention and VPP’s command vocabulary. Given VPP’s prominence in fast-path networking, a natural language CLI lowers the entry barrier for development, debugging, and operational tasks.

Getting Started

To try vppctl-ai:

git clone https://github.com/garyachy/vppctl-ai.git
cd vppctl-ai
pip install -r requirements.txt
export OPENROUTER_API_KEY="your-openai-or-openrouter-key"
./run_agent.sh

Replace "your-openai-or-openrouter-key" with your own key from a compatible model provider. Once running, the agent will accept natural language and propose corresponding VPP CLI commands.

Concluding Thoughts

vppctl-ai is a demonstration of how AI can simplify interaction with sophisticated networking platforms. It complements the existing VPP ecosystem by offering a user-friendly layer without sacrificing control or precision. For anyone working in high-speed networking or VPP-based systems, this tool can help reduce friction and accelerate workflows.

You can explore the source and contribute at: https://github.com/garyachy/vppctl-ai



Learning VPP: XFRM Plugin for Seamless Linux Integration in IPsec Management


Introduction

The VPP XFRM (Transform) plugin is a component of the Linux Control Plane (LCP) plugin that listens for Linux XFRM netlink messages and mirrors the resulting configuration into the VPP IPsec subsystem. This enables seamless integration between Linux-based IPsec management tools (like strongSwan) and VPP’s high-performance IPsec data plane.

Architecture and Design

The XFRM plugin is implemented as part of the existing linux-nl plugin, functioning as a new XFRM process node. It consists of three main components:

  1. XFRM Netlink Message Processing: Reads and parses Linux XFRM netlink notifications
  2. VPP IPsec Configuration: Translates netlink messages into VPP IPsec configurations
  3. SA Expiry Handling: Manages Security Association lifetime and expiry

Netlink Message Processing

Supported Netlink Groups

The plugin registers to the following XFRM netlink multicast groups:

  • XFRMGRP_SA – Security Association notifications
  • XFRMGRP_POLICY – Security Policy notifications
  • XFRMGRP_EXPIRE – SA expiry notifications
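
A minimal sketch of subscribing to these groups over a raw NETLINK_XFRM socket follows. The plugin itself goes through libnl3, so this is illustrative only; note that joining XFRM multicast groups requires CAP_NET_ADMIN.

```c
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/xfrm.h>

/* Open a netlink socket bound to the SA, policy, and expire
 * multicast groups. Returns the fd, or -1 on failure. */
static int
open_xfrm_socket (void)
{
  int fd = socket (AF_NETLINK, SOCK_RAW, NETLINK_XFRM);
  if (fd < 0)
    return -1;

  struct sockaddr_nl addr = {
    .nl_family = AF_NETLINK,
    .nl_groups = XFRMGRP_SA | XFRMGRP_POLICY | XFRMGRP_EXPIRE,
  };
  if (bind (fd, (struct sockaddr *) &addr, sizeof (addr)) < 0)
    {
      close (fd);
      return -1;
    }
  return fd;
}
```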

Supported Message Types

The plugin processes the following XFRM netlink message types:

Security Association (SA) Messages:

  • XFRM_MSG_NEWSA – New SA creation
  • XFRM_MSG_UPDSA – SA updates
  • XFRM_MSG_DELSA – SA deletion
  • XFRM_MSG_EXPIRE – SA expiry notifications

Security Policy (SP) Messages:

  • XFRM_MSG_NEWPOLICY – New policy creation
  • XFRM_MSG_UPDPOLICY – Policy updates
  • XFRM_MSG_DELPOLICY – Policy deletion

Linux Bash Commands to Send Netlink Messages

Here are examples of Linux commands that generate the netlink messages processed by the XFRM plugin:

# Create a new Security Association
ip xfrm state add src 192.168.1.1 dst 192.168.1.2 proto esp spi 0x12345678 \
   mode tunnel auth sha256 0x1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef \
   enc aes 0x1234567890abcdef1234567890abcdef

# Create a Security Policy
ip xfrm policy add src 10.0.0.0/24 dst 10.1.0.0/24 dir out \
   tmpl src 192.168.1.1 dst 192.168.1.2 proto esp mode tunnel

# Delete a Security Association
ip xfrm state del src 192.168.1.1 dst 192.168.1.2 proto esp spi 0x12345678

# Delete a Security Policy
ip xfrm policy del src 10.0.0.0/24 dst 10.1.0.0/24 dir out

VPP API Usage

The XFRM plugin uses the following VPP IPsec APIs to configure the data plane:

Security Association Management

  • ipsec_sa_add_and_lock() – Creates and locks SAs in VPP
  • ipsec_sa_unlock_id() – Removes SAs from VPP
  • ipsec_sa_update() – Updates existing SAs

Security Policy Management

  • ipsec_add_del_policy() – Adds/removes IPsec policies
  • ipsec_spd_add_del() – Creates Security Policy Databases (SPDs)
  • ipsec_interface_add_del_spd() – Binds SPDs to interfaces

Tunnel Management

  • ipsec_tun_protect_add() – Protects tunnels with IPsec SAs
  • ipsec_tun_protect_del() – Removes tunnel protection
  • ipip_add_tunnel() – Creates IPIP tunnels (route mode)
  • ipsec_itf_create() – Creates IPsec interfaces

FIB Management

  • fib_table_entry_path_add2() – Adds routes to FIB
  • fib_table_entry_path_remove2() – Removes routes from FIB

Configuration and Modes

Startup Configuration

The plugin is configured through the startup.conf file:

linux-xfrm-nl {
    # Enable route-based IPsec mode
    enable-route-mode-ipsec
    
    # Specify interface type (ipsec or ipip)
    interface ipsec
    
    # Netlink socket configuration
    nl-rx-buffer-size 268435456    # 256MB default
    nl-batch-size 2048             # Max messages per batch
    nl-batch-delay-ms 50           # Delay between batches
}

Operating Modes

The plugin supports two IPsec modes:

1. Policy-Based Mode (Default)

  • Traditional IPsec configuration where policies determine SA selection
  • Uses Security Policy Databases (SPDs) bound to interfaces
  • Supports both inbound and outbound policies
  • Automatically creates bypass policies for IKE and control traffic

2. Route-Based Mode

  • Uses tunnel interfaces (IPIP or IPsec) for traffic steering
  • Routes determine SA selection through tunnel protection
  • Supports up to 4 inbound SAs and 1 outbound SA per tunnel
  • Better performance for high-throughput scenarios

Supported Features

Cryptographic Algorithms

  • Encryption: AES-GCM (128/192/256), AES-CBC (128/192/256)
  • Authentication: SHA-256, SHA-384, SHA-512
  • Protocols: ESP, AH

IPsec Features

  • Modes: Transport and Tunnel modes
  • UDP Encapsulation: NAT traversal support
  • Anti-replay Protection: Fixed 64-packet window size
  • ESN (Extended Sequence Numbers): Support for large sequence numbers
  • Lifetime Management: Soft and hard expiry (packet and byte-based)

Network Features

  • IPv4 and IPv6: Full dual-stack support
  • Interface Mapping: Linux-VPP interface correlation
  • Network Namespaces: Support for containerized environments
  • Batch Processing: Efficient message handling

Limitations

Functional Limitations

  1. No On-Demand SA Creation: The plugin cannot create SAs on-demand through trap policies due to VPP’s requirement for valid SA IDs.
  2. Fixed Anti-replay Window: VPP uses a fixed 64-packet anti-replay window, ignoring the Linux configuration.
  3. Limited Policy Types:
    • Inbound policy notifications are not handled from kernel
    • Forward (FWD) policies are not supported
    • BYPASS and DROP policy actions are intentionally ignored
  4. Single Template Support: Only one user template per policy notification is supported.

Algorithm Limitations

  1. Limited Crypto Support: Only tested with AES-GCM and AES-CBC encryption algorithms.

Route Mode Limitations

  1. SA Limits: Maximum 4 inbound SAs and 1 outbound SA per tunnel in route mode.
  2. Tunnel Conflicts: Multiple connections using the same tunnel endpoints can cause undefined behavior.

Rekeying Considerations

  1. Policy Synchronization: During rekeying, there’s a potential for packet drops until peers update their policies to use new SAs.

Dependencies

External Dependencies

  • libnl3: Netlink library for message parsing
  • libnl-xfrm: XFRM-specific netlink support
  • Linux Kernel: XFRM subsystem support

VPP Plugin Dependencies

  • linux-cp: Core Linux Control Plane functionality
  • ipsec: VPP IPsec subsystem
  • ipip: IPIP tunnel support (route mode)
  • fib: Forwarding Information Base
  • interface: VPP interface management

System Dependencies

  • Linux XFRM Subsystem: Must be enabled in kernel
  • Netlink Support: Kernel netlink socket support
  • Network Namespaces: For containerized deployments

Performance Characteristics

Default Configuration

  • RX Buffer Size: 256MB (configurable)
  • TX Buffer Size: 1MB (configurable)
  • Batch Size: 2048 messages (configurable)
  • Batch Delay: 50ms (configurable)
  • Sync Batch Limit: 1024 messages
  • Sync Batch Delay: 20ms

Processing Model

  • Asynchronous Processing: Non-blocking netlink socket operations
  • Batch Processing: Efficient message queuing and processing
  • Event-Driven: File descriptor-based event notification
  • Multi-threading: Support for VPP worker threads


Conclusion

The VPP XFRM plugin provides a robust bridge between Linux IPsec management tools and VPP’s high-performance data plane. While it has some limitations, particularly around on-demand SA creation and certain policy types, it offers excellent performance and integration capabilities for most IPsec deployment scenarios. The plugin’s support for both policy-based and route-based modes makes it suitable for various network architectures, from traditional VPN deployments to modern cloud-native environments.

The plugin’s architecture, with its efficient batch processing and event-driven design, ensures minimal overhead while providing comprehensive IPsec functionality. Its integration with the broader VPP ecosystem makes it an essential component for high-performance IPsec deployments.



Learning VPP: Benchmarking TAP scaling


This experiment quantifies the CPU and memory utilization characteristics of VPP when a large number of LCP pairs are provisioned. LCP uses TAP interfaces to communicate with the Linux control plane, and the TAP driver is enhanced with virtio rings to achieve high throughput.

1. Experiment Environment

The tests were conducted on a system running Ubuntu 24.04.2 LTS.

2. VPP Configuration

VPP’s performance is highly dependent on its core configuration, particularly regarding CPU affinity and memory allocation. For this experiment, VPP was configured with the following parameters.

unix {
  nodaemon
  interactive
  cli-listen /run/vpp/cli.sock
}

api-trace {
  on
}

api-segment {
  gid vpp
}

socksvr {
  default
}

memory {
    main-heap-size 8G
}

cpu {
     workers 6
}

buffers {
    buffers-per-numa 524288
}

plugins {
  plugin linux_cp_plugin.so { enable }
  plugin linux_nl_plugin.so { enable }
  plugin snort_plugin.so { disable }
}

statseg {
  socket-name /run/vpp/stats_1.sock
  size 1G
}

Explanation of Key Configuration:

  • CPU Allocation (cpu stanza):
    • workers 6: VPP was configured with 6 worker threads. This means VPP actively utilized 7 CPU cores for its operation: one for the main thread and six for the worker threads.
  • Memory Allocation (memory, buffers, and statseg stanzas):
    • main-heap-size 8G: VPP’s main heap was explicitly set to 8GB. This is a significant pool of memory for VPP’s internal data structures and operations.
    • buffers-per-numa 524288: This increases the number of packet buffers allocated per NUMA node to 524,288. These buffers are crucial for VPP’s high-speed packet processing and are a primary contributor to RAM consumption, especially with numerous active interfaces.
    • statseg { size 1G }: The statistics segment was set to 1GB, allocating a dedicated memory region for VPP’s operational statistics.

3. The Experiment Script

A Python script was used to programmatically create 5000 LCP pairs and monitor resource usage during the process. The script interacts with VPP to provision the interfaces and logs progress at regular intervals.

#!/usr/bin/env python3

import argparse
import subprocess
import time
import re
import sys
import os

# Function to run shell commands
def run_command(command, check_returncode=True):
    try:
        result = subprocess.run(command, shell=True, check=check_returncode,
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                                text=True, encoding='utf-8')
        return result.stdout.strip()
    except subprocess.CalledProcessError as e:
        print(f"Error executing command: {e.cmd}")
        print(f"STDOUT: {e.stdout}")
        print(f"STDERR: {e.stderr}")
        sys.exit(1)
    except FileNotFoundError:
        print(f"Command not found: {command.split()[0]}")
        sys.exit(1)

# Function to get VPP CLI output
def vpp_cli(command):
    # Adjust for your VPP CLI path/socket if different
    return run_command(f"sudo vppctl {command}")

# Function to get available RAM
def get_available_ram_gb():
    output = run_command("free -g")
    lines = output.splitlines()
    for line in lines:
        if "Mem:" in line:
            parts = line.split()
            # Available RAM is usually the last column of the 'Mem:' line when using free -g
            # On some systems, 'available' might be a separate column, adjust index if needed.
            # Assuming 'available' is the 7th column for free -g
            if len(parts) >= 7:
                try:
                    return float(parts[6])
                except ValueError:
                    pass # Fallback if parsing fails for this line
            break
    # Fallback to a different parsing if the first one fails
    for line in lines:
        if "Mem:" in line:
            match = re.search(r'available\s+(\d+)', line)
            if match:
                return float(match.group(1)) / 1024 # Convert MB to GB if free -m used or similar
    return 0.0 # Default if not found

# Main script logic
def main():
    parser = argparse.ArgumentParser(description="Scale LCP pairs and measure performance.")
    parser.add_argument("count", type=int, help="Number of LCP pairs to create.")
    parser.add_argument("--create-lcp", action="store_true", help="Create LCP pairs.")
    parser.add_argument("--print_interval", type=int, default=1000, help="Print progress every N LCP pairs.")
    args = parser.parse_args()

    num_lcp_pairs = args.count
    print_interval = args.print_interval

    if not args.create_lcp:
        print("Use --create-lcp to create LCP pairs.")
        return

    print(f"Starting LCP pair creation for {num_lcp_pairs} interfaces...")

    start_time = time.monotonic()
    last_interval_time = start_time
    initial_ram_available = get_available_ram_gb()
    last_ram_available = initial_ram_available

    created_lcp_count = 0
    lcp_names = []

    for i in range(num_lcp_pairs):
        lcp_id = i + 1
        lcp_name = f"lcp{lcp_id}"
        lcp_names.append(lcp_name)

        # Create the TAP interface
        vpp_cli(f"create interface tap name {lcp_name} host-if-name {lcp_name}")
        # Pair the TAP with the Linux side. The exact LCP pairing syntax
        # varies between VPP versions (e.g. "lcp create <interface>
        # host-if <name>"); adjust this command to match your build.
        vpp_cli(f"create lcp pair {lcp_name} tapcli {lcp_name}")
        created_lcp_count += 1

        if created_lcp_count % print_interval == 0:
            current_time = time.monotonic()
            elapsed_total = current_time - start_time
            elapsed_interval = current_time - last_interval_time
            current_ram_available = get_available_ram_gb()
            ram_change = last_ram_available - current_ram_available # RAM decrease is positive change here

            print(f"\n--- Progress: {created_lcp_count}/{num_lcp_pairs} interfaces processed ---")
            print(f"LCP pairs created so far: {created_lcp_count}")
            print(f"Time for last {print_interval} interfaces: {time.strftime('%Mm %Ss', time.gmtime(elapsed_interval))}")
            print(f"Total time elapsed after {created_lcp_count} interfaces: {time.strftime('%Mm %Ss', time.gmtime(elapsed_total))}")
            print(f"Current RAM Available: {current_ram_available:.2f} GB")
            print(f"RAM Change since last interval: {ram_change:.2f} GB")

            last_interval_time = current_time
            last_ram_available = current_ram_available

    final_time = time.monotonic()
    total_elapsed_time = final_time - start_time

    print("\nFinished.")
    print(f"Successfully created {created_lcp_count} LCP pair(s).")
    print(f"Time taken: {time.strftime('%Mm %Ss', time.gmtime(total_elapsed_time))}")

if __name__ == "__main__":
    main()

4. Experiment Results and Analysis

The execution of the script provided detailed insights into the time taken and RAM consumed as LCP pairs were incrementally added. The results highlight performance and resource scaling characteristics:

  • After 1000 LCP pairs: The process took 11 seconds. The system’s available RAM was 44.90 GB, indicating an increase in RAM usage of 6.20 GB for this initial set.
  • After 2000 LCP pairs: The creation of the second batch of 1000 pairs required 26 seconds, bringing the total elapsed time to 38 seconds. The available RAM stood at 39.35 GB, with an additional 5.55 GB of RAM used in this interval.
  • After 3000 LCP pairs: The third set of 1000 LCP pairs took 46 seconds. The cumulative time reached 1 minute and 24 seconds. Available RAM was 33.37 GB, reflecting a further RAM usage of 5.97 GB for this segment.
  • After 4000 LCP pairs: The latest 1000 pairs were provisioned in 1 minute and 6 seconds, making the total elapsed time 2 minutes and 31 seconds. Available RAM was recorded at 27.52 GB, with 5.86 GB of RAM being used during this particular interval.
  • After 5000 LCP pairs: The final 1000 LCP pairs required 1 minute and 26 seconds. The complete operation concluded in 3 minutes and 57 seconds. At this stage, 21.34 GB of RAM was available, having used an additional 6.17 GB of RAM for the final 1000 LCP pairs.

Analysis:

The observed data demonstrates a non-linear increase in the time required to provision each subsequent block of 1000 LCP pairs. This suggests that as the number of active virtual interfaces grows, the overhead associated with their management within VPP also increases. The consistent consumption of RAM per 1000 LCP pairs (ranging from 5.55 GB to 6.20 GB per interval) indicates that each LCP pair, along with its associated virtio TAP interfaces and packet buffer allocations, contributes a significant and relatively constant amount to the memory footprint. 
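
As a quick sanity check on that claim (treating the `free -g` figures as GiB), the five per-interval deltas average just under 6 GB per 1000 pairs, or roughly 6 MB per LCP pair:

```c
/* RAM consumed per 1000-pair interval, taken from the results above (GB). */
static const double ram_per_interval_gb[] = { 6.20, 5.55, 5.97, 5.86, 6.17 };

static double
avg_gb_per_1000_pairs (void)
{
  double sum = 0;
  int i;
  for (i = 0; i < 5; i++)
    sum += ram_per_interval_gb[i];
  return sum / 5.0;
}
```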

5. Conclusion

This experiment provides concrete data on VPP’s performance and resource scaling characteristics when creating LCP pairs with virtio TAP interfaces. The results underscore VPP’s ability to handle a large number of virtual connections but also emphasize the increasing time taken for provisioning as scale increases, coupled with a notable per-LCP-pair RAM consumption.


Related Posts


📊 Want to see these techniques in action?
Check out our real-world DPDK & VPP projects →

Learning VPP: Automating PPPoE server session creation


The Experiment: Automating PPPoE Session Creation

My experiment aimed to automate PPPoE server session creation within VPP’s PPPoE plugin. While the plugin supports manual session configuration, managing multiple sessions in dynamic environments, such as ISP broadband servers, is labor-intensive and error-prone. My goal was to create a system that automatically configures VPP’s data plane based on control plane events, reducing manual intervention and enabling scalable PPPoE deployments, with added support for IPv6 via ICMPv6 and DHCPv6.

The challenge was bridging the Linux control plane’s session negotiation with VPP’s data plane configuration. When a client initiates a PPPoE session, the control plane (e.g., pppd) negotiates the session, but VPP requires corresponding data plane setup, such as session ID mapping and interface configuration. My automation solution parses control plane events to dynamically manage these VPP sessions while ensuring IPv6 compatibility.

How the Automation Works

The automation is implemented directly within VPP’s PPPoE plugin. These changes enable VPP to parse PPPoE control frames (PADS and PADT) and automatically manage data plane sessions without external scripts. Additionally, I enhanced the plugin to handle ICMPv6 and DHCPv6 packets, ensuring proper IPv6 support for PPPoE sessions. Below is a detailed breakdown of the process:

1. Parsing Control Frames

I modified the packet decapsulation logic in pppoe_decap.c to inspect incoming PPPoE control frames, enabling automated session management:

  • PADS (PPPoE Active Discovery Session-Confirmation): When a PADS frame is received from the Linux control plane (e.g., pppd), it signals successful session negotiation. The updated code extracts parameters from the PADS frame, including:
    • Session ID: from the PPPoE header, used to identify the session.
    • Client MAC address: from the Ethernet header, for mapping to a VPP interface.
    • MTU: from the negotiated session parameters, ensuring data plane alignment.
  The code then triggers the creation of a new PPPoE session in VPP’s data plane by updating the session table.
  • PADT (PPPoE Active Discovery Termination): When a PADT frame is detected, indicating session termination (e.g., client disconnection), the code parses the session ID from the PADT frame and removes the session from VPP’s session table.

These enhancements are implemented in the pppoe_decap_node function, where I added a state machine that handles control frames based on their PPPoE code field (0x65 for PADS and 0xa7 for PADT, per RFC 2516).
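The actual implementation lives in C inside pppoe_decap.c; the dispatch logic can be sketched in Python like this (header layout per RFC 2516, with a plain dict standing in for VPP's session table):

```python
import struct

# RFC 2516 PPPoE discovery code values the decap node dispatches on.
PADS, PADT = 0x65, 0xA7

sessions = {}  # session id -> client MAC (stand-in for VPP's session table)

def handle_control_frame(frame: bytes, client_mac: bytes) -> None:
    """Sketch of the PADS/PADT state machine added to pppoe_decap_node."""
    # 6-byte PPPoE header: ver/type, code, session id, payload length
    _ver_type, code, session_id, _length = struct.unpack("!BBHH", frame[:6])
    if code == PADS:               # session confirmed: install data-plane state
        sessions[session_id] = client_mac
    elif code == PADT:             # session terminated: tear down state
        sessions.pop(session_id, None)

# A PADS for session 1 installs it; a PADT removes it again.
handle_control_frame(struct.pack("!BBHH", 0x11, PADS, 1, 0), b"\xaa\xbb\xcc\xdd\xee\xff")
print(sorted(sessions))            # [1]
handle_control_frame(struct.pack("!BBHH", 0x11, PADT, 1, 0), b"")
print(sorted(sessions))            # []
```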

2. Configuring VPP Sessions

Upon detecting a PADS frame, the plugin configures the data plane:

  • Session Creation: The code calls pppoe_session_create to create a session entry with the session ID, client MAC, and VPP interface. It sets up a TAP interface to bridge the session with the Linux control plane and configures the MTU to match the negotiated value.
  • Validation: A new validation routine in pppoe_decap.c checks that the session ID is unique and the MTU is within VPP’s supported range (1280–1500 bytes) to prevent errors.
  • Session Destruction: For PADT frames, the code invokes pppoe_session_delete to remove the session, freeing resources like session table entries and TAP interfaces.

These operations occur within VPP’s high-performance packet processing pipeline.
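The validation step can be summarized as follows. This is a hedged Python sketch of the checks, not the C routine itself; the 1280–1500 byte range is the one quoted above, and the function name is illustrative:

```python
VPP_MIN_MTU, VPP_MAX_MTU = 1280, 1500   # supported range quoted above

def session_params_valid(session_id: int, mtu: int, existing_ids: set) -> bool:
    """Reject duplicate session IDs and out-of-range MTUs before the
    session is created (illustrative sketch of the validation routine)."""
    if session_id in existing_ids:
        return False
    return VPP_MIN_MTU <= mtu <= VPP_MAX_MTU

print(session_params_valid(1, 1492, set()))   # True  -- fresh id, PPPoE-typical MTU
print(session_params_valid(1, 1492, {1}))     # False -- duplicate session id
print(session_params_valid(2, 1200, set()))   # False -- MTU below supported range
```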

3. ICMPv6 and DHCPv6 Support

To support IPv6 in PPPoE sessions, I extended the plugin to handle ICMPv6 and DHCPv6 packets:

  • ICMPv6 Handling: I modified pppoe_decap.c to process ICMPv6 packets (e.g., Neighbor Discovery Protocol messages) encapsulated in PPPoE sessions. The code inspects the protocol field in the PPP header (0x0057 for IPv6) and routes ICMPv6 packets to VPP’s IPv6 stack. This ensures proper handling of Router Advertisements and Neighbor Solicitations, critical for IPv6 connectivity.
  • DHCPv6 Handling: The plugin was updated to forward DHCPv6 packets (UDP port 546/547) to the Linux control plane via TAP interfaces. I added logic to pppoe_decap.c to identify DHCPv6 messages and prevent VPP from dropping them, enabling dynamic IPv6 address assignment for PPPoE clients.

These changes ensure that IPv6-enabled PPPoE sessions function correctly, supporting modern broadband requirements.
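The routing decision for IPv6 traffic inside a PPPoE session can be summarized as a small classifier. A Python sketch, not the actual C code; the return strings are illustrative:

```python
# PPP protocol 0x0057 marks IPv6 payloads (the value checked in
# pppoe_decap.c); within those, DHCPv6 uses UDP ports 546/547 and is
# punted to the control plane over the TAP interface, while everything
# else (ICMPv6 Neighbor Discovery included) goes to VPP's IPv6 stack.
PPP_PROTO_IPV6 = 0x0057
DHCPV6_PORTS = {546, 547}

def classify(ppp_proto: int, is_udp: bool, dport: int) -> str:
    if ppp_proto != PPP_PROTO_IPV6:
        return "non-ipv6"
    if is_udp and dport in DHCPV6_PORTS:
        return "punt-to-control-plane"   # DHCPv6 -> TAP -> accel-ppp/pppd
    return "vpp-ipv6-stack"              # ICMPv6 ND, regular IPv6 traffic

print(classify(0x0057, True, 547))   # punt-to-control-plane
print(classify(0x0057, False, 0))    # vpp-ipv6-stack
```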

4. Testing and Deployment

The automation was tested in a lab environment detailed in my test lab repository. The setup used Linux network namespaces (client, server, and vpp) on an x86–64 system to simulate an ISP-like network, orchestrated via a screenrc configuration for interactive testing:

  • Client Namespace: Configured to initiate PPPoE sessions using pppd, sending PADI, PADR, and PADT frames to test session creation and termination. It requested IPv6 addresses via DHCPv6 and processed Router Advertisements for ICMPv6-based connectivity, with interfaces like eth0.10 connected to the server namespace.
  • Server Namespace: Ran the accel-ppp daemon to handle the PPPoE control plane, processing PADI, PADO, PADR, PADS, and PADT frames. It was configured with IPv6 pools for DHCPv6 Prefix Delegation (e.g., 2001:db8:8003::/48 with a /56 delegation prefix) and supported local authentication, using interfaces like eth0.10 for client connections and eth0.20 for VPP communication.
  • VPP Namespace: Hosted the modified VPP PPPoE plugin with my automation changes. VPP handled the data plane, processing PPPoE-encapsulated traffic and forwarding ICMPv6 and DHCPv6 packets via TAP interfaces (e.g., tap0) to the accel-ppp control plane in the server namespace. The namespace used DPDK-compatible virtual interfaces (veth pairs) to meet VPP’s performance requirements.

The lab used veth pairs to connect namespaces, with eth0.10 linking the client and server, and eth0.20 bridging the server and VPP namespaces. The screenrc configuration launched three screen sessions (client, server, and vpp) to monitor pppd, accel-ppp, and VPP processes interactively. Testing involved:

  • Simulating multiple PPPoE clients in the client namespace to verify automated session creation (PADS) and termination (PADT).
  • Validating IPv6 connectivity by sending ICMPv6 Neighbor Solicitations and Router Advertisements, ensuring VPP’s IPv6 stack processed them correctly.
  • Confirming DHCPv6 Prefix Delegation by assigning /56 prefixes to clients from the configured pool, verified via session inspection commands in the server namespace.

The setup ran on a host system with an SSE4.2-compatible CPU, meeting VPP’s minimum requirements. The lab successfully demonstrated the automation’s reliability and IPv6 support, with VPP achieving line-rate performance for PPPoE traffic.



Learning VPP: Bypassing IKEv2 NAT-T using IPsec policy


Overview

The goal is to configure IPsec policies so that IKEv2 NAT-T control traffic is bypassed and delivered to the ikev2 nodes for processing.

NAT-T (NAT traversal) is an IKE feature that encapsulates IKE frames in a UDP header on well-known port 4500, the same port used by the ESP-in-UDP encapsulation proposed in RFC 3948.

[Figure: ESP-in-UDP encapsulation]

To differentiate between ESP and IKE traffic, a so-called non-ESP marker is used: a 32-bit all-zero field inserted just after the UDP header.

[Figure: non-ESP marker placement]

Setup

  • Ubuntu 22.04.2 LTS
  • VPP v23.02

Policies configuration

The first idea is to bypass ESP and UDP traffic.

ipsec policy add spd 1 priority 100 inbound action bypass protocol 50
ipsec policy add spd 1 priority 100 outbound action bypass protocol 50
ipsec policy add spd 1 priority 200 inbound action bypass protocol 17
ipsec policy add spd 1 priority 200 outbound action bypass protocol 17

But there is a problem with the non-ESP marker. IKE_SA_INIT frames are bypassed by the ipsec4-input-feature node as plain UDP frames, but IKE_AUTH frames are treated as ESP frames and rejected as malformed, because the non-ESP marker occupies the position of the SPI. This happens in the following place in the code.

/* As per RFC 3948, in the UDP-encapsulated header the UDP checksum must be
 * zero, and receivers must not depend upon the UDP checksum.
 * Inside the ESP header, the SPI value MUST NOT be zero.
 */
if (udp0->checksum == 0)
  {
    esp0 = (esp_header_t *) ((u8 *) udp0 + sizeof (udp_header_t));
    ...

To handle the situation properly, I had to insert the following code, which skips over the non-ESP marker before the payload is interpreted as ESP.

if (src_port == UDP_DST_PORT_ipsec ||
    dst_port == UDP_DST_PORT_ipsec)
  {
    esp0 = (esp_header_t *) ((u8 *) esp0 +
			     sizeof (ikev2_non_esp_marker));
  }
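The underlying disambiguation rule is simple enough to state as a standalone sketch. This hypothetical Python fragment (not VPP code) shows how a receiver tells IKE from ESP on port 4500 per RFC 3948: an all-zero first word is the non-ESP marker, while a non-zero first word is an ESP SPI, which must never be zero:

```python
# On UDP port 4500, a 32-bit all-zero word right after the UDP header is
# the non-ESP marker (-> IKE); a non-zero first word is an ESP SPI.
NON_ESP_MARKER = b"\x00\x00\x00\x00"

def classify_udp4500_payload(payload: bytes) -> str:
    if payload[:4] == NON_ESP_MARKER:
        return "ike"     # strip the marker before handing off to IKE processing
    return "esp"         # first 4 bytes are the ESP SPI (MUST be non-zero)

print(classify_udp4500_payload(NON_ESP_MARKER + b"ike payload"))   # ike
print(classify_udp4500_payload(b"\x00\x00\x03\xe8" + b"esp..."))   # esp
```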
 


Learning VPP: IPsec policy in tunnel mode


Overview

The idea is to test IPsec policies in tunnel mode. The original packet is prepended with a tunnel (outer) IP header and an ESP header; the outer IP header carries 50, the ESP protocol number, in its protocol field.

[Figure: ESP tunnel-mode packet layout]
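To make the layout concrete, here is a small Python sketch (standard library only, illustrative rather than VPP code) that builds such an outer IPv4 header with protocol 50 and a valid header checksum:

```python
import struct

def ipv4_checksum(hdr: bytes) -> int:
    """Standard ones'-complement IPv4 header checksum over 16-bit words."""
    s = sum(struct.unpack("!10H", hdr))
    while s >> 16:
        s = (s & 0xFFFF) + (s >> 16)
    return ~s & 0xFFFF

def outer_ipv4_header(src: bytes, dst: bytes, payload_len: int) -> bytes:
    """Tunnel mode: a new outer IPv4 header with protocol 50 (ESP) is
    prepended in front of the ESP header + original packet."""
    hdr = struct.pack("!BBHHHBBH4s4s",
                      0x45, 0, 20 + payload_len,   # ver/ihl, tos, total length
                      0, 0,                        # id, flags/fragment offset
                      64, 50, 0,                   # ttl, protocol=ESP, csum=0
                      src, dst)
    csum = ipv4_checksum(hdr)
    return hdr[:10] + struct.pack("!H", csum) + hdr[12:]

hdr = outer_ipv4_header(bytes([10, 0, 0, 2]), bytes([10, 1, 0, 2]), 100)
print(hdr[9])                # 50 -- protocol field carries ESP
print(ipv4_checksum(hdr))    # 0  -- checksum verifies over the full header
```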

Setup

  • Ubuntu 22.04.2 LTS
  • VPP v23.02
  • 3 network namespaces:
    • vpp_wan with WAN interfaces of both VPPs
    • vpp1 and vpp2 with LAN interfaces of each VPP respectively

[Figure: IPsec policy VPP test setup]

First VPP configuration

The important lines to note below are the policies bypassing ESP-encrypted traffic (protocol 50).

create tap id 0 host-ns vpp-wan host-ip4-addr 10.0.0.1/24 host-if-name vpp1_wan
set interface state tap0 up
set interface ip address tap0 10.0.0.2/24
ip route add 0.0.0.0/0 via 10.0.0.1

ipsec sa add 10 spi 1000 esp crypto-key 4a506a794f574265564551694d653768 crypto-alg aes-cbc-128 integ-key 4a506a794f574265564551694d653768 tunnel src 10.0.0.2 dst 10.1.0.2
ipsec sa add 20 spi 1001 esp crypto-key 4a506a794f574265564551694d653768 crypto-alg aes-cbc-128 integ-key 4a506a794f574265564551694d653768 tunnel src 10.1.0.2 dst 10.0.0.2
ipsec spd add 1

set interface ipsec spd tap0 1
ipsec policy add spd 1 priority 100 inbound action bypass protocol 50
ipsec policy add spd 1 priority 100 outbound action bypass protocol 50
ipsec policy add spd 1 priority 10 outbound action protect sa 10 local-ip-range 100.1.0.0 - 100.1.255.255 remote-ip-range 100.2.0.0 - 100.2.255.255
ipsec policy add spd 1 priority 10 inbound action protect sa 20 local-ip-range 100.1.0.0 - 100.1.255.255 remote-ip-range 100.2.0.0 - 100.2.255.255

trace add virtio-input 1000

create tap id 1 host-ns vpp1 host-ip4-addr 100.1.0.2/24 host-ip4-gw 100.1.0.1
set interface state tap1 up
set interface ip address tap1 100.1.0.1/24

Second VPP configuration

create tap id 0 host-ns vpp-wan host-ip4-addr 10.1.0.1/24 host-if-name vpp2_wan
set interface state tap0 up
set interface ip address tap0 10.1.0.2/24
ip route add 0.0.0.0/0 via 10.1.0.1

ipsec sa add 10 spi 1001 esp crypto-key 4a506a794f574265564551694d653768 crypto-alg aes-cbc-128 integ-key 4a506a794f574265564551694d653768 tunnel src 10.1.0.2 dst 10.0.0.2
ipsec sa add 20 spi 1000 esp crypto-key 4a506a794f574265564551694d653768 crypto-alg aes-cbc-128 integ-key 4a506a794f574265564551694d653768 tunnel src 10.0.0.2 dst 10.1.0.2
ipsec spd add 1

set interface ipsec spd tap0 1
ipsec policy add spd 1 priority 100 inbound action bypass protocol 50
ipsec policy add spd 1 priority 100 outbound action bypass protocol 50
ipsec policy add spd 1 priority 10 outbound action protect sa 10 local-ip-range 100.2.0.0 - 100.2.255.255 remote-ip-range 100.1.0.0 - 100.1.255.255
ipsec policy add spd 1 priority 10 inbound action protect sa 20 local-ip-range 100.2.0.0 - 100.2.255.255 remote-ip-range 100.1.0.0 - 100.1.255.255

trace add virtio-input 1000

create tap id 1 host-ns vpp2 host-ip4-addr 100.2.0.2/24 host-ip4-gw 100.2.0.1
set interface state tap1 up
set interface ip address tap1 100.2.0.1/24

screenrc file

hardstatus alwayslastline '%{kb}[ %H ] [%= %-w %{r}(%{k}%n %t%{r})%{k} %+w %= ] [%d.%m %c]'
altscreen on
startup_message off
term screen-256color
defscrollback 5000

# 1st dplane
screen -t vpp1 0 bash
stuff "sleep 1^M"
stuff "ip netns exec vpp1 bash^M"
stuff "gdb --ex=run --args $VPP_BIN_PATH/vpp unix { nodaemon interactive startup-config `pwd`/vpp1.conf } api-segment { prefix vpp1 } socksvr { socket-name /run/vpp/api_1.sock } plugins { plugin linux_cp_plugin.so { enable } plugin snort_plugin.so { disable } } statseg { socket-name /run/vpp/stats_1.sock }^M"

# 2nd dplane
screen -t vpp2 1 bash
stuff "sleep 1^M"
stuff "ip netns exec vpp2 bash^M"
stuff "gdb --ex=run --args $VPP_BIN_PATH/vpp unix { nodaemon interactive startup-config `pwd`/vpp2.conf } api-segment { prefix vpp2 } socksvr { socket-name /run/vpp/api_2.sock } plugins { plugin linux_cp_plugin.so { enable } plugin snort_plugin.so { disable } } statseg { socket-name /run/vpp/stats_2.sock }^M"

screen -t vpp1_host 4 bash
stuff "ip netns del vpp1^M"
stuff "ip netns add vpp1^M"
stuff "ip netns exec vpp1 bash^M"
stuff "sleep 20^M"
stuff "ping 100.2.0.2^M"

screen -t vpp2_host 5 bash
stuff "ip netns del vpp2^M"
stuff "ip netns add vpp2^M"
stuff "ip netns exec vpp2 bash^M"

screen -t vpp_wan 3 bash
stuff "ip netns del vpp-wan^M"
stuff "ip netns add vpp-wan^M"
stuff "ip netns exec vpp-wan bash^M"


Learning VPP: IPsec policy in transport mode


Overview

The idea is to test IPsec policies in transport mode. The original packet is extended with an ESP header, and the protocol field in the original IP header is changed to 50, the ESP protocol number.

[Figure: ESP transport-mode packet layout]
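The difference from tunnel mode can be sketched the same way: instead of prepending a new header, the original IP header is patched in place. A hypothetical Python illustration (not VPP code), which rewrites the protocol field to 50 and recomputes the header checksum:

```python
import struct

def ipv4_checksum(hdr: bytes) -> int:
    """Standard ones'-complement IPv4 header checksum over 16-bit words."""
    s = sum(struct.unpack("!%dH" % (len(hdr) // 2), hdr))
    while s >> 16:
        s = (s & 0xFFFF) + (s >> 16)
    return ~s & 0xFFFF

def to_transport_esp(ip_header: bytes) -> bytes:
    """Transport mode keeps the original IP header and only rewrites its
    protocol field to 50 (ESP), then fixes up the header checksum."""
    patched = ip_header[:9] + bytes([50]) + b"\x00\x00" + ip_header[12:]
    csum = ipv4_checksum(patched)
    return patched[:10] + struct.pack("!H", csum) + patched[12:]

# Original TCP (protocol 6) header from 100.1.0.2 to 100.2.0.2.
base = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                   bytes([100, 1, 0, 2]), bytes([100, 2, 0, 2]))
base = base[:10] + struct.pack("!H", ipv4_checksum(base)) + base[12:]

esp_hdr = to_transport_esp(base)
print(esp_hdr[9])                # 50 -- protocol now marks ESP
print(ipv4_checksum(esp_hdr))    # 0  -- checksum still verifies
```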

Setup

  • Ubuntu 22.04.2 LTS
  • VPP v23.02
  • 3 network namespaces:
    • vpp_wan with WAN interfaces of both VPPs
    • vpp1 and vpp2 with LAN interfaces of each VPP respectively

[Figure: IPsec policy VPP test setup]

First VPP configuration

create tap id 0 host-ns vpp-wan host-ip4-addr 10.0.0.1/24 host-if-name vpp1_wan
set interface state tap0 up
set interface ip address tap0 10.0.0.2/24
ip route add 0.0.0.0/0 via 10.0.0.1

ipsec sa add 10 spi 1000 esp crypto-key 4a506a794f574265564551694d653768 crypto-alg aes-cbc-128 integ-key 4a506a794f574265564551694d653768
ipsec sa add 20 spi 1001 esp crypto-key 4a506a794f574265564551694d653768 crypto-alg aes-cbc-128 integ-key 4a506a794f574265564551694d653768
ipsec spd add 1

set interface ipsec spd tap0 1
ipsec policy add spd 1 priority 10 outbound action protect sa 10 local-ip-range 0.0.0.0 - 255.255.255.255 remote-ip-range 0.0.0.0 - 255.255.255.255
ipsec policy add spd 1 priority 10 inbound action protect sa 20 local-ip-range 0.0.0.0 - 255.255.255.255 remote-ip-range 0.0.0.0 - 255.255.255.255

trace add virtio-input 1000

create tap id 1 host-ns vpp1 host-ip4-addr 100.1.0.2/24 host-ip4-gw 100.1.0.1
set interface state tap1 up
set interface ip address tap1 100.1.0.1/24

Second VPP configuration

create tap id 0 host-ns vpp-wan host-ip4-addr 10.1.0.1/24 host-if-name vpp2_wan
set interface state tap0 up
set interface ip address tap0 10.1.0.2/24
ip route add 0.0.0.0/0 via 10.1.0.1

ipsec sa add 10 spi 1001 esp crypto-key 4a506a794f574265564551694d653768 crypto-alg aes-cbc-128 integ-key 4a506a794f574265564551694d653768
ipsec sa add 20 spi 1000 esp crypto-key 4a506a794f574265564551694d653768 crypto-alg aes-cbc-128 integ-key 4a506a794f574265564551694d653768
ipsec spd add 1

set interface ipsec spd tap0 1
ipsec policy add spd 1 priority 10 outbound action protect sa 10 local-ip-range 0.0.0.0 - 255.255.255.255 remote-ip-range 0.0.0.0 - 255.255.255.255
ipsec policy add spd 1 priority 10 inbound action protect sa 20 local-ip-range 0.0.0.0 - 255.255.255.255 remote-ip-range 0.0.0.0 - 255.255.255.255

trace add virtio-input 1000

create tap id 1 host-ns vpp2 host-ip4-addr 100.2.0.2/24 host-ip4-gw 100.2.0.1
set interface state tap1 up
set interface ip address tap1 100.2.0.1/24

screenrc file

hardstatus alwayslastline '%{kb}[ %H ] [%= %-w %{r}(%{k}%n %t%{r})%{k} %+w %= ] [%d.%m %c]'
altscreen on
startup_message off
term screen-256color
defscrollback 5000

# 1st dplane
screen -t vpp1 0 bash
stuff "sleep 1^M"
stuff "ip netns exec vpp1 bash^M"
stuff "gdb --ex=run --args $VPP_BIN_PATH/vpp unix { nodaemon interactive startup-config `pwd`/vpp1.conf } api-segment { prefix vpp1 } socksvr { socket-name /run/vpp/api_1.sock } plugins { plugin linux_cp_plugin.so { enable } plugin snort_plugin.so { disable } } statseg { socket-name /run/vpp/stats_1.sock }^M"

# 2nd dplane
screen -t vpp2 1 bash
stuff "sleep 1^M"
stuff "ip netns exec vpp2 bash^M"
stuff "gdb --ex=run --args $VPP_BIN_PATH/vpp unix { nodaemon interactive startup-config `pwd`/vpp2.conf } api-segment { prefix vpp2 } socksvr { socket-name /run/vpp/api_2.sock } plugins { plugin linux_cp_plugin.so { enable } plugin snort_plugin.so { disable } } statseg { socket-name /run/vpp/stats_2.sock }^M"

screen -t vpp1_host 4 bash
stuff "ip netns del vpp1^M"
stuff "ip netns add vpp1^M"
stuff "ip netns exec vpp1 bash^M"
stuff "sleep 20^M"
stuff "ping 100.2.0.2^M"

screen -t vpp2_host 5 bash
stuff "ip netns del vpp2^M"
stuff "ip netns add vpp2^M"
stuff "ip netns exec vpp2 bash^M"

screen -t vpp_wan 3 bash
stuff "ip netns del vpp-wan^M"
stuff "ip netns add vpp-wan^M"
stuff "ip netns exec vpp-wan bash^M"
stuff "sleep 20^M"
stuff "ip r add 100.2.0.0/24 via 10.1.0.2^M"
stuff "ip r add 100.1.0.0/24 via 10.0.0.2^M"


Learning IKEv2: IPSEC traffic drops between Strongswan server and VPP


Overview

The goal is to troubleshoot why IPsec traffic is dropped even though the IKEv2 session is up.

Version

Strongswan 5.8.2

Get IPSEC counters

$ sudo cat /proc/net/xfrm_stat
XfrmInError 0
XfrmInBufferError 0
XfrmInHdrError 0
XfrmInNoStates 1
XfrmInStateProtoError 619
XfrmInStateModeError 0
XfrmInStateSeqError 0
XfrmInStateExpired 0
XfrmInStateMismatch 0
XfrmInStateInvalid 0
XfrmInTmplMismatch 0
XfrmInNoPols 0
XfrmInPolBlock 0
XfrmInPolError 0
XfrmOutError 0
XfrmOutBundleGenError 0
XfrmOutBundleCheckError 0
XfrmOutNoStates 0
XfrmOutStateProtoError 0
XfrmOutStateModeError 0
XfrmOutStateSeqError 0
XfrmOutStateExpired 0
XfrmOutPolBlock 0
XfrmOutPolDead 0
XfrmOutPolError 0
XfrmFwdHdrError 0
XfrmOutStateInvalid 0
XfrmAcquireError 1

The documentation states the following.

XfrmInStateProtoError: Transformation protocol specific error e.g. SA key is wrong
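A quick way to spot non-zero error counters like the XfrmInStateProtoError above is to parse /proc/net/xfrm_stat into a dict. A Python sketch, shown here on a captured excerpt rather than the live file:

```python
# Excerpt of the /proc/net/xfrm_stat output captured above.
sample = """XfrmInNoStates 1
XfrmInStateProtoError 619
XfrmInStateModeError 0"""

def parse_xfrm_stat(text: str) -> dict:
    """Each line is '<CounterName> <value>'; build a name -> int mapping."""
    return {name: int(val) for name, val in
            (line.split() for line in text.strip().splitlines())}

# Keep only the counters that are actually non-zero.
errors = {k: v for k, v in parse_xfrm_stat(sample).items() if v}
print(errors)   # {'XfrmInNoStates': 1, 'XfrmInStateProtoError': 619}
```

Reading the live file instead is a matter of `open("/proc/net/xfrm_stat").read()` on the host being debugged.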

Check IPSEC encryption keys

$ sudo ip -s xfrm state
src 192.168.0.102 dst 192.168.0.104
proto esp spi 0xb308b4fd(3003692285) reqid 1(0x00000001) mode tunnel
replay-window 0 seq 0x00000000 flag af-unspec (0x00100000)
mark 0x2a/0xffffffff
auth-trunc hmac(sha256) 0x51f437dcf2a07fac254acd6e3fdd067eee555a23104ba9dc539297da0658cdf1 (256 bits) 128
enc cbc(aes) 0x8a137175c8cc19fdb383b4f0cf087923cb938883932f899915c71fd19f5986d0 (256 bits)
encap type espinudp sport 4500 dport 57419 addr 0.0.0.0
anti-replay context: seq 0x0, oseq 0x0, bitmap 0x00000000
lifetime config:
limit: soft (INF)(bytes), hard (INF)(bytes)
limit: soft (INF)(packets), hard (INF)(packets)
expire add: soft 0(sec), hard 60(sec)
expire use: soft 0(sec), hard 0(sec)
lifetime current:
0(bytes), 0(packets)
add 2023-08-30 11:33:15 use -
stats:
replay-window 0 replay 0 failed 0
src 192.168.0.104 dst 192.168.0.102
proto esp spi 0xcec24725(3468838693) reqid 1(0x00000001) mode tunnel
replay-window 32 seq 0x00000000 flag af-unspec (0x00100000)
auth-trunc hmac(sha256) 0xf8fdca7324756a26be831afe6434dc3589d328d5e7180e7aeeab83c05aa110ab (256 bits) 128
enc cbc(aes) 0xf8c356a78401fafe5d4866f6dac0fdd821f2bce47a067ad0db992542af4ac198 (256 bits)
encap type espinudp sport 57419 dport 4500 addr 0.0.0.0
anti-replay context: seq 0x7, oseq 0x0, bitmap 0x0000007f
lifetime config:
limit: soft (INF)(bytes), hard (INF)(bytes)
limit: soft (INF)(packets), hard (INF)(packets)
expire add: soft 0(sec), hard 60(sec)
expire use: soft 0(sec), hard 0(sec)
lifetime current:
568(bytes), 7(packets)
add 2023-08-30 11:33:15 use 2023-08-30 11:33:16
stats:
replay-window 0 replay 0 failed 0

Child SA keys in the log

Sep 5 09:37:57 host4 charon: 16[CHD] encryption initiator key => 16 bytes @ 0x7fd110004c60
Sep 5 09:37:57 host4 charon: 16[CHD] 0: 0E 2A 96 1B F8 99 E1 34 C0 DF B8 5B C9 E7 3A 2C .*.....4...[..:,
Sep 5 09:37:57 host4 charon: 16[CHD] encryption responder key => 16 bytes @ 0x7fd1100045f0
Sep 5 09:37:57 host4 charon: 16[CHD] 0: C6 D9 3D AD 33 77 75 A0 D9 05 D4 DC 2E 3B 34 98 ..=.3wu......;4.
Sep 5 09:37:57 host4 charon: 16[CHD] integrity initiator key => 32 bytes @ 0x7fd110002330
Sep 5 09:37:57 host4 charon: 16[CHD] 0: D3 7D 83 74 CB 10 08 1B FA 66 57 56 4E 58 32 D1 .}.t.....fWVNX2.
Sep 5 09:37:57 host4 charon: 16[CHD] 16: 11 80 7D 57 3B 8D D6 C2 3E A9 34 70 E7 68 6F 5F ..}W;...>.4p.ho_
Sep 5 09:37:57 host4 charon: 16[CHD] integrity responder key => 32 bytes @ 0x7fd110000c40
Sep 5 09:37:57 host4 charon: 16[CHD] 0: 51 4C 5C 32 D9 50 E2 3A ED 85 69 37 1F 03 E6 4E QL\2.P.:..i7...N
Sep 5 09:37:57 host4 charon: 16[CHD] 16: 41 7D CB 8A 08 03 7A 00 12 B7 09 31 5F FE 2F 7B A}....z....1_./{

To see these messages in the syslog, ipsec.conf has to contain the following setting.

charondebug="ike 4, knl 1, cfg 1, net 1, esp 1, dmn 1, mgr 1, chd 4"

Check IKEv2 in VPP

$ sudo vppctl show ikev2 sa details
iip 10.0.3.15 ispi 6a926ba949811247 rip 192.168.0.102 rspi 8c335bbc5793143e
state: AUTHENTICATED
pfs: on
encr:aes-cbc-256 prf:hmac-sha2-256 integ:hmac-sha2-256-128 dh-group:modp-2048
nonce i:b5defefbedfa95eeb3f889a81f4b482b92a3c54e82690601cccf6989b0c21977
r:363bc05408eb534774294dc48bd84afea1bf5dc3d2a52996741ad1b7bdda53cb
SK_d 28f1de31fbeaddda7130328a441ae95d8dbf252a5cf89fec5a29a0719cc505be
SK_a i:f17bfff19f04cc0e7b3994fe0ce1c4a9b6c8fac8af8695813d62d28289d0e8ff
r:d5979186ef859f4056b72de944f95a4777cf6f3f6132d61b3855d0e984ebd40d
SK_e i:3aec4ef17660d30b20a4e4b8016d52ba1177aba8a02adfd1345bb508bbb2ab5b
r:34487309d1887309baa707c600338ee19da6336124ca457affabdbddfb1e7e4b
SK_p i:fa2f27bab6708237726211d0cef1e03f099b65551d85646cbd79e8430b9b0ffc
r:debd8be5497fbbdad885883086c2011db312e257860b3feceedd744ff7375e0b
identifier (i) id-type fqdn data 8475186311
identifier (r) id-type fqdn data 8475186311
child sa 0:encr:aes-cbc-256 integ:hmac-sha2-256-128 esn:no
spi(i) b308b4fd spi(r) cec24725
SK_e i:f8c356a78401fafe5d4866f6dac0fdd821f2bce47a067ad0db992542af4ac198
r:8a137175c8cc19fdb383b4f0cf087923cb938883932f899915c71fd19f5986d0
SK_a i:f8fdca7324756a26be831afe6434dc3589d328d5e7180e7aeeab83c05aa110ab
r:51f437dcf2a07fac254acd6e3fdd067eee555a23104ba9dc539297da0658cdf1
traffic selectors (i):0 type 7 protocol_id 0 addr 0.0.0.0 - 255.255.255.255 port 0 - 65535
traffic selectors (r):0 type 7 protocol_id 0 addr 0.0.0.0 - 255.255.255.255 port 0 - 65535

Make sure that SK_e and SK_a for a child are the same as on the Strongswan end.
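A quick way to do that comparison is to normalize the hex strings. This Python sketch uses the key values quoted in the two outputs above; note that each VPP child key must appear on exactly one direction's kernel SA:

```python
# Keys copied from the outputs above: VPP's child SA (show ikev2 sa details)
# and the kernel states (ip -s xfrm state).
vpp_sk_e_i = "f8c356a78401fafe5d4866f6dac0fdd821f2bce47a067ad0db992542af4ac198"
vpp_sk_e_r = "8a137175c8cc19fdb383b4f0cf087923cb938883932f899915c71fd19f5986d0"
xfrm_enc_102_to_104 = "0x8a137175c8cc19fdb383b4f0cf087923cb938883932f899915c71fd19f5986d0"
xfrm_enc_104_to_102 = "0xf8c356a78401fafe5d4866f6dac0fdd821f2bce47a067ad0db992542af4ac198"

def same_key(vpp_hex: str, xfrm_hex: str) -> bool:
    """Compare a VPP hex key with a kernel one (the kernel prints a 0x prefix)."""
    x = xfrm_hex.lower()
    return vpp_hex.lower() == (x[2:] if x.startswith("0x") else x)

print(same_key(vpp_sk_e_i, xfrm_enc_104_to_102))   # True
print(same_key(vpp_sk_e_r, xfrm_enc_102_to_104))   # True
print(same_key(vpp_sk_e_i, xfrm_enc_102_to_104))   # False -- wrong direction
```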

Check IPSEC SA in VPP

Make sure that encryption keys are correct for both inbound and outbound directions.

vppctl# show ipsec sa 1
[1] sa 3221227520 (0xc0000800) spi 128056921 (0x07a1fe59) protocol:esp flags:[anti-replay udp-encap inbound ]
   locks 1
   salt 0x0
   thread-indices [encrypt:0 decrypt:0]
   seq 0 seq-hi 0
   last-seq 0 last-seq-hi 0
   anti-replay-window
   crypto alg aes-cbc-128 key adcbc1afffb70a4f2efff6a91d7fc51a
   integrity alg sha-256-128 key 62ad905890b5000240faacde55909e49fc7cee7570e5f017090e5c28b067bf06
   UDP:[src:4500 dst:4500]
   packets 0 bytes 0

References

https://www.kernel.org/doc/Documentation/networking/xfrm_proc.txt



Learning IKEv2: Debug Strongswan


Overview

The goal is to debug Strongswan server with GDB.

Version

Strongswan 5.8.2

Download

wget https://download.strongswan.org/strongswan-5.8.2.tar.bz2
tar -xf strongswan-5.8.2.tar.bz2
cd strongswan-5.8.2

Prerequisites

sudo apt install gdb build-essential libgmp-dev libssl-dev

Cleanup previous installation

sudo apt remove strongswan*
sudo rm -rf /usr/lib/ipsec/*

Build and install

./configure --prefix=/usr --sysconfdir=/etc --enable-save-keys
make
sudo make install

Start under debugger

sudo ipsec start --attach-gdb

References

https://docs.strongswan.org/docs/5.9/install/install.html



Learning IKEv2: Decrypt Strongswan IKE traffic


Overview

The goal is to decrypt IKE communication between the Strongswan server and the client.

Version

Strongswan 5.8.2

Client ipsec.conf

conn 8475186311
type=tunnel
compress=no
fragmentation=yes
ikelifetime=3h
keylife=1h
rekeymargin=540s
keyingtries=%forever
keyexchange=ikev2
authby=secret
mobike=no
ike=aes128-sha256-modp3072
esp=aes128-sha256
auto=start
left=%any
leftsubnet=0.0.0.0/0
leftauth=psk
leftid=@8475186311
rightsubnet=0.0.0.0/0
rightauth=psk
right=192.168.31.228
rightid=@8475186311
dpdaction=restart
dpddelay=10s
dpdtimeout=30s

Client /etc/ipsec.secrets

'@8475186311' : PSK "abc243f5621c1a95997997c8cd597956"

Server ipsec.conf

The config setup section supports the charondebug keyword, which takes a comma-separated list of subsystem/log-level pairs. Setting it as below enables debug logs that contain the data needed for decryption.

conn 8475186311
type=tunnel
compress=no
fragmentation=yes
ikelifetime=0
keylife=60s
rekeymargin=540s
keyingtries=%forever
auto=add
keyexchange=ikev2
authby=secret
mobike=no
ike=aes128-sha256-modp3072
esp=aes128-sha256
left=%any
leftsubnet=0.0.0.0/0
leftauth=psk
leftid=@8475186311
right=%any
rightsubnet=0.0.0.0/0
rightauth=psk
rightid=@8475186311
leftupdown=/etc/strongswan.d/ipsec-vti.sh
config setup
uniqueids=never
charondebug="ike 4, knl 1, cfg 1, net 1, esp 1, dmn 1, mgr 1, chd 4"

Server /etc/ipsec.secrets

'@8475186311' : PSK "abc243f5621c1a95997997c8cd597956"

Retrieve SPI from status

$ sudo ipsec statusall

Status of IKE charon daemon (strongSwan 5.8.2, Linux 5.4.0-156-generic, x86_64):

  uptime: 8 minutes, since Aug 28 10:05:15 2023

  malloc: sbrk 3256320, mmap 0, used 982768, free 2273552

  worker threads: 11 of 16 idle, 5/0/0/0 working, job queue: 0/0/0/0, scheduled: 2

  loaded plugins: charon aesni aes rc2 sha2 sha1 md5 mgf1 random nonce x509 revocation constraints pubkey pkcs1 pkcs7 pkcs8 pkcs12 pgp dnskey sshkey pem openssl fips-prf gmp agent xcbc hmac gcm drbg attr kernel-netlink resolve socket-default connmark farp stroke vici updown eap-identity eap-aka eap-md5 eap-gtc eap-mschapv2 eap-dynamic eap-radius eap-tls eap-ttls eap-peap eap-tnc xauth-generic xauth-eap xauth-pam tnc-tnccs dhcp lookip error-notify certexpire led addrblock unity counters

Listening IP addresses:

  192.168.31.228

  10.10.10.111

Connections:

  8475186311:  %any...%any  IKEv2

  8475186311:   local:  [8475186311] uses pre-shared key authentication

  8475186311:   remote: [8475186311] uses pre-shared key authentication

  8475186311:   child:  0.0.0.0/0 === 0.0.0.0/0 TUNNEL

Security Associations (1 up, 0 connecting):

  8475186311[2]: ESTABLISHED 4 minutes ago, 192.168.31.228[8475186311]...192.168.31.119[8475186311]

  8475186311[2]: IKEv2 SPIs: 9087fdb3979c0180_i 659d7950e17d669a_r*, pre-shared key reauthentication in 49710 days

  8475186311[2]: IKE proposal: AES_GCM_16_256/PRF_HMAC_SHA2_384/ECP_384

  8475186311{5}:  INSTALLED, TUNNEL, reqid 5, ESP in UDP SPIs: cb1a71fb_i c9d37cde_o

  8475186311{5}:  AES_CBC_256/HMAC_SHA2_256_128, 0 bytes_i, 0 bytes_o, rekeying disabled

  8475186311{5}:   0.0.0.0/0 === 0.0.0.0/0

Retrieve SKx from syslog

Aug 28 10:09:35 host4 ipsec[711]: 06[IKE] Sk_ei secret => 36 bytes @ 0x55de208c7020

Aug 28 10:09:35 host4 ipsec[711]: 06[IKE]    0: 3E 71 B8 93 CE B8 16 F7 6D 9D 73 2F 91 1A 9C EA  >q......m.s/....

Aug 28 10:09:35 host4 ipsec[711]: 06[IKE]   16: 2B 4C 76 D8 54 95 04 0E 0A D4 07 35 94 1A 76 1C  +Lv.T......5..v.

Aug 28 10:09:35 host4 ipsec[711]: 06[IKE]   32: CE 08 1D 27          

Aug 28 10:09:35 host4 ipsec[711]: 06[IKE] Sk_er secret => 36 bytes @ 0x55de208c7830

Aug 28 10:09:35 host4 ipsec[711]: 06[IKE]    0: 73 12 F9 5F 3A 03 22 80 BB 00 06 2E FD D5 B8 5E  s.._:."........^

Aug 28 10:09:35 host4 ipsec[711]: 06[IKE]   16: 11 D9 15 65 7D AD A3 7A E3 6B B0 53 95 67 29 69  ...e}..z.k.S.g)i

Aug 28 10:09:35 host4 ipsec[711]: 06[IKE]   32: 67 96 58 D3   

SK_ei: 3E71B893CEB816F76D9D732F911A9CEA2B4C76D85495040E0AD40735941A761CCE081D27

SK_er: 7312F95F3A032280BB00062EFDD5B85E11D915657DADA37AE36BB05395672969679658D3
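Pulling those bytes out of the log by hand is error-prone; a small Python sketch written for exactly this log shape (syslog prefix stripped) can reassemble the key:

```python
import re

# Simplified excerpt of the charon hexdump quoted above.
log = """\
06[IKE] Sk_ei secret => 36 bytes @ 0x55de208c7020
06[IKE]    0: 3E 71 B8 93 CE B8 16 F7 6D 9D 73 2F 91 1A 9C EA  >q......m.s/....
06[IKE]   16: 2B 4C 76 D8 54 95 04 0E 0A D4 07 35 94 1A 76 1C  +Lv.T......5..v.
06[IKE]   32: CE 08 1D 27
"""

def extract_key(log: str) -> str:
    """Concatenate the hex bytes from charon's offset-prefixed dump lines,
    stopping at the trailing ASCII column (parser for this log shape only)."""
    key = []
    for line in log.splitlines():
        m = re.search(r"\b\d+:\s+(.*)", line)
        if not m:
            continue
        for tok in m.group(1).split():
            if re.fullmatch(r"[0-9A-F]{2}", tok):
                key.append(tok)
            else:
                break               # reached the ASCII rendering
    return "".join(key)

print(extract_key(log))
# 3E71B893CEB816F76D9D732F911A9CEA2B4C76D85495040E0AD40735941A761CCE081D27
```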

Capture traffic

sudo tcpdump -i enp0s3 udp port 500 or port 4500 -w isakmp.pcap

Configure Wireshark to use IKEv2 table

[Screenshot: configuring Wireshark’s IKEv2 decryption table]

Check decrypted data

[Screenshot: decrypted IKE payloads in Wireshark]

References

https://www.geeksforgeeks.org/ikev2-decryption-table-in-wireshark/



Learning VPP: Building DPDK with debug symbols


Overview

The goal is to build DPDK, which VPP consumes as an external package, with debug symbols so that it is possible to debug inside the DPDK source code.

Version

VPP version is 23.02

Configuration

We need to modify two files:

  • build/external/deb/debian/rules
  • build/external/packages/dpdk.mk

dpdk.mk

The following flag has to be enabled.

DPDK_DEBUG ?= y

rules

The following lines have to be added.

override_dh_strip:
	dh_strip --exclude=librte

Rebuild

sudo dpkg -r vpp-ext-deps
make install-ext-dep
make rebuild

gdb

When running gdb we need to specify the path to DPDK sources.

set substitute-path '../src-dpdk/' '/home/projects/vpp/build/external/downloads/dpdk-22.07'



Learning VPP: Two instances in different namespaces


Overview

The goal of this article is to describe how to run two VPP instances on one machine, each in a dedicated network namespace. Communication between the namespaces goes through the linux-cp plugin.

[Figure: two VPP instances in separate namespaces]

Configuration

This section describes configuration files and shell commands to set up the environment.

Namespaces

First, we need to create two network namespaces.

ip netns add vpp1-ns
ip netns add vpp2-ns

VPP1

Then, we need to start the first VPP instance and create a shared-memory packet interface (memif).

ip netns exec vpp1-ns /usr/bin/vpp -c `pwd`/vpp1_startup.conf
ip netns exec vpp1-ns ip link set dev memif0 up
ip netns exec vpp1-ns ip addr add 10.0.0.1/24 dev memif0
ip netns exec vpp1-ns ping 10.0.0.2

CLI

CLI can be accessed using the following command.

vppctl -s /run/vpp/cli-vpp1.sock

vpp1_startup.conf

unix {
log ./vpp1.log
full-coredump
cli-listen /run/vpp/cli-vpp1.sock
gid vpp
startup-config /home/denys/vpp1.conf
}

api-segment {
prefix vpp1
}

socksvr {
socket-name /run/vpp/api_1.sock
}

dpdk {
blacklist 8086:100f
}

plugins {
plugin linux_cp_plugin.so { enable }
plugin linux_nl_plugin.so { enable }
}

logging {
default-log-level debug
default-syslog-log-level info
}

vpp1.conf

lcp default netns vpp1-ns
create interface memif id 0 master
set interface state memif0/0 up
lcp create 1 host-if memif0

VPP2

Then, we need to start a second VPP and create a shared memory packet interface.

ip netns exec vpp2-ns /usr/bin/vpp -c `pwd`/vpp2_startup.conf
ip netns exec vpp2-ns ip link set dev memif0 up
ip netns exec vpp2-ns ip addr add 10.0.0.2/24 dev memif0
ip netns exec vpp2-ns ping 10.0.0.1

CLI

The CLI can be accessed using the following command.

vppctl -s /run/vpp/cli-vpp2.sock

vpp2_startup.conf

unix {
log ./vpp2.log
full-coredump
cli-listen /run/vpp/cli-vpp2.sock
gid vpp
startup-config /home/denys/vpp2.conf
}

api-segment {
prefix vpp2
}

socksvr {
socket-name /run/vpp/api_2.sock
}

dpdk {
blacklist 8086:100f
}

plugins {
plugin linux_cp_plugin.so { enable }
plugin linux_nl_plugin.so { enable }
}

logging {
default-log-level debug
default-syslog-log-level info
}

vpp2.conf

lcp default netns vpp2-ns
create interface memif id 0 slave
set interface state memif0/0 up
lcp create 1 host-if memif0


Learning VPP: Configuring IPsec IKEv2 on Ubuntu 18.04 VMs

✅ Updated January 2026 — This guide has been reviewed and updated for the latest DPDK/VPP versions.


Overview

Internet Key Exchange (IKE) is the protocol used to set up IPsec connections using certificates.

Setup

Two Ubuntu 18.04 VMs with VPP 20.05.

VPP IKEv2

Prerequisites

First, we need to generate private keys and certificates and place them accordingly. To do that, we need to install the strongswan and strongswan-pki packages. After that, we run the following commands.

ipsec pki --gen  > server-key.der
ipsec pki --self --in server-key.der --dn "CN=vpp.home" > server-cert.der
openssl x509 -inform DER -in server-cert.der -out server-cert.pem
openssl rsa -inform DER -in server-key.der -out server-key.pem

ipsec pki --gen  > client-key.der
ipsec pki --self --in client-key.der --dn "CN=roadwarrior.vpn.example.com" > client-cert.der
openssl x509 -inform DER -in client-cert.der -out client-cert.pem
openssl rsa -inform DER -in client-key.der -out client-key.pem

VPP configuration

We need to configure the responder first.

Responder

ikev2 profile add pr1
ikev2 profile set pr1 auth rsa-sig cert-file client-cert.pem
set ikev2 local key server-key.pem
ikev2 profile set pr1 id local fqdn vpp.home
ikev2 profile set pr1 id remote fqdn roadwarrior.vpn.example.com
ikev2 profile set pr1 traffic-selector remote ip-range 0.0.0.0 - 255.255.255.255 port-range 0 - 65535 protocol 0
ikev2 profile set pr1 traffic-selector local ip-range 0.0.0.0 - 255.255.255.255 port-range 0 - 65535 protocol 0

Now we are ready to configure the initiator and start a connection.

Initiator

ikev2 profile add pr1
ikev2 profile set pr1 auth rsa-sig cert-file server-cert.pem
set ikev2 local key server1/client-key.pem
ikev2 profile set pr1 id local fqdn roadwarrior.vpn.example.com
ikev2 profile set pr1 id remote fqdn vpp.home
ikev2 profile set pr1 traffic-selector local ip-range 0.0.0.0 - 255.255.255.255 port-range 0 - 65535 protocol 0
ikev2 profile set pr1 traffic-selector remote ip-range 0.0.0.0 - 255.255.255.255 port-range 0 - 65535 protocol 0

ikev2 profile set pr1 responder GigabitEthernet0/3/0 192.168.0.123
ikev2 profile set pr1 ike-crypto-alg aes-cbc 256  ike-integ-alg sha1-96  ike-dh modp-2048
ikev2 profile set pr1 esp-crypto-alg aes-cbc 256  esp-integ-alg sha1-96  esp-dh ecp-256
ikev2 profile set pr1 sa-lifetime 3600 10 5 0

ikev2 initiate sa-init pr1
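As an aside, the sa-lifetime arguments are, to my understanding, lifetime in seconds, jitter, handover time, and maximum data in bytes. A minimal Python sketch of how jittered rekeying can be modeled (illustrative only, not VPP's actual implementation):

```python
import random

def next_rekey_delay(lifetime_s, jitter_s, rng=None):
    # Illustrative model: initiate rekeying up to jitter_s seconds
    # before the lifetime expires, so that both peers do not attempt
    # to rekey at exactly the same instant.
    rng = rng or random.Random()
    return lifetime_s - rng.uniform(0, jitter_s)

# With the values from the command above: sa-lifetime 3600 10 5 0
delay = next_rekey_delay(3600, 10)
assert 3590 <= delay <= 3600
```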

Results

IKEv2 SA state

DBGvpp# show ikev2 sa
iip 192.168.0.122 ispi 4c28e1c804fd1947 rip 192.168.0.123 rspi 399dc6c103195aaf
encr:aes-cbc-256 prf:hmac-sha2-256 integ:sha1-96 dh-group:modp-2048
 nonce i:3d3efa1c7e22b2d8a71cee9a25dc9865b7f9390cc5779951c853f54d3c43f8a4        r:a5d1349d0c3361f4b83928a4ed7d830c0bb30ce1c24eec0fad8f1246d5aa3d13
 SK_d    6801b1efb1b2b1af7716aa59110232e11f6ab14d21a5bbed78a5e3df780accfd
 SK_a  i:ef80c745b9b00687b790c8733ef1259051792d5a
        r:88197e468c0bb1547da1ba83a615fda8bddafe70
 SK_e  i:e03c29186cb043aab949345b4b082d52be0a55a917f0871055e8201b4a82bbe6
        r:47ce9e6ca78758d0ca55b49c95db412f41f4d82473f183276b09a4aeca4acabf
 SK_p  i:5e97db586a7e3f2f0532c8ecbd360cb9a8b9894bc1f7bcccb253878b299a3689
        r:a9346e5827ccf6927acaa5fff0d9cc4461649154f4e01ceed410cdbb1985a596
 identifier (i) fqdn roadwarrior.vpn.example.com
 identifier (r) fqdn vpp.home
 child sa 0:
   encr:aes-cbc-256 integ:sha1-96 esn:yes 
    spi(i) df7eeb0c spi(r) 244bc72d
   SK_e  i:d5cdc8129b666eb0ef40111d9a78c4d8b053e28b2d28846d421c47f27f00d9fd
         r:3ee2ab1bbfce8b0714d735e2e13a18d44d274c9a214b88ff9a7d47170f364f94
   SK_a  i:ba24a4dabc09eb1da586437e2d28841c67043d33
         r:1f289f029b601b371f7946e93c14df252dd9fcc4
   traffic selectors (i):
     0 type 7 protocol_id 0 addr 0.0.0.0 - 255.255.255.255 port 0 - 65535
   traffic selectors (r):
     0 type 7 protocol_id 0 addr 0.0.0.0 - 255.255.255.255 port 0 - 65535
 iip 192.168.0.122 ispi 4c28e1c804fd1947 rip 192.168.0.123 rspi 399dc6c103195aaf

Renew certificates

If we want to renew the certificates on both sides, we need to do the following.

Responder

ikev2 profile set pr1 auth rsa-sig cert-file client-cert.pem
set ikev2 local key server-key.pem

Initiator

ikev2 initiate del-child-sa df7eeb0c
ikev2 profile set pr1 auth rsa-sig cert-file server-cert.pem
set ikev2 local key client-key.pem
ikev2 initiate sa-init pr1


Learning VPP: OSPF routing protocol


Overview

The task at hand is to enable OSPF on a VPP router. For this purpose I chose FRRouting (FRR), an IP routing protocol suite for Linux and Unix platforms.

I will use VPP’s router plugin, which implements logic to punt control packets to the Linux network stack, plus a Netlink-based mechanism that synchronizes the Linux routing table into VPP’s FIB.

In order to compile the router plugin with VPP version 20, I had to make a few modifications to the source code, which can be seen on GitHub.

Topology

I have used 4 VirtualBox VMs for two VPP routers and two hosts.

[Figure: VPP OSPF topology]

VPP VM configuration looks as follows.

[Figure: VPP VM configuration]

Host VM configuration looks as follows.

[Figure: host VM configuration]

Install VPP from the package

For your convenience, I have uploaded prebuilt Debian packages built for Ubuntu 18.04. VPP will be installed and started as a service with the router plugin enabled.

curl -s https://packagecloud.io/install/repositories/emflex/fdio/script.deb.sh | sudo bash
sudo apt-get install vpp vpp-plugin-core vpp-plugin-dpdk vpp-ext-deps

Install VPP from source

Here is a script that downloads and builds VPP from source together with the router plugin.

git clone https://github.com/garyachy/frr-vpp.git
cd frr-vpp
sudo ./build_vpp.sh
sudo dpkg -i vpp/build-root/*.deb

Install FRR from the package

curl -s https://deb.frrouting.org/frr/keys.asc | sudo apt-key add -
FRRVER="frr-stable"
echo deb https://deb.frrouting.org/frr $(lsb_release -s -c) $FRRVER | sudo tee -a /etc/apt/sources.list.d/frr.list
sudo apt update && sudo apt install frr frr-pythontools

Configure FRR

Make the following changes to /etc/frr/daemons

ospfd=yes
ospfd_options=" -A 127.0.0.1 -f /etc/frr/ospfd.conf"

Make the following changes to /etc/frr/ospfd.conf

hostname ospfd
password zebra
log file /var/log/frr/ospfd.log informational
log stdout
!
router ospf
    ospf router-id 10.10.10.1
    network 10.10.10.1/24 area 0.0.0.0
    network 100.100.100.0/24 area 0.0.0.0
!

Configure Netplan

The VPP interfaces need to be added to /etc/netplan/50-cloud-init.yaml, followed by running netplan apply.

network:
    ethernets:
        enp0s3:
            dhcp4: true
        vpp0:
            addresses:
            - 10.10.10.1/24
        vpp1:
            addresses:
            - 100.100.100.1/24
    version: 2

Results

VPP creates TAP interfaces in Linux.

denys@vpp1:~$ ip addr
5: vpp0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
    link/ether 08:00:27:4a:5c:a2 brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.1/24 brd 10.10.10.255 scope global vpp0
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe4a:5ca2/64 scope link 
       valid_lft forever preferred_lft forever
6: vpp1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
    link/ether 08:00:27:7a:4b:93 brd ff:ff:ff:ff:ff:ff
    inet 100.100.100.1/24 brd 100.100.100.255 scope global vpp1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe7a:4b93/64 scope link 
       valid_lft forever preferred_lft forever

You can see that OSPF routes were installed into Linux.

denys@vpp1:~$ ip route
default via 192.168.0.1 dev enp0s3 proto dhcp src 192.168.0.108 metric 100 
10.10.10.0/24 dev vpp0 proto kernel scope link src 10.10.10.1 
20.20.20.0/24 via 100.100.100.2 dev vpp1 proto ospf metric 20 
100.100.100.0/24 dev vpp1 proto kernel scope link src 100.100.100.1 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
192.168.0.0/24 dev enp0s3 proto kernel scope link src 192.168.0.108 
192.168.0.1 dev enp0s3 proto dhcp scope link src 192.168.0.108 metric 100

Also, hosts can ping each other.

denys@host2:~$ ping 10.10.10.2
PING 10.10.10.2 (10.10.10.2) 56(84) bytes of data.
64 bytes from 10.10.10.2: icmp_seq=2 ttl=62 time=0.769 ms
64 bytes from 10.10.10.2: icmp_seq=3 ttl=62 time=0.547 ms
64 bytes from 10.10.10.2: icmp_seq=4 ttl=62 time=0.593 ms


Learning VPP: Setting up NAT for Internet Connectivity


Overview

We will use the NAT feature to enable hosts connected to the VPP router to access the Internet.

Configuration

We will need NAT extra features that are only enabled in endpoint-dependent mode. Also, we need to increase the limits for NAT translations, which are too small by default.

So we need to add the following lines into startup.conf file.

nat {
    endpoint-dependent
    translation hash buckets 1048576
    translation hash memory 268435456
    user hash buckets 1024
    max translations per user 10000
 }
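The bucket and memory values above are powers of two. A rough sizing sketch in Python (my own heuristic, not an official VPP formula; bytes_per_entry is an assumption):

```python
def nat_hash_sizing(expected_sessions, bytes_per_entry=256):
    # Illustrative heuristic: round the bucket count up to a power of
    # two that covers the expected session count, and reserve
    # bytes_per_entry of hash memory per bucket (a guess, not a VPP
    # constant).
    buckets = 1
    while buckets < expected_sessions:
        buckets *= 2
    return buckets, buckets * bytes_per_entry

buckets, memory = nat_hash_sizing(1_000_000)
print(buckets, memory)  # 1048576 268435456 -- the startup.conf values above
```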

After VPP is started, the following commands enable NAT on the two interfaces.

nat44 add interface address GigabitEthernet0/3/0
nat addr-port-assignment-alg default
set interface nat44 in GigabitEthernet0/8/0 out GigabitEthernet0/3/0 output-feature
nat44 forwarding enable

Bypassing NAT

To access VPP using SSH, the following command is applied.

nat44 add static mapping local 192.168.31.130 22 external GigabitEthernet0/3/0 22 tcp

To prevent NAT from changing the source port of specific outgoing traffic, the following command is used.

nat44 add identity mapping 192.168.31.130 udp 4789


Learning VPP: Understanding ABF in VPP Networking


Overview

ABF stands for ACL Based Forwarding. ABF is a subset of PBR (Policy Based Routing). ABF is different from normal IP routing in that the lookup by IP destination address is replaced by a match using ACL rules.
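A toy Python model of the idea (my own illustration, not VPP code): when an ABF policy is attached, an ACL match picks the next hop before the ordinary FIB lookup is consulted. The policy and routes mirror the example in this post.

```python
import ipaddress

# Hypothetical toy tables: one ABF policy matching dst 8.8.8.8/32, and
# a default FIB route, matching the traces later in this post.
abf_policies = [(ipaddress.ip_network("8.8.8.8/32"), "via 10.100.0.4 loop2")]
fib = [(ipaddress.ip_network("0.0.0.0/0"), "via 192.168.0.1 GigabitEthernet0/3/0")]

def forward(dst):
    addr = ipaddress.ip_address(dst)
    for net, next_hop in abf_policies:   # ABF: ACL match decides first...
        if addr in net:
            return next_hop
    for net, next_hop in fib:            # ...otherwise normal IP routing
        if addr in net:
            return next_hop

print(forward("8.8.8.8"))   # takes the ABF path
print(forward("1.1.1.1"))   # falls through to the FIB
```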

Testing

Run VPP and then VAT (the VPP API test client).

./build-root/build-vpp_debug-native/vpp/bin/vpp
./vpp/build-root/build-vpp_debug-native/vpp/bin/vpp_api_test

Create ACL rules.

vat# acl_add_replace ipv4 permit dst 8.8.8.8/32
acl_dump
vl_api_acl_add_replace_reply_t_handler:108: ACL index: 0
vat# vl_api_acl_details_t_handler:222: acl_index: 0, count: 1
   tag {}
   ipv4 action 1 src 0.0.0.0/0 dst 8.8.8.8/32 proto 0 sport 0-65535 dport 0-65535 tcpflags 0 mask 0

Create a policy.

DBGvpp# abf policy add id 0 acl 0 via 10.100.0.4 loop2
DBGvpp# show abf policy                                                                                    
abf:[0]: policy:0 acl:0
     path-list:[43] locks:1 flags:shared,no-uRPF, uRPF-list: None
      path:[45] pl-index:43 ip4 weight=1 pref=0 attached-nexthop:  oper-flags:resolved,
        10.100.0.4 loop2
      [@0]: arp-ipv4: via 10.100.0.4 loop2

Bind to an interface.

DBGvpp# abf attach ip4 policy 0 GigabitEthernet0/8/0         
DBGvpp# show abf attach GigabitEthernet0/8/0
ipv4:
 abf-interface-attach: policy:0 priority:0
  [@1]: arp-ipv4: via 10.100.0.4 loop2

Trace without ABF

00:02:27:818282: dpdk-input
  GigabitEthernet0/8/0 rx queue 0
  buffer 0xd886: current data 0, length 98, free-list 0, clone-count 0, totlen-nifb 0, trace 0x1
                 ext-hdr-valid 
                 l4-cksum-computed l4-cksum-correct 
  PKT MBUF: port 1, nb_segs 1, pkt_len 98
    buf_len 2176, data_len 98, ol_flags 0x0, data_off 128, phys_addr 0x8b162200
    packet_type 0x0 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
    rss 0x0 fdir.hi 0x0 fdir.lo 0x0
  IP4: 08:00:27:54:67:a2 -> 08:00:27:88:33:fd
  ICMP: 20.20.20.2 -> 8.8.8.8
    tos 0x00, ttl 64, length 84, checksum 0x85e9
    fragment id 0x7c9a, flags DONT_FRAGMENT
  ICMP echo_request checksum 0x8a71
00:02:27:818326: ethernet-input
  frame: flags 0x3, hw-if-index 2, sw-if-index 2
  IP4: 08:00:27:54:67:a2 -> 08:00:27:88:33:fd
00:02:27:818337: ip4-input-no-checksum
  ICMP: 20.20.20.2 -> 8.8.8.8
    tos 0x00, ttl 64, length 84, checksum 0x85e9
    fragment id 0x7c9a, flags DONT_FRAGMENT
  ICMP echo_request checksum 0x8a71
00:02:27:818350: ip4-lookup
  fib 0 dpo-idx 11 flow hash: 0x00000000
  ICMP: 20.20.20.2 -> 8.8.8.8
    tos 0x00, ttl 64, length 84, checksum 0x85e9
    fragment id 0x7c9a, flags DONT_FRAGMENT
  ICMP echo_request checksum 0x8a71
00:02:27:818357: ip4-rewrite
  tx_sw_if_index 1 dpo-idx 11 : ipv4 via 192.168.0.1 GigabitEthernet0/3/0: mtu:9000 98ded060c14f0800275a18a50800 flow hash: 0x00000000
  00000000: 98ded060c14f0800275a18a50800450000547c9a40003f0186e9141414020808
  00000020: 080808008a7106b2069c0292205e00000000707d0e00000000001011
00:02:27:818362: nat44-ed-in2out-output
  NAT44_IN2OUT_ED_FAST_PATH: sw_if_index 2, next index 3, session -1
00:02:27:818369: nat44-ed-in2out-output-slowpath
  NAT44_IN2OUT_ED_SLOW_PATH: sw_if_index 2, next index 0, session 5
00:02:27:818376: GigabitEthernet0/3/0-output
  GigabitEthernet0/3/0
  IP4: 08:00:27:5a:18:a5 -> 98:de:d0:60:c1:4f
  ICMP: 192.168.0.106 -> 8.8.8.8
    tos 0x00, ttl 63, length 84, checksum 0xedec
    fragment id 0x7c9a, flags DONT_FRAGMENT
  ICMP echo_request checksum 0x51a6
00:02:27:818381: GigabitEthernet0/3/0-tx
  GigabitEthernet0/3/0 tx queue 0
  buffer 0xd886: current data 0, length 98, free-list 0, clone-count 0, totlen-nifb 0, trace 0x1
                 ext-hdr-valid 
                 l4-cksum-computed l4-cksum-correct l2-hdr-offset 0 l3-hdr-offset 14 
  PKT MBUF: port 1, nb_segs 1, pkt_len 98
    buf_len 2176, data_len 98, ol_flags 0x0, data_off 128, phys_addr 0x8b162200
    packet_type 0x0 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
    rss 0x0 fdir.hi 0x0 fdir.lo 0x0
  IP4: 08:00:27:5a:18:a5 -> 98:de:d0:60:c1:4f
  ICMP: 192.168.0.106 -> 8.8.8.8
    tos 0x00, ttl 63, length 84, checksum 0xedec
    fragment id 0x7c9a, flags DONT_FRAGMENT
  ICMP echo_request checksum 0x51a6

Trace with ABF

From the trace below it is clear that traffic traverses abf-input-ip4 node. As a result, it is encapsulated in VxLAN and forwarded through a tunnel.

00:03:30:398890: dpdk-input
  GigabitEthernet0/8/0 rx queue 0
  buffer 0xcec6: current data 0, length 98, free-list 0, clone-count 0, totlen-nifb 0, trace 0x3
                 ext-hdr-valid 
                 l4-cksum-computed l4-cksum-correct 
  PKT MBUF: port 1, nb_segs 1, pkt_len 98
    buf_len 2176, data_len 98, ol_flags 0x0, data_off 128, phys_addr 0x8d73b200
    packet_type 0x0 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
    rss 0x0 fdir.hi 0x0 fdir.lo 0x0
  IP4: 08:00:27:54:67:a2 -> 08:00:27:88:33:fd
  ICMP: 20.20.20.2 -> 8.8.8.8
    tos 0x00, ttl 64, length 84, checksum 0xce6e
    fragment id 0x3415, flags DONT_FRAGMENT
  ICMP echo_request checksum 0x1312
00:03:30:398931: ethernet-input
  frame: flags 0x3, hw-if-index 2, sw-if-index 2
  IP4: 08:00:27:54:67:a2 -> 08:00:27:88:33:fd
00:03:30:399080: ip4-input-no-checksum
  ICMP: 20.20.20.2 -> 8.8.8.8
    tos 0x00, ttl 64, length 84, checksum 0xce6e
    fragment id 0x3415, flags DONT_FRAGMENT
  ICMP echo_request checksum 0x1312
00:03:30:399382: abf-input-ip4
   next 1 index 12
00:03:30:399535: ip4-rewrite
  tx_sw_if_index 3 dpo-idx 12 : ipv4 via 10.100.0.4 loop2: mtu:1360 020027fd0004020027fd00050800 flow hash: 0x00000000
  00000000: 020027fd0004020027fd0005080045000054341540003f01cf6e141414020808
  00000020: 08080800131206b200096c8b205e0000000091760100000000001011
00:03:30:399713: loop2-output
  loop2
  IP4: 02:00:27:fd:00:05 -> 02:00:27:fd:00:04
  ICMP: 20.20.20.2 -> 8.8.8.8
    tos 0x00, ttl 63, length 84, checksum 0xcf6e
    fragment id 0x3415, flags DONT_FRAGMENT
  ICMP echo_request checksum 0x1312
00:03:30:400172: l2-input
  l2-input: sw_if_index 3 dst 02:00:27:fd:00:04 src 02:00:27:fd:00:05
00:03:30:400389: l2-fwd
  l2-fwd:   sw_if_index 3 dst 02:00:27:fd:00:04 src 02:00:27:fd:00:05 bd_index 1 result [0xffffffffffffffff, -1] static age-not bvi filter learn-event learn-move 
00:03:30:400617: l2-flood
  l2-flood: sw_if_index 3 dst 02:00:27:fd:00:04 src 02:00:27:fd:00:05 bd_index 1
00:03:30:400894: l2-output
  l2-output: sw_if_index 4 dst 02:00:27:fd:00:04 src 02:00:27:fd:00:05 data 08 00 45 00 00 54 34 15 40 00 3f 01
00:03:30:401147: ipsec-gre0-output
  ipsec-gre0
  00000000: 020027fd0004020027fd0005080045000054341540003f01cf6e141414020808
  00000020: 08080800131206b200096c8b205e000000009176010000000000101112131415
  00000040: 161718191a1b1c1d1e1f202122232425262728292a2b2c2d2e2f303132333435
  00000060: 36370000000000000000000000000000000000000000000000000000
00:03:30:401413: ipsec-gre0-tx
  GRE: tunnel 0 len 122 src 10.101.0.5 dst 10.101.0.4 sa-id 1
00:03:30:401757: esp4-encrypt
  esp: spi 5 seq 469 crypto aes-cbc-128 integrity sha-256-128
00:03:30:402114: ip4-lookup
  fib 0 dpo-idx 16 flow hash: 0x00000000
  IPSEC_ESP: 10.101.0.5 -> 10.101.0.4
    tos 0x00, ttl 254, length 172, checksum 0xa74d
    fragment id 0x0000
00:03:30:402450: ip4-rewrite
  tx_sw_if_index 5 dpo-idx 16 : ipv4 via 10.101.0.4 loop3: mtu:9000 020027fe0004020027fe00050800 flow hash: 0x00000000
  00000000: 020027fe0004020027fe00050800450000ac00000000fd32a84d0a6500050a65
  00000020: 000400000005000001d6c6fd0b0258faf7c67f3d2a5c88dd30723e7a
00:03:30:402820: loop3-output
  loop3
  IP4: 02:00:27:fe:00:05 -> 02:00:27:fe:00:04
  IPSEC_ESP: 10.101.0.5 -> 10.101.0.4
    tos 0x00, ttl 253, length 172, checksum 0xa84d
    fragment id 0x0000
00:03:30:403637: l2-input
  l2-input: sw_if_index 5 dst 02:00:27:fe:00:04 src 02:00:27:fe:00:05
00:03:30:404046: l2-fwd
  l2-fwd:   sw_if_index 5 dst 02:00:27:fe:00:04 src 02:00:27:fe:00:05 bd_index 2 result [0xffffffffffffffff, -1] static age-not bvi filter learn-event learn-move 
00:03:30:404494: l2-flood
  l2-flood: sw_if_index 5 dst 02:00:27:fe:00:04 src 02:00:27:fe:00:05 bd_index 2
00:03:30:404950: l2-output
  l2-output: sw_if_index 6 dst 02:00:27:fe:00:04 src 02:00:27:fe:00:05 data 08 00 45 00 00 ac 00 00 00 00 fd 32
00:03:30:405421: vxlan4-encap
  VXLAN encap to vxlan_tunnel0 vni 3
00:03:30:405921: ip4-rewrite
  tx_sw_if_index 1 dpo-idx 17 : ipv4 via 192.168.0.104 GigabitEthernet0/3/0: mtu:9000 08002768d11e0800275a18a50800 flow hash: 0x00000001
  00000000: 08002768d11e0800275a18a50800450000de00000000fd113aecc0a8006ac0a8
  00000020: 006812b512b500ca00000800000000000300020027fe0004020027fe
00:03:30:406389: nat44-ed-in2out-output
  NAT44_IN2OUT_ED_FAST_PATH: sw_if_index 5, next index 3, session -1
00:03:30:406922: nat44-ed-in2out-output-slowpath
  NAT44_IN2OUT_ED_SLOW_PATH: sw_if_index 5, next index 0, session -1
00:03:30:408014: GigabitEthernet0/3/0-output
  GigabitEthernet0/3/0
  IP4: 08:00:27:5a:18:a5 -> 08:00:27:68:d1:1e
  UDP: 192.168.0.106 -> 192.168.0.104
    tos 0x00, ttl 253, length 222, checksum 0x3aec
    fragment id 0x0000
  UDP: 4789 -> 4789
    length 202, checksum 0x0000
00:03:30:408548: GigabitEthernet0/3/0-tx
  GigabitEthernet0/3/0 tx queue 0
  buffer 0x16c54: current data -50, length 236, free-list 0, clone-count 0, totlen-nifb 0, trace 0x3
  PKT MBUF: port 65535, nb_segs 1, pkt_len 236
    buf_len 2176, data_len 236, ol_flags 0x0, data_off 78, phys_addr 0x8d1b1580
    packet_type 0x0 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
    rss 0x0 fdir.hi 0x0 fdir.lo 0x0
  IP4: 08:00:27:5a:18:a5 -> 08:00:27:68:d1:1e
  UDP: 192.168.0.106 -> 192.168.0.104
    tos 0x00, ttl 253, length 222, checksum 0x3aec
    fragment id 0x0000
  UDP: 4789 -> 4789
    length 202, checksum 0x0000


Learning VPP: Python API Usage for Integrated Control Plane Apps


Overview

VPP's preferred interface is its binary API, which is used by northbound control plane applications like Honeycomb. Furthermore, VPP provides an autogenerated Python wrapper for your convenience.

While the Python API channel is significantly slower than the native binary API (roughly 1,500 messages/second versus 450,000), it is still a great option.

There are three classes of VPP API methods:

  1. Synchronous request/reply.
  2. Dump functions.
  3. Events.
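These three classes can be sketched with a thin helper. The helper itself is my own illustration, not part of vpp_papi; the calls show_version, sw_interface_dump, and register_event_callback are the real methods used in the full example later in this post. Here it is exercised against a stub instead of a live VPP.

```python
from types import SimpleNamespace

class VppApiHelper:
    """Illustrative wrapper showing the three call styles against an
    injected client object (which lets us test it with a stub)."""
    def __init__(self, vpp):
        self.vpp = vpp

    def version(self):
        # 1. Synchronous request/reply: one request, one reply message.
        return self.vpp.api.show_version().version

    def interface_names(self):
        # 2. Dump: one request, a stream of detail messages.
        return [i.interface_name for i in self.vpp.api.sw_interface_dump()]

    def subscribe(self, callback):
        # 3. Events: messages arrive asynchronously via a callback.
        self.vpp.register_event_callback(callback)

# Stub standing in for a connected vpp_papi client:
stub = SimpleNamespace(
    api=SimpleNamespace(
        show_version=lambda: SimpleNamespace(version="19.01"),
        sw_interface_dump=lambda: [SimpleNamespace(interface_name="local0")],
    ),
    register_event_callback=lambda cb: None,
)
helper = VppApiHelper(stub)
assert helper.version() == "19.01"
assert helper.interface_names() == ["local0"]
```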

Install

1. Build VPP.

2. Check JSON API definitions under

vpp/build-root/install-vpp_debug-native/vpp/share/vpp/api/core

3. Set LD_LIBRARY_PATH to point to the location of libvppapiclient.so

4. Install Python package.

cd vpp
cd src/vpp-api/python
sudo python setup.py install

Code

The code below does the following:

  1. Connect to VPP.
  2. Print the version.
  3. Print all interfaces.
  4. Subscribe to interface statistics updates.
  5. Disconnect from VPP.

#!/usr/bin/env python

from __future__ import print_function
import os
import fnmatch
import time
from vpp_papi import VPP

def papi_event_handler(msgname, result):
    print(msgname)
    print(result)
 
vpp_json_dir = '/usr/share/vpp/api/'
 
jsonfiles = []
for root, dirnames, filenames in os.walk(vpp_json_dir):
    for filename in fnmatch.filter(filenames, '*.api.json'):
        jsonfiles.append(os.path.join(vpp_json_dir, filename))
 
if not jsonfiles:
    print('Error: no json api files found')
    exit(-1)
 
vpp = VPP(jsonfiles)
r = vpp.connect('papi-example')

rv = vpp.api.show_version()
print('VPP version =', rv.version.decode().rstrip('\x00'))

for intf in vpp.api.sw_interface_dump():
    print(intf.interface_name.decode())

r=vpp.register_event_callback(papi_event_handler)
pid=os.getpid()
sw_ifs = [2]
r = vpp.api.want_per_interface_simple_stats(enable_disable=True, sw_ifs=sw_ifs, num=len(sw_ifs), pid=pid)
print(r)
                                                                                                                                                                                
time.sleep(60)
r = vpp.api.want_per_interface_simple_stats(enable_disable=False, sw_ifs=sw_ifs, num=len(sw_ifs), pid=pid)
 
r = vpp.disconnect()
exit(r)

Output

sudo python vpp.py
VPP version = 19.01.00.01.01-rc0~11-gc069ff9
local0
GigabitEthernet0/3/0
GigabitEthernet0/8/0
want_per_interface_simple_stats_reply(_0=862, context=3, retval=0)
vnet_per_interface_simple_counters
vnet_per_interface_simple_counters(_0=887, count=1, timestamp=785, data=[vl_api_vnet_simple_counter_t(sw_if_index=2, drop=44, punt=0, rx_ip4=62, rx_ip6=0, rx_no_buffer=0, rx_miss=0, rx_error=0, tx_error=44, rx_mpls=0)])
vnet_per_interface_simple_counters
vnet_per_interface_simple_counters(_0=887, count=1, timestamp=795, data=[vl_api_vnet_simple_counter_t(sw_if_index=2, drop=44, punt=0, rx_ip4=72, rx_ip6=0, rx_no_buffer=0, rx_miss=0, rx_error=0, tx_error=44, rx_mpls=0)])
vnet_per_interface_simple_counters


Learning VPP: IPsec GRE over VxLAN


Overview

The goal is to create a layer-2 encrypted tunnel and hide inner network IP addresses.

To achieve this goal, the traffic will be encapsulated in GRE, protected with IPsec and encapsulated into VxLAN.

GRE is a tunneling protocol developed by Cisco. The GRE frame looks as follows.

[Figure: GRE frame layout]

A VXLAN tunnel is an L2 overlay on top of an L3 network underlay. It uses UDP to traverse the network. The VXLAN frame looks as follows.

[Figure: VXLAN frame layout]

IPsec supports tunnel and transport modes. Since our tunnel is based on GRE, transport mode will be used. In this mode, only the payload of the IP packet is encrypted and/or authenticated; the IP header is not touched. The resulting frame looks as follows.

[Figure: IPsec frame in transport mode]

Setup

Two Ubuntu VMs with VPP ver. 19.01 and two Ubuntu VMs representing hosts.

[Figure: VXLAN setup topology]

VPP configuration

In terms of VPP, we need to create two loopbacks. One loopback is bridged with the GRE-IPsec tunnel, while the other is bridged with the VxLAN tunnel. Routing directs traffic into the first loopback, where it is encapsulated with a GRE header and encrypted with IPsec. The traffic is then routed into the second loopback, where it receives a VxLAN header.
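The resulting header stack can be sketched in Python (a simplified, illustrative model of the frame, built in the same order as the encap trace in the Results section):

```python
def encap_stack(inner="IPv4 ICMP (host traffic)"):
    # Build the frame layer by layer, prepending headers as each VPP
    # node adds them; the result lists layers outermost-first.
    frame = [inner]
    frame = ["GRE"] + frame                                  # ipsec-gre0-tx
    frame = ["IPv4 10.101.0.x", "ESP (transport)"] + frame   # esp4-encrypt
    frame = ["Ethernet", "IPv4 192.168.31.x",
             "UDP 4789", "VXLAN vni 13", "Ethernet"] + frame  # vxlan4-encap
    return frame

for layer in encap_stack():
    print(layer)
```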

Router1

ipsec sa add 10 spi 1001 esp crypto-alg aes-cbc-128 crypto-key 4a506a794f574265564551694d653768 integ-alg sha1-96 integ-key 4339314b55523947594d6d3547666b45764e6a58
ipsec sa add 20 spi 1000 esp crypto-alg aes-cbc-128 crypto-key 4a506a794f574265564551694d653768 integ-alg sha1-96 integ-key 4339314b55523947594d6d3547666b45764e6a58

loopback create mac 1a:ab:3c:4d:5e:7f
set interface ip address loop0 10.100.0.7/31
set int mtu 1360 loop0
set int l2 learn loop0 disable
create ipsec gre tunnel src 10.101.0.7 dst 10.101.0.6 local-sa 10 remote-sa 20
set int state ipsec-gre0 up
create bridge-domain 12 learn 0 forward 1 uu-flood 1 flood 1 arp-term 1
set bridge-domain arp entry 12 10.100.0.7 1a:ab:3c:4d:5e:7f
set int l2 bridge loop0 12 bvi
set int l2 bridge ipsec-gre0 12 1

loopback create mac 1a:2b:3c:4d:5e:7f
set interface ip address loop1 10.101.0.7/31
create vxlan tunnel src 192.168.31.76 dst 192.168.31.47 vni 13
create bridge-domain 13 learn 0 forward 1 uu-flood 1 flood 1 arp-term 1
set bridge-domain arp entry 13 10.101.0.7 1a:2b:3c:4d:5e:7f
set interface l2 bridge vxlan_tunnel0 13 1
set interface l2 bridge loop1 13 bvi

ip route add 10.10.10.0/24 via 10.100.0.6

Router2

ipsec sa add 10 spi 1000 esp crypto-alg aes-cbc-128 crypto-key 4a506a794f574265564551694d653768 integ-alg sha1-96 integ-key 4339314b55523947594d6d3547666b45764e6a58
ipsec sa add 20 spi 1001 esp crypto-alg aes-cbc-128 crypto-key 4a506a794f574265564551694d653768 integ-alg sha1-96 integ-key 4339314b55523947594d6d3547666b45764e6a58

loopback create mac 1a:ab:3c:4d:5e:6f
set interface ip address loop0 10.100.0.6/31
set int mtu 1360 loop0
set int l2 learn loop0 disable
create ipsec gre tunnel src 10.101.0.6 dst 10.101.0.7 local-sa 10 remote-sa 20
set int state ipsec-gre0 up
create bridge-domain 12 learn 0 forward 1 uu-flood 1 flood 1 arp-term 1
set bridge-domain arp entry 12 10.100.0.6 1a:ab:3c:4d:5e:6f
set int l2 bridge loop0 12 bvi
set int l2 bridge ipsec-gre0 12 1

loopback create mac 1a:2b:3c:4d:5e:6f
set interface ip address loop1 10.101.0.6/31
create vxlan tunnel src 192.168.31.47 dst 192.168.31.76 vni 13
create bridge-domain 13 learn 0 forward 1 uu-flood 1 flood 1 arp-term 1
set bridge-domain arp entry 13 10.101.0.6 1a:2b:3c:4d:5e:6f
set interface l2 bridge vxlan_tunnel0 13 1
set interface l2 bridge loop1 13 bvi

ip route add 20.20.20.0/24 via 10.100.0.7

Results

Encap trace

00:04:26:418264: dpdk-input
  GigabitEthernet0/8/0 rx queue 0
  buffer 0xddb4: current data 0, length 98, free-list 0, clone-count 0, totlen-nifb 0, trace 0x0
                 ext-hdr-valid 
                 l4-cksum-computed l4-cksum-correct 
  PKT MBUF: port 1, nb_segs 1, pkt_len 98
    buf_len 2176, data_len 98, ol_flags 0x0, data_off 128, phys_addr 0x8e376d80
    packet_type 0x0 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
    rss 0x0 fdir.hi 0x0 fdir.lo 0x0
  IP4: 08:00:27:54:67:a2 -> 08:00:27:88:33:fd
  ICMP: 20.20.20.2 -> 10.10.10.2
    tos 0x00, ttl 64, length 84, checksum 0x66a0
    fragment id 0x97e7, flags DONT_FRAGMENT
  ICMP echo_request checksum 0x2ea5
00:04:26:418338: ethernet-input
  frame: flags 0x3, hw-if-index 2, sw-if-index 2
  IP4: 08:00:27:54:67:a2 -> 08:00:27:88:33:fd
00:04:26:418355: ip4-input-no-checksum
  ICMP: 20.20.20.2 -> 10.10.10.2
    tos 0x00, ttl 64, length 84, checksum 0x66a0
    fragment id 0x97e7, flags DONT_FRAGMENT
  ICMP echo_request checksum 0x2ea5
00:04:26:418369: ip4-lookup
  fib 0 dpo-idx 14 flow hash: 0x00000000
  ICMP: 20.20.20.2 -> 10.10.10.2
    tos 0x00, ttl 64, length 84, checksum 0x66a0
    fragment id 0x97e7, flags DONT_FRAGMENT
  ICMP echo_request checksum 0x2ea5
00:04:26:418385: ip4-rewrite
  tx_sw_if_index 3 dpo-idx 14 : ipv4 via 10.100.0.6 loop0: mtu:1360 1aab3c4d5e6f1aab3c4d5e7f0800 flow hash: 0x00000000
  00000000: 1aab3c4d5e6f1aab3c4d5e7f08004500005497e740003f0167a0141414020a0a
  00000020: 0a0208002ea555ec004d8f3afb5d0000000025b60400000000001011
00:04:26:418393: loop0-output
  loop0
  IP4: 1a:ab:3c:4d:5e:7f -> 1a:ab:3c:4d:5e:6f
  ICMP: 20.20.20.2 -> 10.10.10.2
    tos 0x00, ttl 63, length 84, checksum 0x67a0
    fragment id 0x97e7, flags DONT_FRAGMENT
  ICMP echo_request checksum 0x2ea5
00:04:26:418417: l2-input
  l2-input: sw_if_index 3 dst 1a:ab:3c:4d:5e:6f src 1a:ab:3c:4d:5e:7f
00:04:26:418423: l2-fwd
  l2-fwd:   sw_if_index 3 dst 1a:ab:3c:4d:5e:6f src 1a:ab:3c:4d:5e:7f bd_index 1 result [0x1020000000004, 4] none
00:04:26:418428: l2-output
  l2-output: sw_if_index 4 dst 1a:ab:3c:4d:5e:6f src 1a:ab:3c:4d:5e:7f data 08 00 45 00 00 54 97 e7 40 00 3f 01
00:04:26:418434: ipsec-gre0-output
  ipsec-gre0
  00000000: 1aab3c4d5e6f1aab3c4d5e7f08004500005497e740003f0167a0141414020a0a
  00000020: 0a0208002ea555ec004d8f3afb5d0000000025b6040000000000101112131415
  00000040: 161718191a1b1c1d1e1f202122232425262728292a2b2c2d2e2f303132333435
  00000060: 36370000000000000000000000000000000000000000000000000000
00:04:26:418436: ipsec-gre0-tx
  GRE: tunnel 0 len 122 src 10.101.0.7 dst 10.101.0.6 sa-id 10
00:04:26:418440: esp4-encrypt
  esp: spi 1001 seq 134 crypto aes-cbc-128 integrity sha1-96
00:04:26:418468: ip4-lookup
  fib 0 dpo-idx 19 flow hash: 0x00000000
  IPSEC_ESP: 10.101.0.7 -> 10.101.0.6
    tos 0x00, ttl 254, length 168, checksum 0xa74d
    fragment id 0x0000
00:04:26:418471: ip4-rewrite
  tx_sw_if_index 5 dpo-idx 19 : ipv4 via 10.101.0.6 loop1: mtu:9000 1a2b3c4d5e6f1a2b3c4d5e7f0800 flow hash: 0x00000000
  00000000: 1a2b3c4d5e6f1a2b3c4d5e7f0800450000a800000000fd32a84d0a6500070a65
  00000020: 0006000003e9000000870573ff85554266537fd108913fe1aba4e3fc
00:04:26:418485: loop1-output
  loop1
  IP4: 1a:2b:3c:4d:5e:7f -> 1a:2b:3c:4d:5e:6f
  IPSEC_ESP: 10.101.0.7 -> 10.101.0.6
    tos 0x00, ttl 253, length 168, checksum 0xa84d
    fragment id 0x0000
00:04:26:418488: l2-input
  l2-input: sw_if_index 5 dst 1a:2b:3c:4d:5e:6f src 1a:2b:3c:4d:5e:7f
00:04:26:418489: l2-fwd
  l2-fwd:   sw_if_index 5 dst 1a:2b:3c:4d:5e:6f src 1a:2b:3c:4d:5e:7f bd_index 2 result [0x1020000000006, 6] none
00:04:26:418491: l2-output
  l2-output: sw_if_index 6 dst 1a:2b:3c:4d:5e:6f src 1a:2b:3c:4d:5e:7f data 08 00 45 00 00 a8 00 00 00 00 fd 32
00:04:26:418493: vxlan4-encap
  VXLAN encap to vxlan_tunnel0 vni 13
00:04:26:418499: ip4-rewrite
  tx_sw_if_index 1 dpo-idx 18 : ipv4 via 192.168.31.47 GigabitEthernet0/3/0: mtu:9000 08002768d11e0800275a18a50800 flow hash: 0x00000001
  00000000: 08002768d11e0800275a18a50800450000da00000000fd11fd46c0a81f4cc0a8
  00000020: 1f2f12b512b500c600000800000000000d001a2b3c4d5e6f1a2b3c4d
00:04:26:418500: GigabitEthernet0/3/0-output
  GigabitEthernet0/3/0
  IP4: 08:00:27:5a:18:a5 -> 08:00:27:68:d1:1e
  UDP: 192.168.31.76 -> 192.168.31.47
    tos 0x00, ttl 253, length 218, checksum 0xfd46
    fragment id 0x0000
  UDP: 4789 -> 4789
    length 198, checksum 0x0000
00:04:26:418502: GigabitEthernet0/3/0-tx
  GigabitEthernet0/3/0 tx queue 0
  buffer 0x1c332: current data -50, length 232, free-list 0, clone-count 0, totlen-nifb 0, trace 0x0
  PKT MBUF: port 65535, nb_segs 1, pkt_len 232
    buf_len 2176, data_len 232, ol_flags 0x0, data_off 78, phys_addr 0x8e70cd00
    packet_type 0x0 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
    rss 0x0 fdir.hi 0x0 fdir.lo 0x0
  IP4: 08:00:27:5a:18:a5 -> 08:00:27:68:d1:1e
  UDP: 192.168.31.76 -> 192.168.31.47
    tos 0x00, ttl 253, length 218, checksum 0xfd46
    fragment id 0x0000
  UDP: 4789 -> 4789
    length 198, checksum 0x0000

Decap trace

00:04:26:419224: dpdk-input
  GigabitEthernet0/3/0 rx queue 0
  buffer 0x1afa: current data 0, length 232, free-list 0, clone-count 0, totlen-nifb 0, trace 0x1
                 ext-hdr-valid 
                 l4-cksum-computed l4-cksum-correct 
  PKT MBUF: port 0, nb_segs 1, pkt_len 232
    buf_len 2176, data_len 232, ol_flags 0x0, data_off 128, phys_addr 0x8dc6bf00
    packet_type 0x0 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
    rss 0x0 fdir.hi 0x0 fdir.lo 0x0
  IP4: 08:00:27:68:d1:1e -> 08:00:27:5a:18:a5
  UDP: 192.168.31.47 -> 192.168.31.76
    tos 0x00, ttl 253, length 218, checksum 0xfd46
    fragment id 0x0000
  UDP: 4789 -> 4789
    length 198, checksum 0x0000
00:04:26:419299: ethernet-input
  frame: flags 0x3, hw-if-index 1, sw-if-index 1
  IP4: 08:00:27:68:d1:1e -> 08:00:27:5a:18:a5
00:04:26:419313: ip4-input-no-checksum
  UDP: 192.168.31.47 -> 192.168.31.76
    tos 0x00, ttl 253, length 218, checksum 0xfd46
    fragment id 0x0000
  UDP: 4789 -> 4789
    length 198, checksum 0x0000
00:04:26:419320: ip4-lookup
  fib 0 dpo-idx 5 flow hash: 0x00000000
  UDP: 192.168.31.47 -> 192.168.31.76
    tos 0x00, ttl 253, length 218, checksum 0xfd46
    fragment id 0x0000
  UDP: 4789 -> 4789
    length 198, checksum 0x0000
00:04:26:419328: ip4-local
    UDP: 192.168.31.47 -> 192.168.31.76
      tos 0x00, ttl 253, length 218, checksum 0xfd46
      fragment id 0x0000
    UDP: 4789 -> 4789
      length 198, checksum 0x0000
00:04:26:419334: ip4-udp-lookup
  UDP: src-port 4789 dst-port 4789
00:04:26:419345: vxlan4-input
  VXLAN decap from vxlan_tunnel0 vni 13 next 1 error 0
00:04:26:419358: l2-input
  l2-input: sw_if_index 6 dst 1a:2b:3c:4d:5e:7f src 1a:2b:3c:4d:5e:6f
00:04:26:419365: l2-learn
  l2-learn: sw_if_index 6 dst 1a:2b:3c:4d:5e:7f src 1a:2b:3c:4d:5e:6f bd_index 2
00:04:26:419375: l2-fwd
  l2-fwd:   sw_if_index 6 dst 1a:2b:3c:4d:5e:7f src 1a:2b:3c:4d:5e:6f bd_index 2 result [0x700000005, 5] static age-not bvi 
00:04:26:419381: ip4-input
  IPSEC_ESP: 10.101.0.6 -> 10.101.0.7
    tos 0x00, ttl 253, length 168, checksum 0xa84d
    fragment id 0x0000
00:04:26:419385: ip4-lookup
  fib 0 dpo-idx 8 flow hash: 0x00000000
  IPSEC_ESP: 10.101.0.6 -> 10.101.0.7
    tos 0x00, ttl 253, length 168, checksum 0xa84d
    fragment id 0x0000
00:04:26:419387: ip4-local
    IPSEC_ESP: 10.101.0.6 -> 10.101.0.7
      tos 0x00, ttl 253, length 168, checksum 0xa84d
      fragment id 0x0000
00:04:26:419390: ipsec-if-input
  IPSec: spi 1000 seq 93
00:04:26:419399: esp4-decrypt
  esp: crypto aes-cbc-128 integrity sha1-96
00:04:26:419421: ipsec-gre-input
  GRE: tunnel -1 len 122 src 10.101.0.6 dst 10.101.0.7
00:04:26:419427: l2-input
  l2-input: sw_if_index 4 dst 1a:ab:3c:4d:5e:7f src 1a:ab:3c:4d:5e:6f
00:04:26:419429: l2-learn
  l2-learn: sw_if_index 4 dst 1a:ab:3c:4d:5e:7f src 1a:ab:3c:4d:5e:6f bd_index 1
00:04:26:419435: l2-fwd
  l2-fwd:   sw_if_index 4 dst 1a:ab:3c:4d:5e:7f src 1a:ab:3c:4d:5e:6f bd_index 1 result [0x700000003, 3] static age-not bvi 
00:04:26:419438: ip4-input
  ICMP: 10.10.10.2 -> 20.20.20.2
    tos 0x00, ttl 63, length 84, checksum 0x82f4
    fragment id 0xbc93
  ICMP echo_reply checksum 0x36a5
00:04:26:419441: ip4-lookup
  fib 0 dpo-idx 26 flow hash: 0x00000000
  ICMP: 10.10.10.2 -> 20.20.20.2
    tos 0x00, ttl 63, length 84, checksum 0x82f4
    fragment id 0xbc93
  ICMP echo_reply checksum 0x36a5
00:04:26:419445: ip4-rewrite
  tx_sw_if_index 2 dpo-idx 26 : ipv4 via 20.20.20.2 GigabitEthernet0/8/0: mtu:9000 0800275467a20800278833fd0800 flow hash: 0x00000000
  00000000: 0800275467a20800278833fd080045000054bc9300003e0183f40a0a0a021414
  00000020: 1402000036a555ec004d8f3afb5d0000000025b60400000000001011
00:04:26:419449: GigabitEthernet0/8/0-output
  GigabitEthernet0/8/0
  IP4: 08:00:27:88:33:fd -> 08:00:27:54:67:a2
  ICMP: 10.10.10.2 -> 20.20.20.2
    tos 0x00, ttl 62, length 84, checksum 0x83f4
    fragment id 0xbc93
  ICMP echo_reply checksum 0x36a5
00:04:26:419455: GigabitEthernet0/8/0-tx
  GigabitEthernet0/8/0 tx queue 0
  buffer 0x1c359: current data 38, length 98, free-list 0, clone-count 0, totlen-nifb 0, trace 0x1
  PKT MBUF: port 65535, nb_segs 1, pkt_len 98
    buf_len 2176, data_len 98, ol_flags 0x0, data_off 166, phys_addr 0x8e70d6c0
    packet_type 0x0 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
    rss 0x0 fdir.hi 0x0 fdir.lo 0x0
  IP4: 08:00:27:88:33:fd -> 08:00:27:54:67:a2
  ICMP: 10.10.10.2 -> 20.20.20.2
    tos 0x00, ttl 62, length 84, checksum 0x83f4
    fragment id 0xbc93
  ICMP echo_reply checksum 0x36a5


Learning VPP: ACL Plugin for Traffic Classification

✅ Updated January 2026 — This guide has been reviewed and updated for the latest DPDK/VPP versions.


Overview

Our goal is to leverage the ACL plugin for traffic classification based on L3/L4 header fields. The ACL plugin does not provide a debug CLI for configuration, but all of its APIs are exposed through the VAT (VPP API Test) CLI.

Integration

First, register as a user of the ACL plugin API.

acl_plugin_exports_init (&acl_plugin);
acl_user_id = acl_plugin.register_user_module ("Test", "label1", "label2");
acl_lc_id = acl_plugin.get_lookup_context_index (acl_user_id, 1, 2);

Second, add ACL rules into the current context.

vec_add1 (acl_vec, 0);
vec_add1 (acl_vec, 1);
acl_plugin.set_acl_vec_for_context (acl_lc_id, acl_vec);
vec_free (acl_vec);

Third, match traffic against the ACL rules.

acl_plugin_fill_5tuple_inline (acl_plugin.p_acl_main,
                               acl_lc_id, b0,
                               is_ip60,
                               /* is_input */ 0,
                               /* is_l2_path */ 1,
                               &pkt_5tuple0);

res = acl_plugin_match_5tuple_inline (acl_plugin.p_acl_main,
                                      acl_lc_id,
                                      &pkt_5tuple0, is_ip60,
                                      &action0, &acl_pos_p0,
                                      &acl_match_p0,
                                      &rule_match_p0,
                                      &trace_bitmap0);
if (res > 0)
{
    printf ("Rule matched! \n");
}

Testing

Build and run VPP, then run VAT.

./build-root/build-vpp_debug-native/vpp/bin/vpp
./vpp/build-root/build-vpp_debug-native/vpp/bin/vpp_api_test

Create ACL rules.

vat# acl_add_replace ipv6 permit dst 2001:db8::1/128, ipv4 permit src 192.0.2.1/32
vl_api_acl_add_replace_reply_t_handler:108: ACL index: 0
vat# acl_add_replace ipv6 permit dst 2001:db8::1/128, ipv4 permit src 10.10.2.1/32
vl_api_acl_add_replace_reply_t_handler:108: ACL index: 1

Check registered ACL users and ACL rules in CLI.

DBGvpp# show acl-plugin lookup context 
index 0:Test label1: 1 label2: 2, acl_indices: 0, 1
DBGvpp# show acl-plugin acl            
acl-index 0 count 2 tag {}
          0: ipv6 permit src ::/0 dst 2001:db8::1/128 proto 0 sport 0-65535 dport 0-65535
          1: ipv4 permit src 192.0.2.1/32 dst 0.0.0.0/0 proto 0 sport 0-65535 dport 0-65535
  used in lookup context index: 0
acl-index 1 count 2 tag {}
          0: ipv6 permit src ::/0 dst 2001:db8::1/128 proto 0 sport 0-65535 dport 0-65535
          1: ipv4 permit src 10.10.2.1/32 dst 0.0.0.0/0 proto 0 sport 0-65535 dport 0-65535
  used in lookup context index: 0


Learning VPP: VxLAN over IPsec



Overview

The goal is to create a layer-2 encrypted tunnel. The traffic will be encapsulated in VxLAN and protected with IPsec.

A VXLAN tunnel is an L2 overlay on top of an L3 network underlay. It uses UDP to traverse the network. The VXLAN frame looks as follows.

[Figure: VXLAN frame]

IPsec supports tunnel and transport modes. Since our tunnel is already provided by VxLAN, transport mode is used. In this mode, only the payload of the IP packet is encrypted and/or authenticated; the IP header is left untouched. The resulting frame looks as follows.

[Figure: IPsec frame in transport mode]
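The ESP header that transport mode inserts between the IP header and the encrypted payload is small and worth seeing in code. Here is a minimal sketch per RFC 4303 (the struct name is ours, not VPP's); the ESP trailer (padding, pad length, next header) and the integrity check value follow the encrypted payload.

```c
#include <assert.h>
#include <stdint.h>

/* ESP header per RFC 4303. In transport mode the frame becomes:
 *   IP header | ESP header | encrypted payload | ESP trailer | ICV
 * Struct name is illustrative, not taken from VPP. */
typedef struct __attribute__((packed)) {
    uint32_t spi;   /* Security Parameters Index: selects the SA (e.g. 1000) */
    uint32_t seq;   /* monotonically increasing anti-replay sequence number  */
} esp_header_t;
```

The `spi 1001 seq 19` values visible in the traces below are exactly these two fields.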

Setup

Two Ubuntu VMs running VPP 19.01 and two Ubuntu VMs acting as hosts.

[Figure: VXLAN setup]

VPP configuration

Router1

loopback create mac 1a:2b:3c:4d:5e:8f
create bridge-domain 13 learn 1 forward 1 uu-flood 1 flood 1 arp-term 0
create vxlan tunnel src 192.168.31.47 dst 192.168.31.76 vni 13
set interface l2 bridge vxlan_tunnel0 13 1
set interface l2 bridge loop0 13 bvi
set interface ip table loop0 0
set interface ip address loop0 10.100.0.6/31
ip route table 0 20.20.20.0/24 via loop0
ipsec sa add 10 spi 1000 esp crypto-alg aes-cbc-128 crypto-key 4a506a794f574265564551694d653768 integ-alg sha1-96 integ-key 4339314b55523947594d6d3547666b45764e6a58
ipsec sa add 20 spi 1001 esp crypto-alg aes-cbc-128 crypto-key 4a506a794f574265564551694d653768 integ-alg sha1-96 integ-key 4339314b55523947594d6d3547666b45764e6a58
ipsec spd add 1
set interface ipsec spd loop0 1
ipsec policy add spd 1 priority 100 inbound action bypass protocol 50
ipsec policy add spd 1 priority 100 outbound action bypass protocol 50
ipsec policy add spd 1 priority 10 outbound action protect sa 10 local-ip-range 10.10.10.0 - 10.10.10.255 remote-ip-range 20.20.20.0 - 20.20.20.255
ipsec policy add spd 1 priority 10 inbound action protect sa 20 local-ip-range 10.10.10.0 - 10.10.10.255 remote-ip-range 20.20.20.0 - 20.20.20.255

Router2

loopback create mac 1a:2b:3c:4d:5e:7f
create bridge-domain 13 learn 1 forward 1 uu-flood 1 flood 1 arp-term 0
create vxlan tunnel src 192.168.31.76 dst 192.168.31.47 vni 13
set interface l2 bridge vxlan_tunnel0 13 1
set interface l2 bridge loop0 13 bvi
set interface ip table loop0 0
set interface ip address loop0 10.100.0.7/31
ip route table 0 10.10.10.0/24 via loop0
ipsec sa add 10 spi 1001 esp crypto-alg aes-cbc-128 crypto-key 4a506a794f574265564551694d653768 integ-alg sha1-96 integ-key 4339314b55523947594d6d3547666b45764e6a58
ipsec sa add 20 spi 1000 esp crypto-alg aes-cbc-128 crypto-key 4a506a794f574265564551694d653768 integ-alg sha1-96 integ-key 4339314b55523947594d6d3547666b45764e6a58
ipsec spd add 1
set interface ipsec spd loop0 1
ipsec policy add spd 1 priority 100 inbound action bypass protocol 50
ipsec policy add spd 1 priority 100 outbound action bypass protocol 50
ipsec policy add spd 1 priority 10 outbound action protect sa 10 local-ip-range 20.20.20.0 - 20.20.20.255 remote-ip-range 10.10.10.0 - 10.10.10.255
ipsec policy add spd 1 priority 10 inbound action protect sa 20 local-ip-range 20.20.20.0 - 20.20.20.255 remote-ip-range 10.10.10.0 - 10.10.10.255

Results

Encap trace

00:01:37:265053: dpdk-input
GigabitEthernet0/8/0 rx queue 0
buffer 0xe663: current data 0, length 98, free-list 0, clone-count 0, totlen-nifb 0, trace 0x1
ext-hdr-valid
l4-cksum-computed l4-cksum-correct
PKT MBUF: port 1, nb_segs 1, pkt_len 98
buf_len 2176, data_len 98, ol_flags 0x0, data_off 128, phys_addr 0x91b99940
packet_type 0x0 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
rss 0x0 fdir.hi 0x0 fdir.lo 0x0
IP4: 08:00:27:54:67:a2 -> 08:00:27:88:33:fd
ICMP: 20.20.20.2 -> 10.10.10.2
tos 0x00, ttl 64, length 84, checksum 0x5e94
fragment id 0x9ff3, flags DONT_FRAGMENT
ICMP echo_request checksum 0x5d06
00:01:37:265092: ethernet-input
frame: flags 0x3, hw-if-index 2, sw-if-index 2
IP4: 08:00:27:54:67:a2 -> 08:00:27:88:33:fd
00:01:37:265103: ip4-input-no-checksum
ICMP: 20.20.20.2 -> 10.10.10.2
tos 0x00, ttl 64, length 84, checksum 0x5e94
fragment id 0x9ff3, flags DONT_FRAGMENT
ICMP echo_request checksum 0x5d06
00:01:37:265110: ip4-lookup
fib 0 dpo-idx 21 flow hash: 0x00000000
ICMP: 20.20.20.2 -> 10.10.10.2
tos 0x00, ttl 64, length 84, checksum 0x5e94
fragment id 0x9ff3, flags DONT_FRAGMENT
ICMP echo_request checksum 0x5d06
00:01:37:265118: ip4-rewrite
tx_sw_if_index 3 dpo-idx 21 : ipv4 via 10.100.0.6 loop0: mtu:9000 1a2b3c4d5e6f1a2b3c4d5e7f0800 flow hash: 0x00000000
00000000: 1a2b3c4d5e6f1a2b3c4d5e7f0800450000549ff340003f015f94141414020a0a
00000020: 0a0208005d0605030016ba1c935d0000000088930100000000001011
00:01:37:265123: ipsec4-output
spd 1
00:01:37:265131: esp4-encrypt
esp: spi 1001 seq 19 crypto aes-cbc-128 integrity sha1-96
00:01:37:265168: loop0-output
loop0
IP4: 1a:2b:3c:4d:5e:7f -> 1a:2b:3c:4d:5e:6f
IPSEC_ESP: 20.20.20.2 -> 10.10.10.2
tos 0x00, ttl 254, length 136, checksum 0x8022
fragment id 0x0000
00:01:37:265175: l2-input
l2-input: sw_if_index 3 dst 1a:2b:3c:4d:5e:6f src 1a:2b:3c:4d:5e:7f
00:01:37:265179: l2-fwd
l2-fwd:   sw_if_index 3 dst 1a:2b:3c:4d:5e:6f src 1a:2b:3c:4d:5e:7f bd_index 1 result [0x1010000000004, 4] none
00:01:37:265183: l2-output
l2-output: sw_if_index 4 dst 1a:2b:3c:4d:5e:6f src 1a:2b:3c:4d:5e:7f data 08 00 45 00 00 88 00 00 00 00 fe 32
00:01:37:265188: vxlan4-encap
VXLAN encap to vxlan_tunnel0 vni 13
00:01:37:265192: ip4-rewrite
tx_sw_if_index 1 dpo-idx 15 : ipv4 via 192.168.31.47 GigabitEthernet0/3/0: mtu:9000 08002768d11e0800275a18a50800 flow hash: 0x00000001
00000000: 08002768d11e0800275a18a50800450000ba00000000fd11fd66c0a81f4cc0a8
00000020: 1f2f3b6112b500a600000800000000000d001a2b3c4d5e6f1a2b3c4d
00:01:37:265194: GigabitEthernet0/3/0-output
GigabitEthernet0/3/0
IP4: 08:00:27:5a:18:a5 -> 08:00:27:68:d1:1e
UDP: 192.168.31.76 -> 192.168.31.47
tos 0x00, ttl 253, length 186, checksum 0xfd66
fragment id 0x0000
UDP: 15201 -> 4789
length 166, checksum 0x0000
00:01:37:265196: GigabitEthernet0/3/0-tx
GigabitEthernet0/3/0 tx queue 0
buffer 0x1aca6: current data -50, length 200, free-list 0, clone-count 0, totlen-nifb 0, trace 0x1
PKT MBUF: port 65535, nb_segs 1, pkt_len 200
buf_len 2176, data_len 200, ol_flags 0x0, data_off 78, phys_addr 0x916b2a00
packet_type 0x0 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
rss 0x0 fdir.hi 0x0 fdir.lo 0x0
IP4: 08:00:27:5a:18:a5 -> 08:00:27:68:d1:1e
UDP: 192.168.31.76 -> 192.168.31.47
tos 0x00, ttl 253, length 186, checksum 0xfd66
fragment id 0x0000
UDP: 15201 -> 4789
length 166, checksum 0x0000

Decap trace

00:01:37:265912: dpdk-input
GigabitEthernet0/3/0 rx queue 0
buffer 0x357c: current data 0, length 200, free-list 0, clone-count 0, totlen-nifb 0, trace 0x2
ext-hdr-valid
l4-cksum-computed l4-cksum-correct
PKT MBUF: port 0, nb_segs 1, pkt_len 200
buf_len 2176, data_len 200, ol_flags 0x0, data_off 128, phys_addr 0x918d5f80
packet_type 0x0 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
rss 0x0 fdir.hi 0x0 fdir.lo 0x0
IP4: 08:00:27:68:d1:1e -> 08:00:27:5a:18:a5
UDP: 192.168.31.47 -> 192.168.31.76
tos 0x00, ttl 253, length 186, checksum 0xfd66
fragment id 0x0000
UDP: 62150 -> 4789
length 166, checksum 0x0000
00:01:37:265941: ethernet-input
frame: flags 0x3, hw-if-index 1, sw-if-index 1
IP4: 08:00:27:68:d1:1e -> 08:00:27:5a:18:a5
00:01:37:265945: ip4-input-no-checksum
UDP: 192.168.31.47 -> 192.168.31.76
tos 0x00, ttl 253, length 186, checksum 0xfd66
fragment id 0x0000
UDP: 62150 -> 4789
length 166, checksum 0x0000
00:01:37:265947: ip4-lookup
fib 0 dpo-idx 5 flow hash: 0x00000000
UDP: 192.168.31.47 -> 192.168.31.76
tos 0x00, ttl 253, length 186, checksum 0xfd66
fragment id 0x0000
UDP: 62150 -> 4789
length 166, checksum 0x0000
00:01:37:265951: ip4-local
UDP: 192.168.31.47 -> 192.168.31.76
tos 0x00, ttl 253, length 186, checksum 0xfd66
fragment id 0x0000
UDP: 62150 -> 4789
length 166, checksum 0x0000
00:01:37:265954: ip4-udp-lookup
UDP: src-port 62150 dst-port 4789
00:01:37:265959: vxlan4-input
VXLAN decap from vxlan_tunnel0 vni 13 next 1 error 0
00:01:37:265964: l2-input
l2-input: sw_if_index 4 dst 1a:2b:3c:4d:5e:7f src 1a:2b:3c:4d:5e:6f
00:01:37:265967: l2-learn
l2-learn: sw_if_index 4 dst 1a:2b:3c:4d:5e:7f src 1a:2b:3c:4d:5e:6f bd_index 1
00:01:37:265971: l2-fwd
l2-fwd:   sw_if_index 4 dst 1a:2b:3c:4d:5e:7f src 1a:2b:3c:4d:5e:6f bd_index 1 result [0x700000003, 3] static age-not bvi
00:01:37:265974: ip4-input
IPSEC_ESP: 10.10.10.2 -> 20.20.20.2
tos 0x00, ttl 254, length 136, checksum 0x8022
fragment id 0x0000
00:01:37:265976: ipsec4-input
esp: sa_id 20 spd 1 spi 1000 seq 19
00:01:37:265980: esp4-decrypt
esp: crypto aes-cbc-128 integrity sha1-96
00:01:37:266019: ip4-input
ICMP: 10.10.10.2 -> 20.20.20.2
tos 0x00, ttl 254, length 84, checksum 0x8087
fragment id 0x0000
ICMP echo_reply checksum 0x6506
00:01:37:266021: ip4-lookup
fib 0 dpo-idx 23 flow hash: 0x00000000
ICMP: 10.10.10.2 -> 20.20.20.2
tos 0x00, ttl 254, length 84, checksum 0x8087
fragment id 0x0000
ICMP echo_reply checksum 0x6506
00:01:37:266024: ip4-rewrite
tx_sw_if_index 2 dpo-idx 23 : ipv4 via 20.20.20.2 GigabitEthernet0/8/0: mtu:9000 0800275467a20800278833fd0800 flow hash: 0x00000000
00000000: 0800275467a20800278833fd08004500005400000000fd0181870a0a0a021414
00000020: 14020000650605030016ba1c935d0000000088930100000000001011
00:01:37:266025: GigabitEthernet0/8/0-output
GigabitEthernet0/8/0
IP4: 08:00:27:88:33:fd -> 08:00:27:54:67:a2
ICMP: 10.10.10.2 -> 20.20.20.2
tos 0x00, ttl 253, length 84, checksum 0x8187
fragment id 0x0000
ICMP echo_reply checksum 0x6506
00:01:37:266029: GigabitEthernet0/8/0-tx
GigabitEthernet0/8/0 tx queue 0
buffer 0x1accd: current data 0, length 98, free-list 0, clone-count 0, totlen-nifb 0, trace 0x2
ip4
PKT MBUF: port 65535, nb_segs 1, pkt_len 98
buf_len 2176, data_len 98, ol_flags 0x0, data_off 128, phys_addr 0x916b33c0
packet_type 0x0 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
rss 0x0 fdir.hi 0x0 fdir.lo 0x0
IP4: 08:00:27:88:33:fd -> 08:00:27:54:67:a2
ICMP: 10.10.10.2 -> 20.20.20.2
tos 0x00, ttl 253, length 84, checksum 0x8187
fragment id 0x0000
ICMP echo_reply checksum 0x6506


Learning VPP: Trace with Wireshark



Overview

Each node in VPP can trace the packets it processes. This is a great debugging tool for investigating traffic issues, but analyzing the trace log in text form is a tiresome exercise.

Not anymore: the latest Wireshark includes a dissector for VPP pcap dispatch traces. As a result, you have an excellent tool for analyzing all the changes that happen to a packet buffer while it travels through the VPP node graph.

Setup

VPP

Initiate and stop trace recording using the following commands.

pcap dispatch trace on max 1000 file vppcapture buffer-trace dpdk-input 1000
pcap dispatch trace off

Wireshark

Download and build the latest Wireshark on Ubuntu 18.04.

apt-get install -y libgcrypt11-dev flex bison qtbase5-dev qttools5-dev-tools qttools5-dev qtmultimedia5-dev libqt5svg5-dev libpcap-dev qt5-default libc-ares-dev libpcre2-dev
git clone https://gitlab.com/wireshark/wireshark.git
cd wireshark
mkdir build
cd build
cmake -G Ninja ../
ninja -j 8
sudo ninja install

Open the file /tmp/vppcapture with Wireshark and make the following changes in “Preferences”.

[Figure: Wireshark preferences]

Results

As a result, you get the following recording of the journey the packet buffer took through the VPP node graph, including all the metadata that travels with it from node to node.

[Figure: Wireshark capture]


Learning VPP: VXLAN tunnel configuration and setup



Overview

A VXLAN tunnel is an L2 overlay on top of an L3 network underlay. It uses UDP to traverse the network. The VXLAN frame looks as follows.

[Figure: VXLAN frame]
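For concreteness, here is the 8-byte VXLAN header from RFC 7348 as a C sketch; the struct and accessor names are ours, not VPP's. The VNI is a 24-bit field, which is why it is handled byte by byte.

```c
#include <assert.h>
#include <stdint.h>

/* VXLAN header per RFC 7348: 8 bytes carried right after the outer UDP
 * header (destination port 4789). Names are illustrative, not VPP's. */
typedef struct __attribute__((packed)) {
    uint8_t flags;         /* 0x08 means "VNI is valid" (the I flag) */
    uint8_t reserved1[3];
    uint8_t vni[3];        /* 24-bit VXLAN Network Identifier        */
    uint8_t reserved2;
} vxlan_header_t;

static void vxlan_set_vni (vxlan_header_t *h, uint32_t vni)
{
    h->vni[0] = (vni >> 16) & 0xff;
    h->vni[1] = (vni >> 8) & 0xff;
    h->vni[2] = vni & 0xff;
}

static uint32_t vxlan_get_vni (const vxlan_header_t *h)
{
    return ((uint32_t) h->vni[0] << 16) | ((uint32_t) h->vni[1] << 8) | h->vni[2];
}
```

The `vni 13` seen in the traces below is this 24-bit field.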

Setup

Two Ubuntu VMs running VPP 19.01 and two Ubuntu VMs acting as hosts.

[Figure: VXLAN setup]

VPP configuration

Router1

loopback create mac 1a:2b:3c:4d:5e:8f
create bridge-domain 13 learn 1 forward 1 uu-flood 1 flood 1 arp-term 0
create vxlan tunnel src 192.168.31.47 dst 192.168.31.76 vni 13
set interface l2 bridge vxlan_tunnel0 13 1
set interface l2 bridge loop0 13 bvi
set interface ip table loop0 0

Router2

loopback create mac 1a:2b:3c:4d:5e:7f
create bridge-domain 13 learn 1 forward 1 uu-flood 1 flood 1 arp-term 0
create vxlan tunnel src 192.168.31.76 dst 192.168.31.47 vni 13
set interface l2 bridge vxlan_tunnel0 13 1
set interface l2 bridge loop0 13 bvi
set interface ip table loop0 0

Results

Packet trace

00:03:43:444347: dpdk-input
GigabitEthernet0/3/0 rx queue 0
buffer 0x9d811: current data 0, length 148, buffer-pool 0, ref-count 1, totlen-nifb 0, trace handle 0x8
ext-hdr-valid
l4-cksum-computed l4-cksum-correct
PKT MBUF: port 0, nb_segs 1, pkt_len 148
buf_len 2176, data_len 148, ol_flags 0x0, data_off 128, phys_addr 0x8c1604c0
packet_type 0x0 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
rss 0x0 fdir.hi 0x0 fdir.lo 0x0
IP4: 08:00:27:68:d1:1e -> 08:00:27:5a:18:a5
UDP: 192.168.31.47 -> 192.168.31.76
tos 0x00, ttl 253, length 134, checksum 0xfd9a
fragment id 0x0000
UDP: 28591 -> 4789
length 114, checksum 0x0000
00:03:43:444389: ethernet-input
frame: flags 0x3, hw-if-index 1, sw-if-index 1
IP4: 08:00:27:68:d1:1e -> 08:00:27:5a:18:a5
00:03:43:444399: ip4-input-no-checksum
UDP: 192.168.31.47 -> 192.168.31.76
tos 0x00, ttl 253, length 134, checksum 0xfd9a
fragment id 0x0000
UDP: 28591 -> 4789
length 114, checksum 0x0000
00:03:43:444406: ip4-lookup
fib 0 dpo-idx 5 flow hash: 0x00000000
UDP: 192.168.31.47 -> 192.168.31.76
tos 0x00, ttl 253, length 134, checksum 0xfd9a
fragment id 0x0000
UDP: 28591 -> 4789
length 114, checksum 0x0000
00:03:43:444414: ip4-local
UDP: 192.168.31.47 -> 192.168.31.76
tos 0x00, ttl 253, length 134, checksum 0xfd9a
fragment id 0x0000
UDP: 28591 -> 4789
length 114, checksum 0x0000
00:03:43:444418: ip4-udp-lookup
UDP: src-port 28591 dst-port 4789
00:03:43:444423: vxlan4-input
VXLAN decap from vxlan_tunnel0 vni 13 next 1 error 0
00:03:43:444429: l2-input
l2-input: sw_if_index 4 dst 1a:2b:3c:4d:5e:7f src 1a:2b:3c:4d:5e:8f
00:03:43:444436: l2-learn
l2-learn: sw_if_index 4 dst 1a:2b:3c:4d:5e:7f src 1a:2b:3c:4d:5e:8f bd_index 1
00:03:43:444441: l2-fwd
l2-fwd: sw_if_index 4 dst 1a:2b:3c:4d:5e:7f src 1a:2b:3c:4d:5e:8f bd_index 1 result [0x700000003, 3] static age-not bvi
00:03:43:444446: ip4-input
ICMP: 10.100.0.6 -> 20.20.20.1
tos 0x00, ttl 64, length 84, checksum 0xabb7
fragment id 0x5c73, flags DONT_FRAGMENT
ICMP echo_request checksum 0xd8d2
00:03:43:444449: ip4-lookup
fib 0 dpo-idx 4 flow hash: 0x00000000
ICMP: 10.100.0.6 -> 20.20.20.1
tos 0x00, ttl 64, length 84, checksum 0xabb7
fragment id 0x5c73, flags DONT_FRAGMENT
ICMP echo_request checksum 0xd8d2
00:03:43:444451: ip4-local
ICMP: 10.100.0.6 -> 20.20.20.1
tos 0x00, ttl 64, length 84, checksum 0xabb7
fragment id 0x5c73, flags DONT_FRAGMENT
ICMP echo_request checksum 0xd8d2

Node counters

Count   Node                          Reason
31      null-node                     blackholed packets
24      dpdk-input                    no error
9       ip4-udp-lookup                no error
4       ip4-input                     ip4 source lookup miss
267     ip4-input                     Multicast RPF check failed
1       ip4-arp                       ARP requests sent
281     vxlan4-input                  good packets decapsulated
357     vxlan4-encap                  good packets encapsulated
357     l2-output                     L2 output packets
281     l2-learn                      L2 learn packets
1       l2-learn                      L2 learn misses
638     l2-input                      L2 input packets
81      l2-flood                      L2 flood packets
33      GigabitEthernet0/3/0-output   interface is down
35      GigabitEthernet0/8/0-output   interface is down


Learning VPP: Setting Up Internet Access with Routing, NAT, and ARP Proxy



Overview

The goal is to provide internet access for a network namespace through VPP.

To achieve this, we set up routing and NAT, and additionally use VPP's ARP proxy feature.

Build and run

First, build and run VPP as described in a previous post.

make run STARTUP_CONF=startup.conf

Setup

To set up the network namespace, routing, NAT, and ARP proxy, run the following script.


#!/bin/bash
PATH=$PATH:./build-root/build-vpp-native/vpp/bin/
if [ $USER != "root" ] ; then
echo "Restarting script with sudo..."
sudo $0 ${*}
exit
fi
# delete previous incarnations if they exist
ip link del dev vpp1
ip link del dev vpp2
ip netns del vpp1
#create namespaces
ip netns add vpp1
# create and configure 1st veth pair
ip link add name veth_vpp1 type veth peer name vpp1
ip link set dev vpp1 up
ip link set dev veth_vpp1 up netns vpp1
ip netns exec vpp1 \
bash -c "
ip link set dev lo up
ip addr add 172.16.1.2/24 dev veth_vpp1
ip route add 172.16.2.0/24 via 172.16.1.1
ip route add default via 172.16.1.1
"
# create and configure 2nd veth pair
ip link add name veth_vpp2 type veth peer name vpp2
ip link set dev vpp2 up
ip addr add 172.16.2.2/24 dev veth_vpp2
ip link set dev veth_vpp2 up
ip route add 172.16.1.0/24 via 172.16.2.2
# configure VPP
vppctl create host-interface name vpp1
vppctl create host-interface name vpp2
vppctl set int state host-vpp1 up
vppctl set int state host-vpp2 up
vppctl set int ip address host-vpp1 172.16.1.1/24
vppctl set int ip address host-vpp2 172.16.2.1/24
vppctl ip route add 172.16.1.0/24 via 172.16.1.1 host-vpp1
vppctl ip route add 172.16.2.0/24 via 172.16.2.1 host-vpp2
vppctl ip route add 0.0.0.0/0 via 172.16.2.2 host-vpp2
vppctl set interface proxy-arp host-vpp2 enable
vppctl set ip arp proxy 172.16.1.1 - 172.16.1.2
# Enable IP-forwarding.
echo 1 > /proc/sys/net/ipv4/ip_forward
# Flush forward rules.
iptables -P FORWARD DROP
iptables -F FORWARD
# Flush nat rules.
iptables -t nat -F
# Enable NAT masquerading
iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
iptables -A FORWARD -i wlan0 -o veth_vpp2 -j ACCEPT
iptables -A FORWARD -o wlan0 -i veth_vpp2 -j ACCEPT

Results

Now we can access the internet from the vpp1 network namespace.

sudo ip netns exec vpp1 ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=115 time=73.5 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=115 time=139 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=115 time=35.3 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=115 time=36.6 ms

Also, VPP itself has access to the internet.

DBGvpp# ping 8.8.8.8
64 bytes from 8.8.8.8: icmp_seq=1 ttl=118 time=53.7913 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=118 time=35.3645 ms
Aborted due to a keypress.

Statistics: 2 sent, 2 received, 0% packet loss

Now we can access the Google web site.

sudo ip netns exec vpp1 curl www.google.com

And trace HTTP packets inside VPP.

DBGvpp# trace add af-packet-input 1000
DBGvpp# show trace
...
Packet 37

00:18:08:629788: af-packet-input
af_packet: hw_if_index 1 next-index 4
tpacket2_hdr:
status 0x9 len 54 snaplen 54 mac 66 net 80
sec 0x5b7e7b58 nsec 0x157af968 vlan 0 vlan_tpid 0
00:18:08:629839: ethernet-input
IP4: 6e:25:1b:a7:11:05 -> 02:fe:13:61:29:4b
00:18:08:629865: ip4-input
TCP: 172.16.1.2 -> 173.194.221.103
tos 0x00, ttl 64, length 40, checksum 0xd927
fragment id 0x296c, flags DONT_FRAGMENT
TCP: 51480 -> 80
seq. 0xeec90cb7 ack 0x5287d28f
flags 0x10 ACK, tcp header: 20 bytes
window 457, checksum 0x0000
00:18:08:629889: ip4-lookup
fib 0 dpo-idx 4 flow hash: 0x00000000
TCP: 172.16.1.2 -> 173.194.221.103
tos 0x00, ttl 64, length 40, checksum 0xd927
fragment id 0x296c, flags DONT_FRAGMENT
TCP: 51480 -> 80
seq. 0xeec90cb7 ack 0x5287d28f
flags 0x10 ACK, tcp header: 20 bytes
window 457, checksum 0x0000
00:18:08:629913: ip4-rewrite
tx_sw_if_index 2 dpo-idx 4 : ipv4 via 172.16.2.2 host-vpp2: mtu:9000 a60ae99593be02fe094ec8700800 flow hash: 0x00000000
00000000: a60ae99593be02fe094ec870080045000028296c40003f06da27ac100102adc2
00000020: dd67c9180050eec90cb75287d28f501001c900000000000000000000
00:18:08:629940: host-vpp2-output
host-vpp2
IP4: 02:fe:09:4e:c8:70 -> a6:0a:e9:95:93:be
TCP: 172.16.1.2 -> 173.194.221.103
tos 0x00, ttl 63, length 40, checksum 0xda27
fragment id 0x296c, flags DONT_FRAGMENT
TCP: 51480 -> 80
seq. 0xeec90cb7 ack 0x5287d28f
flags 0x10 ACK, tcp header: 20 bytes
window 457, checksum 0x0000


Learning VPP: Implementing Flow Table for TCP Connection Tracking



Overview

The task is to implement stateful machinery to track TCP connections. To achieve this goal, the following functional pieces are required.

  • A bidirectional hash table to match packets against a flow using 5-tuple;
  • A pool with flow entries data structures;
  • A timer wheel to expire stale flows;
  • TCP state machine tracking to clean up closed TCP connections.

Example

One possible implementation can be found here.

1. A bidirectional hash table.

The requirement is to match packets from one flow to the same flow entry regardless of their direction. By ordering the source and destination addresses before building the hash key, we always calculate the same hash for both directions, as in the snippet below.

if (ip4_address_compare(&ip4->src_address, &ip4->dst_address) < 0)
  {
    ip4_sig->src = ip4->src_address;
    ip4_sig->dst = ip4->dst_address;
    *is_reverse = 1;
  }
else
  {
    ip4_sig->src = ip4->dst_address;
    ip4_sig->dst = ip4->src_address;
  }
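As a self-contained illustration (plain C, hypothetical names, no VPP dependencies), normalizing the address order makes both directions of a flow produce byte-identical keys:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative sketch, not the VPP code: a 5-tuple key normalized so that
 * both directions of a flow produce identical bytes, hence the same hash. */
typedef struct {
    uint32_t lo_addr;   /* numerically smaller IPv4 address */
    uint32_t hi_addr;   /* numerically larger IPv4 address  */
    uint16_t lo_port;   /* port paired with lo_addr         */
    uint16_t hi_port;   /* port paired with hi_addr         */
    uint8_t  proto;
} flow_key_t;

/* Sets *is_reverse to 1 when src/dst were swapped during normalization. */
static void
flow_key_normalize (flow_key_t *key, uint32_t src, uint32_t dst,
                    uint16_t sport, uint16_t dport, uint8_t proto,
                    int *is_reverse)
{
    memset (key, 0, sizeof (*key));  /* zero padding so memcmp/hash is stable */
    key->proto = proto;
    if (src < dst) {
        key->lo_addr = src;   key->hi_addr = dst;
        key->lo_port = sport; key->hi_port = dport;
        *is_reverse = 0;
    } else {
        key->lo_addr = dst;   key->hi_addr = src;
        key->lo_port = dport; key->hi_port = sport;
        *is_reverse = 1;
    }
}
```

The is_reverse flag preserves the direction information that normalization discards, so the flow entry can still keep per-direction state.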

2. A flow table cache.

To speed up flow entry allocation, a flow table cache is used. The idea is to batch allocations: 256 flow entries (FLOW_CACHE_SZ) are taken from the pool at once and their pool indices are stored in a per-CPU vector. Whenever a flow entry is needed, it is served from this preallocated cache.

always_inline void
flow_entry_cache_fill(flowtable_main_t * fm, flowtable_main_per_cpu_t * fmt)
{
    int i;
    flow_entry_t * f;

    if (pthread_spin_lock(&fm->flows_lock) == 0)
    {
        if (PREDICT_FALSE(fm->flows_cpt > fm->flows_max)) {
            pthread_spin_unlock(&fm->flows_lock);
            return;
        }

        for (i = 0; i < FLOW_CACHE_SZ; i++) {
            pool_get_aligned(fm->flows, f, CLIB_CACHE_LINE_BYTES);
            vec_add1(fmt->flow_cache, f - fm->flows);
        }
        fm->flows_cpt += FLOW_CACHE_SZ;

        pthread_spin_unlock(&fm->flows_lock);
    }
}

3. The timer wheel.

To expire stale flow entries, timers organized in a so-called timer wheel are used.

static u64
flowtable_timer_expire(flowtable_main_t * fm, flowtable_main_per_cpu_t * fmt,
    u32 now)
{
    u64 expire_cpt;
    flow_entry_t * f;
    u32 * time_slot_curr_index;
    dlist_elt_t * time_slot_curr;
    u32 index;

    time_slot_curr_index = vec_elt_at_index(fmt->timer_wheel, fmt->time_index);

    if (PREDICT_FALSE(dlist_is_empty(fmt->timers, *time_slot_curr_index)))
        return 0;

    expire_cpt = 0;
    time_slot_curr = pool_elt_at_index(fmt->timers, *time_slot_curr_index);

    index = time_slot_curr->next;
    while (index != *time_slot_curr_index && expire_cpt < TIMER_MAX_EXPIRE) {
        dlist_elt_t * e = pool_elt_at_index(fmt->timers, index);
        f = pool_elt_at_index(fm->flows, e->value);

        index = e->next;
        expire_single_flow(fm, fmt, f, e);
        expire_cpt++;
    }

    return expire_cpt;
}
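The wheel itself reduces to an array of buckets indexed by expiry time modulo the wheel size. A minimal single-threaded sketch (singly linked buckets instead of VPP's doubly linked lists; all names are illustrative):

```c
#include <assert.h>
#include <stddef.h>

#define WHEEL_SLOTS 128 /* one bucket per tick of maximum lifetime */

typedef struct wtimer
{
  struct wtimer *next; /* bucket membership; VPP uses dlists */
  int fired;
} wtimer_t;

typedef struct
{
  wtimer_t *slot[WHEEL_SLOTS];
  unsigned now; /* current tick, i.e. the wheel "hand" */
} wheel_t;

/* Schedule a timer `lifetime` ticks ahead: O(1), just a list insert
 * into the bucket the hand will reach at expiry time. */
static void
wheel_add (wheel_t *w, wtimer_t *t, unsigned lifetime)
{
  unsigned s = (w->now + lifetime) % WHEEL_SLOTS;
  t->next = w->slot[s];
  w->slot[s] = t;
}

/* Advance the hand one tick and fire everything in that bucket. */
static unsigned
wheel_tick (wheel_t *w)
{
  unsigned fired = 0;
  w->now = (w->now + 1) % WHEEL_SLOTS;
  for (wtimer_t *t = w->slot[w->now]; t; t = t->next)
    {
      t->fired = 1;
      fired++;
    }
  w->slot[w->now] = NULL;
  return fired;
}
```

Expiry cost is proportional to the number of timers that actually fire on this tick, not to the total number of pending timers, which is why the technique scales to millions of flows.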

4. Flow entries recycling.

When no free flow entries remain in the pool, a mechanism recycles the oldest entries.

static void
recycle_flow(flowtable_main_t * fm, flowtable_main_per_cpu_t * fmt, u32 now)
{
    u32 next;

    next = (now + 1) % TIMER_MAX_LIFETIME;
    while (PREDICT_FALSE(next != now))
    {
        flow_entry_t * f;
        u32 * slot_index = vec_elt_at_index(fmt->timer_wheel, next);

        if (PREDICT_FALSE(dlist_is_empty(fmt->timers, *slot_index))) {
            next = (next + 1) % TIMER_MAX_LIFETIME;
            continue;
        }
        dlist_elt_t * head = pool_elt_at_index(fmt->timers, *slot_index);
        dlist_elt_t * e = pool_elt_at_index(fmt->timers, head->next);

        f = pool_elt_at_index(fm->flows, e->value);
        expire_single_flow(fm, fmt, f, e);
        return;
    }

    /*
     * unreachable:
     * this should be called if there is no free flows, so we're bound to have
     * at least *one* flow within the timer wheel (cpu cache is filled at init).
     */
    clib_error("recycle_flow did not find any flow to recycle !");
}

5. Bucket list.

The bidirectional hash used to look up a 5-tuple can produce the same signature for different tuples, i.e. a collision. To handle collisions, a list of entries is attached to each hash bucket.

clib_dlist_addhead(fmt->ht_lines, ht_line_head_index, f->ht_index);

6. TCP state machine.

TCP connection states are tracked in order to control flow lifetime.

static const tcp_state_t tcp_trans[TCP_STATE_MAX][TCP_EV_MAX] =
{
    [TCP_STATE_START] = {
        [TCP_EV_SYN]    = TCP_STATE_SYN,
        [TCP_EV_SYNACK] = TCP_STATE_SYNACK,
        [TCP_EV_FIN]    = TCP_STATE_FIN,
        [TCP_EV_FINACK] = TCP_STATE_FINACK,
        [TCP_EV_RST]    = TCP_STATE_RST,
        [TCP_EV_NONE]   = TCP_STATE_ESTABLISHED,
    },
    [TCP_STATE_SYN] = {
        [TCP_EV_SYNACK] = TCP_STATE_SYNACK,
        [TCP_EV_PSHACK] = TCP_STATE_ESTABLISHED,
        [TCP_EV_FIN]    = TCP_STATE_FIN,
        [TCP_EV_FINACK] = TCP_STATE_FINACK,
        [TCP_EV_RST]    = TCP_STATE_RST,
    },
    [TCP_STATE_SYNACK] = {
        [TCP_EV_PSHACK] = TCP_STATE_ESTABLISHED,
        [TCP_EV_FIN]    = TCP_STATE_FIN,
        [TCP_EV_FINACK] = TCP_STATE_FINACK,
        [TCP_EV_RST]    = TCP_STATE_RST,
    },
    [TCP_STATE_ESTABLISHED] = {
        [TCP_EV_FIN]    = TCP_STATE_FIN,
        [TCP_EV_FINACK] = TCP_STATE_FINACK,
        [TCP_EV_RST]    = TCP_STATE_RST,
    },
    [TCP_STATE_FIN] = {
        [TCP_EV_FINACK] = TCP_STATE_FINACK,
        [TCP_EV_RST]    = TCP_STATE_RST,
    },
    [TCP_STATE_FINACK] = {
        [TCP_EV_RST]    = TCP_STATE_RST,
    },
};
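A self-contained version of the same table-driven idea can be exercised with a small step function. The enum names mirror the listing above, but this is a sketch, not the plugin's code; in particular, the "zero entry means no transition" rule is an assumption made here for illustration:

```c
#include <assert.h>

typedef enum { S_START, S_SYN, S_SYNACK, S_ESTABLISHED,
               S_FIN, S_FINACK, S_RST, S_MAX } tcp_state_t;
typedef enum { E_NONE, E_SYN, E_SYNACK, E_PSHACK,
               E_FIN, E_FINACK, E_RST, E_MAX } tcp_event_t;

/* Entries left at 0 (S_START) mean "no transition defined". */
static const tcp_state_t tcp_trans[S_MAX][E_MAX] = {
  [S_START]       = { [E_SYN] = S_SYN, [E_SYNACK] = S_SYNACK,
                      [E_FIN] = S_FIN, [E_FINACK] = S_FINACK,
                      [E_RST] = S_RST, [E_NONE] = S_ESTABLISHED },
  [S_SYN]         = { [E_SYNACK] = S_SYNACK, [E_PSHACK] = S_ESTABLISHED,
                      [E_FIN] = S_FIN, [E_FINACK] = S_FINACK,
                      [E_RST] = S_RST },
  [S_SYNACK]      = { [E_PSHACK] = S_ESTABLISHED, [E_FIN] = S_FIN,
                      [E_FINACK] = S_FINACK, [E_RST] = S_RST },
  [S_ESTABLISHED] = { [E_FIN] = S_FIN, [E_FINACK] = S_FINACK,
                      [E_RST] = S_RST },
  [S_FIN]         = { [E_FINACK] = S_FINACK, [E_RST] = S_RST },
  [S_FINACK]      = { [E_RST] = S_RST },
};

/* Apply one observed TCP event; undefined entries keep the state. */
static tcp_state_t
tcp_step (tcp_state_t cur, tcp_event_t ev)
{
  tcp_state_t next = tcp_trans[cur][ev];
  return next == S_START ? cur : next;
}
```

Driving a normal connection through the table walks SYN, SYN-ACK, established, FIN, FIN-ACK, at which point the flow can be given a very short lifetime.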

7. TCP lifetime.

TCP flow entry has a different expiration time depending on the connection life stage.

static const int tcp_lifetime[TCP_STATE_MAX] =
{
    [TCP_STATE_START]       = 60,
    [TCP_STATE_SYN]         = 15,
    [TCP_STATE_SYNACK]      = 60,
    [TCP_STATE_ESTABLISHED] = 299,
    [TCP_STATE_FIN]         = 15,
    [TCP_STATE_FINACK]      = 3,
    [TCP_STATE_RST]         = 6
};


Learning VPP: Packets Tracing and Interface Types for Network Namespace Connections

✅ Updated January 2026 — This guide has been reviewed and updated for the latest DPDK/VPP versions.


Overview

There are multiple ways to run VPP on your laptop: on the host Linux, in a VM, or in a Docker container.

Besides DPDK interfaces, VPP also supports lower-performance but very handy interface types that can be used to connect to network namespaces. These are veth (a host interface in VPP) and the TAP interface.

Build and run

To test traffic through VPP installed on a host Linux, two network namespaces have to be created to emulate external host machines. Packets will then enter and leave VPP via either TAP or veth interfaces.

Now, build and run VPP as described in a previous post.

make run STARTUP_CONF=startup.conf

Virtual network over TAPs

To set up the namespaces, TAPs and a bridge, run the following script.


#!/bin/bash
./build-root/build-vpp-native/vpp/bin/vppctl tap connect vpp1
./build-root/build-vpp-native/vpp/bin/vppctl tap connect vpp2
./build-root/build-vpp-native/vpp/bin/vppctl set interface state tapcli-0 up
./build-root/build-vpp-native/vpp/bin/vppctl set interface state tapcli-1 up
ip netns delete vpp1
ip netns delete vpp2
ip netns add vpp1
ip netns add vpp2
ip link set dev vpp1 netns vpp1
ip link set dev vpp2 netns vpp2
ip netns exec vpp1 ip link set vpp1 up
ip netns exec vpp2 ip link set vpp2 up
ip netns exec vpp1 ip addr add 192.168.0.1/24 dev vpp1
ip netns exec vpp2 ip addr add 192.168.0.2/24 dev vpp2
./build-root/build-vpp-native/vpp/bin/vppctl set interface l2 bridge tapcli-0 23
./build-root/build-vpp-native/vpp/bin/vppctl set interface l2 bridge tapcli-1 23

Tracing packets

The commands below can be used to test the VPP-based bridge.

ip netns exec vpp1 ping -c1 192.168.0.2
ip netns exec vpp2 ping -c1 192.168.0.1

To see packets inside VPP, the trace feature has to be enabled beforehand.

DBGvpp# trace add tapcli-rx 8

Then, to see how a packet traversed the VPP graph, use the following command.

DBGvpp# show trace

------------------- Start of thread 0 vpp_main -------------------
Packet 1

00:50:54:290610: tapcli-rx
tapcli-0
00:50:54:377068: ethernet-input
IP4: 12:77:2b:e0:b9:81 -> c2:12:c9:0d:80:23
00:50:54:406116: l2-input
l2-input: sw_if_index 1 dst c2:12:c9:0d:80:23 src 12:77:2b:e0:b9:81
00:50:54:414204: l2-learn
l2-learn: sw_if_index 1 dst c2:12:c9:0d:80:23 src 12:77:2b:e0:b9:81 bd_index 1
00:50:54:414940: l2-fwd
l2-fwd: sw_if_index 1 dst c2:12:c9:0d:80:23 src 12:77:2b:e0:b9:81 bd_index 1
00:50:54:415656: l2-output
l2-output: sw_if_index 2 dst c2:12:c9:0d:80:23 src 12:77:2b:e0:b9:81 data 08 00 45 00 00 54 2a 1a 40 00 40 01
00:50:54:415697: tapcli-1-output
tapcli-1
IP4: 12:77:2b:e0:b9:81 -> c2:12:c9:0d:80:23
ICMP: 192.168.0.1 -> 192.168.0.2
tos 0x00, ttl 64, length 84, checksum 0x8f3b
fragment id 0x2a1a, flags DONT_FRAGMENT
ICMP echo_request checksum 0xde15

Virtual network over veth pair

To set up the namespaces and veth pairs, run the following script.


#!/bin/bash
PATH=$PATH:./build-root/build-vpp-native/vpp/bin/
if [ $USER != "root" ] ; then
    echo "Restarting script with sudo..."
    sudo $0 ${*}
    exit
fi
# delete previous incarnations if they exist
ip link del dev vpp1
ip link del dev vpp2
ip netns del vpp1
ip netns del vpp2
# create namespaces
ip netns add vpp1
ip netns add vpp2
# create and configure 1st veth pair
ip link add name veth_vpp1 type veth peer name vpp1
ip link set dev vpp1 up
ip link set dev veth_vpp1 up netns vpp1
ip netns exec vpp1 \
    bash -c "
        ip link set dev lo up
        ip addr add 172.16.1.2/24 dev veth_vpp1
        ip route add 172.16.2.0/24 via 172.16.1.1
    "
# create and configure 2nd veth pair
ip link add name veth_vpp2 type veth peer name vpp2
ip link set dev vpp2 up
ip link set dev veth_vpp2 up netns vpp2
ip netns exec vpp2 \
    bash -c "
        ip link set dev lo up
        ip addr add 172.16.2.2/24 dev veth_vpp2
        ip route add 172.16.1.0/24 via 172.16.2.1
    "
vppctl create host-interface name vpp1
vppctl create host-interface name vpp2
vppctl set int state host-vpp1 up
vppctl set int state host-vpp2 up
vppctl set int ip address host-vpp1 172.16.1.1/24
vppctl set int ip address host-vpp2 172.16.2.1/24
vppctl ip route add 172.16.1.0/24 via 172.16.1.1 host-vpp1
vppctl ip route add 172.16.2.0/24 via 172.16.2.1 host-vpp2

Tracing packets

The command below can be used to test the VPP-based routed setup.

ip netns exec vpp1 ping 172.16.2.1 -c 1

To see packets inside VPP, the trace feature has to be enabled beforehand.

DBGvpp# trace add af-packet-input 8

Then, to see how a packet traversed the VPP graph, use the following command.

DBGvpp# show trace
------------------- Start of thread 0 vpp_main -------------------
Packet 1

00:02:26:500404: af-packet-input
  af_packet: hw_if_index 1 next-index 4
    tpacket2_hdr:
      status 0x20000001 len 98 snaplen 98 mac 66 net 80
      sec 0x5b7a7435 nsec 0x2ed6d440 vlan 0 vlan_tpid 0
00:02:26:500486: ethernet-input
  IP4: b6:7b:f1:64:fe:8c -> 02:fe:9e:f6:c1:8f
00:02:26:500501: ip4-input
  ICMP: 172.16.1.2 -> 172.16.2.1
    tos 0x00, ttl 64, length 84, checksum 0xeaf8
    fragment id 0xf48c, flags DONT_FRAGMENT
  ICMP echo_request checksum 0xdbe0
00:02:26:500509: ip4-lookup
  fib 0 dpo-idx 8 flow hash: 0x00000000
  ICMP: 172.16.1.2 -> 172.16.2.1
    tos 0x00, ttl 64, length 84, checksum 0xeaf8
    fragment id 0xf48c, flags DONT_FRAGMENT
  ICMP echo_request checksum 0xdbe0
00:02:26:500523: ip4-local
    ICMP: 172.16.1.2 -> 172.16.2.1
      tos 0x00, ttl 64, length 84, checksum 0xeaf8
      fragment id 0xf48c, flags DONT_FRAGMENT
    ICMP echo_request checksum 0xdbe0
00:02:26:500529: ip4-icmp-input
  ICMP: 172.16.1.2 -> 172.16.2.1
    tos 0x00, ttl 64, length 84, checksum 0xeaf8
    fragment id 0xf48c, flags DONT_FRAGMENT
  ICMP echo_request checksum 0xdbe0
00:02:26:500533: ip4-icmp-echo-request
  ICMP: 172.16.1.2 -> 172.16.2.1
    tos 0x00, ttl 64, length 84, checksum 0xeaf8
    fragment id 0xf48c, flags DONT_FRAGMENT
  ICMP echo_request checksum 0xdbe0
00:02:26:500540: ip4-load-balance
  fib 0 dpo-idx 17 flow hash: 0x00000000
  ICMP: 172.16.2.1 -> 172.16.1.2
    tos 0x00, ttl 64, length 84, checksum 0x8e73
    fragment id 0x5112, flags DONT_FRAGMENT
  ICMP echo_reply checksum 0xe3e0
00:02:26:500543: ip4-rewrite
  tx_sw_if_index 1 dpo-idx 2 : ipv4 via 172.16.1.2 host-vpp1: mtu:9000 b67bf164fe8c02fe9ef6c18f0800 flow hash: 0x00000000
  00000000: b67bf164fe8c02fe9ef6c18f0800450000545112400040018e73ac100201ac10
  00000020: 01020000e3e0167e000135747a5b000000008bfd0b00000000001011
00:02:26:500550: host-vpp1-output
  host-vpp1
  IP4: 02:fe:9e:f6:c1:8f -> b6:7b:f1:64:fe:8c
  ICMP: 172.16.2.1 -> 172.16.1.2
    tos 0x00, ttl 64, length 84, checksum 0x8e73
    fragment id 0x5112, flags DONT_FRAGMENT
  ICMP echo_reply checksum 0xe3e0


Learning VPP: Understanding Code Style Guidelines for Better Development


Overview

It is important that an open-source or proprietary product follows strict code style guidelines. This helps the people involved in the project to understand, extend and maintain the codebase far more comfortably, and it keeps the code listings neat and clean.

VPP is a good example of this: it has a strict, well-defined code style based on the GNU Coding Standards.

Hints

For example, indentation looks as follows.

if (1)
  {
  }

A routine definition looks as follows.

static int
vnet_test_add_del(u8 * name, u32 index, u8 add);

A function call looks as follows.

vnet_feature_enable_disable ("device-input", "test",
			     sw_if_index, enable_disable, 0, 0);

To verify that your code follows the guidelines, run the following command.

make checkstyle

VPP developers provide the rules as a clang-format file that can be used by different tools and IDEs to enforce code formatting.


Learning VPP: Understanding Hash and Pool Implementation in Vector Packet Processing


Overview

VPP implements a pool for fixed-size objects by combining two data structures: a vector and a bitmap.

VPP has multiple hash implementations. The most basic is defined in the hash.h file. It is used mostly in the control plane; a string can serve as the key and a pool index as the value.

The following example illustrates how to use the aforementioned data structures together to build a hash table.

Example

1. Definition

typedef struct {
  u8 *name;
} test_t;

test_t *pool;
uword *hash;

2. Initialization.

hash = hash_create_vec (32, sizeof (u8), sizeof (uword));

3. Add element

test_t *test = NULL;
pool_get (pool, test);
memset(test, 0, sizeof(*test));
hash_set_mem (hash, name, test - pool);

4. Get element

uword *p = NULL;
test_t *test = NULL;
p = hash_get_mem (hash, name);
if (p) {
  test = pool_elt_at_index (pool, p[0]);
}

5. Delete element

uword *p = NULL;
test_t *test = NULL;
p = hash_get_mem (hash, name);
if (p) {
  hash_unset_mem (hash, name);
  test = pool_elt_at_index (pool, p[0]);
  pool_put (pool, test);
}

6. Iteration

u8 *name = NULL;
u32 index = 0;
/* *INDENT-OFF* */
hash_foreach (name, index, hash,
({
  test_t *test = NULL;
  test = pool_elt_at_index (pool, index);
}));
/* *INDENT-ON* */
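To make the "vector plus bitmap" idea concrete, here is a toy pool in plain C: a fixed-capacity backing array (where VPP's is growable) plus a bitmap whose set bits mark free slots. All names here are illustrative, not VPP's API:

```c
#include <assert.h>
#include <stdint.h>

#define POOL_CAP 64

typedef struct
{
  int      elts[POOL_CAP]; /* the "vector" of objects      */
  uint64_t free_map;       /* the "bitmap": bit set = free */
} toy_pool_t;

static void
toy_pool_init (toy_pool_t *p)
{
  p->free_map = ~0ULL; /* every slot starts free */
}

/* Allocate: take the lowest set bit, clear it, return its index. */
static int
toy_pool_get (toy_pool_t *p)
{
  if (p->free_map == 0)
    return -1; /* pool full */
  int idx = __builtin_ctzll (p->free_map); /* count trailing zeros */
  p->free_map &= p->free_map - 1;          /* clear lowest set bit */
  return idx;
}

/* Free: set the slot's bit again. Indices of live elements stay
 * stable across alloc/free, which is why VPP stores pool indices
 * (not pointers) as hash table values. */
static void
toy_pool_put (toy_pool_t *p, int idx)
{
  p->free_map |= 1ULL << idx;
}
```

The stable-index property is exactly what the hash examples above rely on: `test - pool` is stored in the hash, and `pool_elt_at_index` turns it back into a pointer later.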


Learning VPP: CLI Debugging Guide


Overview

VPP's preferred interface is a binary API, used by northbound control plane applications like Honeycomb.

But for debugging and related purposes, VPP includes a CLI engine that is very convenient from both the user and developer perspectives.

To add a new command, you register its name, help string and handler routine.

Run

1. A registration.

VLIB_CLI_COMMAND (test_create_command, static) =
{
  .path = "create test",
  .short_help = "create test name <string>",
  .function = test_create_command_fn,
};

2. A handler.

static clib_error_t *
test_create_command_fn (vlib_main_t * vm,
                        unformat_input_t * input,
                        vlib_cli_command_t * cmd)
{
  unformat_input_t _line_input, *line_input = &_line_input;
  u8 *name = NULL;
  if (unformat_user (input, unformat_line_input, line_input))
  {
    while (unformat_check_input (line_input) != UNFORMAT_END_OF_INPUT)
    {
      if (unformat (line_input, "name %s", &name));
      else
      {
        unformat_free (line_input);
        return clib_error_return (0, "unknown input `%U'",
        format_unformat_error, input);
      }
    }
    unformat_free (line_input);
  }

  return NULL;
}


Learning VPP: Building and Running VPP in Interactive Mode Without DPDK on Ubuntu 16.04


Overview

To build and run VPP in interactive mode without DPDK on Ubuntu 16.04, use the following steps.

This immediately enables a developer to make changes and verify build sanity.

Though this is not enough to verify packet processing, it is perfectly fine for testing other functionality via the CLI.

Run

1. Pull the code.
git clone https://github.com/FDio/vpp

2. Build.
make install-dep
make bootstrap
make build

3. Create VPP group.
groupadd vpp
usermod -aG vpp root

4. Create startup.conf like the following.


unix {
  nodaemon
  log /tmp/vpp.log
  full-coredump
  gid vpp
  interactive
  cli-listen /run/vpp/cli.sock
}
api-trace {
  on
}
api-segment {
  gid vpp
}
plugins {
  plugin dpdk_plugin.so { disable }
}

5. Run.
make run STARTUP_CONF=startup.conf


Learning VPP: Introduction to Vector Packet Processing and High-Speed Packet Processing Framework


Overview

VPP (Vector Packet Processing) is a software virtual switch and a framework for high-speed packet processing.

It is a highly scalable, production-ready piece of software that, together with DPDK, enables anybody to build packet processing products based on commodity servers.

Run

1. Create a directory on your laptop.
mkdir fdio-tutorial
cd fdio-tutorial

2. Create a Vagrantfile containing the following.


# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure(2) do |config|
  config.vm.box = "puppetlabs/ubuntu-16.04-64-nocm"
  config.vm.box_check_update = false
  vmcpu = (ENV['VPP_VAGRANT_VMCPU'] || 2)
  vmram = (ENV['VPP_VAGRANT_VMRAM'] || 4096)
  config.ssh.forward_agent = true
  config.vm.provider "virtualbox" do |vb|
    vb.customize ["modifyvm", :id, "--ioapic", "on"]
    vb.memory = "#{vmram}"
    vb.cpus = "#{vmcpu}"
    # support for the SSE4.x instructions is required in some versions of VB.
    vb.customize ["setextradata", :id, "VBoxInternal/CPUM/SSE4.1", "1"]
    vb.customize ["setextradata", :id, "VBoxInternal/CPUM/SSE4.2", "1"]
  end
end

3. Bring up your Vagrant VM.
vagrant up
vagrant ssh

4. Install VPP from binary packages.
export UBUNTU="xenial"
sudo rm /etc/apt/sources.list.d/99fd.io.list
echo "deb [trusted=yes] https://nexus.fd.io/content/repositories/fd.io$RELEASE.ubuntu.$UBUNTU.main/ ./" | sudo tee -a /etc/apt/sources.list.d/99fd.io.list
sudo apt-get update
sudo apt-get install vpp vpp-lib

5. Open VPP CLI
sudo vppctl
    _______    _        _   _____  ___
 __/ __/ _ \  (_)__    | | / / _ \/ _ \
 _/ _// // / / / _ \   | |/ / ___/ ___/
 /_/ /____(_)_/\___/   |___/_/  /_/
vpp# show ver
vpp v18.07-release built by root on c469eba2a593 at Mon Jul 30 23:27:03 UTC 2018
