Java Development

Beginning Native Java Development

Throughout my education and my career I have largely avoided native development and concentrated on developing in Java. Recently, however, I have been working with a lot more native code, so now it is time to come back and look at the relationship between these two worlds.

My learning and development style has always been to build from small working examples and iteratively expand. By iterating, when you run into problems the delta between the current state and the last working state is hopefully not too large, making it easier to diagnose the issues.

This blog post describes the steps I have taken to first create a native library and then to look at how this can be invoked using JNI or the newer Foreign Function and Memory API.

The example code can be found in my java-native-experiments GitHub repository. I would recommend using the beginning-native-java-development branch, as the main branch may diverge further after this blog post is published.

The Shared Library

The first step was to develop a shared library that will be called. In general, I believe engineers look at invoking native code either when a need to implement something natively is identified or when a library already exists with the desired functionality.

In this project I am creating a new library called simple-library providing the API in the simple-library.h header.

int add_one(int x);
void say_hello();

The implementation is as simple as the names suggest:

#include <stdio.h> /* for printf in say_hello */

int add_one(int x)
{
    return x + 1;
}

void say_hello()
{
    printf("Hello, from simple-library!\n");
}
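
For context, and assuming a GCC toolchain on Linux, an equivalent shared library can also be built by hand with a single command; this is roughly what the CMake build below produces for us:

gcc -shared -fPIC -o libsimple-library.so simple-library.c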

Writing Makefiles by hand is simple enough, but as I have been using CMake for other projects I used the following CMakeLists.txt to generate the Makefile for me:

cmake_minimum_required(VERSION 3.10.0)

if (EXISTS $ENV{HOME}/local)
    set(CMAKE_INSTALL_PREFIX $ENV{HOME}/local)
    message("Setting CMAKE_INSTALL_PREFIX to $ENV{HOME}/local")
endif()

project(simple-library VERSION 0.1.0 LANGUAGES C)

set(SIMPLE_SOURCE
        simple-library.c
)

set (SIMPLE_HEADERS
        simple-library.h
)

# Build the shared library
add_library(simple-library SHARED
            ${SIMPLE_SOURCE})
# Build the static library
add_library(simple-library-static STATIC
            ${SIMPLE_SOURCE})


# Install the shared library
install(TARGETS simple-library
        LIBRARY DESTINATION lib)
# Install the static library
install(TARGETS simple-library-static
        ARCHIVE DESTINATION lib)
# Install the header file
install(FILES ${SIMPLE_HEADERS}
        DESTINATION include)

At the base of my home area I have a directory called local, and running make install will install the libraries under $HOME/local/lib and the header file for the API under $HOME/local/include. For this example I only need a shared library, but I have also been experimenting with a static library, so both variants are installed.
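
For reference, the build and install sequence, run from the simple-library project directory and assuming the default Makefile generator, looks like:

mkdir build && cd build
cmake ..
make install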

The Native Application

The purpose of this investigation was to look at the pieces needed to invoke the library from Java, but as I said at the beginning I like to incrementally iterate, so my next step was a simple C program to call the library. If it is not possible to call the library from a C application, I would not be ready to try to call it from Java.

The following is the implementation of the simple-c-app:

#include <stdio.h>

#include "simple-library.h"

int main(void) {
    printf("Hello, from simple-c-app!\n");

    int a  = 5;
    printf("add_one(%d) = %d\n", a, add_one(a));

    say_hello();
}

As with the library I also use CMake to generate the Makefile:

cmake_minimum_required(VERSION 3.10.0)
project(simple-c-app VERSION 0.1.0 LANGUAGES C)

add_executable(simple-c-app main.c)

target_include_directories(simple-c-app PRIVATE
    $ENV{HOME}/local/include)

# TODO This seems to work for now but should it reference
# using find_library instead?
target_link_directories(simple-c-app PRIVATE
    $ENV{HOME}/local/lib)

# Link the shared library
target_link_libraries(simple-c-app PRIVATE
    simple-library)

# Link the static library
#target_link_libraries(simple-c-app PRIVATE
#    simple-library-static)

As you can see at the bottom, I have also been experimenting with switching between the shared and the static library.

After building the app it can be invoked to check the interactions with the shared library:

$ build/simple-c-app 
Hello, from simple-c-app!
add_one(5) = 6
Hello, from simple-library!

The Java JNI Application

For the Java JNI application we need to begin our development in Java to define the native methods from the Java perspective and generate a header file for them. We then need to implement a shared library that implements the functions in that header and calls the target shared library, before we can come back and run the Java application. In a real project it may be more likely that a Java library for the native invocations would be developed separately from the main application. For the Java development I am currently using the latest Java 24 Temurin build.

The simple-jni Java project contains a single App class:

public class App {

    /*
     * Native Methods.
     */
    private static native void sayHello();
    private static native int addOne(final int x);

    public static void main(String[] args) {
        System.out.println("Java says Hello World!");
        System.out.println(System.getProperty("java.library.path"));

        System.loadLibrary("jni-library");
        //System.loadLibrary("simple-library");

        final int x = 11;
        System.out.printf("addOne(%d)= %d\n", x, addOne(x));
        sayHello();
        System.out.println("Java says Goodbye World!");
    }
}

This class defines two methods that we wish to invoke using JNI, as well as the main method to invoke them. These methods follow the same pattern as the API in our shared library, but this is not necessary; as you will see, the JNI library acts as an intermediary, so an alternative API could have been used.

We need the Java compiler to generate a header file for the native methods, so the compiler plugin in the pom.xml is configured as:

        <plugin>
          <artifactId>maven-compiler-plugin</artifactId>
          <version>3.13.0</version>
          <configuration>
            <compilerArgs>
              <arg>-h</arg>
              <arg>target/include</arg>
            </compilerArgs>
          </configuration>
        </plugin>

The -h target/include causes the compiler to generate the following header under target/include:

/* DO NOT EDIT THIS FILE - it is machine generated */
#include <jni.h>
/* Header for class dev_lofthouse_App */

#ifndef _Included_dev_lofthouse_App
#define _Included_dev_lofthouse_App
#ifdef __cplusplus
extern "C" {
#endif
/*
 * Class:     dev_lofthouse_App
 * Method:    sayHello
 * Signature: ()V
 */
JNIEXPORT void JNICALL Java_dev_lofthouse_App_sayHello
  (JNIEnv *, jclass);

/*
 * Class:     dev_lofthouse_App
 * Method:    addOne
 * Signature: (I)I
 */
JNIEXPORT jint JNICALL Java_dev_lofthouse_App_addOne
  (JNIEnv *, jclass, jint);

#ifdef __cplusplus
}
#endif
#endif

This brings us to the next project, jni-library, where the functions in this header will be implemented in jni-library.c:

#include <stdio.h>

#include "dev_lofthouse_App.h"
#include "simple-library.h"

JNIEXPORT void JNICALL Java_dev_lofthouse_App_sayHello (JNIEnv * env, jclass jcl)
{
    say_hello();
}

JNIEXPORT jint JNICALL Java_dev_lofthouse_App_addOne (JNIEnv * env, jclass jcl, jint x)
{
    return add_one(x);
}

As in this example we are at most passing integers, I can ignore the additional function parameters for now. As I did with the previous two native projects, I used CMake to generate the Makefile:

cmake_minimum_required(VERSION 3.10.0)

if (EXISTS $ENV{HOME}/local)
    set(CMAKE_INSTALL_PREFIX $ENV{HOME}/local)
    message("Setting CMAKE_INSTALL_PREFIX to $ENV{HOME}/local")
endif()

project(jni-library VERSION 0.1.0 LANGUAGES C)

set(JNI_SOURCE
        jni-library.c
)

add_library(jni-library SHARED ${JNI_SOURCE})

# Include the JNI headers
target_include_directories(jni-library PRIVATE
    $ENV{JAVA_HOME}/include
    $ENV{JAVA_HOME}/include/linux)

# Include the simple-jni headers
target_include_directories(jni-library PRIVATE
    ../simple-jni/target/include)

# Include the simple-library headers
target_include_directories(jni-library PRIVATE
    $ENV{HOME}/local/include) # This one was installed.

# Link the shared library
target_link_directories(jni-library PRIVATE
    $ENV{HOME}/local/lib)

# Link the shared library
target_link_libraries(jni-library PRIVATE
    simple-library)

# Install the shared library
install(TARGETS jni-library
    LIBRARY DESTINATION lib)

This library needs access to a few more headers than were needed for simple-c-app: obviously we need the headers for the shared library we will call, but we also need the generated JNI header and the headers from the Java installation. Similar to the experiments with simple-c-app, I could have targeted the static library instead of building against the shared library.

Before coming back to the simple-jni project to run the code, it is worth pointing out that as the shared libraries are installed in a non-standard location they may not be found correctly at runtime. As an example, if we run the following command:

$ ldd ~/local/lib/libjni-library.so 
        linux-vdso.so.1 (0x00007a17d59ea000)
        libsimple-library.so => not found
        libc.so.6 => /usr/lib/libc.so.6 (0x00007a17d57d0000)
        /usr/lib64/ld-linux-x86-64.so.2 (0x00007a17d59ec000)

The output shows that libsimple-library.so was not found.

However once the LD_LIBRARY_PATH environment variable is set the library can be found:

$ export LD_LIBRARY_PATH=$HOME/local/lib
$ ldd ~/local/lib/libjni-library.so 
        linux-vdso.so.1 (0x000070f866a98000)
        libsimple-library.so => /home/darranl/local/lib/libsimple-library.so (0x000070f866a88000)
        libc.so.6 => /usr/lib/libc.so.6 (0x000070f866879000)
        /usr/lib64/ld-linux-x86-64.so.2 (0x000070f866a9a000)

Back in the simple-jni project we can now run the previously built Java application using the run-app.sh script:

#!/bin/bash
export LD_LIBRARY_PATH=$HOME/local/lib

java --class-path=target/simple-jni-0.0.1-SNAPSHOT.jar \
    --enable-native-access=ALL-UNNAMED \
    dev.lofthouse.App

If only the JNI library needed to be found, the system property -Djava.library.path=$HOME/local/lib could be set instead.
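
For reference, and remembering that libsimple-library.so would still need to be located by the dynamic linker, that alternative invocation would look something like:

java -Djava.library.path=$HOME/local/lib \
    --class-path=target/simple-jni-0.0.1-SNAPSHOT.jar \
    --enable-native-access=ALL-UNNAMED \
    dev.lofthouse.App

Running the application gives us the following output: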

$ ./run-app.sh 
Java says Hello World!
/home/darranl/local/lib:/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib
addOne(11)= 12
Hello, from simple-library!
Java says Goodbye World!

The Foreign API Java Application

The final part of this investigation, in the simple-foreign project, was to use the new Foreign Function and Memory API. Unlike the JNI example, which required an intermediary JNI-aware native library, this Java application can call the original shared library directly.

The App class is more involved, as the functions to be called need to be described, but this eliminates the round trips to develop and build a JNI library:

import static java.lang.foreign.ValueLayout.JAVA_INT;

import java.lang.foreign.Arena;
import java.lang.foreign.FunctionDescriptor;
import java.lang.foreign.Linker;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.SymbolLookup;
import java.lang.invoke.MethodHandle;

/**
 * Hello world!
 */
public class App {
    public static void main(String[] args) throws Throwable {
        Arena confinedArena = Arena.ofConfined();
        Linker linker = Linker.nativeLinker();
        SymbolLookup simpleLibraryLookup =
            SymbolLookup.libraryLookup("libsimple-library.so", confinedArena);

        /*
         * The first function is the say_hello function which takes no arguments,
         * returns void and prints a message to the output.
         */

        // Find the reference to the say_hello function.
        MemorySegment sayHelloSymbol = simpleLibraryLookup.find("say_hello").get();
        // Describe the function signature, this function takes no arguments and returns void.
        FunctionDescriptor sayHelloDescriptor = FunctionDescriptor.ofVoid();
        // Convert to a MethodHandle for the function.
        MethodHandle sayHelloMethodHandle = linker.downcallHandle(sayHelloSymbol, sayHelloDescriptor);
        // Invoke the function.
        sayHelloMethodHandle.invokeExact();

        /*
         * The second function is the add_one function which takes an integer as an argument,
         * adds one to it and returns the result.
         */

        // Find the reference to the add_one function.
        MemorySegment addOneSymbol = simpleLibraryLookup.find("add_one").get();
        // Describe the function signature, this function takes an integer and returns an integer.
        FunctionDescriptor addOneDescriptor = FunctionDescriptor.of(JAVA_INT, JAVA_INT);
        // Convert to a MethodHandle for the function.
        MethodHandle addOneMethodHandle = linker.downcallHandle(addOneSymbol, addOneDescriptor);
        // Invoke the function.
        int result = (int) addOneMethodHandle.invokeExact(14);
        // Display the result.
        System.out.printf("addOne(%d) = %d\n", 14, result);

        System.out.println("Hello World!");
    }
}

It is worth noting that after the calls to set up access, the end result is a java.lang.invoke.MethodHandle to use for the invocations, a type that has been available in the java.lang.invoke package since Java 7.

Similar to the JNI example, we need LD_LIBRARY_PATH to be set to our non-standard location, and we can use the run-app.sh script to call the class.

#!/bin/bash
export LD_LIBRARY_PATH=$HOME/local/lib

java --class-path=target/simple-foreign-0.0.1-SNAPSHOT.jar \
     --enable-native-access=ALL-UNNAMED \
    dev.lofthouse.App

This gives us the following output:

$ ./run-app.sh 
Hello, from simple-library!
addOne(14) = 15
Hello World!

Conclusion

This concludes this first experiment to try both approaches to native invocations side by side.

Overall, both approaches feel like they still need an individual with experience of both the native side of the calls and the Java side of the code to develop the intermediary layer. In the JNI case this was largely a matter of implementing the generated header for the native methods; in the foreign function case it was a matter of fully describing the native functions first using Java code.

The examples here were very simple, at most passing ints; working with Java objects, native structs and calling back from the native code to the Java code would all add more complexity to test both approaches further.
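
As a small taste of that extra complexity on the foreign function side, the following is a minimal sketch (not part of the example repository) that passes a Java String to the strlen function from the C standard library. The use of strlen and the mapping of size_t to JAVA_LONG are assumptions for illustration; the latter only holds on 64-bit platforms such as Linux x86-64:

import static java.lang.foreign.ValueLayout.ADDRESS;
import static java.lang.foreign.ValueLayout.JAVA_LONG;

import java.lang.foreign.Arena;
import java.lang.foreign.FunctionDescriptor;
import java.lang.foreign.Linker;
import java.lang.foreign.MemorySegment;
import java.lang.invoke.MethodHandle;

public class StrlenExample {
    public static void main(String[] args) throws Throwable {
        Linker linker = Linker.nativeLinker();
        // strlen lives in the C standard library, so the linker's default lookup can find it.
        MethodHandle strlen = linker.downcallHandle(
                linker.defaultLookup().find("strlen").get(),
                FunctionDescriptor.of(JAVA_LONG, ADDRESS)); // assumes size_t maps to JAVA_LONG
        // The confined arena owns the native memory and frees it when closed.
        try (Arena arena = Arena.ofConfined()) {
            // Copy the Java String into native memory as a NUL terminated C string.
            MemorySegment str = arena.allocateFrom("Hello, native world!");
            long length = (long) strlen.invokeExact(str);
            System.out.println("strlen = " + length);
        }
    }
}

As with the earlier example, this would need to be run with --enable-native-access to avoid the native access warnings.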

Before looking at more advanced examples, the next step I would like to investigate, and hopefully publish a follow-up blog post about, is updating these examples to be compiled to native code using GraalVM.

Credentials, WildFly

PKCS#12 with WildFly Elytron’s Credential Store

The default credential store used in WildFly from WildFly Elytron is the KeyStoreCredentialStore, which is backed by a Java KeyStore to hold the credentials. This in turn defaults to using the JCEKS KeyStore format. This blog post illustrates how we can configure it to use a PKCS#12 store instead, both in WildFly and when using the elytron-tool CLI tool directly.

These instructions assume that you have already successfully installed an up to date version of the WildFly application server and that you are able to configure it using the jboss-cli tool.

Adding a CredentialStore to WildFly

A credential store can be added to WildFly with the following command:

/subsystem=elytron/credential-store=mycredstore: \
    add(relative-to=jboss.server.config.dir, \
    path=mycredstore.cs, create=true, \
    credential-reference={clear-text=my_store_password}, \
    implementation-properties={keyStoreType=PKCS12})

The important attribute here is “implementation-properties”, which can be used to override behaviour specific to the KeyStoreCredentialStore.

This results in XML configuration which looks like:

<credential-store name="mycredstore" 
    relative-to="jboss.server.config.dir" 
    path="mycredstore.cs" create="true">
    <implementation-properties>
        <property name="keyStoreType" value="PKCS12"/>
    </implementation-properties>
    <credential-reference clear-text="my_store_password"/>
</credential-store>

We can test that the underlying store is a PKCS#12 store using Java’s keytool.

keytool -list -keystore mycredstore.cs 
Enter keystore password:  

With the output:

Keystore type: PKCS12
Keystore provider: SUN

Your keystore contains 0 entries

The management operations to manipulate the store remain the same, as the underlying type is held in the configuration.
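
As an illustration (not a step required for this example), a secret could be added through the management model with an operation such as:

/subsystem=elytron/credential-store=mycredstore: \
    add-alias(alias=example, secret-value=ExamplePassword)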

Using the WildFly Elytron Tool

The WildFly Elytron tool is not backed by any configuration, so you need to pass in all of the configuration each time you call the tool. The following is an example of adding a plain text credential to the PKCS#12 backed store created in the first section.

bin/elytron-tool.sh credential-store \
    --location standalone/configuration/mycredstore.cs \
    --password my_store_password \
    --properties "keyStoreType=PKCS12" \
    --add testUser --secret mySecret

The important argument to note here is the use of “--properties” to set the “keyStoreType” to “PKCS12”.

If we now repeat the keytool command from before, we can see that the credential store now contains one entry:

Keystore type: PKCS12
Keystore provider: SUN

Your keystore contains 1 entry

testuser/passwordcredential/clear/, 6 Jan 2025, SecretKeyEntry,

It is now possible to continue using the credential store as normal to manipulate the credentials it contains.
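
Other resources can then consume the stored secret with a credential-reference instead of a clear-text password; for example, a hypothetical datasource could be updated with:

/subsystem=datasources/data-source=ExampleDS: \
    write-attribute(name=credential-reference, \
    value={store=mycredstore, alias=testUser})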

React, Web Applications, WildFly

Hosting a React Application on WildFly

Introduction

As part of an upcoming development item I am going to be working with a React application deployed to the WildFly application server, invoking JAX-RS / REST endpoints to interact with the server side of the application.

I thought others might find it useful to see the steps I have taken. This blog post describes my steps up to the point where I have the default React application deployed to WildFly; I may then follow up with some posts on my subsequent steps.

Generating The Maven Project

The WildFly project now publishes a set of Guides demonstrating how to accomplish some tasks quickly. I am going to start from the Getting Started with WildFly guide, which quickly generates a new Maven project including an HTML page with some JavaScript to invoke a JAX-RS endpoint also contained within the deployment.

The first step is to use Maven to generate the project using the defaults provided:

mvn archetype:generate \
    -DarchetypeGroupId=org.wildfly.archetype \
    -DarchetypeArtifactId=wildfly-getting-started-archetype

At this stage it is possible to build and run the default application but I am not going to cover that here as it is already covered in the guide.

The next stage, generating the React application, will require Node.js to be installed locally; however, once the project has been created other developers will be able to work on it without installing Node.js, as the Maven project will manage its own installation.

The newly generated Maven project should have given you a standard Maven structure, with the application sources under src/main.

We will add the React part of the project under src/main. In the terminal, navigate to this directory and execute the following command:

npx create-react-app getting-started

This will take some time to complete, but once finished there will be a new directory, getting-started, populated with a template React application.
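
If you would like to sanity-check the generated application before wiring it into the Maven build, it can be run standalone with the standard Create React App development server (this uses the locally installed Node.js):

cd getting-started
npm start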

The next step is to add building of the React application to the build, and then to include the result in the war that is being built. First, add the following plugin to the project's pom.xml:

<plugin>
    <groupId>com.github.eirslett</groupId>
    <artifactId>frontend-maven-plugin</artifactId>
    <version>1.15.0</version>
    <configuration>
        <installDirectory>target</installDirectory>
        <workingDirectory>src/main/getting-started</workingDirectory>
    </configuration>
    <executions>
        <!-- Install node and npm -->
        <execution>
            <id>Install Node and NPM</id>
            <goals>
                <goal>install-node-and-npm</goal>
            </goals>
            <configuration>
                <nodeVersion>v20.10.0</nodeVersion>
            </configuration>
        </execution>
        <!-- clean install -->
        <execution>
            <id>npm install</id>
            <goals>
                <goal>npm</goal>
            </goals>
        </execution>
        <!-- build app -->
        <execution>
            <id>npm run build</id>
            <goals>
                <goal>npm</goal>
            </goals>
            <configuration>
                <arguments>run build</arguments>
            </configuration>
        </execution>
    </executions>
</plugin>    

This plugin performs three steps:

  1. Install node and npm under target/node.
  2. Run npm install within the React application to update the dependencies.
  3. Build the React application.

Next the following configuration is added to the maven-war-plugin:

<configuration>
    <!-- Jakarta EE doesn't require web.xml, Maven needs to catch up! -->
    <failOnMissingWebXml>false</failOnMissingWebXml>
    <webResources>
        <resource>
            <!-- this is relative to the pom.xml directory -->
            <directory>src/main/getting-started/build</directory>
        </resource>
    </webResources>
</configuration>               

Now the project can be built and started as described in the WildFly Getting Started guide:

mvn package
./target/server/bin/standalone.sh

You can now connect your web browser to http://localhost:8080/ and you should be presented with the spinning React logo.

NOTE: When the project was generated under src/main there was a directory called webapp; this contains the original web content, and the contents of this directory can be safely deleted.

Storing the project in a Git repository

If you would like to store the project in a git repository there are a couple of additional steps to perform.

When the React application was created, it was created with a default Git repository. This could be used if you would like to treat the React part of the deployment as a submodule, but at this stage I will contain the whole example in a single repository.

The following directory and file should be deleted:

  • src/main/getting-started/.git
  • src/main/getting-started/.gitignore

In the root of the project initialise a new repository with git init.

Add a .gitignore with the following content:

target
.settings
.vscode

# See https://help.github.com/articles/ignoring-files/ for more about ignoring files.

# dependencies
node_modules
.pnp
.pnp.js

# testing
/coverage

# production
build

# misc
.DS_Store
.env.local
.env.development.local
.env.test.local
.env.production.local

npm-debug.log*
yarn-debug.log*
yarn-error.log*

Now all files can be added and committed to the repository.
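
For completeness, with an illustrative commit message, that is:

git add .
git commit -m "Initial version of the project"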

I have created a GitHub repository wildfly-react-getting-started which contains the project I created as I assembled this post.

Next Steps

The project I have published is as it was created while writing this post. In a future update I will restore the HTML page which was communicating with the JAX-RS endpoint deployed to the application server, and update it to use React for the user interface portion of the page.

RP2040 / Pico

Raspberry Pi Pico C / Assembly Development on Fedora

As a long-time user of the Fedora Linux distribution but a relatively new embedded software developer for the Raspberry Pi Pico, I found that a lot of the documentation out there is focussed more on Debian-based Linux distributions.

Comprehensive documentation to begin development for the Raspberry Pi Pico is already provided in the Getting started with Raspberry Pi Pico documentation, so this blog post is not intended to duplicate that content; instead it highlights some of the specific steps I needed to get a complete development environment working on Fedora.

Overall, my aim was to set up a development environment where I could use the command line tools as well as the VSCode IDE to develop software. I am also using a second Raspberry Pi Pico with debugprobe installed so I can deploy code to the target Pico and debug as needed.

Pico SDK and Examples

The first step is to obtain the pico-sdk and pico-examples locally; this is described in section 2.1 of the getting started documentation, and the commands can be executed as documented on Fedora.
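
At the time of writing the commands from that section look like the following; check the guide for the current versions:

git clone https://github.com/raspberrypi/pico-sdk.git --branch master
cd pico-sdk
git submodule update --init
cd ..
git clone https://github.com/raspberrypi/pico-examples.git --branch master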

Toolchain Installation

On a clean Fedora installation use the following commands instead of the commands in the getting started documentation:

sudo dnf group install "C Development Tools and Libraries" "Development Tools"
sudo dnf install cmake
sudo dnf install arm-none-eabi-gcc-cs arm-none-eabi-gcc-cs-c++ arm-none-eabi-newlib

First Build

It should now be possible to move to Chapter 3 in the getting started guide and follow the instructions to build the blink example and install it on your Pico. Do pay attention to the command to set the PICO_SDK_PATH environment variable so the SDK you downloaded earlier can be found.
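
In outline, and assuming pico-sdk and pico-examples were cloned side by side, the build looks something like:

cd pico-examples
mkdir build && cd build
export PICO_SDK_PATH=../../pico-sdk
cmake ..
cd blink
make -j4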

Once the Pico is connected as a mass storage device, if you are going to use the terminal to copy over the binary, you can check the mount point with the mount command:

$ mount
...
/dev/sda1 on /run/media/darranl/RPI-RP2 type vfat (rw,nosuid,nodev,relatime,uid=1000,gid=1000,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,showexec,utf8,flush,errors=remount-ro,uhelper=udisks2)

The binary can be copied over using:

cp blink.uf2 /run/media/$USER/RPI-RP2

At this stage you can also experiment with some of the other examples and copy them over with the Pico mounted as a mass storage device.

Debugprobe / Binary Installation and Debugging

There have been many discussions about the lack of a reset button on the Pico; however, with a second Pico it is possible to install a tool called debugprobe to enable installation of binaries on a target Pico, along with support for debugging and accessing output sent from the target Pico over UART.

The debug probe firmware used to be called picoprobe, so where you see references to picoprobe these can now be assumed to be referring to debugprobe.

Appendix A of the getting started guide describes the general set up which includes how to connect the two Picos to each other, how to build and install OpenOCD and how to build debugprobe and install it on the first Pico.

Before building OpenOCD as described in the documentation, a couple more packages need to be installed:

sudo dnf install libusb1-devel libftdi-devel

Before building debugprobe, make sure that PICO_SDK_PATH is still defined from earlier, as this step builds a binary to run on the Pico. The getting started guide still needs some updates for recent debugprobe changes, so the cmake command needs to be updated to:

cmake -DDEBUG_ON_PICO=ON ..

The binary can then be installed on a Pico connected as a mass storage device using the following command:

cp debugprobe_on_pico.uf2 /run/media/$USER/RPI-RP2

Minicom and Permissions

After the Pico running debugprobe has restarted, a new device /dev/ttyACM0 should be present; if you have existing ttyACM devices then the digit may be higher. It is possible to connect a tool such as minicom to this device, which will give you access to the UART output from the target Pico.

sudo dnf install minicom

The Raspberry Pi documentation recommends running minicom using sudo; however, I find myself using this a lot, so I gave my user account permission to connect. Checking the permissions on this device shows that the dialout group has access, so I added this group to my user account:

$ ls -al /dev/ttyACM0
crw-rw----. 1 root dialout 166, 0 Feb 23 15:05 /dev/ttyACM0
sudo usermod -aG dialout darranl

After adding the group you need to log in again for it to take effect.

It is now possible to use minicom to communicate with the debugprobe without needing to use sudo.

minicom -D /dev/ttyACM0 -b 115200

OpenOCD and Permissions

The OpenOCD utility also communicates with the Pico over USB. This can again be achieved using sudo, but I have found it easier for my user account to have access, especially when using OpenOCD from an IDE.

First double check the vendor and product ID the debugprobe has used to register over USB:

$ lsusb
...
Bus 001 Device 009: ID 2e8a:000c Raspberry Pi Debug Probe (CMSIS-DAP)
...

The values here are 2e8a and 000c.

As root create the file /etc/udev/rules.d/60-openocd.rules with the following entry:

ATTRS{idVendor}=="2e8a", ATTRS{idProduct}=="000c", MODE="660", GROUP="plugdev", TAG+="uaccess"

Then to update the configuration run:

sudo udevadm control --reload-rules && sudo udevadm trigger

You can test that OpenOCD can connect to debugprobe (and in turn connect to the target) by running the following command:

$ openocd -f interface/cmsis-dap.cfg -c "adapter speed 5000" -f target/rp2040.cfg
Open On-Chip Debugger 0.12.0-g4d87f6d (2024-02-23-14:57)
Licensed under GNU GPL v2
For bug reports, read
        http://openocd.org/doc/doxygen/bugs.html
adapter speed: 5000 kHz

Info : Hardware thread awareness created
Info : Hardware thread awareness created
Info : Listening on port 6666 for tcl connections
Info : Listening on port 4444 for telnet connections
Info : Using CMSIS-DAPv2 interface with VID:PID=0x2e8a:0x000c, serial=E6614C311B8B4122
Info : CMSIS-DAP: SWD supported
Info : CMSIS-DAP: Atomic commands supported
Info : CMSIS-DAP: Test domain timer supported
Info : CMSIS-DAP: FW Version = 2.0.0
Info : CMSIS-DAP: Interface Initialised (SWD)
Info : SWCLK/TCK = 0 SWDIO/TMS = 0 TDI = 0 TDO = 0 nTRST = 0 nRESET = 0
Info : CMSIS-DAP: Interface ready
Info : clock speed 5000 kHz
Info : SWD DPIDR 0x0bc12477, DLPIDR 0x00000001
Info : SWD DPIDR 0x0bc12477, DLPIDR 0x10000001
Info : [rp2040.core0] Cortex-M0+ r0p1 processor detected
Info : [rp2040.core0] target has 4 breakpoints, 2 watchpoints
Info : [rp2040.core1] Cortex-M0+ r0p1 processor detected
Info : [rp2040.core1] target has 4 breakpoints, 2 watchpoints
Info : starting gdb server for rp2040.core0 on 3333
Info : Listening on port 3333 for gdb connections

The OpenOCD process should continue running at this point, and as it is listening for a connection on port 3333, all is well.
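
With OpenOCD still running, a quick smoke test before moving to an IDE is to attach gdb from a second terminal and load a binary, for example the blink example built earlier (run from its build directory):

$ gdb blink.elf
(gdb) target remote localhost:3333
(gdb) load
(gdb) monitor reset init
(gdb) continue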

Conclusion

At this stage everything should be running correctly and it should be possible to work with the pico-sdk, pico-examples and your own projects following the documentation from Raspberry Pi.

VSCode

One final point: at the outset I mentioned that I am using VSCode for developing and debugging my code. Based on the above configuration, the following is an example of the launch configuration I have within VSCode to install and debug my programs running on the target Pico:

   {
      "name": "Debug Probe  (OpenOCD)",
      "cwd": "${workspaceRoot}",
      "executable": "${command:cmake.launchTargetPath}",
      "request": "launch",
      "type": "cortex-debug",
      "servertype": "openocd",
      "gdbPath": "gdb",
      "device": "RP2040",
      "configFiles": [
        "interface/cmsis-dap.cfg",
        "target/rp2040.cfg"
        ],
      "svdFile": "${env:PICO_SDK_PATH}/src/rp2040/hardware_regs/rp2040.svd",
      "runToEntryPoint": "main",
      // Give restart the same functionality as runToEntryPoint - main
      "openOCDLaunchCommands": [
        "adapter speed 5000"
      ],
      "postRestartCommands": [
          "break main",
          "continue"
      ]
    }