Doxygen cannot handle more than one image with the same name

As you might know, you can include images in the output of your Doxygen html documentation like this:

@image html path/to/img.png

In one of my larger projects, I found out that Doxygen simply copies all referenced images it finds somewhere below IMAGE_PATH into the root directory of the output folder. Given the following structure:

src/component1/doc/component1.md
src/component1/doc/img/structure.svg
src/component2/doc/component2.md
src/component2/doc/img/structure.svg

where component1.md and component2.md both have the following content:

...
![Structure](img/structure.svg)
...

the output will show component2/doc/img/structure.svg for both component1 and component2. Doxygen emits no warning or error message about this during generation.

As I don’t think this behavior can be right, I added an issue report for it:

https://github.com/doxygen/doxygen/issues/8362

Nevertheless, this means for the time being: Make sure all image file names referenced by Doxygen are unique.
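A quick way to check for such clashes before Doxygen silently overwrites anything is to list all image files below the source tree and flag duplicate base names. This is a sketch assuming a layout like the one above (paths and extensions are examples, adjust to your IMAGE_PATH):

```shell
#!/bin/sh
# Demo setup: two components, each with an image named structure.svg
# (hypothetical paths mirroring the layout above).
mkdir -p src/component1/doc/img src/component2/doc/img
touch src/component1/doc/img/structure.svg
touch src/component2/doc/img/structure.svg

# List all image files below src/, keep only the base name,
# and print every name that occurs more than once.
find src -type f \( -name '*.png' -o -name '*.svg' \) \
    -exec basename {} \; | sort | uniq -d
# prints: structure.svg
```

Running this as part of the documentation build makes the collision visible even though Doxygen itself stays silent.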

Docker as a build environment, Part 2: Bind mounts

In my previous post, I ended with two goals yet unaccomplished:

  • Avoiding the costly copying of all source files (we’re talking about a multi-gigabyte repository here) into the Docker image
  • Using Docker as a build environment, but not as an execution environment, in the leanest possible way.

Even though all tutorials tell you to, I am not happy with copying all source files into the Docker image (using the COPY instruction in the Dockerfile). The build environment itself rarely changes, whereas the source files change all the time – and thus a new Docker image is created for every build, filling up your image cache. Docker offers an alternative: bind mounts.

Using a bind mount, you can map an input directory, e.g. your source tree, directly onto a mount point in the running Docker container. Given a build environment image called build_env, the command looks like this:

docker run -it --name build-container --mount type=bind,ro=true,source=$(pwd),destination=/src build_env:latest /src/build.sh

The last argument (/src/build.sh) calls a build command which is part of the source folder in the host file system – no need to COPY anything, no need to create a new docker image for the build stage!

The --mount argument declares this a read-only bind mount, so no temp files will accidentally be written to your source tree (possibly by a user that does not exist in the host OS or – even worse – owned by root). The read-only source tree also makes it clear that the bind-mount strategy only works with out-of-source builds.

But where do the build results go? They are generated inside the Docker container’s file system, and as soon as the container is removed, they disappear with it. So better copy them out into your destination directory first:

docker cp build-container:/path/to/build-output .
docker rm build-container

It is not advisable to simply add a second bind mount to use as the build output directory, because the files created by the Docker process will be owned by whatever USER was defined in the Dockerfile – by default root. This can cause big trouble, throwing garbage owned by root into the host’s file system. Juan Treminio has written an in-depth blog post about this. Bottom line: you can run the container as your own user, but before that, you have to make sure that any user has enough access rights inside the container to run the programs, e.g. the build toolchain. This is the command:

docker container run --rm \
    -v ${PWD}:/var/www \
    -w /var/www \
    -u $(id -u ${USER}):$(id -g ${USER}) \
    jtreminio/php:7.2 composer require psr/log

It seems safer and simpler to me to use “docker cp” instead.

So, for just building locally, you can run Docker and then get the build output from the running (or stopped) container. Unfortunately, there is no way to get the functionality of a bind mount into Docker image creation, though.

The COPY --from instruction, which is used in a multi-stage Docker build to copy the build result from a previous stage into the final execution environment image, only works with build stages created in the same Dockerfile as source.

I found out that you cannot use “docker run --name build-container --mount …” to build in a Docker container named build-container, and then use “COPY --from=build-container …” in a Dockerfile to copy the build output from that container into the image. The Dockerfile reference is very clear about this:

Optionally COPY accepts a flag --from=<name> that can be used to set the source location to a previous build stage (created with FROM .. AS <name>) that will be used instead of a build context sent by the user. In case a build stage with a specified name can’t be found an image with the same name is attempted to be used instead.

Okay, so an image works as well… but not a container.

If you think about how Docker works internally, this limitation becomes clear: The Docker build command sends commands (and the build context) to the Docker daemon, which creates the image. The Docker daemon could run on a different node and does not see our local Docker containers.

So what can we do if we want to create an execution environment image containing the build output? We have no choice but to resort to the multi-stage build and create an image for the build stage as well – which means we have to COPY the source tree into the build image. Too bad :-(.

But, in the next post about Docker I will show some ways to clean up temporary images systematically.


p.s.: When you feel like you are losing track of all your Docker images, or you are running low on disk space, check out https://linuxconfig.org/how-to-remove-all-docker-images-stored-in-a-local-repository to free up some space. If this command tells you that images are used by a container, enter

docker ps -a

to see all the currently not running Docker containers. Even containers you stopped a while ago are still listed there. You have to remove them if you don’t need them any more:

docker rm <container>

Docker as a build environment

This is the first of a series of posts about using Docker to create and use containerized build environments.

The need for this came up at work in software projects where a large tool landscape was needed to work with the codebase (C/C++ for embedded targets and Python with lots of extra package dependencies). In my particular case, I wanted to facilitate generating the documentation in a docs-as-code fashion (see for example https://doctoolchain.github.io/docToolchain/).

This is not a gentle introduction to Docker. It requires basic knowledge of how to create a Docker image and how to work with Docker containers.

My docker build is a multi-stage build, as recommended by Docker for this use case:

  • Stage 1 builds a generic image that contains all dependencies required to build the documentation (“build environment”). This image may be cached locally or in the Docker registry.
  • Stage 2 copies all source files into the build environment image and runs the build steps to generate build output (in this case: static HTML pages).
  • Stage 3 builds an “execution environment” (in this case: an nginx web server) and copies the build output from Stage 2 to the location from which it can be run (or in this case, the root directory of the nginx web server).
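The three stages above can be sketched in a single Dockerfile. This is a minimal illustration, not my real build file; the package list, build.sh, and the output path are placeholders:

```dockerfile
# Stage 1: generic build environment with all documentation dependencies
FROM ubuntu:20.04 AS build_env
RUN apt-get update && apt-get install -y doxygen graphviz

# Stage 2: copy the sources in and run the build
FROM build_env AS build
COPY . /src
WORKDIR /src
RUN ./build.sh   # assumed to produce static HTML under /src/output/html

# Stage 3: lean execution environment containing only the build output
FROM nginx:alpine
COPY --from=build /src/output/html /usr/share/nginx/html
```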

The build environment image can be reused easily, the build stage is temporary and short-lived, and the execution environment image can run the executable(s) – or, in this case, the web server hosting the static HTML pages for the project documentation – without containing the build toolchain or source code. This makes the execution environment image small and easy to deploy or exchange.

Multi-stage builds are also recommended by Microsoft in their well-written VSCode Docker tutorial.

While these builds are nice, two open points remained for me:

  1. At least in Stage 2, a COPY operation of the source files into the image is required. In addition, the build context will be large – as large as your source tree. (A .dockerignore file can considerably reduce the size of the build context sent to the Docker daemon, but the COPY operation remains, and it both costs time and increases the image size.)
  2. What if I only want a containerized build environment, but have no need for a containerized execution environment (let’s assume I have installed the dependencies or a framework, e.g. ROS, anyway)?
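Regarding point 1, a .dockerignore file next to the Dockerfile keeps the build context small. The entries below are typical examples, not taken from a real project:

```
# .dockerignore – exclude anything the build does not need
.git
build/
*.o
docs/output/
```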

Could a “bind mount” be the solution to those problems? Find the answer in my next blog post.

Getting started with CUDA

The first example I used was the “An Even Easier Introduction to CUDA” blog post. The first thing that did not quite work was the call to nvprof:

$ nvprof ./add_cuda
==7511== NVPROF is profiling process 7511, command: ./add_cuda
Max error: 0
==7511== Profiling application: ./add_cuda
==7511== Profiling result:
No kernels were profiled.
No API activities were profiled.
==7511== Warning: Some profiling data are not recorded. Make sure cudaProfilerStop() or cuProfilerStop() is called before application exit to flush profile data.
======== Error: Application received signal 11

“Signal 11” is SIGSEGV (segmentation fault). Some web searching revealed that I need to add an option for nvprof to work:

$ nvprof --unified-memory-profiling off ./add_cuda

But why doesn’t it work like in the example? Because the NVIDIA profiler (nvprof) has been deprecated; the Volta GPU generation is the last one to still support it.

So the call failed, and the tool is not supported any more anyway. I tried using the replacement nsys instead.
It turns out you need to install Nsight Systems first. To do so, register at https://developer.nvidia.com/ and go to the Download section. After installation, run from the console:

$ nsys nvprof ./add_cuda

So, that works fine and I was able to finish the exercise.

The CUDA 10 manual also mentions that “Nvidia Nsight Eclipse Edition” is going to be removed in the next release, so CUDA 11 and newer only come with the Nvidia Nsight Eclipse plug-ins. /usr/local/cuda-xyz/doc/pdf/Nsight_Eclipse_Plugins_Installation_Guide.pdf explains how to install them into an existing Eclipse. The CUDA 10.0 Nsight Eclipse plug-ins were compatible with the newest Eclipse version as of the time of writing (2020.12).

Random Bash wisdom

I’ve recently found myself writing some bash scripts, and I want to share some random wisdom I collected along the way:

1. This is a great cheat-sheet for bash if conditions: https://clburlison.com/bash-if-then-cheat-sheet/

2. If you have a back-up solution in case a certain command is not available in the PATH, you can use the following to test for it:

if ! command -v my_tool > /dev/null; then
    # do something in case my_tool is not available
fi

NOTE: In bash, you can also use the “hash” builtin, as this post summarizes: https://scripter.co/check-if-a-command-exists-from-shell-script/
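As a concrete (made-up) example of the pattern, a small helper can pick whichever of several tools is available at runtime. The helper name and the tool choice are just for illustration:

```shell
#!/bin/sh
# Helper: succeed if the given command exists in PATH.
have() { command -v "$1" > /dev/null 2>&1; }

# Pick whichever checksum tool is available (names are just examples).
if have sha256sum; then
    CHECKSUM_TOOL=sha256sum
elif have shasum; then
    CHECKSUM_TOOL="shasum -a 256"
else
    echo "no checksum tool found" >&2
    exit 1
fi
echo "using: $CHECKSUM_TOOL"
```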

3. To get the directory of your own script (note the quoting around the inner command substitution, so paths with spaces work):

SCRIPT_DIR=$(realpath "$(dirname "$0")")

4. Understanding Curly braces: https://www.linux.com/topic/desktop/all-about-curly-braces-bash/
Now, I don’t think every variable should be put in ${}, but it’s useful to have a reference on what you can do when using ${} instead of just the $ sign.

5. I like using pushd and popd, but I also like them to be silent. https://serverfault.com/questions/108154/can-i-call-pushd-popd-and-prevent-it-printing-the-stack shows how to do it:

pushd mydir > /dev/null
# work in mydir
popd > /dev/null
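An alternative to redirecting every single call is to shadow the builtins once at the top of the script. This is a bash-specific sketch:

```shell
#!/bin/bash
# Quiet variants: shadow the builtins with functions that
# forward to them and discard the directory-stack output.
pushd() { command pushd "$@" > /dev/null; }
popd()  { command popd  "$@" > /dev/null; }

# After this, plain pushd/popd calls in the script stay silent:
# pushd mydir
# ...work in mydir...
# popd
```

The `command` prefix is important: without it, the function would call itself recursively instead of the builtin.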

Finally, the Wikibook on Bash Shell Scripting is really good (https://en.wikibooks.org/wiki/Bash_Shell_Scripting).