Find out default python version for any Ubuntu release

If you want to find the default Python 3 version of any Ubuntu release, run the command

rmadison python3

example output:

 python3 | 3.6.5-3         | bionic         | amd64, arm64, armhf, i386, ppc64el, s390x
 python3 | 3.6.7-1~18.04   | bionic-updates | amd64, arm64, armhf, i386, ppc64el, s390x
 python3 | 3.8.2-0ubuntu2  | focal          | amd64, arm64, armhf, i386, ppc64el, riscv64, s390x
 python3 | 3.9.4-1build1   | impish         | amd64, arm64, armhf, i386, ppc64el, riscv64, s390x
 python3 | 3.9.7-4         | jammy          | amd64, arm64, armhf, i386, ppc64el, riscv64, s390x
 python3 | 3.10.1-0ubuntu1 | jammy-proposed | amd64, arm64, armhf, i386, ppc64el, riscv64, s390x
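If you only care about one release, you can filter the version out of that output with awk. A small sketch — here fed with a captured sample via a here-doc instead of a live rmadison call, so the release name (focal) is just an example:

```shell
# pick the version column for a given release from rmadison-style output;
# replace the here-doc with `rmadison python3` piped in for live data
awk -F'|' '$3 ~ /focal/ { gsub(/ /, "", $2); print $2 }' <<'EOF'
 python3 | 3.6.5-3         | bionic         | amd64, arm64, armhf, i386, ppc64el, s390x
 python3 | 3.8.2-0ubuntu2  | focal          | amd64, arm64, armhf, i386, ppc64el, riscv64, s390x
EOF
# prints: 3.8.2-0ubuntu2
```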

Alternatively, check the package

Avoid missing git LFS files on subtree merge

If you follow the GitHub instructions about git subtree merges, but your source repository (remote) for the subtree merge has git LFS enabled and contains git LFS tracked files, you may see the following error when you try to push the merge commit to your remote:

$ git push
Unable to find source for object 6faf8b30c6ffa04ac6a40b313d2ad835979ee13870365fd953c64dfa581b771d (try running git lfs fetch --all)  

git lfs fetch --all won’t do the trick here, as you have to transfer the LFS files from the remote repository that is the source of the subtree merge to the target repository.

The workaround described by technoweenie in https://github.com/git-lfs/git-lfs/issues/854 does the trick:

Suppose you want to do a subtree merge of repository A into repository B. Then:

# in repo A
$ git remote add B git-server.com/org/B
$ git lfs push B master --all

After this, all git LFS objects of repo A are known on the LFS server of repo B, and you can continue to push your subtree merge commit.

How to re-add tracked binary files using git LFS

Using git lfs migrate rewrites history, which is not always acceptable, especially in distributed projects. You can add files to git LFS by either renaming them (which deletes and re-adds the file, with the new file tracked by git LFS), or by modifying them. The following command chain uses the second approach to re-add binary files using git LFS, without a modification of the file itself:

# prerequisite: the file type is already tracked in .gitattributes (git lfs track "*.png")
touch binary_image.png   # updating the timestamp makes git re-run the LFS clean filter
git add binary_image.png
git commit -m "Re-add binary_image tracked by git LFS"

Done!

Delete local git branches that were deleted on remote repository

This blog post is based on the article with the same name by KC Müller, which I extend a little bit here so that my command takes care of not deleting local branches checked out in worktrees.

We let our remote servers (GitHub or Bitbucket) delete branches after merging a pull request per default. I sometimes check out branches I’m reviewing, and we work with many branches, so the old branches for which the remote tracking branch has been deleted start to pile up.

A “git fetch --prune” will only remove the remote tracking branches, but it will not delete the branches in your local git repo. So this is the command I use (on Linux!) to delete all local branches whose remote tracking branch has been deleted and which are not currently checked out:

git branch -vv | grep ': gone]' | grep -Ev "(\*|\+|master|develop)" | awk '{print $1}' | xargs -r git branch -d

Here’s how it works:

  • git branch -vv lists all local branches together with information about their remote tracking branch, including the phrase “: gone]” if the remote branch no longer exists.

  • grep ': gone]' keeps only the lines that contain the “: gone]” phrase.

  • grep -Ev … removes lines from the result that contain an asterisk, a plus sign, or the names “master” or “develop”. This ignores the branch you are currently on (marked with an asterisk), branches checked out in other worktrees (marked with a +), and the special branches “develop” and “master”. It also keeps the literal “*” of the current-branch marker out of the argument list, which otherwise could end up deleting all your local branches.
    NOTE: This might also filter out branches like “merge-xy-to-master”. I will likely improve this regular expression in the future.

  • awk '{print $1}' extracts the first word of each remaining line, which is the branch name.

  • xargs -r git branch -d calls “git branch -d” with those branch names as arguments. The option -r avoids that the command is called with no arguments at all. With -D instead of -d, branches which have not been fully merged into the current branch are deleted as well, which is helpful if the project has several long-running integration branches.

There you go, all stale branches cleaned!

I added this to my .bashrc:

 

# deletes all local branches whose remote tracking branch is gone,
# except for master, develop, and branches checked out in worktrees
git-cleanup() {
    git branch -vv | grep ': gone]' | grep -Ev "(\*|\+|master|develop)" | awk '{print $1}' | xargs -r -n 1 git branch -D
}

I use a function instead of an alias to avoid trouble with escaping the dollar signs and single/double quotes.
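To sanity-check the function, here is a self-contained demo using a throwaway local “origin” (all repository and branch names are made up):

```shell
# create a local "origin" with an extra branch, clone it, then delete the
# branch upstream so the clone's remote tracking branch shows ": gone]"
git-cleanup() {
    git branch -vv | grep ': gone]' | grep -Ev "(\*|\+|master|develop)" \
        | awk '{print $1}' | xargs -r -n 1 git branch -D
}

tmp=$(mktemp -d) && cd "$tmp"
git init -q --initial-branch=master origin-repo
git -C origin-repo -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "init"
git -C origin-repo branch old-feature
git clone -q origin-repo work && cd work
git checkout -q old-feature && git checkout -q master  # create a local tracking branch
git -C ../origin-repo branch -D old-feature            # branch deleted on the "remote"
git fetch -q --prune                                   # remote tracking branch is now gone
git-cleanup                                            # deletes old-feature locally
git branch                                             # only master remains
```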

Python: Cleaning a virtual environment

If you want to test whether a requirements.txt installs all packages required to run your Python application, and you have experimented with the virtual environment before (installed some extra packages, etc.), you want to remove all packages and keep only those that the requirements.txt would install. Two solutions come to my mind:

  1. Delete the virtual environment and create it again.
  2. Remove all packages installed into the virtual environment:

    pip freeze | xargs -r pip uninstall -y
    Then install all packages from the requirements.txt again:
    pip install -r requirements.txt

Linux: Show files from an apt package

As I search for these commands time and time again, I’ll just copy the post which lists them all:

https://serverfault.com/questions/96964/list-of-files-installed-from-apt-package

To find which files were installed by a package, use dpkg -L:

$ dpkg -L $package

apt-file can tell you which files will be installed by a package before installing it:

root# apt-get install apt-file
root# apt-file update
$ apt-file list $package

NOTE: apt-file needs to be installed first (sudo apt-get install apt-file).

Or if you have the package as a .deb file locally already, you can run dpkg on it:

$ dpkg --contents $package.deb

To find which package provides a file that is already on your system, use:

$ dpkg -S /path/to/file

To find which package provides a file that is not currently on your system, use apt-file again:

$ apt-file search /path/to/file
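Out of convenience, the two “which files are in this package” lookups can be wrapped in a small helper. This is a sketch of my own (the name pkg-files is made up), assuming a Debian/Ubuntu system with apt-file installed:

```shell
# list the files of a package: installed packages via dpkg -L,
# not-yet-installed packages via apt-file list
pkg-files() {
    if dpkg -s "$1" >/dev/null 2>&1; then
        dpkg -L "$1"
    else
        apt-file list "$1"
    fi
}
```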

Overview of Competitors in L4+ Autonomous Driving

So many companies are active in the field of urban, Level 4 autonomous driving that it is easy to lose track of them.

That is why Manager Magazin has published a ranking of the leading autonomous driving companies, which I would like to summarize here.

The report cited there, the Guidehouse leaderboard automated driving vehicles, currently gives the following ranking:

  1. Waymo (Google)
  2. Ford Autonomous Vehicles (Argo.ai)
  3. Cruise (GM, Honda, Lyft)
  4. Baidu (China, “Apollo”)
  5. Intel-Mobileye (USA/Israel)
  6. Aptiv-Hyundai
  7. VW Group (via Argo.ai)
  8. Yandex (Russia)
  9. Zoox (Amazon)
  10. Daimler-Bosch
  11. Aurora (ex-Uber, recently joined by Toyota)
  12. May Mobility (shuttles)
  13. Voyage Auto

Also-rans: Toyota (Pony.ai), BMW, Renault-Nissan-Mitsubishi, Volvo, NAVYA (shuttles, France). Tesla sits in last place, mainly because its electric cars do not include any hardware for “true” Level 4/Level 5 autonomous driving. Tesla is, however, likely among the front-runners in L2+ autonomous driving.

Waymo is clearly in the lead: its vehicle fleet has been operating without a safety driver in Phoenix, Arizona for some time now, and a test in the much more complex environment of San Francisco is being prepared. Many other subsidiaries of the big US technology corporations also occupy the top spots. Ranks 1-4 already have vehicle fleets in continuous operation on public roads, Waymo among them without a safety driver. Because of the corona pandemic, Ford will not launch its robotaxi service “until” 2022.

As a manager magazin report on autonomous driving in China shows, the PRC is also among the leaders. In fifth place is Intel-Mobileye, whose interesting sensor data fusion strategy is outlined in this article on Forbes.com.

But how are the German companies doing in this race?

Here is an article on the German competitor Bosch/Daimler. Like Ford, VW has invested in Argo.ai. According to recent press reports, BMW has its own L4 autonomous driving program, but is not pursuing it with much vigor: a planned test fleet of 500 L4 autonomous vehicles was recently cancelled.

Another overview can be found on craft.co. This overview, however, also includes companies that do not intend to operate their own vehicle fleet or to offer a complete system (e.g. apex.ai). Moreover, the key figures listed there are based purely on publicly available statements, so they are necessarily incomplete.

Overall, a consolidation of the market can already be observed. It is interesting how many rather small companies without big partners are still active in the market, given that the development of robotaxis and L4 autonomous vehicles devours huge sums of money.

Doxygen cannot handle more than one image with the same name

As you might know, you can include images in the output of your Doxygen html documentation like this:

@image html path/to/img.png

In one of my larger projects, I found out that Doxygen simply copies all referenced images, which it finds somewhere below the IMAGE_PATH, into the root directory of the output folder. Given the following structure:

src/component1/doc/component1.md
src/component1/doc/img/structure.svg
src/component2/doc/component2.md
src/component2/doc/img/structure.svg

where component1.md and component2.md both have the following content:

...
![Structure](img/structure.svg)
...

the output will show component2/doc/img/structure.svg for both component1 and component2. There is no warning or error message during the Doxygen generation about this.

As I don’t think this behavior can be right, I added an issue report for it:

https://github.com/doxygen/doxygen/issues/8362

Nevertheless, this means for the time being: Make sure all image file names referenced by Doxygen are unique.
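Until this is fixed, you can make the names unique yourself. Here is a small sketch of a one-off rename: the directory layout is the one from above, and the prefix scheme is my own choice (the demo recreates the layout in a temp dir so you can see the effect):

```shell
# demo in a temp dir: reproduce the layout from above, then give each
# component's image a unique, prefixed name and fix up the references
cd "$(mktemp -d)"
for comp in component1 component2; do
    mkdir -p "src/$comp/doc/img"
    printf '![Structure](img/structure.svg)\n' > "src/$comp/doc/$comp.md"
    : > "src/$comp/doc/img/structure.svg"
done
for comp in component1 component2; do
    mv "src/$comp/doc/img/structure.svg" "src/$comp/doc/img/${comp}_structure.svg"
    sed -i "s|img/structure.svg|img/${comp}_structure.svg|g" "src/$comp/doc/$comp.md"
done
grep -r structure.svg src/   # each reference now points at a unique file name
```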

Docker as a build environment, Part 2: Bind mounts

In my previous post, I ended with two goals yet unaccomplished:

  • Avoiding costly copy of all source files (we’re talking about a multiple Gigabyte repository here) into the docker image
  • Using Docker as a build environment but not as an execution environment in the leanest possible way.

Even though all tutorials tell you to, I am not happy with the approach of copying all source files into the docker image (using the COPY instruction inside the Dockerfile). The build environment itself rarely changes, whereas the source files change all the time – and thus a new Docker image is created for every build, filling up your image cache. Docker offers an alternative to that: bind mounts.

Using a bind mount, you could map the input directory, e.g. your source tree, directly into a mount point in the running docker container. Given a build environment docker image called build_env, the command would look like this:

docker run -it --name build-container --mount type=bind,ro=true,source=$(pwd),destination=/src  build_env:latest /src/build.sh

The last argument (/src/build.sh) calls a build command which is part of the source folder in the host file system – no need to COPY anything, no need to create a new docker image for the build stage!

The --mount argument declares that this is a read-only bind mount, so no temp files will be accidentally written to your source tree (possibly by a user that does not exist in the host OS or, even worse, by root). This read-only source tree also makes it clear that the bind-mount strategy only works with out-of-source builds.

But where do the build results go? They are generated inside the docker container file system.

As soon as the container is removed, the build results disappear with it. So better copy them out into your destination directory before removing the container:

docker cp build-container:/path/to/build-output .
docker rm build-container

It is not advisable to simply add a second bind mount to use as the build output destination directory, because the owner of the files created by the docker process will be whatever was defined as USER in the Dockerfile, which is root by default. This may cause big trouble, littering the host’s file system with files owned by root. Juan Treminio has written an in-depth blog post about this. Bottom line: You can run the container as your own user, but before that, you have to make sure that any user has enough access rights inside the container to run the programs, e.g. the build toolchain. This is the command:

docker container run --rm \
    -v ${PWD}:/var/www \
    -w /var/www \
    -u $(id -u ${USER}):$(id -g ${USER}) \
    jtreminio/php:7.2 composer require psr/log

It seems safer and simpler to me to use “docker cp” instead.

So, for just building locally, there is the possibility to run docker and then get the build output from the running (or stopped) container. Unfortunately, there is no way to get the functionality of a “bind mount” into Docker image creation.

The “COPY --from” instruction, which is used in a multi-stage docker build to copy the build result from a previous stage into the final execution environment image, only works with Docker images created in the same Dockerfile as its source.

I found out that you cannot use “docker run --name build-container --mount …” to build in a Docker container named build-container, and then use “COPY --from=build-container …” in a Dockerfile to copy the build output from the container into the Docker image. The Dockerfile reference is very clear about this:

Optionally COPY accepts a flag --from=<name> that can be used to set the source location to a previous build stage (created with FROM .. AS <name>) that will be used instead of a build context sent by the user. In case a build stage with a specified name can’t be found an image with the same name is attempted to be used instead.

Okay, so an image works as well… but not a container.

If you think about how Docker works internally, this limitation becomes clear: The Docker build command sends commands (and the build context) to the Docker daemon, which creates the image. The Docker daemon could run on a different node and does not see our local Docker containers.

So what can we do if we want to create an execution environment image containing the build output? We have no choice but to resort to the multi-stage build and create an image for the build itself as well, which means we have to COPY the source tree into the build image. Too bad :-(.
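Such a multi-stage build could look like the following Dockerfile sketch; build_env:latest, build.sh, and the /build-output path are assumptions standing in for your own setup (build.sh is assumed to write its results to /build-output):

```dockerfile
# build stage: copy the source tree into the image and run the build
FROM build_env:latest AS builder
COPY . /src
RUN /src/build.sh

# execution stage: take only the build output from the builder stage
FROM ubuntu:20.04
COPY --from=builder /build-output /opt/app
```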

But, in the next post about Docker I will show some ways to clean up temporary images systematically.


P.S.: When you feel like you are losing track of all your docker images, or you are running low on disk space, check out https://linuxconfig.org/how-to-remove-all-docker-images-stored-in-a-local-repository to free up some space. If this command tells you that images are still used by a container, enter

docker ps -a

to see all docker containers, including the ones that are not currently running. Even containers you stopped a while ago are still listed there. You have to remove them if you don’t need them any more:

docker rm <container-name-or-id>