Embedded Linux
Table of Contents
Embedded Linux ToolChain
    Native And Cross ToolChains
    ToolChains Have To Match The Target Machine Specifications
    The Standard GNU ToolChain
    How To Choose The Right ToolChain
How To Use The ToolChain
    What Build, Host And Target Are For ToolChains
    Native, Cross, Cross Native, Crossback And Canadian Cross Builds
    More GCC Options For x86_64 Systems
    Analyzing The C Library
    Statically And Dynamically Linked Libraries
    Shared Library Version Number And Interface Number
    How Libraries Look Like On Linux Systems
MakeFile, AutoTools And Cmake
    Cross Compiling Using Make And Makefiles
    Building Systems With Autotools
    Building Systems With CMake
    Common Issues That Affect Build Systems
Boot Process
    The ROM Step
    The SPL Step
    The TPL Step
    The UEFI Standard And Booting Process
    The Bootloader And The Kernel
    Device Trees To Simplify Physical Devices Management
    How To Read And Create Device Trees
Introduction To U-Boot
    How U-Boot Works
    Downloading And Building The U-Boot Code
    Test U-Boot On QEMU
    Test U-Boot On A Physical Board
    How To Automate U-Boot
    U-Boot Falcon Mode
Introduction To BareBox
    Downloading And Building The BareBox Source Code
    Using BareBox On A Physical Board
    Differences Between U-Boot And BareBox
Choosing A Linux Kernel
    What The Linux Kernel Does
    The C Library, The User Space And The Kernel
    A System Is Not Just Its Kernel
    The Linux Kernel Versioning System
    How To Retrieve The Linux Kernel Source Code
Configuring The Linux Kernel
    The Linux Kernel Folder Structure And Content
    Introduction To The Linux Kernel With Kconfig
    How Kconfig Works And Its Syntax
    Creating The Linux Kernel Configuration File
    Working With The Linux Kernel Version Number
    Embedded Devices And Linux Kernel Modules
Building The Linux Kernel
    An Example Of Kconfig Makefile
    Building The Kernel Creating The Right Output
    Building A uImage For ARM
    The Kernel Linux Build Output
    Building The Device Tree File
    Building The Kernel Modules
    Make Targets To Clean The Build Environment
    Example Of Sequence Of Commands To Build The Linux Kernel
    Testing The Linux Kernel
Booting The Linux Kernel
    The Boot Process And Root File System
    Passing Parameters To The Kernel
    Booting The Kernel Using QEMU
    Booting The Kernel Using A Physical Board
Shells And Command Line Utilities
    Creating The Root File System
    Adding A Command Shell To The System With BusyBox
    Downloading And Building BusyBox
    ToyBox As An Alternative To BusyBox
The Root File System And Initramfs
    Introduction To Initramfs
    Building An Initramfs As A Standalone CPIO Archive
    Building An Initramfs As A CPIO Archive Embedded Into The Kernel
    Using Device Tables As Initramfs
    Why Are Some Commands Still Not Working?
Init And Alternatives For Embedded Systems
    Init And Its Alternatives
    Configuring The BusyBox Init
Managing Device Nodes
    Managing Device Nodes With Makedev
    Managing Device Nodes With Udev
    Managing Device Nodes With Mdev
    Managing Device Nodes With Devtmpfs
    Conclusions
Basic Network Configuration
    Managing The Network With BusyBox
    Managing The Network With Nmcli
    Managing The Network With Netplan
    Managing The Network With Systemd-Networkd
    Managing The Network With ConnMan
Using Device Tables To Create File System Images
    How To Use Genext2fs To Create File Systems
    Testing The File System By Starting The Board
Booting Through The Network
    Booting The Board Using TFTP
    Booting The Board Using NFS
Building Embedded Systems With Buildroot
    Introduction To Buildroot
    Downloading The Buildroot Source Code
    How To Configure The System With Buildroot
    How To Tune The Kernel Using Buildroot
    How To Build The System With Buildroot
    Testing The System With QEMU
    How To Create Custom Systems With Buildroot
Building Embedded Systems With The Yocto Project
    How To Install The Yocto Project
    BitBake And The Yocto Project Metadata
    The Yocto Project Layers
    The Yocto Project Recipes
    How To Build An Image With The Yocto Project
    How To Test The Image With QEMU
    How To Create And Manage A SDK
Embedded Linux ToolChain
A toolchain is a set of packages needed to create software for a specific device. Toolchains can be
either downloaded and installed or they can be built using a toolchain generator. A toolchain is the
very first thing that is needed when building an embedded Linux machine, as it will build:
1. The bootloader.
2. The kernel.
3. The root filesystem.
Two main compiler ecosystems can be used to assemble a toolchain:
1. The GNU’s Not Unix! (GNU) project and the compiler system GCC (gcc, g++, gfortran and many
more).
2. The Low Level Virtual Machine (LLVM) project and the compiler front end Clang (C, C++,
Objective-C and Objective-C++).
The main differences between the two are:
1. Clang only compiles C-like languages, while it relies on other technologies for Ada, Fortran,
Java, etc…
2. GCC is a more robust but memory-intensive system.
3. However, as progress is made every day, this might change soon!
While the native methodology requires software updates to be strictly controlled, as development
and target systems have to be synchronized, cross development requires a large amount of work, as
all libraries have to be cross compiled.
Toolchains have to be built according to the target machine specifications, such as CPU
architecture, endianness and floating-point support. The standard GNU toolchain consists of:
1. Compilers for C, C++, Assembly and Java that produce assembler code to be passed to as
(the GNU assembler).
2. Binutils, to turn assembly code into binary (as), link objects to create executable files (ld)
and much more!
3. A C library implementing the POSIX APIs, able to talk to the kernel.
4. Debugging tools.
The output of the following command is an example of how the GNU project identifies toolchains:
C libraries provide wrapper functions for the system calls after which they are often named.
Programs use C libraries to talk to the kernel:
• glibc is the GNU C library and the most complete implementation of the POSIX API, although a
large one.
• eglibc was a project forked from glibc and optimized for embedded systems; it is obsolete and no
longer maintained.
• musl libc is a good C standard library for systems with low RAM or limited storage. It is
released under the MIT License.
• uClibc-ng is a project forked from uClibc and it is designed for embedded systems and
mobile devices running μClinux. It is highly compatible with glibc.
Those willing to build an embedded system from scratch can choose their toolchain among the
following three options:
1. A pre-built toolchain.
2. A toolchain created from scratch before the installation of the system.
3. A toolchain generated using an embedded build tool.
Using a pre-built toolchain is the easiest option, although less flexible than the other ones. The
following pre-built toolchains are very popular: Debian-based, Yocto Project, Mentor Graphics,
TimeSys, MontaVista and Linaro. Always make sure that the chosen toolchain:
Building a toolchain from scratch is not an easy task even though several very good projects, such
as “Cross Linux From Scratch”, already exist. A simplified approach consists of using crosstool-NG
which comes with many useful scripts driven by a front-end. To install crosstool-NG:
If the previous steps succeeded, the reader will have a working installation of crosstool-NG ready
on the build machine. The following steps show how to use crosstool-NG to build a toolchain for
the emulator QEMU:
Should the build process succeed, the toolchain will be located under:
~/x-tools/arm-unknown-linux-gnueabi/bin
This folder contains tools such as the compiler, debugger and linker, all renamed with the
toolchain prefix. For example, the toolchain version of ‘ld’ will be:
~/x-tools/arm-unknown-linux-gnueabi/bin/arm-unknown-linux-gnueabi-ld
At this stage the user should start familiarizing with the new environment. Modifying the PATH
environment variable would also be beneficial:
PATH=${HOME}/x-tools/arm-unknown-linux-gnueabi/bin/:$PATH
Although the procedure above is the one that QEMU users should follow, those who own a physical
board can go through the same procedure, making sure to:
*-gcc -v
In the output produced by the line above, the section “Configured with…” can be found, which
contains the following details:
Therefore, a crossed native build would use a cross-compiler to build native packages for a different
system, a crossback build would use a cross-compiler to build packages for the build machine and a
Canadian cross build would use a cross-compiler to build another cross compiler to build packages
for a third platform.
Going back to the output of the previous gcc command, the meaning of some of the remaining
options for an x86_64 system is the following:
The next step should be analyzing the C library and its components, in order to familiarize with
this important part of the system:
• libc is the main library and contains, for example, the POSIX implementations of “printf”
and “close”. This library is so important that smart compilers such as GCC link it
automatically when the programmer forgets to do so.
• libpthread is the library for POSIX thread functions. It can be linked by using the -lpthread
flag.
• libm is the library for math functions. It can be linked by using the -lm flag.
• librt is the library for POSIX real time extensions, which includes asynchronous
Input/Output and shared memory. It can be linked by using the -lrt flag.
Libraries can be either statically or dynamically linked to those files that need them. Static linking:
1. Static library contents physically exist in the executable files that link them.
2. Executable files that use static libraries increase in size.
3. Removes any compatibility issue as each executable file includes all libraries it needs.
4. Increases execution speed.
5. Tends to create slower build processes.
6. Usually, the flag -static forces the linker into statically linking all libraries.
*-gcc -c mystaticlib1.c
*-gcc -c mystaticlib2.c
*-ar rc libmystaticlib.a mystaticlib1.o mystaticlib2.o
*-gcc myprog.c -lmystaticlib -I../usr/include -L../libs -o myprog
Where “-L” appends the directory to the list of directories to be searched for library files and “-I”
appends the directory to the list of directories to be searched for header files. On the other hand,
dynamic linking:
The linker will look for libmydynlib.so in /lib and /usr/lib; alternative directories can be
specified by modifying the shell variable LD_LIBRARY_PATH.
The shared library version number is a string that is appended to the library name to define its
version and it is not included in the symbolic link name used to load the library. For example, the
library libmylib.so.1.0.10 will have a symbolic link called libmylib.so. When another minor fix is
released as libmylib.so.1.0.11, only the symbolic link will need to be modified to point at
libmylib.so.1.0.11.
The interface number encodes the interface version the library exposed when it was built. Its
format is:
[library_name].so.[interface_number]
This is useful when major changes are deployed. Should the new version libxyz.so.2.0.10 of
libxyz.so.1.0.10 be released, backward compatibility would break. However, old programs would
not see the new library, as they were linked to libxyz.so.1.*.*
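The symbolic-link scheme can be demonstrated with empty placeholder files, reusing the libmylib names from the example above:

```shell
# Placeholder for the real library plus the loader symlink.
touch libmylib.so.1.0.10
ln -sf libmylib.so.1.0.10 libmylib.so
# A new minor release only requires repointing the symlink.
touch libmylib.so.1.0.11
ln -sf libmylib.so.1.0.11 libmylib.so
readlink libmylib.so
```

After the update, readlink shows that programs loading libmylib.so now get version 1.0.11 without being relinked.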
The following schema shows what dynamic and static libraries look like on Linux systems:
CC=gcc
CFLAGS=-I.
HelloW: HelloW.c HelloWFunc.c
$(CC) -o HelloW HelloW.c HelloWFunc.c
The file above tells make to compile a C program and an external function using gcc as the C
compiler. CFLAGS tells the compiler that header files are located in the current
directory.
To cross compile software packages using make or a makefile, the variables CROSS_COMPILE
and ARCH usually need to be set:
More simply:
ARCH=arm64
CROSS_COMPILE=arm-cortex_a8-linux-gnueabihf-
Autotools is a collection of tools used to create portable build systems. Its main aim is
providing users with a standardized build procedure: users learn a single workflow that can
compile packages using different versions of compilers while loading different header
files and different versions of libraries. Autotools consists of:
• GNU Autoconf which generates the configure script that checks the host system and creates
the makefile and header files from templates.
• GNU Automake which generates Makefile.in templates from Makefile.am templates
allowing programmers to write makefiles in a higher-level language.
• GNU Libtool which creates portable compiled libraries.
• GNULib which is a portability library.
Moreover, autotools:
Usually, to configure, build and install packages using autotools the following steps are required:
$> ./configure
$> make
$> make install
It is possible to analyze a package’s configuration by inspecting its .pc file, which is used for
tracking package installations, as follows:
The same output can be obtained by using the tool pkg-config, making sure its configuration
directory PKG_CONFIG_LIBDIR is properly set:
CMake does not depend on make and the Unix shell, therefore, it is able to run on Windows too. In
fact, it is designed to work with native build environments. In the CMake system, very simple
configuration files called CMakeLists.txt are stored into each project (sub)directory and they are
used to produce build files. CMake can interact with Microsoft Visual Studio, XCode, Eclipse CDT,
MSBuild, Make and many more. The following is an example of the extremely intuitive syntax
used to build CMakeLists.txt files:
cmake_minimum_required(VERSION 3.9)
project(HWorld)
add_executable(HWorld HWorld.c Utils.c)
install(TARGETS HWorld DESTINATION bin)
CPack is the packaging system fully integrated with CMake, although it can work by itself. CPack
can create the following types of archives: deb, rpm, gzip, NSIS and Mac OS X packages. Usually,
to configure, build and install a package with CMake the following commands are required:
$> cmake .
$> make
$> make install
The following are common issues that affect the most commonly used build systems:
Boot Process
Booting the Linux kernel on embedded systems is a process that consists of the following four
stages, in the given order:
1. ROM.
2. Secondary Program Loader (SPL).
3. Tertiary Program Loader (TPL).
4. Kernel level.
SPL configures the memory controller and other components so that the Tertiary Program Loader
(TPL) can be loaded into DRAM. If SPL comes with file system drivers, it can read files such as “u-
boot.img” from the disk. SPL usually does not allow for any user interaction. At the end of this
stage DRAM contains TPL and SPL can jump to it.
When the TPL stage is reached, full bootloaders such as U-Boot or BareBox are also running.
Users can access a command line interface to select a newer kernel, perform maintenance tasks and
much more. At the end of this step the kernel is stored in memory, ready to be started. Embedded
bootloaders usually disappear once the kernel is loaded in order to free up memory.
The firmware of many ARM and Intel platforms is based on the Unified Extensible Firmware
Interface (UEFI). Many bootloaders compatible with this standard exist, therefore, the reader can
choose any of them. For example, either systemd-boot or Barebox would be a very good choice.
The UEFI booting process consists of similar steps to those already discussed, as it can also be
divided into four steps:
Device trees, as defined in the OpenBoot standard, are tree data structures whose nodes describe
physical devices. They can either be loaded by the bootloader, which then passes them to the
kernel (usually through the R2 register), or be embedded into the kernel. Device trees are saved
into .dts files and compiled using the Device Tree Compiler to produce a Device Tree Blob.
Older ARM systems did not use device trees: they stored information inside ATAGS, whose
address was saved into the R2 register, ready to be passed to the kernel. The machine type was
also passed to the kernel as an integer, stored in the R1 register. PowerPC would simply pass the
kernel a pointer containing the address of an information structure.
The following is an excerpt of a typical .dts file used to describe a made-up board:
/ {
model = "MY Board Electronics";
/*Drivers compatibility*/
compatible = "ti,am33xx";
#address-cells = <1>;
#size-cells = <1>;
cpus {
#address-cells = <1>;
#size-cells = <0>;
/*dev name + @ + address*/
cpu@0 {
/*Drivers compatibility*/
compatible = "arm,cortex-a8";
device_type = "cpu";
reg = <0>;
};
};
memory@0x2000 {
device_type = "memory";
reg = <0x2000 0x20 0xDA00 0x10>;
};
};
In the snippet above, the cpus container node defines all CPUs present on the system. In this
instance, as only a single cpu node is defined, the board is equipped with only one CPU. Moreover,
as each node is assigned a unique id, the only CPU running on this board is identified by the
cpu@0 label. The properties #address-cells and #size-cells are defined in the snippet, as all
addressable devices have to initialize these in order to interpret the reg property:
• #address-cells is the number of 32-bit cells required to encode the address field in reg.
• #size-cells is the number of 32-bit cells required to encode the size field in reg.
The property reg is the address of the device’s resources within the address space defined by its
parent. It consists of pairs of fields, the first one being an address and the second one being a
size. In the example above, memory@0x2000 would have a 32-byte block at offset 0x2000 and a
16-byte block at offset 0xDA00. Machines that require 64-bit addressing may set #address-cells and
#size-cells to two. The following properties are also important:
The following is one more snippet defining interrupt controllers and it should be self-explanatory:
...
compatible = "comp1,comp2";
#address-cells = <1>;
#size-cells = <1>;
/*The controller is intc*/
interrupt-parent = <&intc>;
...
intc: interrupt-controller@10150000 {
compatible = "arm,pl190";
reg = <0x10150000 0x2000>;
interrupt-controller;
/*defines how to specify interrupts*/
#interrupt-cells = <2>;
};
...
serial@1a1f3000 {
compatible = "arm,pl011";
reg = <0x1a1f3000 0x2000 >;
/*defines interrupts for this device*/
interrupts = < 2 0 >;
};
Device trees have to be passed to the kernel in their binary representation, .dtb files, obtained
using the program dtc (device tree compiler). Bear in mind that this utility does not return verbose
debug information, therefore, working with custom device tree files is difficult. The device tree
compiler can also be used to unpack .dtb device tree blob files, so that they can be reverse
engineered and expanded. In conclusion, device trees:
• Only describe the hardware present on the platform and how it works; they do not
define how the hardware should be used.
• Are platform independent, therefore, they can be considered as stable structures.
• Might make the kernel bigger and might slow the boot process down.
• Should be kept minimal and contain as little details as possible, as large device trees might
make the binding process hard.
• May be extended but they should never be modified.
Introduction To U-Boot
U-Boot is a primary boot loader used in embedded devices. Originally designed for PowerPC, it is
now available for a number of architectures including x86, ARM, MIPS, 68k, SuperH, PPC, RISC-
V, MicroBlaze, Blackfin and Nios.
Initially loaded by the ROM or the BIOS, U-Boot can work with a very limited amount of resources,
as it can be split into stages: a stripped-down version of U-Boot (SPL) is loaded first to perform
basic hardware configuration, in order to start the full version of the bootloader. It comes with
a command line which users can use to boot a particular kernel, manipulate device trees, download
files, work with environment variables and much more. It requires users to specify the memory
locations of all objects it has to access: copying a ramdisk or jumping to a kernel image is done
through memory addresses.
Regardless of whether the user is using a physical board or an emulator (QEMU), the very first step
is getting U-Boot up and running so that its console can be accessed to launch commands to load the
kernel. As U-Boot can be automated, users do not always need to type these commands. The
following are the steps required to build U-Boot:
As the directory “configs” contains all available configurations, the reader should choose the one
that matches the platform the software is being built for. Also, the reader should make sure all
environment variables are set correctly, as shown in the previous chapters, before proceeding:
#Configure U-Boot
$> make CROSS_COMPILE=[board_platform] [config_file]
#Build U-Boot
$> make CROSS_COMPILE=[board_platform]
#Example for QEMU
$> make CROSS_COMPILE=arm-unknown-linux-gnueabi- qemu_arm_defconfig
$> make CROSS_COMPILE=arm-unknown-linux-gnueabi-
The steps above are quite straightforward, although certain boards might require the user to set
additional variables to have make compile the code correctly. If everything goes well, the
procedure should create the “u-boot” and “u-boot.bin” files in the “u-boot” directory. The following
are important files to familiarize with:
To test the U-Boot build with the QEMU emulator, the reader should move into the u-boot root
directory, next:
Bear in mind that, although virtual boards such as versatilepb and vexpress-a9 are still around,
they are obsolete and all code should be tested using virt instead. Once inside the QEMU prompt,
the user may launch commands such as the following:
=> printenv
arch=arm
baudrate=115200
board=qemu-arm
board_name=qemu-arm
=> reset
Should the reader own a physical board, this section shows how to make the U-Boot files
accessible to the hardware. The reader should choose a storage medium, such as an SD card, USB
mass storage or a serial interface, to store the code that is going to be read by the ROM, and
create two partitions, as follows:
Min Size   Max Size   Type    Mount Point            Content
64 MB      128 MB     FAT32   /media/[user]/boot     Bootloader
1 GB       2 GB       ext4    /media/[user]/rootfs   Root file system
Should an MLO file be required by the board, it should be copied to the boot partition; the same
should be done with u-boot.img. The board can now be powered on and accessed using the
preferred terminal program, making sure to set the port to 115200 bps with no flow control.
The program mkimage is used to create images for U-Boot that contain a Linux kernel, a root file
system, firmware, device tree blob files and much more. It is possible to create legacy images:
Or current Flattened Image Tree (FIT) images, which are designed to be more flexible and safer:
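The following sketch shows both styles; the addresses, names and compression type are illustrative
assumptions, not values mandated by the text:
#Legacy image: type, architecture, load and entry addresses go on the command line
$> mkimage -A arm -O linux -T kernel -C gzip -a 0x80001000 -e 0x80001000 -n "Linux kernel" -d zImage uImage
#FIT image: everything is described by an image source file (.its)
$> mkimage -f image.its image.itb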
The following command can be launched into a U-Boot prompt in order to load files from FAT
partitions:
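A hedged example, assuming a uImage and a uRamdisk stored on the first FAT partition of the first
MMC device; the addresses follow the discussion below:
=> fatload mmc 0:1 0x80001000 uImage
=> fatload mmc 0:1 0x83000000 uRamdisk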
The system administrator will need to make sure the chosen address ‘0x83000000’ is not
overwritten while the kernel is copied into ‘0x80001000’, as defined by the mkimage command.
The process can occur over a network using the TFTP protocol: the user will set local and server
addresses and then the image will be loaded by specifying RAM address and file name:
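A sketch of the TFTP sequence; the IP addresses are placeholders:
=> setenv ipaddr 192.168.1.10
=> setenv serverip 192.168.1.1
=> tftpboot 0x83000000 uImage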
Now images are ready to be programmed into NAND. Let’s use some Error Correction Coding in
order to prevent corruption:
=> nandecc hw
Let’s clean up NAND:
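Assuming the same offset and size used below, the erase command might look like this:
=> nand erase 0x300000 0x400000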
Finally, the following line gets the image at 0x83000000 and writes 0x400000 bytes to NAND flash
at the address 0x300000:
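Based on the description above, the command should be similar to:
=> nand write 0x83000000 0x300000 0x400000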
Similarly, written bytes can be retrieved from NAND using the following command, which reads
0x400000 bytes from offset 0x300000 from the beginning of NAND, storing them into RAM
address 0x83000000:
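Following the description above, the read command should be:
=> nand read 0x83000000 0x300000 0x400000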
As no kernel has been built yet, next chapters will have to cover this step again. However, a kernel
stored into memory can be booted by using the following:
If no initramfs is provided, then “-” can be used followed by the address pointing at the device tree
blob:
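A sketch using bootz, assuming the kernel, initramfs and device tree were previously loaded at the
given addresses (the device tree address is an illustrative assumption):
=> bootz 0x80001000 0x83000000 0x88000000
=> bootz 0x80001000 - 0x88000000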
It is also possible to create a variable with setenv that stores a whole sequence of commands; the
variable is then executed using the run command.
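For example (the variable name and the stored commands are illustrative):
=> setenv mybootcmd 'fatload mmc 0:1 0x80001000 zImage; bootz 0x80001000 - 0x88000000'
=> run mybootcmd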
So far, the boot procedure has been following these steps: ROM -> SPL -> u-boot.bin -> kernel.
U-Boot Falcon mode makes the SPL load the kernel directly, removing the need for u-boot.bin and
making the whole process faster. However, enabling this configuration is not an easy task, as it
might require the user to build U-Boot after having modified a large number of properties stored
in several configuration files.
Introduction To BareBox
BareBox is a project derived from U-Boot and originally named U-Boot v2. The aim of BareBox is
to combine U-Boot and Linux technologies: among other things, this product does not require the
user to work with memory addresses.
The directory arch/${ARCH}/configs contains all available configurations, therefore, the reader
should choose the one that matches the platform for which this package will be built. If BareBox is
being built for a physical board that requires the MLO, this will have to be built first. The MLO is
also called x-loader and it is used when the SRAM is too small to contain the entire code needed to
load the kernel. After having made sure that the environment variables are correctly set:
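A sketch of the MLO build, using the same placeholder convention as the U-Boot examples (the
configuration file name depends on the board):
$> make ARCH=arm CROSS_COMPILE=[board_platform] [mlo_config_file]
$> make ARCH=arm CROSS_COMPILE=[board_platform] menuconfig
$> make ARCH=arm CROSS_COMPILE=[board_platform]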
MLO configuration files are located in arch/${ARCH}/configs and their names usually contain the
string mlo, xload or something similar. The reader should make sure to choose the right
configuration. Moreover, the menuconfig step can be skipped if no customization is needed.
The actual bootloader can now be built following a similar procedure that points at a different
configuration file:
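For example, with the usual placeholders:
$> make ARCH=arm CROSS_COMPILE=[board_platform] [board_config_file]
$> make ARCH=arm CROSS_COMPILE=[board_platform]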
To make the BareBox files accessible to the board, one should choose a storage medium, such as
an SD card, USB mass storage or a serial interface, to store the code which is going to be read by
the ROM, and create two partitions, as follows:
Next, if an MLO file was created, it has to be copied to the boot partition; the same should be done
with barebox.bin.
After having reset the board, the reader will get a prompt similar to the Linux one: many well
known commands such as ls, cp, rm and mount will work just fine. For those who choose to load
the code from an SD card, the following command can be used to mount the first partition:
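A hedged example; the device name is an assumption and depends on how BareBox enumerates
the card:
barebox:/ mount /dev/disk0.0 /mnt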
Once everything is ready, the following command can be used to boot any BareBox image, ARM
Linux zImage or U-Boot uImage:
To pass parameters to the kernel, the following format can be used:
barebox:/ global.bootm.oftree=/mnt/am335x-boneblack.dtb
barebox:/ global linux.bootargs.root="root=/dev/mydev rootwait"
barebox:/ bootm /mnt/zImage
In conclusion, to those who are not sure which bootloader to install, the following section might be
helpful:
• As U-Boot is being used by a larger number of installations, it is very well maintained.
• U-Boot is well known for its high level of configurability and flexibility.
• U-Boot requires deep board knowledge.
• U-Boot command line forces users into working with memory addresses rather than file
names.
• U-Boot is harder to configure: many files have to be edited in order to change a
configuration.
• In BareBox, environment variables and scripts are stored as plain files, so they cannot be
mixed up.
The kernel is a trusted element that has complete and unrestricted access to the underlying
hardware, therefore, it runs in kernel space. Actions that can be performed without any special
privilege run in user space instead. Transitions from user space to kernel space are triggered by any
of the following:
However, although each platform executes a different set of steps to transition from user space to
kernel space, the basic interactions between the two spaces are similar across all systems and can be
summarized by the following:
The C library is the primary interface between the user space and the kernel, as it is able to
translate user level functions into system calls. The system call interface uses architecture-specific
technologies, such as traps or software interrupts, to switch the CPU level from user to kernel mode,
enabling the program to access CPU registers and all memory addresses. The system call handler
dispatches each call to the right kernel subsystem such as the memory manager or filesystem code.
The kernel is just one of the essential components of the Linux installation, together with the C
Library, basic command tools and much more. Moreover, the Linux kernel can be coupled with:
Operating systems derived from the original Berkeley Software Distribution (BSD), such as
FreeBSD, OpenBSD and NetBSD, are structured differently: kernel, toolchain and user space are
combined into a single code base. Linux is a kernel, while the BSDs are complete products.
Each kernel is defined by its own version number. Before July 2011, the accepted format was the
following:
2 6 39 1
Major version Minor version Revision Stable version
An odd minor version number would indicate a developer release, while an even one would mean
that the kernel was ready to be installed on end users’ computers. Every now and then a fix would be
pushed, which would increase the stable number. As the minor version number was later dropped,
the numbering jumped from 2.6.39 to 3.0.
A full cycle of kernel development starts with the opening of the merge window: the development
community then pushes all code that is deemed to be stable into mainline kernel. The window stays
open for approximately two weeks. Next, Linus Torvalds closes the window and produces release
candidates, which are labelled by appending -rc plus a number. During this time, users test the
kernel and submit bug reports and fixes. When everything is ready, the kernel is released.
Changelogs for all kernel releases are available here: http://kernelnewbies.org/LinuxVersions. After the
release of a mainline kernel, the code is then pushed to the stable tree, which is managed by Greg
Kroah-Hartman. A new development cycle can now begin on the mainline kernel, while bug fixes
will be stored in the stable tree. Releases that mainly publish bug fixes are called point releases
and are marked by a third number, the rightmost one (e.g. 3.6.2). As already explained, before
version three, four numbers were used. Moreover, some kernels are labelled as long term and
maintained for 2 years or more.
Should the reader prefer to retrieve the Linux kernel as tarball files, they are available at
https://cdn.kernel.org/pub/linux/kernel. Regardless of the method used to download the code, the
kernel folder will contain, at least, the following subfolders:
Configuring the kernel gives the reader maximum flexibility. The configuration system is called
Kconfig and it is documented in the “Documentation/kbuild” directory. The configuration process
can be text based using make config, ncurses based with a pseudo-graphical menu using make
menuconfig, Qt based using make xconfig, GTK based using make gconfig and so on. The
command make help can be used to explore all possibilities. Here it is assumed the user will
choose the most common option:
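That is, the ncurses-based menu:
$> make ARCH=[board_arch] menuconfig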
When the chosen configurator starts, it reads the main Kconfig file located in the appropriate arch
subdirectory. When the user does not set the ARCH variable during the configuration step, the
system defaults to the local machine architecture. The main Kconfig file contains references to
additional configuration files, declared using the following syntax:
source "path/Kconfig"
These external files can contain references to external Kconfig files too: they are all processed by
Kconfig.
The syntax that Kconfig understands is simple: each line starts with a keyword which may be
followed by multiple arguments. The config keyword starts a new configuration, while the
following lines define the attributes of this configuration:
config LCD
bool "Make all modules available"
depends on SCREEN
help
Write something useful to the users...
Is equivalent to:
config LCD
bool
prompt "Make all modules available"
depends on SCREEN
The default keyword assigns default values to a configuration. Dependencies for specific values
can be specified using the if keyword instead:
default y if LCD
The code above means that the default value is yes if LCD option is also activated. Default values
are always equal to “no” otherwise.
The keyword depends defines dependencies for a menu entry; multiple dependencies are combined
using two ampersands (&&). This keyword affects all the other options within the same menu
entry:
config SCREEN
depends on LCD
bool "SCREEN"
default y
In the example above, both the prompt and the default value inherit the LCD dependency defined
by the depends line. Standard dependencies restrict the upper limit of a symbol’s value, while
reverse dependencies restrict the lower one by using the keyword select.
config USE_A_STACK
bool "Graphics mode on?"
select B_GRAPHICS_NEEDED
The lines above mean that if the USE_A_STACK option is checked, the system will also enable
B_GRAPHICS_NEEDED. This type of dependency can also be managed using the keyword
imply, which still allows the symbol’s value to be changed by a direct dependency or via a visible
prompt:
config A
tristate
imply C
config C
tristate
depends on B
In the configuration above, if A = “n” and B = “y” then C defaults to “n”, but C can still be
changed to “m” or “y”; if A = “m” and B = “y” then C defaults to “m”, but C can still be changed
to “y” or “n”.
The keyword visible, which only applies to menu blocks, limits the visibility of a menu and all its
entries:
visible if [expression]
The keyword help defines help text whose end is determined by the indentation level. The
following two are equivalent:
help
Some help text
---help---
The same help text
The position of a menu entry in the tree is determined in two ways: explicitly, through menu
blocks, or implicitly, through its dependencies, as an entry that depends on another one can
become its submenu. The following example shows how the position of a menu entry can change
because of its dependencies:
config MODS
bool "Enable loadable module support"
After having correctly configured all Kconfig files, it is possible to run the configuration
procedure using any of the available tools. The build then proceeds as follows:
• The main Makefile reads the .config file and performs tasks while analyzing all
subdirectories recursively.
• The main Makefile reads the one located in the usual “arch/$ARCH” directory in order to
gather architecture-specific information.
• All makefiles present on each subdirectory will carry out commands passed from above.
As configuring the kernel from scratch is a big job, it is possible to start from the known Kconfig
files stored in arch/$ARCH/configs. The .config file can then be created using the usual syntax.
The following command reads a configuration compatible with many ARMv7-A machines and
creates a .config accordingly:
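Based on the defconfig used later in this chapter:
$> make ARCH=arm multi_v7_defconfig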
The version of a previously downloaded Linux kernel can be printed by launching the following
command, which returns the same string as the command uname:
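The target being referred to is most likely kernelversion:
$> make kernelversion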
While desktop Linux distributions use modules extensively, embedded devices do not, as their
hardware and software configurations are more stable and known at the time the kernel is built.
Therefore, embedded kernels should always be built without any modules, unless it is crucial to:
While the first line forces the build of mem.c and random.c, the following one simply treats
CONFIG_TTY_PRINTK as a variable to be replaced:
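The kernel Makefile lines being described have the following form (this pattern appears in
drivers/char/Makefile; treat the exact object names as illustrative):
obj-y += mem.o random.o
obj-$(CONFIG_TTY_PRINTK) += ttyprintk.o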
The list of the files to be built depends on what the bootloader expects. Therefore, while old
versions of U-Boot might need a uImage file, more recent ones can load zImage files using the
bootz command. Other bootloaders usually require a zImage file as well. Moreover, when the
target is an x86 system, a bzImage file is usually required. The following line of code can be used
to build a zImage:
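Using the same placeholder convention as the previous examples:
$> make ARCH=[board_arch] CROSS_COMPILE=[board_platform] zImage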
Building uImage files for ARM with multi-platform support is tricky: from Linux 3.7 onward, a
single kernel binary can run on multiple platforms, so that fewer kernels can target more ARM
devices. The bootloader passes the device tree and/or the machine number to the kernel, which
then selects the correct platform. However, as the memory location and the relocation address
might differ for each platform, a single hardcoded value does not work; bear in mind that the
relocation address is hardcoded into the uImage header by the mkimage program. To solve this
problem, one must read the memory address stored in zreladdr-y, which can be found in
Makefile.boot, and pass it to the LOADADDR variable. The “Makefile.boot” file is stored in a
location similar to the following one:
• arch/arm/mach-[your_SoC]/Makefile.boot
Therefore, if the “Makefile.boot” shows that the value of “zreladdr-y” is the address 0x80009000,
the command line to create a “uImage” file compatible with multi-platform images might be the
following:
$> make ARCH=arm CROSS_COMPILE=arm-cortex_a8-linux-gnueabihf-
LOADADDR=0x80009000 uImage
Each kernel build will generate the following two files, located in the top level directory:
• vmlinux, which is a statically linked executable file in ELF format that can be used for
debugging if it was built with the CONFIG_DEBUG_INFO flag on. It has to be made
bootable by adding a multiboot header, boot sector and setup routines.
• System.map, which contains the symbol table in a readable form. It shows the associations
between symbol names, such as variable and function names, and memory addresses. This
file is very useful for debugging.
As most bootloaders cannot handle ELF files correctly, the vmlinux file, which contains the Linux
kernel, needs to be further processed to create binary files in a format that bootloaders can
understand:
Next, all binaries are copied to arch/$ARCH/boot. For example, for an ARM Cortex-A8 board,
the following will build the zImage:
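Matching the full build sequence shown later in this chapter:
$> make ARCH=arm CROSS_COMPILE=arm-cortex_a8-linux-gnueabihf- zImage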
All device trees have to be built now, as multi-platform builds can define many of them. The make
target dtbs is used to accomplish this task: make reads the rules listed in
arch/$ARCH/boot/dts/Makefile. The following creates the device trees:
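With the usual placeholders:
$> make ARCH=[board_arch] CROSS_COMPILE=[board_platform] dtbs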
If any module is required, it can be built now. As the reader might guess, the make command that
creates modules is the following:
$> make ARCH=[board_arch] CROSS_COMPILE=[board_platform] modules
Compiled modules are created with the .ko (kernel object) suffix, as this is the extension of kernel
modules loaded by modprobe since kernel version 2.6. In fact, before Linux 2.6, a user space
program would read “.o” files and link them to the kernel in order to create the final binary image.
However, as from Linux 2.6 onward the linking is done by the kernel, the extension “.ko” was
introduced to flag these new ELF files, which carry additional information for the kernel. After the
build, these “.ko” files will be scattered throughout the source tree, therefore, the following line
can be used to place each file into the right place:
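The target is modules_install, with INSTALL_MOD_PATH pointing at the staging directory:
$> make ARCH=[board_arch] CROSS_COMPILE=[board_platform] INSTALL_MOD_PATH=$HOME/rootfs modules_install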
Should the user choose $HOME/rootfs as staging directory, all modules will be located into
$HOME/rootfs/lib/modules.
Should the reader need to rebuild multiple times using the same source folder, the following make
targets can be used to clean the environment to make sure that all artefacts of previous builds are
removed:
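The three targets, from the lightest to the most thorough, are:
$> make clean     #removes object files and most intermediates
$> make mrproper  #also deletes the .config file and other configuration artefacts
$> make distclean #also removes editor backup and patch leftover files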
Finally, this is the complete sequence of commands that is required to build the Linux kernel for an
ARM Cortex-A8 board:
$> cd linux-stable
$> make ARCH=arm CROSS_COMPILE=arm-cortex_a8-linux-gnueabihf- mrproper
$> make ARCH=arm multi_v7_defconfig
$> make ARCH=arm CROSS_COMPILE=arm-cortex_a8-linux-gnueabihf- zImage
$> make ARCH=arm CROSS_COMPILE=arm-cortex_a8-linux-gnueabihf- modules
$> make ARCH=arm CROSS_COMPILE=arm-cortex_a8-linux-gnueabihf- dtbs
The equivalent sequence for an emulated QEMU Versatile board is the following:
$> cd linux-stable
$> make ARCH=arm CROSS_COMPILE=arm-unknown-linux-gnueabi- mrproper
$> make ARCH=arm versatile_defconfig
$> make ARCH=arm CROSS_COMPILE=arm-unknown-linux-gnueabi- zImage
$> make ARCH=arm CROSS_COMPILE=arm-unknown-linux-gnueabi- modules
$> make ARCH=arm CROSS_COMPILE=arm-unknown-linux-gnueabi- dtbs
Should the reader decide to boot the kernel that was just built, an error similar to the following will
be generated:
---[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0) ]---
The reader should not be surprised, as this is the expected behaviour at this stage. The next chapters
will explain what this error means and how to fix it.
The start_kernel function opens the door to the architecture-independent section of the kernel
startup process. More in particular:
Although platforms might differ greatly when it comes to the kernel initialization process, the
schema above explains how the code that sets the system up works. If a ramdisk is present,
kernel_init tries to run the program /init, which creates and initializes the user space. If the previous
step fails, the same function tries to mount a file system by calling prepare_namespace which is
defined in init/do_mounts.c. For this step to work, the user has to pass the correct block device to
the kernel using the syntax:
root=/dev/[disk_name][partition_n]
root=/dev/[disk_name]p[partition_n]
root=/dev/mmcblk0p1
If this mount succeeds, the system tries to run the following ones in the given order, stopping at the
first one that works:
• /sbin/init
• /etc/init
• /bin/init
• /bin/sh
This default sequence can be overridden by passing either of the following kernel parameters:
• init= to define the init program to run from a mounted file system.
• rdinit= to define the init program to run from a ramdisk.
Passing parameters (bootargs) to the kernel can be done in many different ways: for instance,
through the bootloader (e.g. the U-Boot bootargs environment variable), through the chosen node
of the device tree, or by compiling them in with the CONFIG_CMDLINE option.
The following table shows a list of the most commonly used bootargs, while the complete list is
available in Documentation/kernel-parameters.txt.
Name Description
debug Set console level to 8: all messages shown
quiet Set console level to 0: only emergency messages shown
panic Defines what to do when the kernel panics
init= Init program to run from the mounted file system
rdinit= Init program to run from ramdisk
ro Mount the root device as read-only
root= Device to mount the root file system
rootdelay= Seconds to wait before mounting the root device
rootwait Wait indefinitely for the root device to be detected
rootfstype= File system type for the root device
rw Mounts the root device as read-write
The table above mentions the concept of message level which is important to debug and maintain
the kernel. In fact, printing messages is always the easiest way to debug software packages. The C
function printk does for the kernel what printf does for the userspace: these messages can be later
displayed using the Linux command dmesg. The function “printk” works by writing messages on
the __log_buf ring buffer, therefore, older messages get overwritten once the buffer fills up. The
size of this buffer can be changed by modifying CONFIG_LOG_BUF_SHIFT, which is usually
located in the init/Kconfig file, while the size of the buffer used by “dmesg” can be changed with
its “-s” option. Moreover, the function “printk” accepts an optional prefix string that defines the
loglevel of the message being logged. These messages are categorized according to their
importance, from KERN_EMERG (level 0, the highest priority) down to KERN_DEBUG
(level 7).
Should the reader be curious to boot the kernel just built on a physical BeagleBone Black board,
the following steps can be useful:
• Plug the microSD card with the U-Boot installation into the card reader.
• Plug the card reader to the build PC.
• Copy the “zImage” and .dtb file to the “boot” partition on the memory device.
• Unmount the card reader and plug it into the BeagleBone Black.
• Start a terminal emulator able to talk to the board.
• Power on the board and be prepared to press space bar when U-Boot messages appear.
• At this stage the user should get the U-Boot prompt.
• Load “zImage” from the boot partition of mmc and place it into the given starting address:
• Tell the system to use the first UART device for the console output:
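The two U-Boot commands being referred to might look like the following; the load address and
the console device (ttyO0 is the first UART on the BeagleBone Black) are assumptions:
=> fatload mmc 0:1 0x80200000 zImage
=> setenv bootargs console=ttyO0,115200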
How can the root file system the kernel needs be created? The following roadmap shows how this
can be achieved:
After this the system would be ready to execute the first program which is usually the init
executable.
Any root file system will need to contain the following components:
• daemons are background programs that are not under the control of interactive users. They
are created either by a process forking a child that is then adopted by init, or by init itself
starting another process.
• init is the daemon that starts everything else and it is the ancestor of all processes as it
adopts all orphaned processes.
• configuration files, usually stored as text files under the /etc directory, control the
behaviour of all daemons, including init.
• shell is a program that takes input from the user and executes commands.
• shared libraries linked by most programs.
• device nodes are special files that allow applications to interact with devices through
drivers via system calls. They can be either character special files, therefore unbuffered, or
block special files, therefore buffered. Nodes can be created using the mknod system call.
• procfs is a pseudo file system that presents information about processes and other operating
system related information in a hierarchical file-like structure. It is usually mapped as /proc
and it is used to get and set kernel parameters at runtime.
• sysfs is a pseudo file system that presents information about many kernel subsystems,
hardware devices and their drivers in a hierarchical file-like structure. It is usually mapped
as /sys and it is used to get and set kernel parameters at runtime. Since the release of kernel
version 2.6, much of the information has been migrating from “/proc” to “/sys”.
• kernel modules located into /lib/modules/ if the kernel is configured to use modules.
It is possible to combine many of the previous components into a single statically linked program.
Assuming this program is called “prog”, it can be started directly by pointing the init= kernel
parameter at it. This configuration should be implemented when a high level of security is needed,
as the OS will not be able to start anything else apart from “/prog”.
When designing the directory layout, users are free to implement whichever directory structure
they prefer. In fact, Android and Linux distributions come with completely different directory
layouts. Whichever the chosen structure is, the first step should always be the creation of the staging
directory, which for instance may be named “rootfs”:
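A minimal sketch of the staging directory creation; the exact set of subdirectories is a common
convention, not a requirement:

```shell
# Create the staging directory and a minimal directory layout inside it
mkdir -p "$HOME/rootfs"
cd "$HOME/rootfs"
# Conventional top-level directories for a small embedded root file system
mkdir -p bin dev etc home lib proc sbin sys tmp usr/bin usr/lib usr/sbin var
```

Everything built later (BusyBox, modules, configuration files) will be installed under this tree
before it is packed into an image.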
• ash is based on the Unix Bourne shell and is much smaller than bash. It is the default shell
on FreeBSD, NetBSD and MINIX, and its descendant dash is the default /bin/sh on many
Linux systems. It used to be the default shell on Android until version 4.0, when it was
replaced by the mksh Korn shell.
• bash is a superset of the Unix Bourne shell with many extensions and advanced features
unique to this shell, called bashisms.
• hush is a very small shell that can be run on devices with very limited memory. This shell
comes with no support for I/O redirection or pipes, therefore, many commands have to
include additional arguments to make up for this limitation.
How to install the chosen shell onto an embedded system? This will be explained soon.
Linux shells allow users to launch programs with some flow control being able to pass information
between programs. As all shells need utilities to be able to work effectively, users would have to
face two major issues:
• The amount of disk space that this collection of programs would need to be installed.
• The difficulty of tracking down and cross-compiling the code of each one of these
programs.
Users of embedded systems looked for and found a solution tailored to their unique needs. In fact,
BusyBox is one of the available solutions to the previous problem, as this package combines
stripped down versions of selected UNIX utilities, called applets, into a single executable. For
example, BusyBox comes with its own versions of init, ash, hush, vi, dd, sed, mount and many
more. To use BusyBox, one needs to type the name of the applet after the name of the main
application:
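For example, invoking the ls and mount applets through the main executable:
$> busybox ls -l
$> busybox mount -t proc proc /proc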
As the standard installation process can create soft links, users will be able to omit the initial
busybox. While the build and deployment procedures for this application will be covered shortly,
let’s take some time to dig deeper into the BusyBox architecture:
• Each applet exports its main function following the “[appletname]_main” format. For
example, “rmdir” exports “rmdir_main” in “coreutils/rmdir.c”.
• The BusyBox “main” function redirects all calls to the correct applet according to the
command line arguments that will be parsed and analyzed by “libbb/appletlib.c”.
For example, to build BusyBox for the BeagleBone Black board, the command lines will be:
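A sketch, assuming the same Cortex-A8 toolchain used for the kernel and BusyBox’s own default
configuration:
$> make CROSS_COMPILE=arm-cortex_a8-linux-gnueabihf- defconfig
$> make CROSS_COMPILE=arm-cortex_a8-linux-gnueabihf-
$> make CROSS_COMPILE=arm-cortex_a8-linux-gnueabihf- CONFIG_PREFIX=$HOME/rootfs install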
The menuconfig step might be skipped if no fine tuning is required. To change the installation
directory, just pass CONFIG_PREFIX with the desired value to make when running the
installation step.
ToyBox is a very good alternative to BusyBox, released under the BSD license instead of the
GPL. ToyBox aims to comply with standards such as POSIX-2008 and LSB 4.1 rather than
mirroring the GNU project. It has been included with Android since version 6.0. While the
procedure to build and
install ToyBox is similar to the BusyBox one, the git repository and .tar files are located at the
following addresses:
Bear in mind that the default installation directory is “/usr/toybox” and this behaviour can be
quickly modified by passing the preferred value to the PREFIX variable.
• initramfs (initial RAM file system) is a file system image that is loaded into RAM by the
bootloader. This is a temporary root file system, only present in memory, that can be used
to initialize the system.
• disk image is a copy of the root file system prepared to be copied onto the target storage
device. Commands such as dd, mkfs and genext2fs might be used to create this image.
• network file system can be used by implementing a NFS server: the file system is mounted
over the network by the target at boot time.
Introduction To Initramfs
The initramfs is a compressed cpio archive; cpio is an older and simpler archive format than tar,
and one that can be easily handled by the kernel. Using an initramfs requires the kernel option
CONFIG_BLK_DEV_INITRD to be enabled. Although many systems do not need an initramfs
file, some others do.
Initramfs files are mainly used to:
• Carry out work that would be too hard for the kernel.
• Load modules necessary to the boot procedure.
• Provide users with a minimalistic rescue shell.
Shifting tasks out of the kernel simplifies the work of system programmers and administrators, as
the kernel will not need to be rebuilt every time the initramfs is changed. An initramfs file can be
created as any of the following:
• A standalone cpio archive. This is the most flexible option, although some boot loaders
might not be able to load two separate files.
• A cpio archive embedded into the kernel.
• A device table which the kernel build process expands.
Before creating an initramfs file as a standalone cpio archive, the reader should make sure that the
staging directory, called “rootfs”, contains all required files:
$> cd ~/rootfs
$> find . | cpio --format=newc -ov --owner root:root > ../initramfs.cpio
$> cd ..
$> gzip initramfs.cpio
$> mkimage -O linux -A arm -T ramdisk -d initramfs.cpio.gz uRamdisk
The size of the resulting image can be reduced in several ways:
• Shrink the BusyBox and ToyBox installations by removing all unnecessary applets and
libraries.
• Remove all unnecessary drivers and functions from the kernel.
• Statically rebuild packages such as BusyBox.
• Use uClibc-ng or musl libc instead of glibc.
Should the kernel fail to boot under QEMU, the following checklist can be useful:
• Make sure the kernel QEMU is trying to load was compiled by the right toolchain with the
correct parameters that match the virtual machine being used.
• Use lsinitramfs -l to double check the initramfs file: broken links and missing files are quite
a common issue.
• Make sure initramfs contains all required libraries and support files.
• Check for QEMU out of memory errors.
• Check the QEMU virtual machine documentation out.
Should anything go wrong during the boot process, it would be possible to go through the same
troubleshooting steps that were listed for QEMU.
To create initramfs files as cpio archives embedded into the kernel simply rebuild the kernel setting
the value of CONFIG_INITRAMFS_SOURCE to the full path of the uncompressed cpio archive:
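For example, in the kernel configuration (the path is a placeholder):
CONFIG_INITRAMFS_SOURCE="[path_to]/initramfs.cpio"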
Device tables define device nodes, files, directories and links that go into archives or file system
images. Linux expands these tables in order to create the above-mentioned system objects. As
device tables are simple text files, ordinary users can edit them in order to create objects that will be
assigned to root, without having special privileges. A device table will need to be created and then
the CONFIG_INITRAMFS_SOURCE will need to point at its full path. Device tables are created
according to the following format:
• nod [name] [mode] [uid] [gid] [dev_type] [maj] [min]
◦ nod creates a node into the initramfs cpio.
• file [name] [location] [mode] [uid] [gid]
◦ file copies source file to the initramfs cpio with the right mode, UID and GID.
• dir [name] [mode] [uid] [gid]
◦ dir creates a directory into the initramfs cpio.
• slink [name] [target] [mode] [uid] [gid]
◦ slink creates a link into the initramfs cpio.
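A small device table following the format above might look like this (all entries are illustrative;
5 and 1 are the standard major and minor numbers of /dev/console):
dir /dev 755 0 0
nod /dev/console 600 0 0 c 5 1
dir /bin 755 0 0
file /bin/busybox [staging_dir]/bin/busybox 755 0 0
slink /bin/sh busybox 777 0 0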
As creating these files from scratch might be time consuming, the following script, which ships
with the kernel sources and has only been tested on bash, can be used to automate this process:
$> scripts/gen_initramfs_list.sh
Finally, for those who have noticed that the ps command is not working on their installations, it has
to be said that this is caused by the fact that procfs has not been mounted yet:
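Mounting procfs fixes the problem:
$> mount -t proc proc /proc
$> ps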
On minimal systems, the first user space program run by the kernel can be as simple as any of the
following:
• A shell
• A script
• An executable file
However, the init daemon should be used by all systems that need complex initialisation
procedures and that have to start and monitor other programs. This is not the only available
solution, as the following options are also available:
• sysvinit, which is a collection of System V-style init programs that includes packages such
as init, telinit, shutdown, poweroff, halt, reboot, killall5, runlevel and fstab-decode. The
telinit executable can be used to change the runlevel to the one specified by the user, forcing
init to kill all running processes that do not belong to the new state.
• systemd, which is a system and service manager commonly used on server and desktop
Linux distributions that is also to be used on more complex and advanced embedded
systems. Like init, systemd is the very first daemon that starts right after the boot process,
therefore it is assigned the PID number 1 and it is always the last to be killed during a
system shutdown. The systemd package is configured via simple text files which replace the
per-daemon startup shell scripts. Configuration files are hierarchically organised: files located in
higher-priority directories override those with the same name stored in lower-priority locations.
This software suite also comes with additional functionalities
that replace the original utilities and daemons such as cron. The following is a list of some
of utilities that come with systemd:
◦ logind, a daemon that manages user logins.
◦ networkd, a daemon that manages network interface configuration.
◦ udevd, a daemon that manages devices for the Linux kernel.
◦ journald, a daemon that manages event logging.
◦ Many more…
Although nowadays many Linux distributions no longer rely upon the traditional init as they use
systemd instead, most embedded systems are simple enough to still be managed the old way. In
fact, these devices might not need (yet) the advanced functionalities that systemd offers, such as
parallelization capabilities or stronger integration with Gnome. As already explained, BusyBox
comes with its own implementation of init that takes its configuration from /etc/inittab. As at this
stage a very simple configuration would be enough, the inittab might just include the following two
lines:
::sysinit:/etc/init.d/rcS
::askfirst:-/bin/ash
The inittab file format is 'ID:RunLevel:Action:Command', where the fields have to be read as
follows:
• ID is, for any standard init implementation, the identifier of the process. BusyBox, however,
uses this field to define the controlling tty for the specified process to run on.
• RunLevel lists the runlevels in which this entry can be processed. This field is ignored by the
init implementation offered by BusyBox; should the reader need runlevels, alternative
options such as sysvinit can be used instead.
• Action defines how the process has to be handled by the operating system and it can be set
to many different values, for example:
◦ once, which starts the program once. The process is not restarted when it ends.
◦ respawn, which has the process restarted every time it terminates.
◦ wait, which starts the process while init waits for its termination. The process is not
restarted when it ends.
◦ ctrlaltdel, which defines the program to be run when the system detects that the
Ctrl-Alt-Del key combination has been pressed.
◦ shutdown, which executes the process when the system is told to reboot.
• Command defines the command to be run and its parameters.
Therefore, in the example above, while the first line runs the script rcS, the second one starts a
login shell that will get the profile information from the following files, before displaying the
prompt:
• /etc/profile
• ~/.profile
Moreover, the script rcS is the place where file systems have to be mounted:
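The following is a minimal rcS sketch that mounts the pseudo file systems mentioned in this chapter; paths and options may need adjusting for the actual board:

```
#!/bin/sh
# /etc/init.d/rcS - minimal initialisation script
mount -t proc proc /proc
mount -t sysfs sysfs /sys
```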
Should the reader need a more robust configuration, the default inittab file that comes with
BusyBox can be used instead, as this one would, for example, unmount all devices and turn the
swap area off before rebooting.
To have QEMU executing the init program, the parameter -append has to be modified as follows:
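Assuming the kernel and initramfs built in the previous sections, the QEMU command line might look like the following sketch, where rdinit= points at the program to be run as init:

```
$> qemu-system-arm -machine versatilepb -m 256 -kernel zImage \
   -append "console=ttyAMA0,115200 rdinit=/sbin/init" -serial stdio
```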
Similarly, the reader can have U-Boot execute init by passing the right value to the boot arguments
as follows:
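A sketch of the equivalent U-Boot configuration, assuming the root file system lives on the second partition of an SD card:

```
=> setenv bootargs console=ttyO0,115200 root=/dev/mmcblk0p2 rw rootwait init=/sbin/init
=> boot
```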
Once init is started, other daemons might need to be launched at startup too, once again using one
of the applets that come with BusyBox or any other similar package. This can be achieved by
editing the inittab file as follows:
::respawn:[path_to_executable] [options]
::respawn:/sbin/syslogd -n
Here the option “-n” makes the process run in the foreground.
The program MAKEDEV is an obsolete, but still working, solution to automate the creation of
device files into the /dev directory. As this application might not be aware of all devices the system
administrator needs to configure, mknod can be invoked to manually create the remaining nodes.
The utility mknod works as follows:
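The general syntax is 'mknod [name] [type] [major] [minor]'. Creating real device nodes requires root privileges, so the sketch below demonstrates the command with an unprivileged named pipe (type p) and shows the privileged form as a comment:

```shell
# as root, the null device would be created with:
#   mknod /dev/null c 1 3
# a FIFO (type p) needs no major/minor numbers and no privileges:
rm -f /tmp/demo_fifo
mknod /tmp/demo_fifo p
ls -l /tmp/demo_fifo
```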
Although udev is an application that can be mostly found on desktop systems, it can also be used
on embedded devices which need advanced configuration. This device manager consists of
some kernel services that inform the udevd daemon when certain system events occur so that it can
trigger the proper responses. In fact, udev relies upon sysfs as when drivers register with sysfs they
become available to userspace processes and to udevd itself:
• Drivers that are compiled into the kernel register their objects with sysfs.
• Drivers that are compiled as modules register their objects with sysfs when they are loaded.
More in particular, after the kernel creates the device file into the devtmpfs file system, a uevent
message is sent to udevd which will read the following files in order to look for rules to be applied
to the device node:
• /run/udev/rules.d
• /etc/udev/rules.d
• /lib/udev/rules.d
Although going into all the details of these rules is well beyond the scope of this tutorial, the next
example can be used in order to gain a basic understanding of this topic:
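The rule below is modelled on the one shipped with the usbmuxd package; the file name and match values may differ between distributions:

```
# /lib/udev/rules.d/39-usbmuxd.rules (excerpt)
ACTION=="remove", SUBSYSTEM=="usb", ENV{PRODUCT}=="5ac/12[9a][0-9a-f]/*", ENV{INTERFACE}=="255/*", RUN+="/usr/sbin/usbmuxd -x"
```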
The rule above will have the system execute ‘/usr/sbin/usbmuxd -x’ in order to stop usbmuxd when
the last usb device is unplugged and only if the PRODUCT environment variable matches the given
regular expression. Using the following command as root, it is possible to display how udev and the
kernel communicate to one another:
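One way to do so is the udevadm monitor command, which prints kernel uevents and udev events as they happen:

```
root> udevadm monitor --kernel --udev
```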
At present, this technology can be retrieved and installed in many different ways:
• The traditional udev is part of systemd and it is installed and configured by default when the
latter is installed.
• The package eudev is a Gentoo fork of udev that is not part of systemd. This is the best
solution for those who do not need the entire software suite.
BusyBox comes with the mdev applet, which is a light-weight version of udev and it can be used to
create nodes and execute updates on embedded systems. In order to run properly, this application
requires sysfs to be mounted on /sys and mdev to be registered as the kernel hotplug handler.
The following snippet is an example of how to edit the init script to run this applet:
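A sketch of such an init script fragment, which mounts sysfs, registers mdev as the kernel hotplug handler and scans sysfs to populate /dev:

```
mount -t sysfs sysfs /sys
echo /sbin/mdev > /proc/sys/kernel/hotplug
mdev -s
```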
The mdev applet can be configured by editing the /etc/mdev.conf file, which controls the ownership
and the permissions of device nodes. The content of this file will be similar to the following:
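A hypothetical mdev.conf sketch, where each line follows the format '[device regex] [user:group] [octal mode]':

```
# device regex   user:group   mode
null             root:root    666
console          root:root    600
tty[0-9]*        root:tty     660
sd[a-z][0-9]*    root:disk    660
```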
Lastly, this section of the tutorial is an introduction to devtmpfs, a pseudo file system onto which
the kernel creates nodes for all devices it knows about and for all the new ones that are detected at
run-time. This file system is mounted over /dev at boot time. In order to use this technology, the kernel
has to be built with the CONFIG_DEVTMPFS option enabled, which might not be the default
configuration for all platforms. To try devtmpfs, the following command can be launched using the
root account:
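The command simply mounts the pseudo file system over /dev:

```
root> mount -t devtmpfs devtmpfs /dev
```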
To permanently activate this technology, the line above has to be added to an initialisation script
such as the standard rcS.
Conclusions
As already explained, very often Linux systems are configured to have all device nodes
automatically created by devtmpfs, while mdev and udev are used just to implement rules and
setting ownership and permissions. Moreover, simple static device nodes can be preferred over
dynamically created ones when:
• The booting procedure should be as fast as possible and CPU cycles cannot be spent creating
device nodes.
• The hardware configuration is not supposed to change over time.
Linux offers many different tools to configure networking, for example:
• Buildroot, among its many options, allows system administrators to set up the networking of
bootable Linux environments, just by preparing a bunch of configuration files and make
settings.
• nmcli configures systems on which NetworkManager is up and running, where it can
quickly setup and monitor network connections.
• Netplan is the default network configuration tool installed on Ubuntu systems. It allows
system administrators to easily setup networking interfaces via YAML files. This tool
supports NetworkManager and systemd-networkd as backend services.
• systemd-networkd can also be used on embedded systems, where it can manually configure
network devices and network properties via text files.
• ConnMan is a network manager explicitly designed for embedded systems: it can easily
configure and manage WiFi, Ethernet, Bluetooth, 2G, 3G and 4G network cards via text
files. Choosing ConnMan is a good idea for those willing to use systemd without networkd.
This tool also allows system administrators to create customised plugins and automation
scripts.
• WICD is a simple and lightweight alternative to NetworkManager and it can manage WiFi
and Ethernet network connections. At present, this technology does not come with
any support for advanced configurations such as DSL routing.
This section of the tutorial explains how to set up basic networking on embedded systems using
BusyBox. To do so, it is assumed that the physical board is equipped with an Ethernet interface
called eth0, able to communicate on an IPv4 network. In the staging directory ‘rootfs’ used during
the previous tutorials, the following folders will have to be created with the proper permissions:
• var/run
• etc/network/if-pre-up.d
• etc/network/if-up.d
• etc/network
In order to assign a static IPv4 address to eth0, 192.168.100.2/24 for example, a text file called
‘etc/network/interfaces’ has to be created with the following content:
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 192.168.100.2
    netmask 255.255.255.0
    network 192.168.100.0
Should the board need a dynamic address, the same file will have to contain the following instead:
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp
Next, udhcpc, the BusyBox DHCP client, needs to be set up by dropping its configuration script at
'/usr/share/udhcpc/default.script'. This script should be modelled on the default example located in
the BusyBox sources, which might be 'examples/udhcp/simple.script'. The DHCP server
counterpart, udhcpd, instead allows system administrators to configure the following parameters:
• Start and end of IP lease block.
• Interfaces names that will use DHCP.
• Maximum number of leases.
• How long an offered address is reserved.
• The amount of time that an IP will be reserved, should an ARP conflict or DHCP decline
message occur.
• Time period at which udhcpd will write out leases file.
• BOOTP specific options such as next server and TFTP server names.
• Static leases map.
• DHCP specific options such as the addresses of important servers (WINS, DNS and
gateway), static routes to be used and other network specific options.
As the board needs to correctly locate objects such as protocol numbers, passwords and host
addresses, on most glibc-based systems administrators will have to create the ‘/etc/nsswitch.conf’
file, providing the Name Service Switch (NSS) with information about name resolution
mechanisms and common configuration databases used by the system. The following example of
nsswitch.conf would be enough for most systems:
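A minimal sketch, mapping the most common databases to local files and host resolution to files plus DNS:

```
# /etc/nsswitch.conf
passwd:     files
group:      files
shadow:     files
hosts:      files dns
networks:   files
protocols:  files
services:   files
ethers:     files
rpc:        files
```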
Finally, the administrator should copy from the toolchain sysroot all libraries Linux uses to
perform the name resolution. As these are neither modules nor dependencies, they would not show
up in the output of commands such as ldd or readelf:
$> cd $HOME/rootfs
$> cp -a $SYSROOT/lib/libnss* lib
$> cp -a $SYSROOT/lib/libresolv* lib
The tool nmcli can be used to configure NetworkManager, a daemon that relies on its own
configuration files, '/etc/NetworkManager/NetworkManager.conf' (usually left as default) and
'/etc/NetworkManager/conf.d/*'. The following example shows how to connect the board to a
wireless network using nmcli:
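A sketch of the wireless connection steps, where the SSID and passphrase are placeholders:

```
root> nmcli radio wifi on
root> nmcli dev wifi list
root> nmcli dev wifi connect [SSID] password [passphrase]
```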
The following nmcli command lines assign a static IP address to an Ethernet network card and have
to be run as root:
root> nmcli con add type ethernet ifname eno1 con-name static-eno1 ip4 192.168.100.2/24 gw4 192.168.100.1
root> nmcli con mod static-eno1 ipv4.dns "192.168.100.100,192.168.100.101"
root> nmcli con up static-eno1
The tool netplan is good at simplifying the management of systems running systemd-networkd. In
fact, while the former only requires one configuration file, the latter instead needs up to three of
them. The following is an example of a netplan configuration file, saved as ‘/etc/netplan/01-
netcfg.yaml’, that can setup an Ethernet network card:
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: no
      dhcp6: no
      addresses: [192.168.100.2/24]
      gateway4: 192.168.100.1
      nameservers:
        search: [local.domain]
        addresses: [192.168.100.100, 192.168.100.101]
Next, as superuser, the following command has to be run in order to apply the above configuration:
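The command is netplan apply, which regenerates the backend configuration and restarts the relevant services:

```
root> netplan apply
```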
Very complex systems can benefit from running systemd-networkd, especially those ones that
need to spawn isolated environments which can be done using systemd-nspawn, a command able to
fully virtualize file systems, all IPC subsystems, host names and domain names. The following
example file called ‘/etc/systemd/network/100-wired.network’ can be used on systems running
systemd-networkd to set up a static IP on a network adapter:
[Match]
Name=eth0
[Network]
Address=192.168.100.2/24
Gateway=192.168.100.1
DNS=192.168.100.100
The following snippet of the file ‘/var/lib/connman/settings’ shows the typical ConnMan
configuration to assign static IP addresses to an Ethernet network adapter:
[my_home_ethernet]
Type = ethernet
IPv4 = 192.168.100.2/255.255.255.0/192.168.100.1
IPv6 = 2001:db8::42/64/2001:db8::1
MAC = 00:A0:C9:09:C7:28
Nameservers = 192.168.100.100,192.168.100.101
SearchDomains = local.domain
Domain = another.domain
Unlike other approaches, genext2fs is able to build an ext2 file system image without requiring:
• Mounting the image file before being able to copy files onto it.
• Superuser permissions in order to run.
The device table format that genext2fs is able to understand is the following (a row for each device
or group):
• [name] [type] [mode] [UID] [GID] [major] [minor] [start] [inc] [count]
While the initramfs device tables require the system administrator to specify all files, in order to
create other file system image formats, the configuration will only define the staging directory with
all the exceptions that have to be applied to obtain the expected file system layout. The following
lines will create the ‘/dev’ and ‘/dev/mem’ directories:
#name type mode UID GID major minor start inc count
/dev d 755 0 0 - - - - -
/dev/mem c 640 0 0 1 1 0 0 -
The following can be used to create four teletype devices, from tty0 to tty3:
#name type mode UID GID major minor start inc count
/dev/tty c 666 0 0 5 0 0 0 -
/dev/tty c 666 0 0 4 0 0 1 3
The following to create the master disk on IDE primary controller named ‘/dev/hda’ and its four
partitions, from hda1 to hda4:
#name type mode UID GID major minor start inc count
/dev/hda b 640 0 0 3 0 0 0 -
/dev/hda b 640 0 0 3 1 1 1 4
Finally, the following creates the null character device:
#name type mode UID GID major minor start inc count
/dev/null c 666 0 0 1 3 0 0 -
Once the device table file is ready, genext2fs can be run as follows:
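A hedged example, assuming a staging directory called 'rootfs', a device table saved as 'device_table.txt' and an image of 65536 blocks:

```
$> genext2fs -b 65536 -d rootfs -D device_table.txt rootfs.ext2
```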
The next step is copying the ‘rootfs.ext2’ file to the board using dd, which requires root
permissions. As seen before in this course, the board could be using a SD card to store boot files
and the root file system. Should the second partition on the SD card be used as root file system
storage, the ‘dd’ command line will look like this:
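Assuming the SD card appears on the host as the hypothetical device /dev/sdb:

```
root> dd if=rootfs.ext2 of=/dev/sdb2 bs=4M conv=fsync
```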
The board could then be started using a sequence of commands similar to the ones shown in the
previous chapters. Alternatively, the TFTP protocol can be used to load the following objects over
the network at boot time:
• Kernel
• Device tree
• Initramfs
• Root file system
The TFTP protocol needs a server (which can be configured on any personal computer) and a client
(which runs on the embedded system being configured). While the configuration of the server
changes according to the chosen operating system, the following is an example of the U-Boot
command line sequence that will load all files listed above from the server through the network:
=> setenv ipaddr [client_ip]
=> setenv serverip [server_ip]
=> setenv netmask [net_mask]
=> setenv bootargs console=ttyO0,115200 root=[root_file_system] rw rootwait ip=${ipaddr}
=> tftpboot 0x80200100 zImage
=> tftpboot 0x80F00100 [device_tree]
=> bootz 0x80200100 - 0x80F00100
As the device tree, initramfs and root file system can also be loaded from an NFS server, should the
reader choose to implement this scenario, the U-Boot sequence of commands will need to be
modified as follows:
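A sketch of the NFS variant, where the server address and export path are placeholders:

```
=> setenv serverip [server_ip]
=> setenv bootargs console=ttyO0,115200 root=/dev/nfs rw nfsroot=[server_ip]:[nfs_export_path] ip=${ipaddr}
=> tftpboot 0x80200100 zImage
=> tftpboot 0x80F00100 [device_tree]
=> bootz 0x80200100 - 0x80F00100
```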
Making sure that the values passed to ‘root=’ and ‘nfsroot=’ are both correct.
Several build systems are available to automate the creation of complete embedded Linux images,
for example:
• Buildroot
• EmbToolkit
• OpenEmbedded
• OpenWrt
• PTXdist
• The Yocto Project
Introduction To Buildroot
This chapter will dive into Buildroot, a collection of Makefiles and patches, designed to simplify
the process of building Linux embedded systems, even the complex ones, through cross-
compilation. Although the main goal of this product is creating root file system images, Buildroot is
also able to build bootloaders, kernel images and custom packages. More in particular, Buildroot
can build systems with the following characteristics (including, but not limited to):
• C library: uClibc-ng, glibc and musl.
• Bootloader: U-Boot, grub2, BareBox, afboot-stm32, s500-bootloader.
• File system: ext2, ext3, ext4, squashfs, jffs2, cpio, initial RAM file system.
• Kernel: patches installation, DTB build, zImage support, uImage support, vmlinux support.
• Packages: BusyBox, OpenSSH, Qt and custom.
• System configuration: networking, timezone, init (BusyBox, systemd, OpenRC and
sysvinit) and ‘/dev’ management (static, eudev, mdev and devtmpfs).
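The Buildroot sources can be obtained with git as follows, where the version tag is left as a placeholder:

```
$> git clone https://gitlab.com/buildroot.org/buildroot.git
$> cd buildroot
$> git checkout [version_tag]
```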
To get the list of all available Buildroot versions to choose from, the reader can press 'TAB' after the
'git checkout' command. The same code is also available as a tar archive at the following address:
• https://buildroot.org/downloads
Next, in the ‘docs/manual/prerequisite.txt’ file (this path might change) the user will have to find
the list of mandatory packages that are expected to be already installed on the build machine.
At this stage, Buildroot should be ready to go. To configure a system from scratch and create the
‘.config’ file, any of the following commands can be launched from ‘buildroot’ directory:
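Any of the usual Kconfig front-ends will do, for example:

```
$> make menuconfig
$> make nconfig
$> make xconfig
$> make gconfig
```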
Otherwise, the system administrator can simply choose any of the configuration files stored into the
‘configs’ directory, making sure to select the one that matches the board or emulator architecture. A
list of the available configurations can be printed on screen by running the following:
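Buildroot provides a dedicated make target for this:

```
$> make list-defconfigs
```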
Please note that a *_defconfig file contains only the options for which a non-default value has been
chosen. Next, the following will create the .config file at the Buildroot top level
(CONFIG_DIR=$TOPDIR), according to the directives contained in the chosen configuration file:
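For example, targeting the versatilepb QEMU board used later in this chapter (the defconfig name may vary between versions):

```
$> make qemu_arm_versatile_defconfig
```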
The reader should open the .config file that was just created in order to become familiar with its
options and their values. If needed, the kernel and other components can now be tuned and
configured by running any of the following:
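Buildroot forwards the configuration interfaces of the packages it builds, for example:

```
$> make linux-menuconfig
$> make busybox-menuconfig
```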
$> make
After having run the command above, the Buildroot folder will contain the following directories:
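As a rough sketch, the top level of a Buildroot tree after a build typically looks like this (the exact layout may vary between versions):

```
$> ls
arch/     board/      boot/     configs/  dl/       docs/
fs/       linux/      output/   package/  support/  system/
toolchain/  utils/
```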
Almost all directories above, apart from those ones that store the output, will contain at least the
following files:
• *.mk files, used by the Makefile to define the software to download, configure, build and
install. For example, 'file1.mk' will contain the information that drives the compilation of
the 'file1' package. These files are written in standard 'make' syntax and are included by the
top-level Makefile.
• Config.in files, written according to the Kconfig standard, which contain details about the
configuration settings of all objects of the build.
The following lines allow the reader to test a Buildroot build that targets the versatilepb board on
QEMU (login credentials are 'root' with a blank password):
$> FILEDIR=output/images
$> qemu-system-arm -machine versatilepb -m 256 -dtb ${FILEDIR}/versatile-pb.dtb -kernel
${FILEDIR}/zImage -drive file=${FILEDIR}/rootfs.ext2,if=scsi,format=raw -append
"root=/dev/sda console=ttyAMA0,115200" -serial stdio -net nic,model=rtl8139 -net user
Before starting a build, the reader should double check the following:
• board/[brand]/[model] contains all configuration files, patches, binary blobs and extra
build steps for the Linux kernel and U-Boot. This also includes ‘genimage.cfg’, an intuitive
configuration file read by ‘genimage’ which creates the bootable image for the storage
device, usually a SD card (use ‘dd’ to copy this file). Buildroot comes with many working
‘genimage.cfg’ for all supported boards that can be taken as example for custom
configurations. Also, bear in mind that ‘genimage’ is usually called by ‘post-image.sh’, a
script that runs right after the build and is to be saved in the same directory. The command
‘make menuconfig’ (‘System configuration’ menu) can be used to make sure the correct post
image script is called.
• configs/[model]_defconfig contains the board default settings.
• package/[brand]/[package] contains all packages that need installing.
This is very important especially when the system administrator is using Buildroot to create custom
software for a specific board. In fact, to customize U-Boot one will have to:
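A plausible sequence, using the per-package configuration targets that Buildroot exposes:

```
$> make uboot-menuconfig
$> make uboot-rebuild
```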
Once a new configuration has been tested, it can also be saved using:
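For example, the following targets save the U-Boot and the Buildroot configurations respectively:

```
$> make uboot-update-defconfig
$> make savedefconfig
```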
To add custom packages that system administrators will be able to select from a make menu:
1. Create a subfolder into ‘package’ for the new application. For example, if the new software
is called ‘mynewapp’, the relative path will be ‘package/mynewapp’
2. Store the mynewapp Config.in in ‘package/mynewapp’.
3. Make the new package visible to Buildroot by editing ‘package/Config.in’.
4. Create the mk file called ‘mynewapp.mk’ in ‘package/mynewapp’. For simple packages,
system administrators should be able to create custom configurations just by looking at any
working *.mk.
config BR2_PACKAGE_MYNEWAPP
bool "This is my new app"
While the following is to be added to package/Config.in to make Buildroot aware of the new
software:
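The line to add is a Kconfig source directive:

```
source "package/mynewapp/Config.in"
```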
• Build the package using the Buildroot toolchain located in the ‘output/host/usr/bin’
directory.
• Copy the package to the staging area which would be something like
‘board/[brand]/[model]/overlay/usr/bin’.
• Use ‘make menuconfig’ to modify the value of BR2_ROOTFS_OVERLAY by accessing the
menu ‘System configuration >> Root filesystem overlay directories’.
Introduction To The Yocto Project
The Yocto Project reference distribution, Poky, can be downloaded from the following address:
• http://downloads.yoctoproject.org/releases/yocto/
After having checked that the host machine matches the system requirements, as outlined in the
‘documentation/ref-manual’ directory, before running any of the Yocto commands shown in this
tutorial, the following have to be executed:
$> cd poky
$> source oe-init-build-env
The ‘oe-init-build-env’ script needs to be sourced at the beginning of each work session in order to
set up the environment.
The main task of BitBake is parsing metadata to find tasks to be executed to produce the expected
output. The behaviour of BitBake can be controlled via metadata on a global level through the
following files:
• Recipes (.bb), contain the information needed to build a single package, such as:
◦ Source code location
◦ Dependencies information
◦ Patches location
◦ Compilation information
◦ Packaging information
• Class data (.bbclass), contain information and settings that recipes need to share
• Configuration data (*.conf), define configuration variables such as compiler options,
machine configuration options and much more
Yocto Project uses layers to store and organise its metadata: non-homogeneous metadata should not
be mixed up in the same layer. Therefore, for example, the GUI and the middleware metadata
should be split into two different layers. All layer names begin with the ‘meta’ prefix. BitBake will
try to find layers by traversing the list of directories specified in the
‘[project_home]/conf/bblayers.conf’ file by the BBLAYERS variable, as in the following excerpt:
BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS ?= " \
${TOPDIR}/layer1 \
${TOPDIR}/layer2 \
${TOPDIR}/layer3 \
${TOPDIR}/layer4 \
"
To add support for a specific platform or technology, system administrators can either create a new
layer from scratch or simply download one from the internet. To create a layer from scratch, the
reader can run the following to generate the basic structure of the new layer:
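Recent Yocto releases provide a bitbake-layers subcommand for this purpose:

```
$> bitbake-layers create-layer meta-[layer_name]
```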
After having customised the new layer's ‘layer.conf’, the system administrator needs to register the
layer, which modifies the ‘bblayers.conf’ file accordingly, by running the following:
$> bitbake-layers add-layer [layer_name]
To make sure the new layer has been actually added, the system administrator can run the
following:
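The list of configured layers, with their paths and priorities, is printed by:

```
$> bitbake-layers show-layers
```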
Recipes are files that contain tasks whose names always begin with the ‘do_’ prefix (‘do_build’
always being the default task), written using Python and shell script, parsed and executed by
BitBake. The following example recipe builds and installs a piece of software:
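The sketch below is a hypothetical recipe for a single-file C program; the licence checksum refers to the MIT licence text shipped with Poky:

```
SUMMARY = "Example hello world application"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"

SRC_URI = "file://hello.c"
S = "${WORKDIR}"

do_compile() {
    ${CC} ${CFLAGS} ${LDFLAGS} hello.c -o hello
}

do_install() {
    install -d ${D}${bindir}
    install -m 0755 hello ${D}${bindir}
}
```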
As the recipes syntax is very intuitive, readers with a good programming background will be able to
start creating their own custom recipes in no time.
As already explained, the first step is sourcing the ‘oe-init-build-env’ script to prepare the
environment and set up the directory ‘build/’ as the working directory. The default working
directory can be changed by passing the desired folder as parameter, as follows:
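For example:

```
$> source oe-init-build-env [build_dir]
```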
The system administrator will now open the ‘build/conf/local.conf’ file looking for the ‘MACHINE’
variables group, making sure to uncomment the line matching the targeted architecture. The
configuration below targets the ARM architecture for the QEMU emulator:
MACHINE ?= "qemuarm"
#MACHINE ?= "qemuarm64"
#MACHINE ?= "qemumips"
#MACHINE ?= "qemumips64"
#MACHINE ?= "qemuppc"
#MACHINE ?= "qemux86"
#MACHINE ?= "qemux86-64"
Running the following command inside the ‘poky’ directory shows all available images the system
administrator can choose from:
$> ls meta*/recipes*/images*/*.bb
After having chosen an image, the system administrator can run the build:
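For example, to build the minimal console image:

```
$> bitbake core-image-minimal
```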
The default settings will place all artefacts in the ‘poky/build/tmp’ directory. Moreover, the
administrator might also want to monitor the following folders:
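Commonly monitored locations include the following (a sketch; exact paths depend on the local.conf settings):

```
build/tmp/work/      # per-recipe build areas
build/tmp/deploy/    # final images, packages and SDK installers
build/sstate-cache/  # shared state reused across builds
build/downloads/     # fetched source archives
```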
To test the image that was just built in the previous step, the system administrator will need to
locate it first. As already explained, this file can be found in the architecture-specific folder
‘poky/build/tmp/deploy/images/[arch_specific]’. After having located the image, the system
administrator can run:
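Poky ships a wrapper script that locates the artefacts and launches the emulator:

```
$> runqemu qemuarm
```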
When the image is fully loaded, the administrator should be able to login as ‘root’ providing a blank
password.
To quickly add recipes to a test or development image without having to manage layers, the system
administrator can simply modify the ‘IMAGE_INSTALL_append’ variable in the ‘local.conf’ file as
follows (including a space right before the first package to include):
IMAGE_INSTALL_append = " new_pkg_one new_pkg_two"
Should any of the test packages be needed on a production image too, the administrator will create a
recipe file that includes:
• A ‘require’ directive that points at the desired base image stored into any of the ‘images’
directories
• The ‘IMAGE_INSTALL’ variable to list the additional packages
require [recipe_dir]/images/[image_file].bb
IMAGE_INSTALL += "new_pkg_one new_pkg_two"
However, working with any of the ‘IMAGE_INSTALL’ variables is dangerous, as they might cause
issues with dependencies.
Generating a SDK allows programmers, software engineers and software testers to work on images,
kernels, applications and QA test cases without having to install the full Yocto Project:
• The system administrator creates the SDK and installs it on the software engineer's machine
• The system engineer builds or downloads the target image
• The system engineer starts developing or testing an application
The following command line creates the SDK installation script in the ‘build/tmp/deploy/sdk’ directory:
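Assuming the core-image-minimal image chosen earlier, the SDK is produced with the populate_sdk task:

```
$> bitbake core-image-minimal -c populate_sdk
```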
After having installed the SDK, the software engineer will need to source the correct script to
initialise the environment: the script to run will not be ‘oe-init-build-env’, as this one does not work
for the SDK, but the ‘environment-setup-*’ script created by the SDK installer.