Toolchain Container Images

Revision as of 10:42, 18 September 2020 by Pzimmermann (talk | contribs)

DH electronics provides Docker images with preinstalled toolchains.

Introduction

For cross-development of applications we provide Docker container images with preinstalled toolchains. These are intended for use with the VM for Application Development. You need to have Docker installed (How to install Docker); in our VM, starting with Stretch Vxx, Docker is preinstalled and preconfigured. The container images are available on Docker Hub.

Note: For userspace application development, we recommend using the ELBE/Yocto-SDK that came with your root filesystem, because each SDK ships with all development headers and libraries needed for its respective root filesystem.

Available Toolchains

Images with native Debian GCC toolchain

The Docker repository dhelectronics/debian-build-essential contains images with the standard native GCC toolchain of Debian.

The images are based on the Debian (slim variant) image with cmake, ccache, curl, bc, lzop, xz-utils and jq additionally installed. The Debian GCC toolchain is installed via the package build-essential.

Tags consist of a combination of the Debian version and the architecture of the image (e.g. buster-amd64). Currently, any combination of the Debian versions jessie, stretch and buster with the architectures amd64, arm32v5 and arm32v7 is possible. Note that Docker uses different names to distinguish the ARM architectures: arm32v5 corresponds to Debian's armel, while arm32v7 corresponds to Debian's armhf architecture.
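For example, to fetch the image with the native toolchain for Debian buster on the armhf (Docker: arm32v7) architecture, use the tag scheme described above:

```shell
# Pull the buster image built for arm32v7 (Debian armhf)
docker pull dhelectronics/debian-build-essential:buster-arm32v7
```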

You can use QEMU's user-mode emulation to run the ARM containers on an amd64 machine; the resulting binaries of a build still run on the respective ARM architecture. To use this, install the packages binfmt-support and qemu-user-static on the host. To activate it for the container:

  • If your host runs Debian stretch or earlier, you have to include the user-mode emulator in the container at container start. This can be done with a bind mount: add the option --mount type=bind,src=/usr/bin/qemu-arm-static,dst=/usr/bin/qemu-arm-static to the run command of the container.
  • If your host runs Debian buster or later, this works automatically. You do not have to alter the run command of the container.
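Putting the steps above together, a complete sequence for a stretch (or earlier) amd64 host might look as follows (the gcc --version call is only an illustrative check, not part of a build):

```shell
# One-time host setup for QEMU user-mode emulation
sudo apt-get install binfmt-support qemu-user-static

# Start an ARM container on the amd64 host, bind-mounting the emulator
# (the --mount line is only needed on Debian stretch or earlier hosts)
docker run -it --rm \
    --mount type=bind,src=/usr/bin/qemu-arm-static,dst=/usr/bin/qemu-arm-static \
    dhelectronics/debian-build-essential:buster-arm32v7 \
    gcc --version
```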

Note that running a container on emulated hardware affects the performance of the compiler. Tests have indicated that building the Linux kernel with one thread using the native armhf compiler on an amd64 machine under emulation is about 9 times slower than building the same kernel with one thread on the same machine with a crossbuild compiler.

Images with crossbuild GCC toolchain

The Docker repository dhelectronics/debian-cross-build-essential contains images with the GCC toolchain for cross-compiling.

The images are based on the Debian (slim variant) image with build-essential, cmake, ccache, curl, bc, lzop, xz-utils, jq, git and ketchup additionally installed. All images run on an amd64 host. There are two kinds of images:

Debian crossbuild toolchain (DIST)

These images use the standard Debian crossbuild toolchain for armhf. The toolchain is installed via the package crossbuild-essential-armhf. At the moment, images based on stretch and buster are available.

Linaro/ARM toolchain (DIST-linaro-X)

These images use the toolchain of Linaro (up to GCC 7) or ARM (beginning with GCC 8) in version X, with Debian DIST as the base of the image (e.g. stretch-linaro-8). The toolchain is installed under /opt, and the PATH variable is extended to include the directory with the toolchain's binaries. At the moment, all images are based on Debian stretch, and the GCC versions 4.9, 6, 7 and 8 are available.
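Since the toolchain binaries are already on the PATH inside these images, you can check which compiler a given image ships by calling it directly (a quick sanity check, not part of a build):

```shell
# Print the version of the cross compiler contained in the image
docker run --rm dhelectronics/debian-cross-build-essential:stretch-linaro-8 \
    arm-linux-gnueabihf-gcc --version
```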

Using the containers

Open console inside the container

You can start the container with the current work directory mounted into the container:

$ docker run -it --rm --mount type=bind,src=$(pwd)/,dst=$(pwd) --workdir $(pwd) dhelectronics/debian-cross-build-essential:buster

After the container has started, a console opens; now you can run any command to build the application (e.g. make all). When the build is finished, you can quit the console with CTRL+D.

Call the buildsystem at container start

Alternatively, you can append the build command directly to the run command of the container:

$ docker run -it --rm --mount type=bind,src=$(pwd)/,dst=$(pwd) --workdir $(pwd) dhelectronics/debian-cross-build-essential:buster make all
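The same pattern scales to larger builds. As a sketch, cross-compiling a Linux kernel from the current directory might look like this (the ARCH/CROSS_COMPILE values and the zImage target are assumptions about your project, not fixed by the images):

```shell
# Run the kernel build inside the crossbuild container; the source tree
# in the current directory is bind-mounted into the container
docker run -it --rm \
    --mount type=bind,src=$(pwd)/,dst=$(pwd) --workdir $(pwd) \
    dhelectronics/debian-cross-build-essential:buster \
    make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- -j"$(nproc)" zImage
```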

Use the symlink wrapper

We created a Python script called docker-symlink-wrapper.py (Not yet downloadable). This script can create symlinks which point into a container. When one of these symlinks is called, the script itself is invoked: it starts the appropriate container for this symlink, runs the corresponding command inside of it and passes all arguments along. It is possible to set the tag of the container image which should be started. The examples below use the debian-cross-build-essential image; for what the commands do exactly, look at the documentation.

To create the symlinks, you need a JSON file which defines the needed things about the container images:

{
	"symlinks":[
		"arm-linux-gnueabihf-as",
		"arm-linux-gnueabihf-ld",
		"arm-linux-gnueabihf-gcc",
		"arm-linux-gnueabihf-g++",
		"arm-linux-gnueabihf-ar",
		"arm-linux-gnueabihf-nm",
		"arm-linux-gnueabihf-strip",
		"arm-linux-gnueabihf-objcopy",
		"arm-linux-gnueabihf-objdump"
	],
	"registry":"",
	"image":"dhelectronics/debian-cross-build-essential",
	"tag":"stretch-linaro-8",
	"installpath":"/usr/local/bin"
}

Then you can call the symlink script with superuser privileges to create the symlinks:

sudo docker-symlink-wrapper.py install cross-build install.json

Now the symlinks are installed, and every call to arm-linux-gnueabihf-gcc or one of the other symlinks is executed inside the container. Note that when calling a symlink, only the current working directory is mounted into the container! If you want to compile a file, you have to be inside the directory of the file or one of its parent directories.
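For example, after installing the symlinks you can cross-compile from inside a project directory as if the toolchain were installed locally (the directory and hello.c are placeholders for your own project):

```shell
# The current working directory is mounted into the container by the wrapper
cd ~/projects/hello
arm-linux-gnueabihf-gcc -o hello hello.c
```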

To get the list of available versions of the toolchain you can use the following command:

docker-symlink-wrapper.py list-versions cross-build

To set the tag (i.e. the version of the toolchain) used by the symlinks, use the set-version command. For example, if you want to use Linaro GCC 6:

sudo docker-symlink-wrapper.py set-version cross-build stretch-linaro-6

Extend the container with libraries

Note: If you only need libraries that ship with your root filesystem, we recommend using the ELBE/Yocto-SDK which came with this root filesystem. The corresponding headers and libraries are already preinstalled in the SDK.

Create a modified container image

You can create a new container image which includes the needed library. For this, create a new Dockerfile inside an empty directory. Here is an example Dockerfile that adds the C/C++ libraries of mosquitto (MQTT broker/client) to the debian-build-essential image:

FROM dhelectronics/debian-build-essential:buster-arm32v7
RUN apt-get update && apt-get install -y --no-install-recommends libmosquitto-dev libmosquittopp-dev 

Now you can create the new container image with:

docker build -t your-custom-image:latest .

After that, the container can be started like any other container. If you want to use the symlink script, you have to create your own JSON file for the symlinks. The symlinks of the normal debian-build-essential container images have to be removed first because they would collide with each other (unless you install the symlinks into another directory, but then the symlink which comes first in the PATH environment variable is preferred over the other, which can cause unwanted behavior).

Install libraries at runtime

When you run a console inside the container, you can use apt to install additional libraries. Note that when the container is removed, any changes to the container are lost.

Include libraries and headers into your project folder

You can place the needed libraries and headers in a subdirectory of your project directory, which is mounted into the container. This way you do not need to modify the image or the container at runtime.
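For example, assuming you unpacked a library's headers and binaries into a libs/ subdirectory below your project root (the directory layout and the mosquitto library name are placeholders), the compiler inside the container can pick them up via include and library search paths:

```shell
# Point the cross compiler at headers and libraries bundled in the project tree
arm-linux-gnueabihf-gcc -I./libs/include -L./libs/lib \
    -o app main.c -lmosquitto
```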