
alpine-buildworker


MultiArch Alpine Linux + S6 + Python3 + Buildbot (Worker)


This image serves as the base container for running a Buildbot worker instance that executes build-steps as defined in the master. Also, check out the alpine-buildmaster image for running a master service.

Based on Alpine Linux from the python3 image, with the buildbot-worker (Source) package installed in it.


Get the Image

Pull the image from Docker Hub.

docker pull woahbase/alpine-buildworker
Image Tags

The image is tagged for each of the following architectures: aarch64, armhf, armv7l, and x86_64.

The latest tag is annotated as multiarch, so pulling without specifying an architecture tag should fetch the correct image for your architecture. The same goes for any of the version tags.

Non-x86_64 images used to contain an embedded qemu-user-static binary; this has been redundant for a while and is deprecated starting with our Alpine Linux v3.22 base-image release. See qemu-user-static or the more recent binfmt instead for running multi-arch containers.


Run


Running the container starts the service.

docker run --rm \
  --name docker_buildworker \
  -e BUILDBOT_MASTERADDRESS=your.buildmaster.local \
  -e BUILDBOT_WORKERNAME=buildbot-worker \
  -e BUILDBOT_WORKERPASS=insecurebydefault \
  -v $PWD/buildworker:/home/alpine/buildbot \
  woahbase/alpine-buildworker

  1. BUILDBOT_MASTERADDRESS: (Required) Address of the master node. (Port defaults to 9989)
  2. BUILDBOT_WORKERNAME: (Required) Name of the worker node.
  3. BUILDBOT_WORKERPASS: (Required) Password of the worker node.
  4. The volume mount: (Optional) Path to your buildbot worker configurations root directory, if you need it to persist.
Multi-Arch Support

If you want to run images built for other architectures on the same host (e.g. an x86_64 machine), you will need to have the specific binary format support configured on your host machine before running the image (otherwise you get an exec format error). Here's how,

For recent images, we can use tonistiigi's binfmt image to register binary execution support for the target architecture, like the following,

docker run --rm --privileged tonistiigi/binfmt --install <architecture>

The architecture is that of the image we're trying to run: arm64 for aarch64, arm for both armv7l and armhf, or amd64 for x86_64. See binfmt.
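
If you prefer a small helper for that mapping (hypothetical, not part of the image or makefile), it could look like:

```shell
# Map this repo's architecture tags to the platform names
# tonistiigi/binfmt expects (mapping as described above).
arch_to_binfmt() {
  case "$1" in
    aarch64)      echo arm64 ;;
    armv7l|armhf) echo arm   ;;
    x86_64)       echo amd64 ;;
    *)            echo "unsupported: $1" >&2; return 1 ;;
  esac
}

arch_to_binfmt aarch64   # prints: arm64
# then, e.g.:
#   docker run --rm --privileged tonistiigi/binfmt --install "$(arch_to_binfmt aarch64)"
```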

Previously, multiarch had made it easy for us by packing qemu into an image, so we could just run

docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

However, that image (see qemu-user-static) seems to have fallen behind in updates, and with newer images the binfmt method is preferable.

Now images built for other architectures will also be executable. This is optional though, without the above, you can still run the image that is specifically made to support your host architecture.


Configuration

We can customize the runtime behaviour of the container with the following environment variables.

ENV Vars Default Description
BUILDBOT_HOME /home/alpine/buildbot Default root directory for buildbot configurations.
BUILDBOT_PROJECTNAME buildbot Project name that is prepended to the worker name, e.g. default is buildbot-worker.
BUILDBOT_SETUP_ARGS --force --log-count=2 --log-size=5000 --relocatable These arguments are passed when setting up the worker.
BUILDBOT_SKIP_SETUP unset If true, skips worker setup tasks; useful when you already have your configurations set up, or would like to do it manually.
BUILDBOT_USE_CUSTOM_TACFILE unset Whether to use the custom tacfile provided in the image, which logs to stdout by default; set to a non-empty string (e.g. 1) to enable, otherwise the one generated by the package is used.
BUILDBOT_CUSTOM_TACFILE /defaults/worker.tac Customizable path to tacfile provided in the image.
BUILDBOT_BASEDIR unset Used in the custom tacfile to determine where builder files are stored, defaults to "." when unset (current directory where buildbot.tac exists), or any other directory (must exist).
BUILDBOT_LOGDEST stdout Used in the custom tacfile to determine where logs are sent, can be either of stdout (default), syslog, or file.
BUILDBOT_LOGROTATE_LENGTH 5000 Used in the custom tacfile to determine maximum lines-in-logfile before it is rotated.
BUILDBOT_LOGROTATE_MAXFILES 2 Used in the custom tacfile to determine maximum rotated logfiles that are kept in storage.
BUILDBOT_MASTERADDRESS localhost Address of the master node, required for the worker to connect.
BUILDBOT_MASTERPORT 9989 Port of the master node.
BUILDBOT_PROTOCOL pb Protocol used by the worker when connecting to the master.
BUILDBOT_WORKERNAME ${BUILDBOT_PROJECTNAME}-worker Name of the worker, required for the worker to connect.
BUILDBOT_WORKERPASS insecurebydefault Password of the worker, required for the worker to connect.
BUILDBOT_WORKER_KEEPALIVE 180 Interval at which the worker checks in with the master.
BUILDBOT_WORKER_MAXDELAY 180 If the master is lost, the worker process exits after this amount of time.
BUILDBOT_WORKER_MAXRETRIES 5 If the master is lost, how many retries are attempted before the connection is considered failed.
BUILDBOT_WORKER_USETLS 0 Whether to use TLS for connecting to master, (only when using custom tacfile) requires certificate to be setup on the worker.
BUILDBOT_WORKERINFO_ADMIN docker Sets worker info/admin file.
BUILDBOT_WORKERINFO_HOST ${HOSTNAME} Sets worker info/host file.
BUILDBOT_WORKERINFO_ACCESSURI ssh://${HOSTNAME} Sets worker info/access_uri file.
BUILDBOT_SKIP_CUSTOMIZE unset Skip post-setup customization tasks.
BUILDBOT_SKIP_PERMFIX unset If set to a non-empty string value (e.g. 1), skips ensuring files in ${BUILDBOT_HOME} are owned by ${S6_USER} (this fix-up is enabled by default).
BUILDBOT_ARGS --nodaemon --no_save Customizable arguments passed to the worker service. (Runs as a twisted application instead of calling the buildbot-worker executable)
S6_PIP_PACKAGES empty string Space-separated list of packages to install globally with pip.
S6_PIP_REQUIREMENTS empty string Path to requirements.txt to install globally with pip.
S6_PIP_USER_PACKAGES empty string Space-separated list of packages to install with pip for S6_USER. These are installed in ~/.local/.
S6_PIP_USER_REQUIREMENTS empty string Path to requirements.txt to install with pip for S6_USER.
S6_NEEDED_PACKAGES empty string Space-separated list of extra APK packages to install on start. E.g. "curl git tzdata"
PUID 1000 Id of S6_USER.
PGID 1000 Group id of S6_USER.
S6_USER alpine (Preset) Default non-root user for services to drop privileges to.
S6_USERHOME /home/alpine (Preset) HOME directory for S6_USER.
Did you know?

You can check your own UID/GID by running the command id in a terminal.
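
For instance, to capture your current IDs for use as PUID/PGID (variable names as in the table above):

```shell
# Capture the current user's UID/GID; these are the values
# you'd pass into the container as PUID/PGID.
PUID="$(id -u)"
PGID="$(id -g)"
echo "PUID=${PUID} PGID=${PGID}"
# e.g.: docker run -e PUID="${PUID}" -e PGID="${PGID}" ... woahbase/alpine-buildworker
```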

Also,

  • The env variable BUILDBOT_ROLE determines if you are running a master or worker. This also determines what image you'll be running when used with the makefile. This is baked into the image, so it does not need to be changed unless you know what you're doing.

  • Setup tasks are only run when the buildbot.tac file does not exist and BUILDBOT_SKIP_SETUP is not set. The same goes for setup-specific arguments / environment variables; they are no longer needed after setup is complete.

  • Includes a placeholder script for further customizations before starting processes. Override the shellscript located at /etc/s6-overlay/s6-rc.d/p22-buildbot-customize/run with your custom pre-tasks as needed.

  • The service does not run buildbot-worker, instead calls twistd directly, pass BUILDBOT_ARGS accordingly.

  • Mount the configurations at the BUILDBOT_HOME directory inside the container; by default it is /home/alpine/buildbot. This is optional for workers; however, make sure you have enough CPU power and memory for the workers to do the heavy lifting if required.
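
The setup gating described above can be sketched as follows (an assumption pieced together from the documented defaults, not the image's actual s6 service script; the commented create-worker call mirrors BUILDBOT_SETUP_ARGS):

```shell
# Sketch of the setup gate: setup runs only when buildbot.tac
# is absent AND BUILDBOT_SKIP_SETUP is unset.
BUILDBOT_HOME="${BUILDBOT_HOME:-/home/alpine/buildbot}"

needs_setup() {
  [ ! -f "${BUILDBOT_HOME}/buildbot.tac" ] && [ -z "${BUILDBOT_SKIP_SETUP}" ]
}

if needs_setup; then
  # The real service would invoke something along the lines of:
  #   buildbot-worker create-worker --force --log-count=2 --log-size=5000 --relocatable \
  #     "${BUILDBOT_HOME}" "${BUILDBOT_MASTERADDRESS}:${BUILDBOT_MASTERPORT:-9989}" \
  #     "${BUILDBOT_WORKERNAME}" "${BUILDBOT_WORKERPASS}"
  echo "setup required"
else
  echo "setup skipped"
fi
```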

Stop the container with a timeout (defaults to 2 seconds),

docker stop -t 2 docker_buildworker

Restart the container with

docker restart docker_buildworker

Remove the container (always better to stop it first, and use -f only when needed),

docker rm -f docker_buildworker

Shell access

Get a shell inside an already-running container,

docker exec -it docker_buildworker /bin/bash

Optionally, login as a non-root user, (default is alpine)

docker exec -u alpine -it docker_buildworker /bin/bash

Or set the user/group id, e.g. 1000/1000,

docker exec -u 1000:1000 -it docker_buildworker /bin/bash

Logs

To check logs of a running container in real time

docker logs -f docker_buildworker

As-A-Service

Run the container as a service with the following as reference (and modify it as needed).

With docker-compose (alpine-buildworker.yml)

---
services:
  buildworker:
    container_name: buildworker
    # depends_on:
    #   buildmaster:
    #     condition: service_healthy
    deploy:
      resources:
        limits:
          cpus: '2.00'
          memory: 1024M
      restart_policy:
        condition: on-failure
        delay: 10s
        max_attempts: 5
        window: 120s
    environment:
      # BUILDBOT_PROJECTNAME: ${BUILDWORKER_PROJECTNAME:-buildbot}
      # BUILDBOT_USE_CUSTOM_TACFILE: 1
      # BUILDBOT_SKIP_SETUP: 1
      # BUILDBOT_SKIP_PERMFIX: 1

      BUILDBOT_MASTERADDRESS: your.buildmaster.local
      # BUILDBOT_MASTERPORT: 9989
      # BUILDBOT_PROTOCOL: pb
      BUILDBOT_WORKERNAME: buildbot-worker
      BUILDBOT_WORKERPASS: insecurebydefault
      # BUILDBOT_WORKER_KEEPALIVE: 180
      # BUILDBOT_WORKER_MAXDELAY: 180
      # BUILDBOT_WORKER_MAXRETRIES: 5
      # BUILDBOT_WORKER_USETLS: 0
      # BUILDBOT_WORKERINFO_ADMIN: docker
      # BUILDBOT_WORKERINFO_HOST: buildworker
      # BUILDBOT_WORKERINFO_ACCESSURI: ssh://buildworker

      PUID: ${PUID}
      PGID: ${PGID}
      # TZ: ${TZ}
    # healthcheck:
    #   interval: 2m
    #   retries: 5
    #   start_period: 5m
    #   test:
    #     - CMD-SHELL
    #     - >
    #       wget --quiet --tries=1 --no-check-certificate --spider http://${BUILDBOT_MASTERADDRESS}:8010/ || exit 1
    #   timeout: 10s
    hostname: buildworker
    image: woahbase/alpine-buildworker:${BUILDWORKER_TAG:-latest}
    network_mode: bridge
    ports: []
    volumes:
      - type: bind
        source: ${BUILDWORKER_DIR:?err}
        target: /home/alpine/buildbot
        bind:
          create_host_path: false
      - type: bind
        source: /etc/localtime
        target: /etc/localtime
        read_only: true
        bind:
          create_host_path: false
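
A minimal bring-up using the file above might look like this (note the bind mounts set create_host_path: false, so BUILDWORKER_DIR must already exist):

```shell
# Export the variables the compose file interpolates, then create the
# worker directory so the bind mount does not fail.
export PUID="$(id -u)" PGID="$(id -g)"
export BUILDWORKER_DIR="${PWD}/buildworker"
mkdir -p "${BUILDWORKER_DIR}"
# then:
#   docker compose -f alpine-buildworker.yml up -d
```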

Build Your Own

Feel free to clone (or fork) the repository and customize it for your own usage, build the image for yourself on your own systems, and optionally, push it to your own public (or private) repository.

Here's how...


Setting up


Before we clone the repository, we must have Git, GNU make, and Docker (optionally, with the buildx plugin for multi-platform images) set up on the machine. Also, for multi-platform annotations, we may need to enable the experimental features of Docker.

Now, to get the code,

Clone the repository with,

git clone https://github.com/woahbase/alpine-buildbot
cd alpine-buildbot

To get a list of all available targets, run

make help
Always Check Before You Make!

Did you know, we can check what any make target is going to execute before we actually run it, with

make -n <targetname> <optional args>

Build and Test


To create the image for your architecture, run the build and test target with

make build test ROLE=worker

For building an image that targets another architecture, it is required to specify the ARCH parameter when building. e.g.

make build test ARCH=aarch64 ROLE=worker
make build test ARCH=armhf ROLE=worker
make build test ARCH=armv7l ROLE=worker
make build test ARCH=x86_64 ROLE=worker
Build Parameters

All images have a few common build parameters that can be customized at build time, like

  • ARCH
The target architecture to build for. Defaults to host architecture, auto-detected at build-time if not specified. Also determines if binfmt support is required before build or run and runs the regbinfmt (or inbinfmt for recent images) target automatically. Possible values could be aarch64, armhf, armv7l, or x86_64.
  • BUILDDATE
The date of the build. Can be used to create separate tags for images. (format: yyyymmdd)
  • DOCKERFILE
The dockerfile to use for build. Defaults to the file Dockerfile, but if per-arch dockerfiles exist, (e.g. for x86_64 the filename would be Dockerfile_x86_64) that is used instead.
  • TESTCMD
The command to run for testing the image after build. Runs in a bash shell.
  • VERSION
The version of the app/tool, may need to be preset before starting the build (e.g. for binaries from github releases), or extracted from the image after build (e.g. for APK or pip packages).
  • REGISTRY
The registry to push to, defaults to the Docker Hub Registry (docker.io) or any custom registry that is set via docker configurations. Does not need to be changed for local or test builds, but to override, either pass it by setting an environment variable, or with every make command.
  • ORGNAME
The organization (or user) name under which the image repositories exist, defaults to woahbase. Does not need to be changed for local or test builds, but to override, either pass it by setting an environment variable, or with every make command.

The image may also require custom parameters (like binary architecture). Before you build, check the makefile for a complete list of parameters to see what may (or may not) need to be set.
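
For example (values here are hypothetical), several of these parameters can be combined in a single invocation; BUILDDATE follows the yyyymmdd format noted above:

```shell
# Compose a build date in the expected yyyymmdd format.
BUILDDATE="$(date +%Y%m%d)"
echo "BUILDDATE=${BUILDDATE}"
# then, e.g.:
#   make build test ROLE=worker ARCH=aarch64 BUILDDATE="${BUILDDATE}" ORGNAME=myorg
```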

BuildX and Self-signed certificates

If you're using a private registry (a-la docker distribution server) with self-signed certificates that fail to validate when pulling/pushing images, you will need to configure buildx to allow insecure access to the registry. This is configured via the config.toml file. A sample is provided in the repository; make sure to replace YOUR.PRIVATE.REGISTRY with your own (include port if needed).


Make to Run


Running the image creates a container and either starts a service (for service images) or provides a shell (can be either a root-shell or usershell) to execute commands in, depending on the image. We can run the image with

make run ROLE=worker

But if we just need a root-shell in the container without any fancy pre-tasks (e.g. for debug or to test something bespoke), we can run bash in the container with --entrypoint /bin/bash. This is wrapped in the makefile as

make shell ROLE=worker
Nothing vs All vs Run vs Shell

By default, if make is run without any arguments, it calls the target all. In our case this is usually mapped to the target run (which in turn may be mapped to shell).

There may be more such targets defined as per the usage of the image. Check the makefile for more information.


Push the Image


If the build and test steps finish without any error, and we want to use the image on other machines, the next step is to push the image we built to a container image repository (like Docker Hub). For that, run the push target with

make push ROLE=worker

If the built image targets another architecture then it is required to specify the ARCH parameter when pushing. e.g.

make push ARCH=aarch64 ROLE=worker
make push ARCH=armhf ROLE=worker
make push ARCH=armv7l ROLE=worker
make push ARCH=x86_64 ROLE=worker
Pushing Multiple Tags

With a single make push, we are actually pushing 3 tags of the same image; e.g. for the x86_64 architecture, they are

  • alpine-buildworker:x86_64
The actual image that is built.
  • alpine-buildworker:x86_64_${version}
It is expected that the application is versioned when built or packaged; the version can be specified in the tag, which makes pulling an image by version possible. Usually this is obtained from the parameter VERSION, which by default is set by calling a function to extract the version string from the package installed in the container, or from github releases. Can be skipped by setting the parameter SKIP_VERSIONTAG to a non-empty string value like 1.
  • alpine-buildworker:x86_64_${version}_${builddate}
When building multiple versions of the same image (e.g. for providing fixes or revisions), this ensures that a more recent push does not fully replace a previously pushed image. This way, although the architecture and version tags are replaced, it is possible to roll back to the previously built image by build date (format yyyymmdd). This value is obtained from the BUILDDATE parameter, and if not essential, can be skipped by setting the parameter SKIP_BUILDDATETAG to a non-empty string value like 1.
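
To illustrate the scheme, here's how the three tags compose for x86_64 (the version and build-date values below are hypothetical):

```shell
# Hypothetical values; in practice VERSION and BUILDDATE
# come from the build parameters described earlier.
arch="x86_64"
version="4.2.0"
builddate="20250102"
echo "alpine-buildworker:${arch}"                          # architecture tag
echo "alpine-buildworker:${arch}_${version}"               # version tag
echo "alpine-buildworker:${arch}_${version}_${builddate}"  # build-date tag
```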
Pushing To A Private Registry

If you want to push the image to a custom registry that is not pre-configured on your system, you can set the REGISTRY variable either on the build environment, or as a makefile parameter, and that will be used instead of the default Docker Hub repository. Make sure to have push access set up before you actually push, and include port if needed. E.g.

export REGISTRY=your.private.registry:5000
make build test push

or

make build test push REGISTRY=your.private.registry:5000

Annotate Manifest(s)


For single-architecture images, the above should suffice: the built image can be used on the host machine, and, after a push, on other machines that have the same architecture too.

But for use-cases that need to support multiple architectures, there are a couple more things to be done. We need to create (or amend, if already created beforehand) a manifest for the image(s) we built, then annotate it to map the images to their respective architectures. And for the three tags created above, we need to do it thrice.

Did you know?

We can inspect the manifest of any image by running

docker manifest inspect <imagename>:<optional tag, default is latest>


Tag Latest

Assuming we built the images for all supported architectures, to facilitate pulling the correct image for the architecture, we can create/amend the latest manifest and annotate it to map the tags :aarch64, :armhf, :armv7l, :x86_64 to the tag :latest by running

make annotate_latest ROLE=worker
How it works

First we create or amend the manifest with the tag latest

docker manifest create \
woahbase/alpine-buildworker:latest \
woahbase/alpine-buildworker:aarch64 \
woahbase/alpine-buildworker:armhf \
woahbase/alpine-buildworker:armv7l \
woahbase/alpine-buildworker:x86_64 \
;
docker manifest create --amend \
woahbase/alpine-buildworker:latest \
woahbase/alpine-buildworker:aarch64 \
woahbase/alpine-buildworker:armhf \
woahbase/alpine-buildworker:armv7l \
woahbase/alpine-buildworker:x86_64 \
;

Then annotate the image for each architecture in the manifest with

docker manifest annotate --os linux --arch arm64 \
    woahbase/alpine-buildworker:latest \
    woahbase/alpine-buildworker:aarch64;
docker manifest annotate --os linux --arch arm --variant v6 \
    woahbase/alpine-buildworker:latest \
    woahbase/alpine-buildworker:armhf;
docker manifest annotate --os linux --arch arm --variant v7 \
    woahbase/alpine-buildworker:latest \
    woahbase/alpine-buildworker:armv7l;
docker manifest annotate --os linux --arch amd64 \
    woahbase/alpine-buildworker:latest \
    woahbase/alpine-buildworker:x86_64;

And finally, push it to the repository using

docker manifest push -p woahbase/alpine-buildworker:latest

Tag Version

Next, to facilitate pulling images by version, we create/amend the image-version manifest and annotate it to map the tags :aarch64_${version}, :armhf_${version}, :armv7l_${version}, :x86_64_${version} to the tag :${version} by running

make annotate_version ROLE=worker
How it works

First we create or amend the manifest with the tag ${version}

docker manifest create \
woahbase/alpine-buildworker:${version} \
woahbase/alpine-buildworker:aarch64_${version} \
woahbase/alpine-buildworker:armhf_${version} \
woahbase/alpine-buildworker:armv7l_${version} \
woahbase/alpine-buildworker:x86_64_${version} \
;
docker manifest create --amend \
woahbase/alpine-buildworker:${version} \
woahbase/alpine-buildworker:aarch64_${version} \
woahbase/alpine-buildworker:armhf_${version} \
woahbase/alpine-buildworker:armv7l_${version} \
woahbase/alpine-buildworker:x86_64_${version} \
;

Then annotate the image for each architecture in the manifest with

docker manifest annotate --os linux --arch arm64 \
    woahbase/alpine-buildworker:${version} \
    woahbase/alpine-buildworker:aarch64_${version};
docker manifest annotate --os linux --arch arm --variant v6 \
    woahbase/alpine-buildworker:${version} \
    woahbase/alpine-buildworker:armhf_${version};
docker manifest annotate --os linux --arch arm --variant v7 \
    woahbase/alpine-buildworker:${version} \
    woahbase/alpine-buildworker:armv7l_${version};
docker manifest annotate --os linux --arch amd64 \
    woahbase/alpine-buildworker:${version} \
    woahbase/alpine-buildworker:x86_64_${version};

And finally, push it to the repository using

docker manifest push -p woahbase/alpine-buildworker:${version}

Tag Build-Date

Then, (optionally) we create/amend the ${version}_${builddate} manifest and annotate it to map the tags :aarch64_${version}_${builddate}, :armhf_${version}_${builddate}, :armv7l_${version}_${builddate}, :x86_64_${version}_${builddate} to the tag :${version}_${builddate} by running

make annotate_date ROLE=worker
How it works

First we create or amend the manifest with the tag ${version}_${builddate}

docker manifest create \
woahbase/alpine-buildworker:${version}_${builddate} \
woahbase/alpine-buildworker:aarch64_${version}_${builddate} \
woahbase/alpine-buildworker:armhf_${version}_${builddate} \
woahbase/alpine-buildworker:armv7l_${version}_${builddate} \
woahbase/alpine-buildworker:x86_64_${version}_${builddate} \
;
docker manifest create --amend \
woahbase/alpine-buildworker:${version}_${builddate} \
woahbase/alpine-buildworker:aarch64_${version}_${builddate} \
woahbase/alpine-buildworker:armhf_${version}_${builddate} \
woahbase/alpine-buildworker:armv7l_${version}_${builddate} \
woahbase/alpine-buildworker:x86_64_${version}_${builddate} \
;

Then annotate the image for each architecture in the manifest with

docker manifest annotate --os linux --arch arm64 \
    woahbase/alpine-buildworker:${version}_${builddate} \
    woahbase/alpine-buildworker:aarch64_${version}_${builddate};
docker manifest annotate --os linux --arch arm --variant v6 \
    woahbase/alpine-buildworker:${version}_${builddate} \
    woahbase/alpine-buildworker:armhf_${version}_${builddate};
docker manifest annotate --os linux --arch arm --variant v7 \
    woahbase/alpine-buildworker:${version}_${builddate} \
    woahbase/alpine-buildworker:armv7l_${version}_${builddate};
docker manifest annotate --os linux --arch amd64 \
    woahbase/alpine-buildworker:${version}_${builddate} \
    woahbase/alpine-buildworker:x86_64_${version}_${builddate};

And finally, push it to the repository using

docker manifest push -p woahbase/alpine-buildworker:${version}_${builddate}

That's all folks! Happy containerizing!


Maintenance

Sources at Github. Built and tested at home using Buildbot. Images at Docker Hub.

Maintained (or sometimes a lack thereof?) by WOAHBase.