
alpine-nginx


MultiArch Alpine Linux + S6 + NGINX Web Server/Reverse Proxy.


This image serves as a standalone web/reverse proxy server, or as the base image for applications / services that use or require NGINX.

Based on Alpine Linux from the s6 image, with the nginx package installed. Also includes the stream and http-headers-more modules for those who need them.
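
If you need them, these modules can be loaded near the top of your nginx.conf. A minimal sketch (the .so paths assume Alpine's standard nginx module location and package layout):

# in /config/nginx/nginx.conf
load_module /usr/lib/nginx/modules/ngx_stream_module.so;
load_module /usr/lib/nginx/modules/ngx_http_headers_more_filter_module.so;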


Get the Image

Pull the image from Docker Hub.

docker pull woahbase/alpine-nginx
Image Tags

The image is tagged for each of the following architectures,

   aarch64

   armhf

   armv7l

   x86_64

The latest tag is annotated as multi-arch, so pulling without a tag should fetch the correct image for your architecture. The same goes for any of the version tags.
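
For example, on an x86_64 host, either of the following should fetch the same image,

docker pull woahbase/alpine-nginx
docker pull woahbase/alpine-nginx:x86_64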

Non-x86_64 builds have embedded binfmt_misc support and contain the qemu-user-static binary, which also allows running them inside an x86_64 environment that supports it.


Run

Running the container starts the service.

docker run --rm \
  --name docker_nginx \
  -p 80:80 \
  -p 443:443 \
  -v $PWD/config:/config \
woahbase/alpine-nginx
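
Once it is up, a quick smoke test could look like the following (a sketch, assuming the default configs; -k because the default certificate is self-signed),

curl -I http://localhost/
curl -kI https://localhost/
curl -ku admin:insecurebydefault https://localhost/secure
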
Multi-Arch Support

If you want to run images built for other architectures on an x86_64 machine, you will need binfmt support configured on your machine before running the image. multiarch has made this easy for us by packaging it into a docker container; just run

docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

Now images built for other architectures will also be executable. This is optional though; without the above, you can still run the image built for your architecture.


Configuration

We can customize the runtime behaviour of the container with the following environment variables.

ENV Var | Default | Description
ROOTDIR | /config | Root directory for nginx configs or webserver files.
NGINXDIR | /config/nginx | Default directory for nginx configurations.
CONFSDIR | /config/nginx/conf.d | For shared configuration snippets.
SITESDIR | /config/nginx/http.d | For webserver host configuration files.
STREAMSDIR | /config/nginx/stream.d | For stream configuration files. (Optional; requires the stream module to be enabled in configurations.)
NGINX_NO_HTTP | unset | Set to 'true' to disable the default http(80) conf file; has no effect if custom site-confs exist.
NGINX_NO_HTTPS | unset | Set to 'true' to disable the default https(443) conf file, and default self-signed certificate generation on first run.
KEYDIR | /config/keys | Default certificate/private-key location.
PKEYFILE | /config/keys/private.key | Default path to the private key. (Make sure site-confs reflect the same.)
CERTFILE | /config/keys/certificate.crt | Default path to the certificate. (Make sure site-confs reflect the same.)
SSLSUBJECT | see the SSL Subject section below | Default SSL subject for self-signed certificate generation on first run.
NGINX_NO_CERTGEN | unset | Set to 'true' to disable default self-signed certificate generation on first run.
HTPASSWDFILE | /config/keys/.htpasswd | Default path to .htpasswd. (Make sure site-confs reflect the same.)
WEBADMIN | admin | Default admin user in .htpasswd. (Not changed if the file already exists.)
PASSWORD | insecurebydefault | Default admin user password in .htpasswd.
NGINX_NO_HTPASSWD | unset | Set to 'true' to disable default htpasswd generation on first run.
NGINX_SKIP_FASTCGI_PARAM | unset | If set to true, skips appending custom fastcgi_param configuration. (since 1.26.2)
WEBDIR | /config/www | For serving files, e.g. static HTML or dynamic scripts (e.g. with PHP).
NGINX_ADD_DEFAULT_INDEX | unset | If set to true and no files exist inside WEBDIR, a static index.html is copied into it. Useful for testing. (since 1.26.2; previously NGINX_SKIP_DEFAULT_INDEX, which was enabled by default)
NGINX_PERMFIX_WEBDIR | unset | If set to true, ensures files inside $WEBDIR are owned by and accessible to S6_USER. (since 1.26.2)
S6_NEEDED_PACKAGES | empty string | Space-separated list of extra APK packages to install on start, e.g. "curl git tzdata".
PUID | 1000 | UID of S6_USER.
PGID | 100 | GID of S6_USER.
S6_USER | alpine | (Preset) Default non-root user for services to drop privileges to.
S6_USERHOME | /home/alpine | (Preset) HOME directory for S6_USER.
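
For example, a minimal sketch that overrides a few of these at runtime (values are illustrative),

docker run --rm \
  --name docker_nginx \
  -e PUID=1000 -e PGID=100 \
  -e WEBADMIN=admin -e PASSWORD='notsoinsecure' \
  -e S6_NEEDED_PACKAGES="curl tzdata" \
  -p 80:80 \
  -p 443:443 \
  -v $PWD/config:/config \
woahbase/alpine-nginx
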
Did you know?

You can check your own UID/GID by running the command id in a terminal.

Also,

  • Default configs set up a static site at / by copying /defaults/index.html into $WEBDIR (default /config/www/). Mount /config/ locally to persist modifications (or your webapps). NGINX configurations live at /config/nginx, and vhosts at /config/nginx/http.d/. For JSON-indexable storage (requires custom configuration), mount the data partition at /storage/.

  • Includes two default site configurations (for http:80 and https:443) in the /defaults directory, used as starter configurations if none exist. These are in no way intended for production use; you are better off rolling your own.

  • Includes a placeholder script for further customizations before starting processes. Override the shell script located at /etc/s6-overlay/s6-rc.d/p12-nginx-customize/run with your custom pre-tasks as needed.

  • Default configs set up an https- and auth-protected web location at /secure.

  • If you're proxying multiple containers at the same host, or reverse-proxying multiple hosts through the same container, you may need to add --net=host and/or add entries in your firewall to allow traffic. (A sketch of such a vhost follows this list.)
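
As a sketch of that last point, a hypothetical vhost dropped into $SITESDIR that proxies a local container (hostname and port are placeholders),

cat > $PWD/config/nginx/http.d/myapp.conf <<'EOF'
server {
    listen 80;
    server_name myapp.example.com;
    location / {
        # upstream address/port of the proxied container (illustrative)
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF

Then validate and reload the configuration as shown further below.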

SSL Subject

Default configs generate a 4096-bit self-signed certificate. By default the value of SSLSUBJECT is

/C=US/ST=NY/L=EXAMPLE/O=EXAMPLE/OU=WOAHBase/CN=*/[email protected]
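
The generation is roughly equivalent to the following openssl invocation (a sketch; the exact flags and validity period used inside the image may differ),

openssl req -x509 -nodes -days 3650 \
  -newkey rsa:4096 \
  -keyout "$PKEYFILE" \
  -out "$CERTFILE" \
  -subj "$SSLSUBJECT"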

Did you know?

To validate the configuration when modified, we could just get a debug shell into the container and run

nginx -c /config/nginx/nginx.conf -t

And if that is ok, we could reload the configuration without stopping nginx with

nginx -c /config/nginx/nginx.conf -s reload

Stop the container with a timeout (defaults to 2 seconds),

docker stop -t 2 docker_nginx

Restart the container with

docker restart docker_nginx

Remove the container (always better to stop it first; use -f only when needed most),

docker rm -f docker_nginx

Shell access

Get a shell inside an already-running container,

docker exec -it docker_nginx /bin/bash

Optionally, login as a non-root user, (default is alpine)

docker exec -u alpine -it docker_nginx /bin/bash

Or set the user/group ids, e.g. 1000:100,

docker exec -u 1000:100 -it docker_nginx /bin/bash

Logs

To check the logs of a running container in real time,

docker logs -f docker_nginx

As-A-Service

Run the container as a service with the following as reference (and modify it as needed).

With docker-compose (alpine-nginx.yml)

---
services:
  nginx:
    container_name: nginx
    deploy:
      # mode: global
      resources:
        limits:
          cpus: '2.00'
          memory: 512M
      restart_policy:
        condition: on-failure
        delay: 10s
        max_attempts: 5
        window: 120s
    environment:
      PUID: ${PUID:-1000}
      PGID: ${PGID:-100}
      # TZ: ${TZ}
      # WEBADMIN: ${WEBADMIN}
      # PASSWORD: ${WEBPASSWORD}
      # HEALTHCHECK_URL: http://localhost/status  ## requires stub_status
      # CERTFILE: /config/keys/certificate.crt
      # PKEYFILE: /config/keys/private.key
      # HTPASSWDFILE: /config/keys/.htpasswd
    # healthcheck:
    #   interval: 2m
    #   retries: 5
    #   start_period: 5m
    #   test:
    #     - CMD-SHELL
    #     - >
    #       wget --quiet --tries=1 --no-check-certificate --spider
    #       ${HEALTHCHECK_URL:-"http://localhost:80/"} || exit 1
    #   timeout: 10s
    hostname: nginx
    image: woahbase/alpine-nginx:${NGINX_TAG:-latest}
    network_mode: bridge
    ports:
      - protocol: tcp
        host_ip: 0.0.0.0
        published: 443
        target: 443
      - protocol: tcp
        host_ip: 0.0.0.0
        published: 80
        target: 80
    volumes:
      - type: bind
        source: ${NGINX_DIR:?err}/config
        target: /config
        bind:
          create_host_path: true
      # - type: bind
      #   source: ${CERTIFICATE_DIR:?err}
      #   target: /config/keys
      #   bind:
      #     create_host_path: false
      # - type: bind
      #   source: /storage
      #   target: /storage
      #   bind:
      #     create_host_path: true
      - type: bind
        source: /etc/localtime
        target: /etc/localtime
        read_only: true
        bind:
          create_host_path: false
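
Then, with NGINX_DIR pointing at the directory that should hold the config, bring the service up with, e.g.

export NGINX_DIR=$PWD
docker compose -f alpine-nginx.yml up -d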

With HashiCorp Nomad (alpine-nginx.hcl)

variables {
  dc   = "dc1" # to load the dc-local config file
  pgid = 100   # gid for docker
  puid = 1000  # uid for docker
  version = "1.26.2"
}
# locals { var = yamldecode(file("${var.dc}.vars.yml")) } # load dc-local config file

job "nginx" {
  datacenters = [var.dc]
  # namespace   = local.var.namespace
  priority    = 70
  # region      = local.var.region
  type        = "service"

  constraint { distinct_hosts = true }

  # vault { policies = ["nomad-kv-readonly"] }

  group "docker" {
    count = 1

    restart {
      attempts = 2
      interval = "2m"
      delay    = "15s"
      mode     = "fail"
    }
    update {
      max_parallel     = 1
      min_healthy_time = "10s"
      healthy_deadline = "3m"
      auto_revert      = false
    }

    service {
      name        = "${NOMAD_JOB_NAME}-http"
      port        = "http"
      tags        = ["ins${NOMAD_ALLOC_INDEX}", attr.unique.hostname]
      canary_tags = ["canary${NOMAD_ALLOC_INDEX}"]
      check {
        name      = "${NOMAD_JOB_NAME}@${attr.unique.hostname}:${NOMAD_HOST_PORT_http}"
        type      = "tcp"
        # path      = "/status"
        interval  = "60s"
        timeout   = "10s"
      }
      check_restart {
        limit     = 3
        grace     = "10s"
      }
    }
    service {
      name        = "${NOMAD_JOB_NAME}-https"
      port        = "https"
      tags        = ["ins${NOMAD_ALLOC_INDEX}", attr.unique.hostname]
      canary_tags = ["canary${NOMAD_ALLOC_INDEX}"]
      check {
        name      = "${NOMAD_JOB_NAME}@${attr.unique.hostname}:${NOMAD_HOST_PORT_https}"
        type      = "tcp"
        # path      = "/status"
        interval  = "60s"
        timeout   = "10s"
      }
      check_restart {
        limit     = 3
        grace     = "10s"
      }
    }

    ephemeral_disk { size = 128 } # MB
    network {
      # dns { servers = local.var.dns_servers }
      port "http"  { static = 80  }
      port "https" { static = 443 }
    }
    volume "nomad-nginx-data" {
      type      = "host"
      read_only = false
      source    = "nomad-nginx-data"
    }

    task "nginx" {
      driver = "docker"

      config {
        healthchecks { disable = true }
        hostname     = NOMAD_JOB_NAME
        image        = "woahbase/alpine-nginx:${var.version}"
        network_mode = "host"
        ports        = ["http", "https"]

        logging {
          type = "journald"
          config {
            mode = "non-blocking"
            tag = NOMAD_JOB_NAME
          }
        }

        # mount {
        #   type     = "bind"
        #   source   = "secrets/keys"
        #   target   = "/config/keys"
        #   readonly = true
        # }

        # mount {
        #   type     = "bind"
        #   source   = "local/nginx"
        #   target   = "/config/nginx"
        #   readonly = true
        # }

        mount {
          type     = "bind"
          source   = "/etc/localtime"
          target   = "/etc/localtime"
          readonly = true
        }
      }

      volume_mount {
        # ensure policies allow vault-generated-token to read-write to the volume
        volume      = "nomad-nginx-data"
        destination = "/config"
        read_only   = false
      }

      env {
        PGID = var.pgid
        PUID = var.puid
        # TZ   = local.var.tz
      }

      resources {
        cpu    = 256 # MHz
        memory = 256 # MB
      }

      # template {
      #   destination = "secrets/env"
      #   data        = <<-EOE
      #     CERTFILE=/config/keys/certificate.crt
      #     PKEYFILE=/config/keys/private.key
      #     HTPASSWDFILE=/config/keys/.htpasswd
      #
      #     {{ with secret "kv/data/nomad/${var.dc}/nginx" }}
      #     USERNAME={{ .Data.data.username }}
      #     PASSWORD={{ .Data.data.password }}
      #     {{ end }}
      #   EOE
      #   change_mode = "restart"
      #   env         = true
      #   perms       = "444"
      #   error_on_missing_key = true
      # }

      # template {
      #   destination   = "secrets/keys/certificate.crt"
      #   data          = <<-EOP
      #     {{ with secret "kv/data/nomad/${var.dc}/certificates/selfsigned" -}}
      #     {{   index .Data.data "certificate.crt"}}
      #     {{- end }}
      #   EOP
      #   change_mode   = "script"
      #   change_script { command = "/config/nginx/validate-n-reload.sh" }
      #   perms         = "444"
      #   error_on_missing_key = true
      # }

      # template {
      #   destination   = "secrets/keys/private.key"
      #   data          = <<-EOP
      #     {{ with secret "kv/data/nomad/${var.dc}/certificates/selfsigned" -}}
      #     {{   index .Data.data "private.key"}}
      #     {{- end }}
      #   EOP
      #   change_mode   = "script"
      #   change_script { command = "/config/nginx/validate-n-reload.sh" }
      #   perms         = "444"
      #   error_on_missing_key = true
      # }

      # template {
      #   destination   = "secrets/keys/.htpasswd"
      #   data          = <<-EOP
      #     {{ with secret "kv/data/nomad/${var.dc}/nginx" -}}
      #     {{   index .Data.data "htpasswd"}}
      #     {{- end }}
      #   EOP
      #   change_mode   = "script"
      #   change_script { command = "/config/nginx/validate-n-reload.sh" }
      #   perms         = "444"
      #   error_on_missing_key = true
      # }

      # template {
      #   destination   = "local/nginx/nginx.conf"
      #   data          = <<-EOC
      #     {{ key "nomad/${var.dc}/nginx/nginx.conf" }}
      #   EOC
      #   change_mode   = "script"
      #   change_script { command = "/config/nginx/validate-n-reload.sh" }
      #   perms         = "644"
      #   error_on_missing_key = true
      # }

      # template {
      #   destination   = "local/nginx/http.d/http"
      #   data          = <<-EOC
      #     {{ key "nomad/${var.dc}/nginx/http" }}
      #   EOC
      #   change_mode   = "script"
      #   change_script { command = "/config/nginx/validate-n-reload.sh" }
      #   perms         = "644"
      #   error_on_missing_key = true
      # }

      # template {
      #   destination   = "local/nginx/http.d/https"
      #   data          = <<-EOC
      #     {{ key "nomad/${var.dc}/nginx/https" }}
      #   EOC
      #   change_mode   = "script"
      #   change_script { command = "/config/nginx/validate-n-reload.sh" }
      #   perms         = "644"
      #   error_on_missing_key = true
      # }

      # template {
      #   destination   = "local/nginx/http.d/upstream"
      #   data          = <<-EOT
      #     {{- /* key "nomad/${var.dc}/nginx/http-upstream" */ -}}
      #     {{- /* defaults params for upstreams */ -}}
      #     {{- $DP := "fail_timeout=10s max_fails=3 weight=1;" -}}
      #
      #     {{- /* multiple instances set ip_hash, optionally 'least_conn;' */ -}}
      #     {{- $MU := "ip_hash;" -}}
      #
      #     {{- /* when service lost redirect to blackhole, dont fail nginx */ -}}
      #     {{- $NC := "server 127.0.0.1:65535 down;" -}}
      #
      #     # hashicorp service proxies
      #     upstream sv_consul { {{range service "consul|passing"}}server {{.Address}}:8500      {{$DP}} {{else}} {{$NC}} {{end}} {{$MU}} }
      #     upstream sv_nomad  { {{range service "http.nomad"    }}server {{.Address}}:{{.Port}} {{$DP}} {{else}} {{$NC}} {{end}} {{$MU}} }
      #     upstream sv_vault  { {{range service "active.vault"  }}server {{.Address}}:{{.Port}} {{$DP}} {{else}} {{$NC}} {{end}} {{$MU}} }
      #
      #     # add your own services here
      #   EOT
      #   change_mode   = "script"
      #   change_script { command = "/config/nginx/validate-n-reload.sh" }
      #   perms         = "644"
      # }

      # template {
      #   destination = "local/nginx/validate-n-reload.sh"
      #   data        = <<-EOS
      #     #!/bin/bash
      #     ###
      #     ## reload nginx only if config valid
      #     ###
      #     nginx -c /config/nginx/nginx.conf -t \
      #     && \
      #     nginx -c /config/nginx/nginx.conf -s reload \
      #   EOS
      #   change_mode = "noop"
      #   perms       = "755"
      # }
    }
  }
}
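
Assuming the client node exposes the host volume nomad-nginx-data, the job can then be planned and run with,

nomad job plan alpine-nginx.hcl
nomad job run alpine-nginx.hcl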

Build Your Own

Feel free to clone (or fork) the repository and customize it for your own usage, build the image for yourself on your own systems, and optionally, push it to your own public (or private) repository.

Here's how...


Setting up


Before we clone the repository, we must have Git, GNU make, and Docker (optionally with the buildx plugin for multi-platform images) set up on the machine. Also, for multi-platform annotations, we may need to enable the experimental features of Docker.

Now, to get the code,

Clone the repository with,

git clone https://github.com/woahbase/alpine-nginx
cd alpine-nginx

To get a list of all available targets, run

make help
Always Check Before You Make!

Did you know we can check what any make target is going to execute before actually running it, with

make -n <targetname> <optional args>

Build and Test


To create the image for your architecture, run the build and test target with

make build test 

To build an image targeting another architecture, specify the ARCH parameter, e.g.

make build test ARCH=aarch64 
make build test ARCH=armhf 
make build test ARCH=armv7l 
make build test ARCH=x86_64 
Build Parameters

All images have a few common build parameters that can be customized at build time, like

  • ARCH
The target architecture to build for. Defaults to host architecture, auto-detected at build-time if not specified. Also determines if binfmt support is required before build or run and runs the regbinfmt target automatically. Possible values are aarch64, armhf, armv7l, and x86_64.
  • BUILDDATE
The date of the build. Can be used to create separate tags for images. (format: yyyymmdd)
  • DOCKERFILE
The dockerfile to use for build. Defaults to the file Dockerfile, but if per-arch dockerfiles exist, (e.g. for x86_64 the filename would be Dockerfile_x86_64) that is used instead.
  • TESTCMD
The command to run for testing the image after build. Runs in a bash shell.
  • VERSION
The version of the app/tool, may need to be preset before starting the build (e.g. for binaries from github releases), or extracted from the image after build (e.g. for APK or pip packages).
  • REGISTRY
The registry to push to, defaults to the Docker Hub Registry (docker.io) or any custom registry that is set via docker configurations. Does not need to be changed for local or test builds, but to override, either pass it by setting an environment variable, or with every make command.
  • ORGNAME
The organization (or user) name under which the image repositories exist, defaults to woahbase. Does not need to be changed for local or test builds, but to override, either pass it by setting an environment variable, or with every make command.

The image may also require custom parameters (like binary architecture). Before you build, check the makefile for a complete list of parameters to see what may (or may not) need to be set.
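
For example, a build combining a few of these parameters (values are illustrative),

make build test ARCH=aarch64 VERSION=1.26.2 BUILDDATE=20240101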

BuildX and Self-signed certificates

If you're using a private registry (a-la docker distribution server) with self-signed certificates that fail to validate when pulling/pushing images, you will need to configure buildx to allow insecure access to the registry. This is configured via the config.toml file. A sample is provided in the repository; make sure to replace YOUR.PRIVATE.REGISTRY with your own (include the port if needed).
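
A sketch of the relevant stanza, and of pointing a buildx builder at it (registry address is a placeholder; prefer the sample file from the repository),

# config.toml
# [registry."YOUR.PRIVATE.REGISTRY:5000"]
#   http = true      # allow a plain-http registry
#   insecure = true  # skip TLS verification for self-signed certs

docker buildx create --use --name insecure-builder --config config.toml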


Make to Run


Running the image creates a container and either starts a service (for service images) or provides a shell (either a root shell or a user shell) to execute commands in, depending on the image. We can run the image with

make run 

But if we just need a root shell in the container without any fancy pre-tasks (e.g. for debugging or to test something bespoke), we can run bash in the container with --entrypoint /bin/bash. This is wrapped in the makefile as

make shell 
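
Under the hood, that is roughly equivalent to (mounts as in the run example earlier),

docker run --rm -it \
  --entrypoint /bin/bash \
  -v $PWD/config:/config \
woahbase/alpine-nginx
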
Nothing vs All vs Run vs Shell

By default, if make is run without any arguments, it calls the target all. In our case this is usually mapped to the target run (which in turn may be mapped to shell).

There may be more such targets defined as per the usage of the image. Check the makefile for more information.


Push the Image


If the build and test steps finish without any error, and we want to use the image on other machines, the next step is to push the image we built to a container image registry (like Docker Hub). For that, run the push target with

make push 

If the built image targets another architecture, specify the ARCH parameter when pushing, e.g.

make push ARCH=aarch64 
make push ARCH=armhf 
make push ARCH=armv7l 
make push ARCH=x86_64 
Pushing Multiple Tags

With a single make push, we actually push 3 tags of the same image; e.g. for the x86_64 architecture, they are

  • alpine-nginx:x86_64
The actual image that is built.
  • alpine-nginx:x86_64_(version)
It is expected that the application is versioned when built or packaged; specifying it in the tag makes pulling an image by version possible. Usually this is obtained from the parameter VERSION, which by default is set by calling a function that extracts the version string from the package installed in the container, or from GitHub releases. Can be skipped by setting the parameter SKIP_VERSIONTAG to a non-empty value like 1.
  • alpine-nginx:x86_64_(version)_(builddate)
When building multiple versions of the same image (e.g. for providing fixes or revisions), this ensures that a more recent push does not fully replace a previously pushed image. This way, although the architecture and version tags are replaced, it is possible to roll back to the previously built image by build date (format yyyymmdd). This value is obtained from the BUILDDATE parameter and, if not essential, can be skipped by setting the parameter SKIP_BUILDDATETAG to a non-empty value like 1.
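
For example, to push only the architecture tag, skipping the other two,

make push SKIP_VERSIONTAG=1 SKIP_BUILDDATETAG=1
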
Pushing To A Private Registry

If you want to push the image to a custom registry that is not pre-configured on your system, you can set the REGISTRY variable either in the build environment or as a makefile parameter, and it will be used instead of the default Docker Hub registry. Make sure to have push access set up before you actually push, and include the port if needed. E.g.

export REGISTRY=your.private.registry:5000
make build test push

or

make build test push REGISTRY=your.private.registry:5000

Annotate Manifest(s)


For single-architecture images, the above should suffice; the built image can be used on the host machine and, after a push, on other machines of the same architecture too.

But for use-cases that need to support multiple architectures, there are a couple more things to be done. We need to create (or amend, if already created beforehand) a manifest for the image(s) we built, then annotate it to map the images to their respective architectures. And for our three tags created above, we need to do it thrice.

Did you know?

We can inspect the manifest of any image by running

docker manifest inspect <imagename>:<optional tag, default is latest>


Tag Latest

Assuming we built the images for all supported architectures, to facilitate pulling the correct image for the architecture, we can create/amend the latest manifest and annotate it to map the tags :aarch64, :armhf, :armv7l, :x86_64 to the tag :latest by running

make annotate_latest 
How it works

First we create or amend the manifest with the tag latest

docker manifest create \
woahbase/alpine-nginx:latest \
woahbase/alpine-nginx:aarch64 \
woahbase/alpine-nginx:armhf \
woahbase/alpine-nginx:armv7l \
woahbase/alpine-nginx:x86_64 \
;
docker manifest create --amend \
woahbase/alpine-nginx:latest \
woahbase/alpine-nginx:aarch64 \
woahbase/alpine-nginx:armhf \
woahbase/alpine-nginx:armv7l \
woahbase/alpine-nginx:x86_64 \
;

Then annotate the image for each architecture in the manifest with

docker manifest annotate --os linux --arch arm64 \
    woahbase/alpine-nginx:latest \
    woahbase/alpine-nginx:aarch64;
docker manifest annotate --os linux --arch arm --variant v6 \
    woahbase/alpine-nginx:latest \
    woahbase/alpine-nginx:armhf;
docker manifest annotate --os linux --arch arm --variant v7 \
    woahbase/alpine-nginx:latest \
    woahbase/alpine-nginx:armv7l;
docker manifest annotate --os linux --arch amd64 \
    woahbase/alpine-nginx:latest \
    woahbase/alpine-nginx:x86_64;

And finally, push it to the repository using

docker manifest push -p woahbase/alpine-nginx:latest

Tag Version

Next, to facilitate pulling images by version, we create/amend the image-version manifest and annotate it to map the tags :aarch64_(version), :armhf_(version), :armv7l_(version), :x86_64_(version) to the tag :(version) by running

make annotate_version 
How it works

First we create or amend the manifest with the tag (version)

docker manifest create \
woahbase/alpine-nginx:(version) \
woahbase/alpine-nginx:aarch64_(version) \
woahbase/alpine-nginx:armhf_(version) \
woahbase/alpine-nginx:armv7l_(version) \
woahbase/alpine-nginx:x86_64_(version) \
;
docker manifest create --amend \
woahbase/alpine-nginx:(version) \
woahbase/alpine-nginx:aarch64_(version) \
woahbase/alpine-nginx:armhf_(version) \
woahbase/alpine-nginx:armv7l_(version) \
woahbase/alpine-nginx:x86_64_(version) \
;

Then annotate the image for each architecture in the manifest with

docker manifest annotate --os linux --arch arm64 \
    woahbase/alpine-nginx:(version) \
    woahbase/alpine-nginx:aarch64_(version);
docker manifest annotate --os linux --arch arm --variant v6 \
    woahbase/alpine-nginx:(version) \
    woahbase/alpine-nginx:armhf_(version);
docker manifest annotate --os linux --arch arm --variant v7 \
    woahbase/alpine-nginx:(version) \
    woahbase/alpine-nginx:armv7l_(version);
docker manifest annotate --os linux --arch amd64 \
    woahbase/alpine-nginx:(version) \
    woahbase/alpine-nginx:x86_64_(version);

And finally, push it to the repository using

docker manifest push -p woahbase/alpine-nginx:(version)

Tag Build-Date

Then, (optionally) we create/amend the (version)_(builddate) manifest and annotate it to map the tags :aarch64_(version)_(builddate), :armhf_(version)_(builddate), :armv7l_(version)_(builddate), :x86_64_(version)_(builddate) to the tag :(version)_(builddate) by running

make annotate_date 
How it works

First we create or amend the manifest with the tag (version)_(builddate)

docker manifest create \
woahbase/alpine-nginx:(version)_(builddate) \
woahbase/alpine-nginx:aarch64_(version)_(builddate) \
woahbase/alpine-nginx:armhf_(version)_(builddate) \
woahbase/alpine-nginx:armv7l_(version)_(builddate) \
woahbase/alpine-nginx:x86_64_(version)_(builddate) \
;
docker manifest create --amend \
woahbase/alpine-nginx:(version)_(builddate) \
woahbase/alpine-nginx:aarch64_(version)_(builddate) \
woahbase/alpine-nginx:armhf_(version)_(builddate) \
woahbase/alpine-nginx:armv7l_(version)_(builddate) \
woahbase/alpine-nginx:x86_64_(version)_(builddate) \
;

Then annotate the image for each architecture in the manifest with

docker manifest annotate --os linux --arch arm64 \
    woahbase/alpine-nginx:(version)_(builddate) \
    woahbase/alpine-nginx:aarch64_(version)_(builddate);
docker manifest annotate --os linux --arch arm --variant v6 \
    woahbase/alpine-nginx:(version)_(builddate) \
    woahbase/alpine-nginx:armhf_(version)_(builddate);
docker manifest annotate --os linux --arch arm --variant v7 \
    woahbase/alpine-nginx:(version)_(builddate) \
    woahbase/alpine-nginx:armv7l_(version)_(builddate);
docker manifest annotate --os linux --arch amd64 \
    woahbase/alpine-nginx:(version)_(builddate) \
    woahbase/alpine-nginx:x86_64_(version)_(builddate);

And finally, push it to the repository using

docker manifest push -p woahbase/alpine-nginx:(version)_(builddate)

That's all folks! Happy containerizing!


Maintenance

Sources at GitHub. Built and tested at home using Buildbot. Images at Docker Hub.

Maintained (or sometimes a lack thereof?) by WOAHBase.