alpine-pulseaudio
Legacy Image
This image still uses the old-style format for Dockerfiles/makefile recipes, which may (or may not) be compatible with the newer image sources. The container should keep working as expected, but to build new images, a significant part of the code needs to be updated.
Container for Alpine Linux + S6 + PulseAudio + Bluez
This image containerizes the PulseAudio network sound server to set up a central sound service inside a local network; it also runs the BlueZ bluetooth daemon to work with bluetooth speakers or sources. Includes pulsemixer to manage PulseAudio from the CLI.
Based on Alpine Linux from the s6 image, with the pulseaudio package installed in it.
Get the Image¶
Pull the image from Docker Hub.
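For example, pulling the default `x86_64` image:

```shell
docker pull woahbase/alpine-pulseaudio:x86_64
```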
Image Tags
The image is tagged separately for each supported architecture. The `latest` tag is retagged from `x86_64`, so pulling without any tag fetches that image. For any other architecture, specify the tag for that architecture, e.g. for an `armv8` or `aarch64` host it is `alpine-pulseaudio:aarch64`.
Non-`x86_64` builds have embedded `binfmt_misc` support and contain the `qemu-user-static` binary, which also allows running them inside an `x86_64` environment that supports it.
Run¶
Running the container starts the service.
```shell
docker run --rm -it \
  --name docker_pulseaudio \
  -p 4713:4713 \
  --net=host \
  --cap-add NET_ADMIN \
  --device /dev/snd \
  --device /dev/bus/usb \
  -v $PWD/config/pulse:/etc/pulse \
  -v /var/run/dbus:/var/run/dbus \
  -v /etc/localtime:/etc/localtime:ro \
  woahbase/alpine-pulseaudio:x86_64
```
Multi-Arch Support
If you want to run images for other architectures on an x86_64 machine, you will need to have binfmt support configured on your machine before running the image. multiarch has made this easy for us by packaging it into a docker container; just run
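A minimal sketch of that step, assuming the `multiarch/qemu-user-static` image is used to register the binfmt handlers:

```shell
# Registers qemu-user-static binfmt handlers on the host (one-time, per boot)
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
```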
Now images built for other architectures will also be executable. This is optional though; without the above, you can still run the image that is made for your architecture.
Configuration¶
- This image runs PulseAudio as the user `root`, but also has a user `pulse` configured to drop privileges to the passed `PUID`/`PGID`, which is ideal when running it in non-root mode. That way you only need to specify the values at runtime and pass `-u pulse` if need be. (Run `id` in your terminal to see your own `PUID`/`PGID` values.)
- PulseAudio config files are read from `/etc/pulse`. If you have custom cards other than your default sound output jack, you will most likely need to edit or remount this with your own. For example, you can keep the files inside `config/pulse` and mount it as `/etc/pulse` on start.
- The container does not run its own systemd or dbus daemon, so cards might not get detected automatically; the default configuration loads only the default ALSA sink, so you will need to modify the configuration to detect new hardware.
- The default configuration listens on port `4713`. This port needs to be accessible from the devices that send sound. (Check your firewall.)
- Bluetooth configurations are read from `/etc/bluetooth`.
- To persist paired bluetooth configurations, preserve the contents of `/var/lib/bluetooth` by mounting it someplace like `config/devices`.
- DBus can cause permission issues if the host is not configured to allow BlueZ or PulseAudio. Host configuration defaults for these are provided inside `/config/dbus`.
- Any drivers for audio (and/or bluetooth, as on Raspberry Pis) will need to be installed on the host machine.
- Don't forget to set the environment variable `PULSE_SERVER` to your server host on the client machines so that they forward their sound to the server. Check out this link for more information.
- To run only the pulseaudio server without starting bluetooth, set the environment variable `DISABLEBLUETOOTH` to the string `true`.
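As a sketch of the client side (the address `192.168.1.10` below is a placeholder for your actual server host), exporting `PULSE_SERVER` makes PulseAudio clients forward their sound to the server:

```shell
# Placeholder address; substitute your own server's host or IP.
export PULSE_SERVER=tcp:192.168.1.10:4713
# Any PulseAudio client now uses the network server, e.g.:
# paplay /usr/share/sounds/alsa/Front_Center.wav
```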
Stop the container with a timeout (defaults to 2 seconds), restart it, or remove it (it is always better to stop it first, and to use `-f` only when needed most).
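Assuming the container was started with the name `docker_pulseaudio` as in the run example, these map to the usual docker commands:

```shell
docker stop -t 2 docker_pulseaudio   # stop, with a 2-second timeout
docker restart docker_pulseaudio     # restart
docker rm -f docker_pulseaudio       # remove; -f only when needed most
```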
Shell access¶
Get a shell inside an already running container. Optionally, log in as a non-root user (default is `alpine`), or set the user/group ids, e.g. `1000`/`100`.
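Assuming the container name `docker_pulseaudio` from the run example, a shell can be obtained with `docker exec`:

```shell
docker exec -it docker_pulseaudio /bin/bash              # root shell
docker exec -u alpine -it docker_pulseaudio /bin/bash    # as user 'alpine'
docker exec -u 1000:100 -it docker_pulseaudio /bin/bash  # custom uid/gid
```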
Logs¶
To check the logs of a running container in real time, follow them with
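Assuming the container name from the run example:

```shell
docker logs -f docker_pulseaudio
```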
Build Your Own¶
Feel free to clone (or fork) the repository and customize it for your own usage, build the image for yourself on your own systems, and optionally, push it to your own public (or private) repository.
Here's how...
Setting up¶
Before we clone the repository, we must have Git, GNU make, and Docker (optionally, with the buildx plugin for multi-platform images) set up on the machine. Also, for multi-platform annotations, we might need to enable Docker's experimental features.
Now, to get the code, clone the repository with,
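Assuming the sources live under the WOAHBase namespace on GitHub (see the Maintenance section), something like:

```shell
git clone https://github.com/woahbase/alpine-pulseaudio
cd alpine-pulseaudio
```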
Always Check Before You Make!
Did you know, we can check what any make target is going to execute before we actually run it, with
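`make` prints a target's recipe without executing it when given the `-n` (dry-run) flag, e.g. for the `build` target:

```shell
make -n build
```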
Build and Test¶
To create the image for your architecture, run the `build` and `test` targets with
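Assuming the makefile targets named in this section:

```shell
make build test
```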
For building an image that targets another architecture, it is required to specify the `ARCH` parameter when building, e.g.
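For example, to target `aarch64` (assuming that is one of the supported `ARCH` values):

```shell
make ARCH=aarch64 build test
```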
Make to Run¶
Running the image creates a container and either starts a service (for service images) or provides a shell (either a root shell or a user shell) to execute commands in, depending on the image. We can run the image with
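Assuming the default target names:

```shell
make run
```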
But if we just need a root shell in the container without any fancy pre-tasks (e.g. for debugging or to test something bespoke), we can run `bash` in the container with `--entrypoint /bin/bash`. This is wrapped in the makefile as
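Assuming the makefile wraps it as a `shell` target:

```shell
make shell
```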
Nothing vs All vs Run vs Shell
By default, if `make` is run without any arguments, it calls the target `all`. In our case this is usually mapped to the target `run` (which in turn may be mapped to `shell`).
There may be more such targets defined as per the usage of the image. Check the makefile for more information.
Push the Image¶
If the build and test steps finish without any error, and we want to use the image on other machines, the next step is to push the image we built to a container image repository (like Docker Hub). For that, run the `push` target with
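With the makefile's `push` target:

```shell
make push
```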
If the built image targets another architecture, it is required to specify the `ARCH` parameter when pushing, e.g.
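For example (assuming `aarch64` is among the supported values):

```shell
make ARCH=aarch64 push
```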
That's all folks! Happy containerizing!
Maintenance¶
Sources at Github. Images at Docker Hub.
Maintained (or sometimes a lack thereof?) by WOAHBase.