inotifywait is a simple utility which blocks, waiting for specified files or directories to change. This turns out to be quite useful as a signalling interface between a running container and an external configuration management system.
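As a minimal standalone illustration of that blocking behaviour (assuming inotify-tools is installed; the file itself is just a throwaway created with mktemp):

```shell
#!/bin/bash
# Create a throwaway file to watch
FLAG=$(mktemp)

# Simulate an external actor touching the file a second later
( sleep 1; touch "$FLAG" ) &

# Blocks here until the touch updates the file's attributes
inotifywait -q -e attrib "$FLAG"
echo "flag file changed"

rm -f "$FLAG"
```

The script sits at the inotifywait line until the background `touch` fires, then carries on.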
Recently I came across a problem where I needed to restart a process once its configuration was updated. This was previously (part of) an application that ran on a Debian EC2 instance, where it was simply restarted through systemd by Salt config management states, which would template the desired configuration and then restart the service with systemctl. While it is possible to run process managers like systemd inside a container, there’s a lot to be said for avoiding init systems in containers.
One snag with containerising something like this is that it doesn’t quite fit the ‘ideal’ containerised software model, where stateless containers essentially run one process and their configuration is applied prior to, or during, container startup, using volume mounts, ConfigMaps, lifecycle hooks, or init containers. Typically the configuration also remains the same for the lifetime of the container; should it need to change, you’d simply toss the container(s) and replace them.
For various external reasons, we opted to stick with Salt for our config management. This meant I needed a way for Salt to signal to the container that something had changed and that the process - OpenVPN in this case - needed to restart. inotifywait is a pretty neat way of doing this: rather than run the target process directly from the Dockerfile, run it from an entrypoint.sh script (already very commonly done), which gives the flexibility to define extra steps, including wrapping the process within an inotifywait loop.
Salt itself requires a minion process to run on each ‘host’ to be managed; this is the process that executes the actions needed to reach the defined states. Using it to touch a file that inotifywait is watching allows an external system to remotely ‘trigger’ the script actions within the container.
So starting with the Dockerfile, this simplified example shows a few setup steps that ultimately just run an entrypoint script:
```dockerfile
FROM debian:stretch-slim

# Install packages
RUN apt update -y && apt upgrade -y
RUN apt install -y dumb-init procps openvpn bash nmap netcat inotify-tools

# Install salt-minion
COPY install-salt-minion.sh /tmp/.
RUN /tmp/install-salt-minion.sh

# Copy the entrypoint script and make it executable
COPY entrypoint.sh /
RUN chmod +x /entrypoint.sh

# Use yelp/dumb-init to streamline the PID1 process handling
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/entrypoint.sh"]
```
The entrypoint script does a few things. Firstly, it starts the salt-minion daemon, so that the team can continue using the salt-master CLI (in a container or otherwise). Note that there’s nothing really to catch the salt-minion here, so it’s not very robust; the only real way of knowing something is wrong is through the logs, or a lack of keep-alive checks from the Salt master.
The salt-minion config, ID, and key pair are mapped in with ConfigMaps and Secrets, so we won’t have to re-accept keys if the pod restarts.
```shell
#!/bin/bash

# start the minion daemon to allow live changes from the master
salt-minion --daemon

# redirect minion logs to allow Docker/k8s log collection
tail -n 0 -q -F /var/log/salt/minion >> /proc/1/fd/1 &

# trigger salt state apply from the minion on start
while ! salt-call state.apply
do
  echo "Retrying autosalt..."
  sleep 10
done
echo "Salt complete"

# Setup tun interface for OpenVPN
mkdir -p /dev/net
mknod /dev/net/tun c 10 200

# Choose a file that inotifywait will monitor; we use this as the flag file from Salt
LOCK_FILE=/opt/restart

while true; do
  # Run the main process and save the PID
  openvpn /etc/openvpn/server.conf &
  OPENVPN_PID=$!
  echo "Started OpenVPN process with PID $OPENVPN_PID"

  # block here until the file is touched, which will then kill the
  # openvpn process and restart it
  inotifywait -q -e attrib,close ${LOCK_FILE}
  echo "Restarting OpenVPN..."
  kill ${OPENVPN_PID}
done
```
The key thing here is the LOCK_FILE and the inotifywait call. In a while loop, we start the desired application process as usual, additionally capturing its PID with $!. Since a process like openvpn blocks in the foreground, & sends it to the background so the script can continue.
Now, our entrypoint is running the application, and inotifywait is blocking, waiting for something to change in the LOCK_FILE. Once something does change, the application process is killed, and we go round the loop again to restart it.
Now this means I can change the OpenVPN config file, and then touch $LOCK_FILE to have the changes apply ‘dynamically’ within the container. With Salt, it could look something like this:
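The sls below is an illustrative sketch rather than our production state - the state IDs, the salt:// source path, and the minion targeting are all assumptions - but it shows the shape of the change:

```yaml
# Sketch: template the config, then touch the flag file on change
# (state IDs and the salt:// source path are hypothetical)
openvpn_config:
  file.managed:
    - name: /etc/openvpn/server.conf
    - source: salt://openvpn/files/server.conf.j2
    - template: jinja

restart_openvpn:
  file.touch:
    - name: /opt/restart
    - onchanges:
      - file: openvpn_config
```

Applied from the master with something like `salt 'openvpn-minion' state.apply openvpn`, the touch only fires when the templated config actually changes, and inotifywait inside the container does the rest.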
The only real difference is that the restart_openvpn state, which would typically use service.running, now needs to touch the lock file instead.
The biggest advantage of this approach is really just a ‘pragmatism over principle’ argument. If, like us, you already have an existing configuration management system with a lot of intertwined states for a much wider system, it’s difficult to justify moving configuration out of that single-point-of-configuration-truth, purely because you’ve decided to containerise one (of many) processes that was previously running on a traditional host or VM. Rearchitecting an application for k8s often isn’t trivial, especially when it’s not particularly modular and you have limited practical scope to improve that.
This approach requires minor changes to the Salt ‘restart service’ states, but not necessarily a refactor. And it allows Salt some limited, pre-defined control over what goes on inside a container, without stepping on (or developing a secure interface to) the Docker socket, k8s pod scheduling, or indeed the APIs of whichever managed services you might be using.
It also allows our wider teams to continue with the Salt master CLI workflows they’re used to, rather than, for example, having to modify Helm values which template a server.conf ConfigMap. It’s faster too - just restarting a process within a loop, versus deleting and rescheduling a pod.
It’s not perfect though. Depending on your viewpoint, it adds some hidden complexity. K8s is already complex enough, and debugging an issue could be made harder when there’s another little control-plane interface directly into containers. Without some well-thought-out pod healthchecks, for example, it could hide the true health state of the pod from k8s.