Satis Queue Worker
As part of a recent server migration we had to move a Satis instance. The server it was being moved from was a bare-metal server, and the server it was going to was a Kubernetes cluster: two quite different environments.
The Problem
As we were moving from a classic bare-metal server to a Kubernetes cluster, everything had to be put into containers. This meant we needed two containers: one for Apache, which handles the webhooks and serves the static content, and one for Satis, which does the package building. The problem is that by their very nature containers aren't aware of other containers, so when Apache receives a webhook how can it tell Satis which package needs building?
The Solution
A queue. The solution is to have Apache write every package that needs building to a queue. There are many queuing solutions (e.g. Beanstalkd and Amazon SQS), but I wanted something simple, something that didn't require another server to run, maintain and update. The answer was simply to use the file system. When a webhook comes in, write the request to a file. But how do these files get from the Apache container to the Satis container? Volumes: containers are allowed to share parts of their file systems. So the process looks something like this:
- Webhook is received
- The name of the package to be built is extracted
- The package name is written to a new file in the queue
- Satis notices the new file and reads the name from the file
- Satis builds the new package
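The Apache side of the steps above can be sketched in a few lines. This is an illustration rather than the actual code: the payload shape (a GitHub-style push event), the queue directory, and the function name are all assumptions. The temp-file-plus-rename trick is there because a rename within the same file system is atomic, so the Satis worker never sees a half-written job file.

```python
import json
import tempfile
from pathlib import Path

def handle_webhook(queue_dir: Path, payload: bytes) -> Path:
    """Extract the package name from a webhook payload and queue it.

    The payload shape is an assumption: a GitHub-style push event
    whose repository full name matches the Composer package name.
    """
    event = json.loads(payload)
    package = event["repository"]["full_name"]  # hypothetical field mapping
    queue_dir.mkdir(parents=True, exist_ok=True)
    # Write to a temp file first, then rename it into place: the rename
    # is atomic, so a worker never reads a partially written job.
    fd, tmp = tempfile.mkstemp(dir=queue_dir, suffix=".tmp")
    with open(fd, "w") as fh:
        fh.write(package)
    job = queue_dir / (package.replace("/", "_") + ".job")
    Path(tmp).rename(job)
    return job
```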
This process has benefits beyond just running in a Kubernetes cluster; it also allows package builds to be scaled by running multiple workers against the same queue.
Open-source?
Definitely! If you want to run this same setup, all the code is on GitHub. Additionally, a prebuilt image is available on Docker Hub.
The Gotchas
This approach is not flawless. There are problems with using the file system as a queue, the primary one being that there is no locking. There is a very brief window in which two worker instances can pick up the same job. The chances of this are slim, especially as the poll time for each builder is randomised so that workers aren't all looking for work at the same time. It's also worth mentioning that even if two workers do process the same job, all it means is that a package gets built twice, wasting some CPU cycles.
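A worker with a randomised poll interval might look something like the sketch below. The function names and delay values are illustrative assumptions, not the repository's actual code; the point is the best-effort claim (delete the job file, and treat a failed delete as "someone else got it") and the jittered sleep that desynchronises the workers.

```python
import random
import time
from pathlib import Path

def poll_once(queue_dir: Path, build) -> int:
    """Process every job file currently in the queue; return how many built."""
    built = 0
    for job in sorted(queue_dir.glob("*.job")):
        try:
            package = job.read_text()
            job.unlink()  # best-effort claim: the unlink that succeeds wins
        except FileNotFoundError:
            continue  # another worker claimed this job first
        build(package)
        built += 1
    return built

def worker_loop(queue_dir: Path, build, base_delay=5.0, jitter=5.0):
    """Poll forever, sleeping a randomised interval between polls so
    multiple workers aren't all looking for work at the same moment."""
    while True:
        poll_once(queue_dir, build)
        time.sleep(base_delay + random.uniform(0.0, jitter))
```

Note the race is still there: two workers can both read the file before either unlinks it. As above, the cost of losing that race is just a duplicate build.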
Another issue was preventing duplicate jobs. Every time a package is updated it triggers a rebuild of the latest code. What happens if there are two updates in quick succession? The latest code gets built twice. Again, not the end of the world, but it is easily avoided by naming each job file after the package, which makes it trivial to check whether that package is already queued.
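Because the job file is named after the package, the duplicate check falls out for free: a pending job for the same package is simply a file that already exists. A minimal sketch, with an assumed function name:

```python
from pathlib import Path

def enqueue_unique(queue_dir: Path, package_name: str) -> bool:
    """Queue a build only if that package isn't already waiting.

    Returns True if a new job was queued, False if one was pending.
    """
    queue_dir.mkdir(parents=True, exist_ok=True)
    job = queue_dir / (package_name.replace("/", "_") + ".job")
    if job.exists():
        return False  # already queued; the pending build covers this update
    job.write_text(package_name)
    return True
```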
Room For Improvement
There are definitely areas where we can improve:
- Retrying jobs. Currently if a job fails it won't be retried.
- Package purging. Satis has a purge command to remove old dev versions; this could be run on a cron.
- Ideally the containers should run as non-root users.
- Optional Slack notifications when package builds are completed.