Everybody knows how to keep a Linux box updated. It is also considered common sense that running things in Docker containers is more secure by definition; after all, they isolate services from each other. So if you are running containers on a fully patched host, there should be no security holes at all, right? Not even close! Keeping containers up to date is a totally different matter. That raises two questions: how do you keep your containers up to date, and how do you decide whether containerising is really worth it in your scenario?
What is a secure system?
The definition of a secure system varies depending on whom you ask. Keeping a system secure is of course much more than just applying the latest patches and updates. Nevertheless, for an average system exposed to the internet, installing updates and security patches is _the most important_ task. From an update perspective, there has to be some kind of "chain of trust" in order to be able to securely update your system. If a single chain link breaks, your systems may be compromised without you even noticing.
Let's define this chain of trust:
- Update timely - Install updates as soon as they are available.
- Secure sources - All updates have to be fetched from trustworthy sources. The preconditions for a trustworthy source are:
  - Trustworthy repositories - The servers / update sites you pull your updates from must be trusted.
  - Trustworthy contributions - The same applies to the contents of these repositories (and the source code the packages are built from).
- Ensure integrity - Proving the integrity of a package (e.g. by signatures) ensures that its contents have not changed on the way from the author to your system.
So let's check whether all of these requirements are fulfilled when running Docker containers.
The classic server
You're running Linux on a bare-metal box (or in some virtual machine). To keep it updated, you run a bunch of commands like
apt-get update
apt-get upgrade
from time to time. The package manager of your distribution then takes care of the update process. It accesses the package repositories, fetches the newest security updates, and finally installs them onto your system.
But why do we have to do these updates manually? Can't we simply update our stuff automatically? Well, at least for classic server systems, an automatic update can break your system at any time. Database upgrades, overwritten custom configs or simply unattended upgrades that run at an unfortunate point in time will cause you headaches.
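That said, Debian-based systems can at least restrict automatic installs to security updates only, which avoids most of the breakage scenarios above. A minimal sketch of the relevant configuration, assuming the unattended-upgrades package is installed (the exact origin pattern depends on your release):

```
// /etc/apt/apt.conf.d/50unattended-upgrades (excerpt)
// Only allow packages from the distribution's security origin
// to be installed automatically.
Unattended-Upgrade::Allowed-Origins {
        "${distro_id}:${distro_codename}-security";
};
```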
More advanced setups will somehow monitor the current state of the system and notify the administrator about pending updates. This can be done with tools like apticron, which notifies you by mail when new updates are available; even more professional setups use a dedicated monitoring system like Icinga2 to trigger alerts.
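A minimal sketch of such a check, assuming a Debian-style apt-get. The counting is factored into a small function so the logic works on any simulated upgrade output, independent of the package database:

```shell
#!/bin/sh
# Report pending package updates, suitable for a cron job or monitoring check.
# "Inst" lines in the simulated (-s) upgrade output mark packages that would
# be installed; counting them tells us how many updates are pending.
count_pending() {
    grep -c '^Inst ' || true
}

pending=$(apt-get -s upgrade 2>/dev/null | count_pending)
echo "$pending pending updates"
```

Wire the output into mail or your monitoring system of choice; the `-s` flag only simulates the upgrade, so the check itself never modifies the system.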
Is our Chain broken?
No, it actually is not. If you run your updates frequently, your package manager takes care of all the requirements we defined:
- Update timely - Check! We get notified as soon as there is something to install, and run the update immediately.
- Secure sources - Check! All sub-requirements are fulfilled:
  - Trustworthy repositories - Check! The default repositories that come with your distribution are maintained by the same people who built your distribution. If you didn't trust them, you wouldn't have used the distribution at all.
  - Trustworthy contributions - Check! Code changes are reviewed by the open source community, and the packaging of the final artefacts is done by a package maintainer responsible for the package contents. At least for Debian, there is also a very long testing phase before things go into production.
- Ensure integrity - Check! Your package manager checks signatures and checksums of the packages that are going to be installed, and alerts you if something is wrong.
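The integrity part can even be inspected by hand on a Debian/Ubuntu system. A small sketch (debsums is a separate package you have to install first):

```shell
# List the archive signing keys apt currently trusts for verifying
# repository metadata and packages.
apt-key list

# Compare files of installed packages against the checksums recorded
# in the package, reporting any that have changed.
debsums --changed
```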
Whack it in a container
So, as we have heard that a containerised service is more secure, we threw all our services into their very own containers. We still have our bare-metal box running Linux and a Docker daemon, but everything else is hidden in containers. To keep the host system updated, we use exactly the strategy discussed in the last chapter. And since the host now consists of little more than a dockerd running on a kernel, there won't be many updates at all.
Fresh paint for your containers
How do you update a container? You pull the new version of the image and recreate the container from it. Note that a plain `docker restart` is not enough - a restarted container keeps running on the old image. So updating comes down to a few commands:

docker pull imagename
docker stop myContainer
docker rm myContainer
docker run --name myContainer <your original run options> imagename

Simple enough. But now think about how to keep all of your containers updated. You might have a lot of them running - do you really want to update every single one of them manually?
There are several solutions for batch-updating all containers at once. One of them is Watchtower. It runs as an additional container that inspects all other containers and compares their current image versions against the latest versions in the registry. If a newer image is detected, Watchtower pulls it, tears down your container and restarts it from the latest image version. All of that happens without any manual interaction.
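Running Watchtower looks roughly like this; the image name and the `--interval` flag (poll interval in seconds) are taken from the project's documentation, so double-check them against the version you deploy:

```shell
# Watchtower needs access to the Docker socket in order to inspect,
# stop and recreate the other containers on the host.
docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower --interval 86400
```

Be aware that handing the Docker socket to a container is itself a security trade-off: whoever controls that socket effectively controls the host.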
Trust the whale?
Once more, let's check our requirements for the container scenario. Our chain of trust is not broken, but it is rusty:
The sources of our images are not as secure as the package repositories of our distribution. The blueprints for the images (aka Dockerfiles) are often available on GitHub or some other public place, but there are also many images that do not publish their Dockerfiles at all. To sum up:
- If the Dockerfile is not available, you are using a black box with no easy way to see what is going on inside. You can always view the history of a Docker image to get an idea, but is that really feasible for each and every image update?
- If the Dockerfile is available, it depends on whether you can verify that the published Dockerfile is really the blueprint for the image you are using. With Docker Hub's automated builds, this is implicitly ensured by their build process. Otherwise, there is no cryptographically secure way (like signatures) to really prove it.
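Both points can at least be probed from the command line ("imagename" is a stand-in for whatever image you use). `docker history` shows the command behind each image layer, and Docker Content Trust rejects pulls of unsigned tags - though note that a signature verifies the publisher, not which Dockerfile produced the image:

```shell
# Inspect a black-box image: print the command that created each layer,
# without truncating long lines.
docker history --no-trunc --format '{{.CreatedBy}}' imagename

# Enforce publisher signatures: with content trust enabled, pulling a tag
# that has no valid signature fails instead of silently succeeding.
export DOCKER_CONTENT_TRUST=1
docker pull imagename
```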
Even more problematic from a security point of view are the contents of the image. With Docker, the process of updating your containers is even more automated (which is also a benefit). But that also means that in certain scenarios the author of the Dockerfile decides on his own which changes go into production. Especially in smaller projects, there are no code reviews at all.
The question of whether to trust an open source project is as old as open source itself. Containerising your services just adds another (potentially) unsafe layer on top of the ones you already have in a classic setup with daemons. On the one hand, you definitely gain security: if something goes wrong, the damage is isolated to the malicious container. On the other hand, the chance of pulling in something malicious increases - especially with smaller projects that might not even be open source. Which way to go always depends on your knowledge of the technology in use, and your willingness to invest time into security. Just be aware that you have to take more things into consideration than when using your distribution's repositories.
So Docker containers are insecure - but no more than your classic systems, if you do it right ;-)