Docker brings a new approach to software delivery, and particularly to infrastructure management. The benefits of containers over VMs are many - they’re faster, lighter, more cost-effective, and enable collaboration. However, these benefits mean little if your Docker stack is prone to attacks. Security is a prime concern when running Docker in production. This page explains the security risks unique to Docker deployments, describes the security benefits of using Docker, and provides 8 Docker security best practices.
Unique Security Risks Affecting Docker Users
Introducing Vulnerabilities from Container Images
Part of the uniqueness of the Docker experience is the ability to share container images within an organization, or with the entire world. While this has made development more efficient, it comes with security risks. Many publicly available images on Docker Hub contain vulnerabilities and aren’t suitable for running in production environments. Many of these come from open source container images. Therefore, every container image needs to be scanned before it is used.
In the past, creating VMs was a fairly safe and restricted activity that didn’t carry the risk of importing outside vulnerabilities. But because containers are created from container images, scanning those images is a new security challenge that demands new security measures. Today, Docker Hub and other third-party registries like Quay have built-in scanning features that can easily detect common vulnerabilities. By extension, only official and verified container images should be allowed onto your system.
Hard-coding Secrets in Images
The way you handle sensitive information like passwords, tokens, and API keys is vital to the security of your Docker system. It’s risky to store secrets in container images or as part of the application code, as this leaves them visible to unauthorized team members, or even to external users and applications that have access to the containers. Secrets stored in application code can get pushed to Git repositories and propagate downstream, where they’re accessible to many.
Docker has introduced a secrets management feature for its Swarm orchestrator that helps to better handle secrets. It encrypts and stores secrets in an in-memory filesystem on a manager node. This manager node leases the secret to a container when it performs a task that requires access to data that’s protected by the secret. Once the task is completed, access to the secret is revoked from the container, and the secret is deleted from the in-memory storage on the node. This brings a centralized, programmatic way to manage Docker secrets and is essential to enforcing Docker security.
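As a rough sketch of this workflow, the commands below create a secret, grant it to a single service, and rotate it (the secret name, service name, and image are hypothetical examples; this requires a running Swarm):

```shell
# Create a secret from stdin; Swarm stores it encrypted in the Raft log
# on manager nodes. "db_password" is a hypothetical secret name.
printf 'S3cretPassw0rd' | docker secret create db_password -

# Grant the secret only to the service that needs it; Swarm mounts it
# in an in-memory filesystem at /run/secrets/db_password inside the
# service's task containers.
docker service create --name api --secret db_password myorg/api:1.0

# Rotate by adding a new secret version and removing the old one.
printf 'N3wPassw0rd' | docker secret create db_password_v2 -
docker service update --secret-rm db_password --secret-add db_password_v2 api
```

Because the secret is mounted rather than baked into the image or passed as an environment variable, it never appears in `docker inspect` output or image layers.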
Large Attack Surface
On traditional servers, tasks like network management, SSH access, and log analysis require root access. The same model can’t be applied to containers, as giving each container root access would easily compromise the underlying host and its Docker daemon.

With Docker, access to the Docker host is restricted because all these processes are performed outside the container. By default, Docker containers run unprivileged. This limits the damage even if one container is compromised, as the other containers, and the host itself, remain sufficiently isolated.
Lack of Granular RBAC
Docker brings complexity in how team members are given access to different parts of the system. Traditionally, Dev, QA, and IT teams had their own resources, and it was easy for IT to configure access for each team only to its own tools and infrastructure. With the DevOps approach, Dev, QA, and IT need access to the same containers, but at different points in the development pipeline. Within these broad teams, some users need restricted access while others need full access to modify and manage the containers. Setting this up is a complex task.
Docker enables role-based access control (RBAC) to configure access according to roles, and even make exceptions when needed. You can set up access to only view containers, or only manage individual containers but not have access to the kernel, or give full admin access to manage the entire stack including the kernel and RBAC itself. With the number of containers running at any given time, this can be difficult to monitor, but it is essential when regulating your container workflows.
Lack of Visibility
Container lifespans are much shorter than VMs. While VMs would run for months or even years, Docker containers are retired every few days and replaced with updated ones. This makes it hard to keep track of containers.
Additionally, with the many layers of the Docker stack - registry, host, and client - and each one having multiple components, there’s a need to secure each layer. This requires keeping a close watch on changes at every layer.
Monitoring is essential to Docker security, and Docker monitoring tools need to be able to surface high-priority issues despite the complexity of the stack. Docker monitoring needs to be comprehensive, covering all stages of the container workflow.
Lateral Network Movement
Network security is especially important for Docker containers. Traditional firewalls enforce perimeter security, and this approach doesn’t work with Docker. The reason is that if a single Docker host is infected, the attack can spread to other hosts on the same network. To prevent this from happening, network security needs to be enforced at a granular level at every host. There should be policies governing how hosts communicate with one another. This policy-based networking restricts the spread of vulnerabilities within the system.
Security Benefits of Docker
Immutability and Change Management
Docker takes an immutable approach to infrastructure. This means that with every update or change made to a container, the same container isn’t updated; rather, a new, updated container replaces the existing one. The advantage of this is that changes can be recorded, and you can even roll back to particular points in the past. This level of agility and control was not possible with VMs, as it would be expensive, time-consuming, and resource-intensive to set up a process like this. Docker enables security at a new level by allowing for an immutable approach to infrastructure.
Increased Isolation Between Processes
In the early days, it was common to hear the phrase ‘containers do not contain’. That is now passé. There are various levels of isolation at every layer of the stack, from the kernel to the network. Core Linux security features like namespaces, cgroups, AppArmor, SELinux, and seccomp provide adequate isolation between containers. Additionally, policy-based networking and RBAC provide isolation at the higher layers.
“Reverse Uptime” – Prevent Persistent Attacks by Refreshing Containers
Traditionally, you’d want instances to run for long periods of time. The longer the uptime, the more trusted and stable the instance is supposed to be. This, however, is a flawed assumption. With Docker containers, ‘reverse uptime’ is the way to go. Rather than looking at long-running containers as a good thing, this approach has you replace or refresh the oldest running instance with a new one. This way, persistent attacks are disrupted because the system is dynamic. Static instances are easier to penetrate, but dynamically changing ones are moving targets and make for safer container systems.
8 Docker Security Best Practices
1. Shift Left to Eliminate Vulnerabilities
DevOps requires a ‘shift left’ mentality towards the development pipeline. This means that QA and IT teams are involved from step one: the conceptualization and development of the application. An offshoot of this is that developers are required to scan images as they’re built and after they’re stored. This scanning can be done by a registry tool such as Quay.
The underlying principle is that as QA and IT collaborate with Dev at the initial stages of development, they can enforce security regulations that all teams approve of, and that bake security and reliability into the application from the very start.
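As one possible sketch of shifting scanning left, a CI step might build an image, scan it, and push only on a clean result. This assumes the open source Trivy scanner is installed; the image name is a hypothetical example:

```shell
# Build the candidate image.
docker build -t myorg/api:1.0 .

# Scan it before it ever reaches the registry. --exit-code 1 makes the
# command fail when HIGH or CRITICAL vulnerabilities are found, which
# stops the CI job and keeps the vulnerable image out of circulation.
trivy image --severity HIGH,CRITICAL --exit-code 1 myorg/api:1.0

# Only images that passed the scan get pushed.
docker push myorg/api:1.0
```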
2. Control the Inflow of Images, Enforce the Use of Trusted Images Only
Though sharing container images is a powerful way to build efficiency into the software development process, the sharing should be monitored. This is best done by setting up a private registry like Quay. Having a private registry puts more control in the hands of admins and prevents the unauthorized entry of suspicious public and open source images. The registry can verify the signature of a container image and tell whether it’s from a reliable source.
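On the client side, Docker Content Trust provides a complementary control: with it enabled, the Docker client only pulls image tags that carry a valid signature. A minimal sketch (the repository name is a hypothetical example):

```shell
# Enable Docker Content Trust for this shell session; unsigned tags
# will now be rejected at pull time.
export DOCKER_CONTENT_TRUST=1

# Succeeds only if the tag was signed by its publisher.
docker pull myorg/api:1.0

# With DCT enabled, pushes sign the tag with your repository key.
docker push myorg/api:1.0
```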
3. Manage Secrets in Images
Secrets should be managed via Docker’s secrets management feature, which ensures secrets are never hard-coded into container images. Docker injects secrets at runtime, and only into containers that need them. This follows a deny-by-default approach: secrets are inaccessible to any user or resource unless explicitly granted, and are only exposed temporarily during a task. Docker has strong defaults to keep secrets highly available, making it a strong choice for secrets management.
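From the application’s point of view, a granted secret appears as a file in an in-memory filesystem rather than as an environment variable or image content. A minimal consumption sketch, assuming a secret named db_password has been granted and a hypothetical start-server binary:

```shell
#!/bin/sh
# Inside a Swarm task, each granted secret is a file under /run/secrets,
# backed by tmpfs (memory only, never written to the container's disk).
# "db_password" is a hypothetical secret name.
DB_PASSWORD="$(cat /run/secrets/db_password)"

# Prefer passing the file path to the app over exporting the value,
# so the secret never shows up in the process environment or logs.
exec ./start-server --db-password-file /run/secrets/db_password
```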
4. Harden Docker Hosts using CIS Benchmark
The CIS has published a list of security best practices for Docker containers. Checking your system against these benchmarks continuously ensures you’re protected from many common vulnerabilities. Some security tools have these benchmarks built into their monitoring system and are able to alert you to any deviations along the way.
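Docker maintains an open source script, docker-bench-security, that automates these CIS checks against a running host. One way to run it directly on the host:

```shell
# Fetch and run Docker's CIS benchmark checker. It audits the host
# configuration, the Docker daemon, and all running containers against
# the CIS Docker Benchmark, flagging each check as PASS, WARN, or INFO.
git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security
sudo sh docker-bench-security.sh
```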
5. Enforce Isolation with Namespaces and cgroups
As mentioned earlier, namespaces and cgroups are core Linux kernel security features that isolate containers from each other. Namespaces limit what a container can see. In other words, they place restrictions on how containers can communicate with each other. Cgroups restrict what a container can do. This means they set limits for how much CPU, memory, networking, and block IO any container can use. This way they prevent infected containers from hogging all the available resources themselves, and from spreading to neighboring containers.
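The cgroup limits described above map directly onto docker run flags; a sketch with illustrative values and image:

```shell
# cgroup limits: cap memory, CPU share, and process count so a
# compromised or runaway container can't starve its neighbors.
docker run -d --name worker \
  --memory 256m \
  --cpus 0.5 \
  --pids-limit 100 \
  nginx:alpine

# Namespaces are applied automatically: the container gets its own PID,
# network, mount, UTS, and IPC namespaces, so ps inside it sees only
# its own processes, not the host's.
docker exec worker ps aux
```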
6. Nano-segment the Network
Since perimeter firewalls leave the rest of the system open to attack once breached, they aren’t suited to containers. Instead, securing each container with a container-level firewall prevents lateral movement between containers once an attack begins. Tools like Project Calico enforce this type of security and let you manage it via policies that adapt to changes in the environment. This type of network security is container-aware and effective against network breaches.
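Even without a dedicated policy engine, Docker’s user-defined networks give a basic form of this segmentation: containers can only reach peers on a network they share. A sketch with hypothetical container and image names:

```shell
# Separate networks per tier; --internal blocks outbound internet access.
docker network create frontend
docker network create --internal backend

# web and api share the frontend network; only api also joins backend.
docker run -d --name web --network frontend myorg/web:1.0
docker run -d --name api --network frontend myorg/api:1.0
docker network connect backend api

# The database lives only on the internal backend network.
docker run -d --name db --network backend postgres:16

# Result: web can talk to api, api can talk to db, but web cannot
# resolve or reach db at all - lateral movement from web stops at api.
```

Policy-driven tools such as Calico build on this model with label-based rules that follow containers as they are rescheduled.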
7. Enforce the Principle of Least Privilege
You need to keep the attack surface of your system as small as possible. You can do this by using “lean” images wherever possible. Considering the scale of containers, and that each container image may spawn hundreds of containers, this is a benefit that multiplies over time. You should also use a minimal base OS like CoreOS or RancherOS. These OSes are purpose-built to run containers and have shed all the bloat of traditional OSes. A lighter stack provides multiple benefits - it’s faster, requires fewer resources to run, and saves on costs - but most importantly, it reduces the attack surface and thereby secures the system by default.
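Least privilege also applies to how containers are launched. One way to sketch a locked-down run (the user ID, container name, and image are illustrative):

```shell
# Run as a non-root user, with a read-only root filesystem, every Linux
# capability dropped, and setuid-based privilege escalation disabled.
docker run -d --name api \
  --user 10001:10001 \
  --read-only --tmpfs /tmp \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  myorg/api:1.0
```

Capabilities the workload genuinely needs can then be added back individually with --cap-add, rather than starting from the permissive default set.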
8. Monitor Containers in Runtime to Gain Visibility and Detect Malicious Behavior
Threat detection during runtime is the most complex security measure as it requires scanning every component of the Docker system when in production. This involves massive quantities of data, numerous connections between the components, a dizzying number of events, and attacks that are growing more complex each day. The only way to do this well is to use machine learning to analyze all the performance data and derive meaningful alerts and insights.
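The raw signal such tooling builds on is visible in Docker itself: the daemon emits a real-time event stream for every container action. A simple sketch of watching it:

```shell
# Stream container lifecycle events (start, stop, kill, exec, ...) as
# they happen, formatted as one line per event.
docker events \
  --filter type=container \
  --format '{{.Time}} {{.Action}} {{.Actor.Attributes.name}}'

# An unexpected exec or start on a production host is a simple example
# of the kind of behavior a runtime security tool would alert on.
```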
Fortunately, today, there are solutions like the Aqua Container Security Platform that provide end-to-end monitoring of your Docker system and equip you with real-time insights into the most complex and urgent security issues. By understanding the challenges with Docker security, and implementing the best practices, you can take advantage of all the benefits Docker has to offer without compromising on security.