It’s quick and easy to get started with Docker on your laptop. It takes a few minutes to download a container image and spin up a new container from it locally. What starts out so simple gets complicated as you scale to hundreds of containers and want to deploy them into production. All of a sudden you need to think about how to handle container management at scale, how to transition to a microservices architecture, and how to secure containers in production.
Transition to Microservices
The switch from Docker as an experiment to Docker in production depends largely on your organization’s starting point. Many startups are cloud-native, and for them, running their apps in Docker in production probably happened from day one. But most SMBs and enterprises need backward compatibility with legacy applications and tools that are still in use and essential to day-to-day operations. In these cases, you can’t just run an entire legacy application in a Docker container. Instead, you need a plan to modernize your architecture over time and move to containers gradually in the process.
From a monolithic application, your goal is to move to a microservices architecture, which allows you to manage your application as a collection of interrelated services that can be managed and deployed separately from each other. This process of decomposing your application starts with peripheral services that are easy to branch out, then moves to the core parts of the application.
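As a sketch of what one of those peripheral services might look like once extracted, here is a hypothetical Dockerfile for a small notification service (the service name, base image, and entrypoint are illustrative assumptions, not from the original article):

```dockerfile
# Hypothetical peripheral service (e.g. an email notification service)
# carved out of a monolith and packaged as its own container image.
FROM python:3.12-slim
WORKDIR /app

# Install only this service's dependencies, independent of the monolith.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Each extracted service exposes its own port and ships on its own schedule.
EXPOSE 8080
CMD ["python", "notification_service.py"]
```

Because the service builds and deploys on its own, it can be updated without touching the remaining monolith — which is the point of decomposing peripherally first.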
Use an Orchestration Tool
The next step is to use a powerful orchestration tool like Kubernetes. With the many moving parts of a dynamic Docker stack, a container orchestration platform is able to provide a layer of abstraction between the containers and the infrastructure that powers them.
Kubernetes is a powerful tool that automates and simplifies the creation and management of container resources. It has been evolving rapidly, and recent updates like role-based access control (RBAC) and secrets encryption are making it even more capable of running production workloads.
There are other orchestration tools like Swarm and Mesos, but Kubernetes is winning this battle because of its extensibility, long feature list, and large open source community. As you look to take Docker into production, an orchestration tool like Kubernetes is a must-have.
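To make the abstraction concrete, here is a minimal sketch of a Kubernetes Deployment for the kind of containerized service discussed above (the names, labels, and image reference are illustrative assumptions):

```yaml
# Minimal sketch: a Kubernetes Deployment that keeps three replicas
# of a hypothetical service running; Kubernetes handles scheduling,
# restarts, and rolling updates.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: notification-service
spec:
  replicas: 3                  # desired copies; Kubernetes reconciles to this
  selector:
    matchLabels:
      app: notification-service
  template:
    metadata:
      labels:
        app: notification-service
    spec:
      containers:
      - name: notification-service
        image: registry.example.com/notification-service:1.0  # illustrative image
        ports:
        - containerPort: 8080
```

Declaring the desired state this way, rather than starting containers by hand, is what lets the orchestrator manage hundreds of containers across many hosts.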
Adopt the Service Mesh
When you have tens or hundreds of microservices to manage in production, it’s very important to ensure they can communicate with each other. In legacy applications, communication was simple because it happened between parts of the same service. But in a microservices app, communication is complex, as each service needs to talk to multiple other services to perform common tasks.
The service mesh is emerging as the default method for networking Docker containers in production. It’s aptly called a mesh because the various connections between services are so complex that they create a mesh pattern — one that may seem chaotic, but is actually necessary for the normal functioning of a microservices application.
Tools like Linkerd and Istio are making great progress in how the service mesh is managed: Linkerd provides a consistent data plane for inter-service communication, while Istio acts as the control plane for this mesh.
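As a small example of how little per-service plumbing the mesh requires: with Istio installed, labeling a namespace with `istio-injection: enabled` turns on automatic sidecar proxy injection for pods created in it. This is a hedged sketch — the namespace name is an illustrative assumption, and the label follows Istio’s documented convention:

```yaml
# Sketch: enable Istio's automatic sidecar injection for a namespace.
# Every pod created here gets a proxy container that handles
# inter-service traffic on the pod's behalf.
apiVersion: v1
kind: Namespace
metadata:
  name: payments            # illustrative namespace name
  labels:
    istio-injection: enabled
```

The application containers themselves don’t change; the mesh’s proxies intercept their traffic transparently.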
Additionally, tools like Project Calico are segmenting these networks using a policy-based approach. Rather than having a single peripheral firewall, Calico enables micro-firewalls around each service. This way, even if one service is compromised the other services are still protected by their firewall. At the scale of Docker, this type of container-aware security is essential.
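A micro-firewall of this kind can be expressed as a standard Kubernetes NetworkPolicy, which Calico enforces. The sketch below — with illustrative service names and labels — allows only one upstream service to reach the protected service, so a compromise elsewhere in the cluster doesn’t grant access:

```yaml
# Sketch of a "micro-firewall": only pods labeled app=order-service
# may connect to the notification service, and only on its service port.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-orders-to-notifications
spec:
  podSelector:
    matchLabels:
      app: notification-service   # the service being protected
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: order-service      # the only permitted client (illustrative)
    ports:
    - protocol: TCP
      port: 8080
```

All other ingress traffic to the service is denied by the policy, which is the per-service segmentation described above.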
Take an Integrated Approach to Security
Container security is perhaps the most important concern for organizations running Docker in production, because containers are completely different from traditional VMs. They can’t be adequately secured with a Docker-provided security tool alone, as Docker itself doesn’t control the entire container ecosystem. Instead, you need a best-of-breed approach: integrate specialist tools that each excel at a particular aspect of container security, and ensure segregation of duties between those who create and run containers on the one hand, and those who create, monitor, and enforce security policy on the other.
The first step is to understand how Docker differs from a VM, and what this means for security. Docker inherits many kernel-level security features from Linux, including namespaces, which isolate a container from its neighboring containers, and cgroups, which limit its use of system resources.
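These kernel features can be applied explicitly when starting a container. The command below is a hedged sketch (it assumes a local Docker daemon and an illustrative image name) showing cgroup limits and capability reduction via standard `docker run` flags:

```shell
# Sketch: run a hypothetical service with explicit cgroup limits
# (--memory, --cpus, --pids-limit) and a minimal capability set
# (--cap-drop ALL removes all Linux capabilities; --cap-add restores
# only the one this service needs to bind a privileged port).
docker run -d --name notification-service \
  --memory 256m --cpus 0.5 \
  --pids-limit 100 \
  --cap-drop ALL --cap-add NET_BIND_SERVICE \
  notification-service:1.0
```

Starting from zero capabilities and adding back only what’s needed is a direct application of the principle of least privilege discussed below.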
A level above that, you need to monitor and control the usage of container images. This requires scanning images downloaded from a registry like Docker Hub; ideally, only official and vetted images should be downloaded and shared within your organization. Access and permissions also need to be controlled so that only those who need access to a container have it — the principle of least privilege is paramount here. The CIS Docker Benchmark is a great standard for measuring how secure your system is, as it covers many aspects of securing containers in production.
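One practical way to check a host against the CIS Docker Benchmark is Docker Bench for Security, an open source audit script published by Docker. The invocation below is a simplified sketch (it assumes a local Docker daemon; consult the project’s README for the full recommended set of mounts):

```shell
# Sketch: audit the local Docker host against the CIS Docker Benchmark.
# The script needs host-level visibility, hence --net host / --pid host
# and read-only access to the Docker socket.
docker run --rm --net host --pid host --cap-add audit_control \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  docker/docker-bench-security
```

The output flags each benchmark item as PASS, WARN, or INFO, giving you a concrete checklist to work through.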
Along with these efforts, you need a dedicated container security tool that can automatically track violations and concerns across your Docker stack end-to-end. Aqua Security is one such platform that delivers threat detection during runtime and is able to spot intrusions or vulnerabilities from within before they escalate and cause large-scale damage.
What is a microservices architecture?
Martin Fowler explains what’s new about microservices and why you should transition to them from a legacy application.
How is a service mesh different from an API?
It’s easy to confuse the two, but this is a useful read to understand how APIs perform a similar yet different function from a service mesh.