Docker in Production


Learn about running Docker in a production environment: strategies for scaling up, selecting a cloud-host vendor, orchestrating multiple clusters of containers, and more.


Running Docker in production doesn't necessarily mean less work than running VMs in production. Your containers won't magically run on their own; in fact, Docker in production may mean more work than managing VMs. However, it's the way forward if you want to modernize your applications and deliver the user experience your customers expect. Docker earns its keep by adding speed to every step of the development pipeline, making development, testing, and deployment repeatable, automated, and highly programmatic.

Start Small

A common mistake when moving from running containers locally to a production environment is expecting to have it all set up from the get-go. Teams may imagine they'll scale to a thousand container instances right at the start without any hassle, or believe they need a completely microservices-based application before they move to containers in production. These are unreasonable expectations. The right approach is to start from where you are, start small, and build out your ideal system over time.

While it's easy to spin up a thousand container instances as soon as you start, ensuring they're all secure, well networked, and monitored is a whole other set of challenges. This is where many fail at running Docker in production.

You don't need a microservices application right at the start; legacy applications work just fine in containers. A better way is to look for low-hanging fruit in your application: services that you can easily break out of your monolith and deploy in containers separately. In some cases, a monolith can run in a container just fine. In others, organizations make their start in containers by deploying a new, "greenfield" application. Eventually you'll want to move to a microservices model, but expecting to get there from the start is a recipe for failure.
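As a sketch of how little is needed to containerize an existing application, a short Dockerfile is often enough. The base image, artifact path, and port below are hypothetical placeholders for whatever your monolith actually uses:

```dockerfile
# Hypothetical Dockerfile for an existing Java monolith, run unchanged in a container
FROM openjdk:8-jre
# Copy the already-built application artifact into the image
COPY target/legacy-app.jar /app/legacy-app.jar
# The port the monolith already listens on
EXPOSE 8080
CMD ["java", "-jar", "/app/legacy-app.jar"]
```

No code changes to the application itself are required at this stage; decomposition into services can come later.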

Decide Where To Run It

When running containers in production, an important decision is where you'll host them. In today's cloud-powered world, few organizations want to build a datacenter in-house and manage their own servers; in fact, offloading infrastructure management to a vendor is seen as a major benefit. That being the case, you need to decide which vendor is most aligned with your values and goals for a cloud environment for Docker.

The options are plentiful. The first that come to mind are the container services from the cloud vendors: AWS ECS, Azure AKS, and Google Cloud GKE are capable offerings from the big three IaaS platforms. The good thing is that if your other infrastructure is already with one of them, you're familiar with the user interface and already use their supporting tools for monitoring, security, logging, and more. The cloud vendors are in a race to provide the simplest container experience using Kubernetes: Google Cloud was the first to go all in with Kubernetes, followed by Azure rebranding its container service to AKS, and most recently AWS joined in with EKS, its managed Kubernetes service. With the cloud vendors handling much of the plumbing, running Docker in production on one of their platforms is becoming a very attractive proposition.

You'll need to watch out for vendor lock-in. Committing to one system is a big bet at a time when container technology is still nascent and no vendor is head and shoulders above the rest; if you commit now and later find it hard to switch platforms, that's restrictive. However, tools like Kubernetes and the HashiCorp suite (Terraform, Consul, Nomad) are making hybrid and multi-cloud environments a reality.

Orchestration is a Necessity

Kubernetes is the leading container orchestrator today and is set to grow further in adoption, bringing more consistency to the Docker ecosystem. Kubernetes removes the nitty-gritty of managing individual container instances and the underlying infrastructure: it abstracts away the low-level details and lets you manage services instead.
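For instance, rather than starting containers by hand, you declare a Deployment and a Service and let Kubernetes keep them running and reachable. A minimal sketch (the names and image are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                # Kubernetes keeps three pods running at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/team/web:1.0   # hypothetical image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                 # routes traffic to whichever pods carry this label
  ports:
    - port: 80
      targetPort: 8080
```

If a container crashes, Kubernetes replaces it; you manage the declared service, not the individual instances.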

With features like Federation, Kubernetes is built to let you manage multiple clusters of containers. These clusters can live in a single cloud platform or span multiple platforms. Federation provides a uniform way to sync resources across clusters, and automatically configures load balancing and service discovery between them.
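As a rough illustration, a federated resource wraps an ordinary Kubernetes object and adds a placement section naming the member clusters that should receive it. The exact API has changed across Federation versions; the sketch below follows the KubeFed style, and the cluster and image names are hypothetical:

```yaml
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: web
  namespace: default
spec:
  template:                      # an ordinary Deployment spec, propagated to each cluster
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: registry.example.com/team/web:1.0
  placement:
    clusters:                    # which member clusters receive this resource
      - name: cluster-aws
      - name: cluster-gcp
```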

Scheduling is the process of placing pods on nodes within your Kubernetes cluster. Kubernetes' advanced scheduling features give you various options for how pods are placed: you can spread them evenly across nodes based on available resources, pin certain pods to nodes with a specific type of resource, or do something in between.
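Two of those controls can be sketched in a single pod spec: a nodeSelector restricts placement to nodes with a given label, and pod anti-affinity spreads replicas across hosts. The labels and image are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: worker
  labels:
    app: worker
spec:
  nodeSelector:
    disktype: ssd              # only schedule onto nodes labelled disktype=ssd
  affinity:
    podAntiAffinity:           # keep replicas of this app off the same host
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: worker
          topologyKey: kubernetes.io/hostname
  containers:
    - name: worker
      image: registry.example.com/team/worker:1.0   # hypothetical image
```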

With these and many more unique features, Kubernetes is becoming indispensable to running Docker in production.

Make Your Docker Images Private

The way you create, host, and share your Docker images has a bearing on security. Docker images are hosted in a registry - Docker Hub, Quay, Artifactory, or a host of other options. The important thing is to use a private registry: it protects your system from the infected or vulnerable container images that circulate loosely in public.

Even with a private container registry, you still need to scan every container image you download. Always prefer official images to unofficial ones, and scan images for known vulnerabilities - even official images can contain them. Many private container registries include scanning, and to go a step further, tools like Aqua Security can scan images in a broader context and with greater accuracy.
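One lightweight guardrail is a CI check that rejects any image reference not served from your private registry before it is pulled or deployed. A sketch in shell; the registry host and image names are hypothetical, and the commented line shows where an image scanner would slot in:

```shell
#!/bin/sh
# Guardrail sketch: only accept images from a trusted private registry
# (the registry host and image names below are hypothetical).
ALLOWED_REGISTRY="registry.example.com"

image_allowed() {
  case "$1" in
    "$ALLOWED_REGISTRY"/*) return 0 ;;   # fully qualified with our registry
    *) return 1 ;;                       # anything else is rejected
  esac
}

check_image() {
  if image_allowed "$1"; then
    echo "allowed: $1"
    # In CI you would then scan the image before deploying it, e.g.:
    # your-scanner image "$1"
  else
    echo "blocked: $1"
  fi
}

check_image "registry.example.com/team/app:1.0"    # prints "allowed: ..."
check_image "docker.io/library/nginx:latest"       # prints "blocked: ..."
```

The string check is deliberately strict: an unqualified name like `nginx:latest` would default to Docker Hub, so it is blocked along with explicit public references.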

Monitor and Log Everything

You can only manage what you monitor. In production, visibility is key to running a successful Docker operation. When running distributed Docker systems that span hundreds or thousands of container instances, tracking the performance of all of them is a daunting task. The reporting data needs to be real-time, easy to correlate, and detailed enough for you to drill down to the root cause of issues.

For a Docker stack, especially one powered by Kubernetes, Prometheus has emerged as the leading monitoring tool. Its strength is that it stores everything as time-series metrics, letting you easily correlate one metric with another at the same point in time. It works well with Kubernetes, automatically discovering components like pods, containers, nodes, and services. Prometheus also integrates easily with Grafana for advanced visualization and quicker analysis of monitoring data.
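As an illustration of that Kubernetes integration, Prometheus can discover pods to scrape through its built-in Kubernetes service discovery. A minimal prometheus.yml fragment; the annotation-based opt-in shown is a common convention rather than a requirement:

```yaml
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod              # discover every pod via the Kubernetes API
    relabel_configs:
      # Keep only pods annotated prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```

New pods matching the annotation are picked up automatically; no per-service scrape configuration is needed as the cluster grows.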

Running Docker in production is full of challenges at every step. But by taking the right approach you can save yourself a lot of mistakes, and ease your transition to the other side - the place where Docker has transformed the way you build and ship applications.
