What Is Container Technology?
One of the central problems in software development is that applications often do not operate correctly in a new environment. Small differences in hardware, configuration, or dependent libraries can create problems, making it difficult to move an application to a different platform.
Containers solve this problem by packaging services and applications together with their configurations and dependencies into a lightweight, portable unit that can be deployed anywhere. Software development teams can test containerized applications as complete units and deploy them in containerized form to a host operating system.
Another important element of containers is that they are immutable, meaning that they cannot be changed after being deployed. To modify a container, you tear it down and deploy a new one. This makes containers much more reliable than traditional infrastructure: deployments become repeatable, and you can easily roll back by deploying an older version of a container image. Immutability also makes it possible to deploy the same container image in development, testing, and production environments, supporting agile development principles.
This is part of our series of articles about Docker containers.
What Are the Benefits of Containers?
Containers provide a highly effective way to deploy applications and services at scale on any hardware. Applications or services running as containers use a small fraction of the resources on the host (enabling a large number of containers to run on one host). They are well isolated, so they don’t interfere with each other or directly affect the host’s operations.
Here are the main benefits of containers compared to other ways of running software on host infrastructure:
- Lightweight—because containers share the host operating system's kernel, there is no need to run a complete operating system instance for each application, reducing the size of container files and the resources needed. Containers start quickly, are torn down easily, and are easy to scale horizontally, meaning they can better support cloud-native applications.
- Portability and platform independence—containers have all their dependencies inside. This means that the same software can be created once and run consistently on laptops, on-premises hardware, or in the cloud, with no reconfiguration required.
- Support for modern architectures—containers can be constructed from a simple configuration file and have a high level of portability and consistency. This makes them highly suitable for DevOps, microservices architectures, and serverless computing, in which software is built from small components that are iteratively developed.
- Increased utilization—containers allow developers and operators to increase CPU and memory utilization on physical machines. Containers allow granular deployment and scaling of application components, which can support microservices design patterns.
How Containers Work
Containers contain the components required to run an application, including application files, libraries, dependencies, and environment variables. The host operating system controls each container's access to computing resources (i.e., storage, memory, CPU) to ensure that no container consumes all of the host's resources.
A container image file is a static, complete, executable package of a service or application. Different technologies use different image formats. A Docker image comprises several layers, starting with a base image that contains the dependencies needed to execute the container's code. The image layers are read-only; when a container starts, Docker adds a thin writable container layer on top. Because each container gets its own writable layer, the underlying image layers are reusable—developers can share them across many containers.
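As a sketch, a simple Dockerfile shows how each instruction contributes a layer (the base image, file names, and command here are hypothetical choices for illustration):

```dockerfile
# Base image layer: a minimal Python runtime (hypothetical choice)
FROM python:3.12-slim

WORKDIR /app

# Each of the following instructions adds a new read-only layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .

# CMD records metadata only; it does not add a filesystem layer
CMD ["python", "app.py"]
```

When a container starts from this image, Docker adds a writable layer on top; the read-only layers below can be shared by every container created from the same image.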
Another image format is defined by the Open Container Initiative (OCI)—an OCI image includes a configuration, several file-system layers, and a manifest. The OCI maintains two specifications: an image specification and a container runtime specification. The image specification provides the information needed to build and package a service or application as an OCI image, including its manifest, configuration, and file-system layers. The runtime specification defines how to unpack such an image into a file-system bundle on disk and run it as a container.
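To make this concrete, a minimal OCI image manifest looks roughly like the following (the digests and sizes are placeholder values):

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "config": {
    "mediaType": "application/vnd.oci.image.config.v1+json",
    "digest": "sha256:<config-digest>",
    "size": 1470
  },
  "layers": [
    {
      "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
      "digest": "sha256:<layer-digest>",
      "size": 32654
    }
  ]
}
```

The manifest points to a configuration object and an ordered list of file-system layers; a runtime resolves these into a bundle it can execute.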
A container engine executes the container images. Most organizations use container orchestration or scheduling solutions like Kubernetes to manage their container deployments. Containers are highly portable because every image contains the dependencies required to execute the code stored in the appropriate container.
The main advantage of containerization is that users can execute a container image on a cloud instance for testing and then deploy it on an on-premises production server. The application performs correctly in both environments without requiring changes to the code within a container.
What Is Docker?
Docker is a platform that lets you develop, ship, and run applications. You can use Docker to separate applications from the infrastructure to deliver software more quickly. It enables you to manage the infrastructure similarly to how you manage applications. You can leverage Docker’s methodologies for testing, shipping, and deploying code to reduce delays between writing and running code in production.
Docker helps you package and run applications in a container, a loosely isolated environment. This isolation ensures you can run many containers simultaneously on one host. A container is lightweight and contains everything required to run your application. There is no need to rely on components installed on the host.
Related content: Read our guide to Docker alternatives
What Is the Difference Between Docker Image and Docker Container?
A container is an isolated environment that runs applications. The container does not affect other system elements, and the system does not impact the application. An image is a read-only template that includes instructions to create a container. A Docker image can create containers that run on the Docker platform.
A container is created by adding a thin writable layer on top of an image's read-only layers. Images and containers therefore serve related but distinct purposes: an image is a static snapshot of the environment, while a container is a running instance of that image executing your software.
Related content: Read our guide to Docker architecture
What Are Windows Containers?
In the past, Docker Toolbox, a variant of Docker for Windows, ran a VirtualBox instance with a Linux operating system on top of it. This allowed Windows developers to test containers before deploying them to production Linux servers.
Microsoft adopted container technology, enabling containers to run natively on Windows 10 and Windows Server. Microsoft and Docker worked together to build a native Docker for Windows variant. Support for Kubernetes and Docker Swarm followed shortly after.
It is now possible to create and run native Windows and Linux containers on Windows 10 devices. You can also deploy and orchestrate these on Windows servers or Linux servers if you use Linux containers.
What Are Container Runtimes?
Containers are lightweight, isolated environments that package an application with its dependencies. They require a container runtime (which typically comes with the container engine) to unpack the container image file and translate it into a process that can run on the host.
Various container runtimes are available. Ideally, you should choose a runtime compatible with the container engine of your choice. Here are key container runtimes to consider:
- containerd—this container runtime manages the container lifecycle on a host, which can be a physical or virtual machine (VM). containerd is a daemon process that can create, start, stop, and destroy containers. It can also pull container images from registries, enable networking for a container, and mount storage.
- LXC—this Linux container runtime consists of templates, tools, and language and library bindings. LXC is low-level, highly flexible, and covers all containment features supported by the upstream kernel.
- CRI-O—this is an implementation of the Kubernetes Container Runtime Interface (CRI) that enables you to use Open Container Initiative (OCI)-compatible runtimes. CRI-O offers a lightweight alternative to using Docker as a runtime for Kubernetes. It lets Kubernetes use any OCI-compliant runtime to run pods. CRI-O supports runc and Kata as container runtimes, but you can plug in any OCI-conformant runtime.
- Kata—a Kata container improves the isolation and security of container workloads. It offers the benefits of a hypervisor, including enhanced security, alongside the container orchestration functionality provided by Kubernetes. Unlike the runc runtime, the Kata runtime uses a hypervisor for isolation: it spawns each container inside a lightweight VM.
Learn more in our detailed guide to container runtimes
Containers vs. Virtual Machines
A virtual machine (VM) is an environment created on a physical hardware system that acts as a virtual computer system with its own CPU, memory, network interfaces, and storage. It runs a “guest operating system” on top of the “host operating system” installed directly on the host machine.
Containerization and virtualization are similar in that both let applications run in isolated environments, decoupled from the underlying hardware. The main differences are size and portability:
- VMs are typically measured in gigabytes. Each VM has its own operating system, which can perform multiple resource-intensive functions at once. Because more resources are available on the VM, it can abstract, partition, clone, and emulate servers, operating systems, desktops, databases, and networks.
- Containers are much smaller, typically measured in megabytes, and they package only specific applications, their dependencies, and the minimal execution environment they require. A container typically runs one or a few applications and does not attempt to emulate or replicate an entire server.
Learn more in our guide to Docker vs. virtual machines
Containers and Kubernetes
Kubernetes is an open source container orchestration platform. It enables you to unify a cluster of machines into a single pool of computing resources and organize applications into groups of containers. Kubernetes relies on a container runtime, such as containerd or CRI-O, to run the containers, ensuring your application runs as intended.
Here are key features of Kubernetes:
- Compute scheduling—Kubernetes automatically considers the resource needs of containers to find a suitable place to run them.
- Self-healing—when a container crashes, Kubernetes creates a new one to replace it.
- Horizontal scaling—Kubernetes can observe CPU or custom metrics and add or remove instances according to actual needs.
- Volume management—Kubernetes can manage your application’s persistent storage.
- Service discovery and load balancing—Kubernetes can expose a group of containers under a single DNS name and IP address and load balance traffic across the instances.
- Automated rollouts and rollbacks—Kubernetes monitors the health of new instances during updates. The platform can automatically roll back to a previous version if a failure occurs.
- Secret and configuration management—Kubernetes can manage secrets and application configuration.
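Several of these features come together in a basic Deployment and Service manifest. The sketch below is illustrative; the application name, image tag, port, and resource values are assumptions, not part of the original article:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                      # hypothetical application name
spec:
  replicas: 3                    # horizontal scaling: three instances
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25      # hypothetical image tag
          resources:
            requests:            # compute scheduling: declared resource needs
              cpu: 100m
              memory: 128Mi
          livenessProbe:         # self-healing: restart on failed health checks
            httpGet:
              path: /
              port: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                     # service discovery and load balancing
  ports:
    - port: 80
```

The `replicas` field drives self-healing and scaling, the resource requests feed compute scheduling, and the Service provides one stable DNS name and IP address in front of all instances.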
Containers serve as the foundation of modern, cloud-native applications. Docker offers the tools needed to create container images easily, and Kubernetes provides a platform that runs them at scale.
Best Practices for Building Container Images
Use the following best practices when writing Dockerfiles to build images:
- Ephemeral—you should build containers as ephemeral entities that can be stopped or deleted at any moment. This lets you replace a container with a new one built from the Dockerfile with minimal configuration and setup.
- dockerignore—a .dockerignore file can help you reduce image size and build time by excluding unnecessary files from the build context. By default, the build context includes the recursive contents of the directory in which the Dockerfile resides, and .dockerignore lets you specify files that should not be included.
- Size—you should reduce image file sizes to minimize the attack surface. However, you do need to keep Dockerfiles readable. You can apply a multi-stage build (available only for Docker 17.05 or higher) or a builder pattern.
- Multi-stage build—this build lets you use multiple FROM statements within a single Dockerfile. It enables you to selectively copy artifacts from one stage to another, leaving behind anything unneeded in the final image. You can use it to reduce image file sizes without maintaining separate Dockerfiles and custom scripts for a builder pattern.
- Packages—never install unnecessary packages when you build images.
- Commands—avoid unnecessary RUN instructions. When possible, chain related commands into a single multi-line RUN instruction (for example, when you need to install a list of packages) to reduce the number of image layers and make better use of the build cache.
- Sorting—you should sort all multi-line lists of packages into alphanumeric order. This helps you identify duplicates and makes the list easier to update and review.
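Several of the practices above, including multi-stage builds, chained RUN commands, and sorted package lists, can be sketched in a single Dockerfile. The base images, paths, and package names below are illustrative assumptions:

```dockerfile
# Stage 1: build with the full toolchain (discarded from the final image)
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
# Hypothetical application path; CGO disabled for a static binary
RUN CGO_ENABLED=0 go build -o /bin/server ./cmd/server

# Stage 2: copy only the build artifact into a minimal final image
FROM alpine:3.19
# One chained RUN with a sorted, multi-line package list
RUN apk add --no-cache \
    ca-certificates \
    tzdata
COPY --from=build /bin/server /usr/local/bin/server
ENTRYPOINT ["server"]
```

Only the second stage ends up in the shipped image, so the Go toolchain and source tree never inflate its size or attack surface.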
Learn more in our detailed guide to container images
Best Practices for Container Security
Container security is a process that includes various steps. It covers container building, content and configuration assessment, runtime assessment, and risk analysis. Here are key security best practices for containers:
- Prefer slim containers—you can minimize the application’s attack surface by removing unnecessary components.
- Use only trusted base images—the CI/CD process should only use base images that have previously been scanned and tested for reliability.
- Harden the host operating system—you should configure the host properly according to CIS benchmarks, for example with a hardening script. You can host containers on a lightweight, container-focused Linux distribution such as CoreOS or Red Hat Enterprise Linux Atomic Host.
- Limit permissions—you should never run a privileged container, because it allows malicious users to take over the host system, threatening your entire infrastructure.
- Manage secrets—secrets include database credentials, SSL keys, encryption keys, and API keys. You must manage secrets centrally so they are never baked into images or exposed in source code.
- Run source code tests—software composition analysis (SCA) and static application security testing (SAST) tools have evolved to support DevOps and automation. They are integral to container security, helping you track open source software, license restrictions, and code vulnerabilities.
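A few of these practices, slim base images and dropping root privileges in particular, can be expressed directly in a Dockerfile. This is a sketch; the user, group, and binary names are hypothetical:

```dockerfile
# Slim base image keeps the attack surface small
FROM alpine:3.19

# Create an unprivileged user and group instead of running as root
RUN addgroup -S app && adduser -S -G app app

# Hypothetical prebuilt binary, owned by the unprivileged user
COPY --chown=app:app ./myservice /usr/local/bin/myservice

# Drop privileges: the container process runs as "app", not root
USER app
ENTRYPOINT ["myservice"]
```

Combine this with running the container without the `--privileged` flag and sourcing secrets from an external secrets manager at runtime rather than baking them into the image.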