Containers in Cloud Computing: Enabling Portability, Agility and Automation

Learn about the role of containers in cloud computing, how containers operate in the cloud, and key cloud container challenges and solutions

June 10, 2021

What is the Role of Containers in Cloud Computing?

Containers are a common option for deploying and managing software in the cloud. Containers abstract applications from the physical environment in which they run. A container packages a software component together with all of its dependencies and runs it in an isolated environment. 

With containers, commonly running the Docker container engine, applications deploy consistently in any environment, whether a public cloud, a private cloud, or a bare metal machine. Containerized applications are easier to migrate to the cloud. Containers also make it easier to leverage the extensive automation capabilities of the cloud—they can easily be deployed, cloned or modified using APIs provided by the container engine or orchestrator. 
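
A minimal sketch of this kind of API-driven automation, assuming a local Docker Engine and the docker Python SDK (docker-py); the image name, container name and port mapping are illustrative:

    import docker

    # Connect to the Docker Engine through its API (local socket or DOCKER_HOST).
    client = docker.from_env()

    # Deploy a container from a public image.
    container = client.containers.run(
        "nginx:alpine",          # image to run
        detach=True,             # return immediately instead of streaming output
        name="web-demo",
        ports={"80/tcp": 8080},  # map container port 80 to host port 8080
    )

    # The same API can be used to list, clone (re-run) or modify containers.
    for c in client.containers.list():
        print(c.name, c.status)

    container.stop()
    container.remove()

The same pattern works against a remote engine or through an orchestrator's API, which is what makes containers straightforward to automate at cloud scale.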

Use Cases of Containers in the Cloud

Containers are becoming increasingly important in cloud environments. Many organizations are considering containers as an alternative to virtual machines (VMs), which were traditionally the preferred option for large-scale enterprise workloads. 

The following use cases are especially suitable for running containers in the cloud:

  • Microservices—containers are lightweight, making them well suited for applications with microservices architectures consisting of a large number of loosely coupled, independently deployable services.
  • DevOps—many DevOps teams build applications using a microservices architecture, and deploy services using containers. Containers can also be used to deploy and scale the DevOps infrastructure itself, such as CI/CD tools.
  • Hybrid and multi-cloud—for organizations operating in two or more cloud environments, containers are highly useful for migrating workloads. They are a standardized unit that can be flexibly moved between on-premises data centers and any public cloud.
  • Application modernization—a common way to modernize a legacy application is to containerize it, and move it as is to the cloud (a model known as “lift and shift”).

How Do Cloud Containers Work?

Container technology has its roots in operating system isolation mechanisms such as chroot, later extended in Linux with namespaces and control groups (cgroups). Modern container engines take the form of application containerization (such as Docker) and system containerization (such as LXC, Linux Containers). 

Containers rely on isolation, controlled at the operating system kernel level, to deploy and run applications. Containers share the host's operating system kernel and do not need to run a full operating system—they need only the files, libraries and configuration required to run their workloads. The host operating system limits each container's ability to consume physical resources.

In the cloud, a common pattern is to use containers to run an application instance. This can be an individual microservice, or a backend application such as a database or middleware component. Containers make it possible to run multiple applications on the same cloud VM, while ensuring that problems with one container do not affect other containers, or the entire VM. 
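
To make the resource-limiting aspect concrete, here is a small sketch, again assuming the docker Python SDK, that runs two unrelated workloads on the same host, each capped by the kernel so one container cannot starve the other; the images, names and limits are illustrative:

    import docker

    client = docker.from_env()

    # A database and a web frontend on the same VM. The kernel (via cgroups)
    # enforces the memory and CPU caps, so a misbehaving container cannot
    # starve its neighbor or the VM itself.
    db = client.containers.run(
        "postgres:16",
        detach=True,
        name="demo-db",
        environment={"POSTGRES_PASSWORD": "example"},  # illustrative only
        mem_limit="512m",          # hard memory cap
        nano_cpus=1_000_000_000,   # 1.0 CPU
    )

    web = client.containers.run(
        "nginx:alpine",
        detach=True,
        name="demo-web",
        mem_limit="256m",
        nano_cpus=500_000_000,     # 0.5 CPU
    )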

Cloud providers offer several types of services you can use to run containers in the cloud:

  • Hosted container instances—let you run containers directly on public cloud infrastructure, without a cloud VM as an intermediary. An example is Azure Container Instances (ACI).
  • Containers as a Service (CaaS)—manages containers at scale, typically with limited orchestration capabilities. Examples include Amazon Elastic Container Service (ECS) and AWS Fargate.
  • Kubernetes as a Service (KaaS)—provides Kubernetes, the most popular container orchestrator, as a managed service, letting you deploy clusters of containers on the public cloud. An example is Google Kubernetes Engine (GKE); see the deployment sketch after this list.
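
As an illustration of the KaaS model, the sketch below uses the official Kubernetes Python client to deploy a small workload to an existing managed cluster. It assumes the cluster is already provisioned and reachable through a kubeconfig entry; the context name, image and replica count are illustrative:

    from kubernetes import client, config

    # Authenticate against a managed cluster (e.g. GKE, EKS or AKS) using an
    # existing kubeconfig context; "my-managed-cluster" is a hypothetical name.
    config.load_kube_config(context="my-managed-cluster")

    apps = client.AppsV1Api()

    # A three-replica Deployment of a stateless web container.
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.25",
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]),
            ),
        ),
    )

    apps.create_namespaced_deployment(namespace="default", body=deployment)

The cloud provider operates the Kubernetes control plane; your code only describes the desired state of the workload.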

Related content: read our guide to Docker architecture

Virtual Machines and Containers in a Cloud Environment

In most cloud computing environments, the basic unit used to deploy workloads is a virtual machine (VM). Like containers, VMs are independent computing environments abstracted from the underlying hardware. Unlike containers, a VM requires a full copy of an operating system to run. 

VMs can be used to run guest operating systems different from the host system, so if the host is running Windows, the VM can run Linux, or any other OS. In many technical scenarios, VMs provide improved isolation and security compared to containers.

However, a VM is essentially a standalone machine with its own operating system, so it takes much longer to start up and run than a container. VM images, which are used to create new VMs, are heavier than container images and more difficult to automate.

In the cloud, the most common scenario is running containers on top of compute instances, which are technically virtual machines. Cloud providers are now offering the ability to run containers directly on their bare metal servers, without VMs as an intermediary, a model known as “container instances”. 

Related content: read our guide to Docker vs virtual machine

Bridging Containers and the Cloud: Challenges and Solutions

Migration

Containers can significantly reduce costs, but in traditional computing environments, it can be difficult to transition existing applications to containers. In many organizations, IT staff do not have container experience, and need to be trained or assisted by consultants. Cloud computing on its own raises technical challenges for many operations teams, and containers may add another level of complexity.

Like any technology shift, organizations and technical teams must adapt to cloud native technology. The container ecosystem offers a variety of tools that can make adoption easier, including managed services that emphasize swift onboarding and ease of use.

Container Security

Cloud providers use a shared responsibility model, where the cloud provider is responsible for securing the underlying infrastructure, and customers are responsible for correctly configuring security controls to secure their workloads and data. 

As far as containers are concerned, the cloud provider assumes responsibility for the underlying container hosts and the hypervisor, while the containers themselves, and any persistent storage volumes they use, must be secured by your organization. Securing containers involves addressing several risks:

  • Container images can contain vulnerable software components or malware.
  • The default configuration of container engines like Docker provides extensive privileges. Attackers can leverage the shared kernel to infect other containers, and the host operating system, if containers are not properly locked down.
  • Containers are short-lived, making it more difficult to keep track of them, monitor them, and identify security issues.

Security is crucial during the entire lifecycle of a container. Scan container images to ensure they are safe, use configuration best practices to lock down containers and eliminate unnecessary privileges, and restrict access and network traffic to a minimum. Finally, keep track of running containers using monitoring and security tools that support containerized environments.
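
As one concrete example of locking a container down, the sketch below uses the docker Python SDK with a deliberately minimal privilege profile; the image, placeholder workload and limits are illustrative and should be adapted to what the application actually needs:

    import docker

    client = docker.from_env()

    # Run a container with as few privileges as possible.
    client.containers.run(
        "alpine:3.19",
        command=["sleep", "3600"],               # placeholder workload
        detach=True,
        name="hardened-demo",
        user="1000:1000",                        # non-root UID:GID
        read_only=True,                          # root filesystem mounted read-only
        cap_drop=["ALL"],                        # drop all Linux capabilities
        security_opt=["no-new-privileges:true"], # block privilege escalation
        pids_limit=100,                          # cap the number of processes
        mem_limit="128m",                        # hard memory limit
    )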

Related content: read our guide to Docker security

Container Networking

Container networks can be highly complex, and this complexity can also lead to security issues. In a containerized environment, you cannot rely on traditional networking techniques alone. Container networking uses standards like the Container Network Interface (CNI) and is typically managed using overlay networks, which create isolated, private networks for communication between containers and hosts. 
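
For standalone containers, the sketch below (docker Python SDK, illustrative names) creates a private bridge network with no route to external networks and attaches two containers to it, so they can reach each other by name but not the public Internet; overlay networks apply the same idea across multiple hosts:

    import docker

    client = docker.from_env()

    # "internal=True" creates an isolated network: containers attached to it can
    # communicate with each other but have no outbound route to external networks.
    client.networks.create("backend-demo", driver="bridge", internal=True)

    api = client.containers.run("alpine:3.19", ["sleep", "3600"], detach=True,
                                name="api-demo", network="backend-demo")
    db = client.containers.run("alpine:3.19", ["sleep", "3600"], detach=True,
                               name="db-demo", network="backend-demo")

    # On a user-defined network, containers resolve each other by name,
    # e.g. the api container can reach "db-demo" directly.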

In the cloud, things get even more complicated, because cloud providers have their own networking constructs, such as virtual private clouds (VPCs) and security groups, to control access. When running standalone containers in the cloud, you will need to manage their networking and make sure it aligns with the private networks you have set up within the public cloud. If you get things wrong, you can end up exposing containers to the public Internet.
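
As a hedged sketch of what getting this right can look like on AWS, the example below uses boto3 to replace a security group rule that exposes a container port to the whole Internet with one limited to the VPC's private address range; the group ID, port and CIDR are hypothetical:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    group_id = "sg-0123456789abcdef0"  # hypothetical security group on the container host

    # Remove a rule that exposes the container port to the whole Internet.
    ec2.revoke_security_group_ingress(
        GroupId=group_id,
        IpPermissions=[{
            "IpProtocol": "tcp", "FromPort": 8080, "ToPort": 8080,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    )

    # Allow the port only from inside the VPC's private address range.
    ec2.authorize_security_group_ingress(
        GroupId=group_id,
        IpPermissions=[{
            "IpProtocol": "tcp", "FromPort": 8080, "ToPort": 8080,
            "IpRanges": [{"CidrIp": "10.0.0.0/16", "Description": "VPC-internal only"}],
        }],
    )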

Most organizations solve these concerns by using managed container services, or adopting orchestrators like Kubernetes or Nomad, which have built-in networking management for clusters of containers.

Related content: read our guide to Docker networking