What Is a Container Runtime?

A container runtime, also known as a container engine, is a software component that can run containers on a host operating system.

Amit Sheps
September 23, 2021

In a containerized architecture, container runtimes are responsible for loading container images from a repository, monitoring local system resources, isolating system resources for use by a container, and managing the container lifecycle.

Container runtimes commonly work together with container orchestrators. The orchestrator is responsible for managing clusters of containers, taking care of concerns like container scalability, networking, and security. The container engine is responsible for managing the individual containers running on every compute node in the cluster.

Common examples of container runtimes are runC, containerd, Docker, and Windows Containers. There are three main types of container runtimes—low-level runtimes, high-level runtimes, and sandboxed or virtualized runtimes.

In this article:

  • 3 Types of Container Runtimes
  • How Kubernetes Works with Container Engines
  • The Container Runtime Interface (CRI)
  • Container Runtime Security with Aqua

3 Types of Container Runtimes

1. Low-Level Container Runtimes

The Open Container Initiative (OCI) is a Linux Foundation project started by Docker that aims to provide open standards for Linux containers. The main open source project developed under the OCI is runC, released in 2015. runC is a low-level container runtime that implements the OCI runtime specification. It forms the basis for many other container runtime engines.

The OCI publishes a runtime specification. Runtimes implemented according to the OCI spec are called low-level runtimes, because their primary focus is container lifecycle management.

Low-level runtimes are responsible for creating and running containers. Once the containerized process is running, the runtime has little else to do: low-level runtimes are a thin abstraction over Linux primitives such as namespaces and cgroups, and are not designed to perform additional tasks like image management or networking.
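To make this concrete, a low-level runtime consumes an OCI bundle: a root filesystem plus a config.json describing the process to run. Below is a minimal sketch that generates such a config using the OCI runtime-spec Go types; the paths, arguments, and bundle layout are illustrative assumptions, and in practice a command like runc spec generates a much fuller default.

```go
package main

import (
	"encoding/json"
	"log"
	"os"

	specs "github.com/opencontainers/runtime-spec/specs-go"
)

func main() {
	// Minimal OCI runtime spec: the process to run and the root filesystem.
	// A low-level runtime such as runC reads this from <bundle>/config.json.
	spec := specs.Spec{
		Version: specs.Version,
		Root: &specs.Root{
			Path:     "rootfs", // unpacked image filesystem, relative to the bundle (assumed layout)
			Readonly: true,
		},
		Process: &specs.Process{
			Cwd:  "/",
			Args: []string{"/bin/sh", "-c", "echo hello from a container"},
			Env:  []string{"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"},
		},
		Hostname: "demo",
	}

	data, err := json.MarshalIndent(&spec, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	// Writing config.json next to a rootfs/ directory yields an OCI bundle
	// that a command such as "runc run demo" could execute.
	if err := os.WriteFile("config.json", data, 0o644); err != nil {
		log.Fatal(err)
	}
}
```

A real spec would also declare Linux namespaces, mounts, and cgroup limits; the point is that the low-level runtime's entire job is described by this one file plus the root filesystem.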

The most popular low-level runtimes include:

  • runC—created by Docker and donated to the OCI. It is now the de-facto standard low-level container runtime. runC is written in Go and is maintained under moby, Docker’s open source project.
  • crun—an OCI runtime implementation led by Red Hat. crun is written in C. It is designed to be lightweight and performant, and was among the first runtimes to support cgroups v2.
  • containerd—an open-source daemon for Linux and Windows that manages the container lifecycle through API requests. Strictly speaking, containerd sits a level above runC: it handles image pull, storage, and unpacking, then delegates container execution to an OCI runtime (runC by default). Its API adds a layer of abstraction and enhances container portability (see the client sketch after this list).
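As a hedged illustration of this division of labor, the sketch below drives containerd through its Go client library (containerd 1.x API): the daemon pulls and unpacks the image, then calls down to runC to create and start the task. The socket path, namespace, image reference, and container ID are assumptions made for the example.

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the containerd daemon over its default socket (assumed path).
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// containerd scopes resources by namespace; "example" is an arbitrary choice.
	ctx := namespaces.WithNamespace(context.Background(), "example")

	// Pull and unpack an image; image handling is the higher-level part of the job.
	image, err := client.Pull(ctx, "docker.io/library/alpine:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Create a container backed by a new snapshot and an OCI spec derived from
	// the image config; execution is delegated to the OCI runtime underneath.
	container, err := client.NewContainer(ctx, "demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("demo-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image), oci.WithProcessArgs("echo", "hello")),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// A task is the running instance of the container.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	status := <-exitCh
	log.Printf("container exited with status %d", status.ExitCode())
}
```

Note that image handling happens inside containerd, while the final Start call is ultimately serviced by the low-level OCI runtime.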

2. High-Level Container Runtimes

High-level runtimes sit above the low-level runtimes: they handle image transport, storage, and unpacking, and delegate the actual creation and execution of containers to a low-level, OCI-compliant runtime. Examples of popular high-level runtimes include:

  • Docker (built on containerd)—the leading container platform, offering a full suite of features with free and paid options, including image specifications, a command-line interface (CLI), and a container image-building service. It has historically been the default Kubernetes container runtime.
  • CRI-O—an open-source implementation of Kubernetes’ container runtime interface (CRI), offering a lightweight alternative to rkt and Docker. It lets you run pods using OCI-compatible runtimes, with support primarily for runC and Kata Containers (though you can plug in any OCI-compatible runtime).
  • Windows Containers and Hyper-V Containers—two lightweight alternatives to Windows virtual machines (VMs), available on Windows Server. Windows Containers provide abstraction (similar to Docker), while Hyper-V Containers provide virtualization. Hyper-V containers are more portable because each has its own kernel, so you can run applications that would be incompatible with the host system.

3. Sandboxed and Virtualized Container Runtimes

Sandboxed and virtualized runtimes also implement the OCI runtime specification, but add a stronger isolation boundary between the containerized process and the host:

  • Sandboxed runtimes—provide increased isolation between the containerized process and the host, as they don’t share a kernel. The process runs on a unikernel or kernel proxy layer, which interacts with the host kernel, thus reducing the attack surface. Examples include gVisor and nabla-containers. 
  • Virtualized runtimes—provide increased host isolation by running the containerized process inside a lightweight virtual machine rather than directly on the host kernel. This can make the process slower compared to a native runtime. Examples include Kata Containers and the now-deprecated Clear Containers and runV projects, which merged to form Kata Containers.

Related content: read our guide to leading container engines (coming soon)

How Kubernetes Works with Container Engines

Container orchestrators like Kubernetes are responsible for managing and scaling containerized workloads. In Kubernetes, the kubelet is an agent that runs on every computing node. It receives commands specifying what containers should be running, and relays them to a container runtime on the node. It also collects information from the container runtime about currently running containers, and passes it back to the Kubernetes control plane.

The kubelet communicates with the container engine through the standard Container Runtime Interface (CRI), described in the next section.

Related content: read our guide to Kubernetes architecture

When Kubernetes works with a container engine, the engine’s central responsibility is to give the orchestrator a way to monitor and control the containers that are currently running. The engine handles:

  • Verifying and loading container images
  • Monitoring system resources
  • Isolating and allocating resources
  • Container lifecycle management

To carry out these activities, the engine draws on the resources required to run a container, coordinating them through additional standardized interfaces, including:

  • Container Storage Interface (CSI)—defines how containers access storage systems
  • Container Networking Interface (CNI)—specifies how containers communicate over a network

The Container Runtime Interface (CRI)

To deal with the increasing difficulty of incorporating multiple runtimes into Kubernetes, the community specified an interface—the particular functions a container runtime must implement on behalf of Kubernetes—named the Container Runtime Interface (CRI).

This solved the problem of container runtimes being extensively integrated into the Kubernetes codebase, which had become difficult to maintain and made it harder to develop new container runtimes that support Kubernetes.

The CRI also makes it clear to developers of container runtimes which functions they need to support to work with Kubernetes. The primary functions are listed below, followed by a minimal client sketch:

  • The runtime needs to be capable of starting/stopping pods
  • The runtime must deal with all container operations within pods—start, pause, stop, delete, kill
  • The runtime should handle images and be able to retrieve them from a container registry
  • The runtime should provide helper and utility functions around metrics collection and logs
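To give a rough sense of what the kubelet does over this interface, the sketch below is a small CRI client in Go: it dials a CRI endpoint over a local Unix socket and issues two of the calls listed above. The socket path (containerd’s default) and the use of the v1 CRI API are assumptions about the node; the kubelet performs the same kind of gRPC calls, only many more of them.

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// The CRI endpoint is a local Unix socket; this path is containerd's
	// default and is an assumption about the node's configuration.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Ask the runtime which CRI version it implements.
	version, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("runtime: %s %s", version.RuntimeName, version.RuntimeVersion)

	// List the containers the runtime is currently managing; the kubelet
	// uses the same call to reconcile desired and actual state.
	containers, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers.Containers {
		log.Printf("container %s state=%s", c.Id, c.State)
	}
}
```

Tools such as crictl wrap this same API, which is useful for inspecting a node’s runtime without going through the kubelet.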

Learn more in our detailed guide to the Container Runtime Interface (CRI)

Container Runtime Security with Aqua

Aqua’s container runtime security controls protect workloads from attack using a combination of system integrity protection, application control, behavioral monitoring, host-based intrusion prevention and optional anti-malware protection. 

Amit Sheps
Amit is the Director of Technical Product Marketing at Aqua. With an illustrious career spanning renowned companies such as CyberX (acquired by Microsoft) and F5, he has played an instrumental role in fortifying manufacturing floors and telecom networks. Focused on product management and marketing, Amit's expertise lies in the art of transforming applications into cloud-native powerhouses. Amit is an avid runner who relishes the tranquility of early morning runs. You may very well spot him traversing the urban landscape, reveling in the quietude of the city streets before the world awakes.