Container Runtime Interface (CRI): Past, Present, and Future

Learn why the Container Runtime Interface was needed, how container runtimes have evolved, how CRI is used today, and how Docker’s lack of support for CRI will impact your projects.

September 9, 2021

What Is the Container Runtime Interface?

The Container Runtime Interface (CRI) is a plugin interface that lets the kubelet—an agent that runs on every node in a Kubernetes cluster—use more than one type of container runtime. Container runtimes are a foundational component of a modern containerized architecture.
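
To make this concrete, here is an illustrative Go sketch of the kinds of operations CRI defines. The method names below match real CRI RPCs, but the signatures and types are heavily simplified stand-ins for the actual gRPC service definitions in the Kubernetes cri-api repository:

    // Illustrative subset of the two gRPC services CRI defines.
    // Signatures are simplified; the real API uses protobuf
    // request/response types with many more fields and methods.
    package cri

    import "context"

    // RuntimeService manages pod sandboxes and their containers.
    type RuntimeService interface {
        RunPodSandbox(ctx context.Context, config *PodSandboxConfig) (id string, err error)
        CreateContainer(ctx context.Context, sandboxID string, config *ContainerConfig) (id string, err error)
        StartContainer(ctx context.Context, containerID string) error
        StopContainer(ctx context.Context, containerID string, timeoutSeconds int64) error
        RemoveContainer(ctx context.Context, containerID string) error
    }

    // ImageService manages the images that containers run from.
    type ImageService interface {
        PullImage(ctx context.Context, image string) (ref string, err error)
        ListImages(ctx context.Context) ([]Image, error)
        RemoveImage(ctx context.Context, image string) error
    }

    // Simplified stand-ins for the much richer protobuf types.
    type PodSandboxConfig struct{ Name, Namespace string }
    type ContainerConfig struct{ Name, Image string }
    type Image struct {
        ID       string
        RepoTags []string
    }

Any runtime that implements these services (plus the rest of the API) can be plugged in beneath the kubelet without changes to Kubernetes itself.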

CRI was first introduced in Kubernetes v1.5. Prior to the introduction of the CRI, rkt and Docker were directly integrated into the source code of the kubelet. This made it difficult to integrate new container runtimes with Kubernetes. CRI enables Kubernetes users to easily make use of multiple container runtimes, and enables developers of container runtimes to easily integrate them with the Kubernetes ecosystem.


A Brief History of Container Runtimes

Docker and Kubernetes have gained huge popularity over the past few years, but the concept of containerization is not new. Here is a brief history of the evolution of container runtime technology, leading up to the introduction of the CRI standard:

2008: cgroups added to Linux

cgroups were introduced into the Linux operating system, and a project called Linux Containers (LXC) used cgroups and namespaces to create an isolated environment for running Linux applications. At roughly the same time, Google began a parallel containerization project called LMCTFY.

2013: Release of Docker

Docker was released, initially built on top of LXC. Its main innovation was the ability to easily define container images, which allowed users to package containers and move them consistently between machines.

2015: Kubernetes released and CNCF takes off

  • Kubernetes version 1.0 was released, and the Cloud Native Computing Foundation (CNCF) was founded to promote container and serverless technology. Google donated the Kubernetes project to the CNCF. 
  • Just as Kubernetes was taking off, the Open Container Initiative (OCI) was founded, with the goal of creating a governance structure for the burgeoning container ecosystem. The OCI created a standard specification for containers, known as the OCI Runtime Specification. 
  • A new tool called runc was built in line with the OCI specifications. It became a standard component that interprets the OCI Runtime Specification and makes it possible to run containers. runc is a low-level component used throughout the container ecosystem, for example by the popular container runtimes Docker, CRI-O, and Kata Containers.

Why Does Kubernetes Need CRI?

To understand the need for CRI in Kubernetes, let’s start with a few basic concepts:

  • kubelet—the kubelet is a daemon that runs on every Kubernetes node. It implements the pod and node APIs that drive most of the activity within Kubernetes. 
  • Pods—a pod is the smallest unit of reference within Kubernetes. Each pod runs one or more containers, which together form a single functional unit.
  • Pod specs—the kubelet reads pod specs, usually defined in YAML configuration files. The pod specs say which container images the pod should run, but provide no details as to how the containers should run—for this, Kubernetes needs a container runtime (see the sketch after this list).
  • Container runtime—a Kubernetes node must have a container runtime installed. When the kubelet wants to process pod specs, it needs a container runtime to create the actual containers. The runtime is then responsible for managing the container lifecycle and communicating with the operating system kernel.
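
To see how little a pod spec says about container execution, here is a minimal sketch using the Kubernetes Go API types; the pod name and the nginx image are arbitrary choices for illustration:

    // Build a minimal pod spec with the Kubernetes Go API types and
    // print it as the familiar YAML. Note that it names an image but
    // says nothing about how the container should be created -- that
    // is the container runtime's job.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        pod := corev1.Pod{
            TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
            ObjectMeta: metav1.ObjectMeta{Name: "web"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{
                    {Name: "web", Image: "nginx:1.21"}, // which image, not how to run it
                },
            },
        }
        out, err := yaml.Marshal(pod)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out))
    }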

Related content: Read our guide to Kubernetes architecture ›

In the early days of Kubernetes, the only container runtime was Docker. A bit later, Kubernetes introduced rkt as an additional option. However, Kubernetes developers quickly realized that this was problematic:

  • Tightly coupling Kubernetes to specific container engines could break Kubernetes, as container runtimes and Kubernetes itself evolved.
  • It would be difficult to integrate new container engines with Kubernetes, because this requires a deep understanding of Kubernetes internals. This would create an effective monopoly on container runtimes within Kubernetes.

The solution was clear: create a standard interface that would allow Kubernetes—via the kubelet—to interact with any container runtime. This would let users switch container runtimes easily or combine several, and would encourage the development of new container engines.

In 2016, Kubernetes introduced the Container Runtime Interface (CRI). From that point onward, the kubelet no longer talks directly to any specific container runtime. Instead, it communicates with a “shim”, similar to a software driver, which implements the specific details of the container engine.
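
For illustration, here is a minimal Go sketch of that interaction: it dials a CRI runtime’s gRPC socket and asks for its version, roughly as the kubelet does at startup. The socket path is an assumption based on containerd’s default; CRI-O typically listens at /var/run/crio/crio.sock instead:

    // Connect to a CRI-compliant runtime over its gRPC socket and
    // query its version -- a minimal sketch of kubelet-style CRI use.
    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // containerd's default CRI endpoint (assumption); swap in
        // unix:///var/run/crio/crio.sock for CRI-O.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        version, err := client.Version(ctx, &runtimeapi.VersionRequest{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("runtime: %s %s (CRI %s)\n",
            version.RuntimeName, version.RuntimeVersion, version.Version)
    }

Pointing the same code at a different runtime’s socket is all it takes to switch runtimes, which is exactly the flexibility CRI was designed to provide.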

What are runc and the Open Container Initiative (OCI)?

The Open Container Initiative (OCI) provides a set of industry specifications that standardize container image formats and container runtimes. CRI only supports container runtimes that are compliant with the Open Container Initiative.

The OCI provides specifications that container runtime engines must implement. Two important building blocks of this effort are:

  • runc—a seed container runtime engine and the reference implementation of the OCI runtime specification. The majority of modern container runtime environments use runc and develop additional functionality around this seed engine.
  • OCI image specification—OCI adopted the original Docker image format as the basis for the OCI image specification. The majority of open source build tools support this format, including BuildKit, Podman, and Buildah. Container runtimes that implement the OCI runtime specification can unbundle OCI images and run their contents as containers (see the sketch after this list).
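
To show what the runtime side of these specifications looks like in practice, here is a minimal sketch using the official Go types from the OCI runtime-spec project. It produces the config.json that a runtime like runc consumes from a container bundle directory; the field values are illustrative:

    // Build a minimal OCI runtime config.json using the official Go
    // types from github.com/opencontainers/runtime-spec. runc reads
    // a file like this from a bundle directory when it creates a
    // container.
    package main

    import (
        "encoding/json"
        "fmt"

        specs "github.com/opencontainers/runtime-spec/specs-go"
    )

    func main() {
        spec := specs.Spec{
            Version:  specs.Version, // the OCI runtime spec version targeted
            Root:     &specs.Root{Path: "rootfs", Readonly: true},
            Hostname: "demo",
            Process: &specs.Process{
                Args: []string{"/bin/sh"}, // illustrative entrypoint
                Cwd:  "/",
            },
        }
        out, err := json.MarshalIndent(spec, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out)) // the contents of config.json
    }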

Related content: Read our guide to container images ›

Which Container Runtime Engines Support CRI?

The following are the most common container runtime environments that support CRI, and thus can be used within Kubernetes, along with their support in managed Kubernetes platforms and their pros and cons.

  • containerd
    Support in Kubernetes platforms: Google Kubernetes Engine, IBM Kubernetes Service, Alibaba
    Pros: Tested at huge scale, used in all Docker containers; uses less memory and CPU than Docker; supports Linux and Windows.
    Cons: No Docker API socket; lacks Docker’s convenient CLI tools.

  • CRI-O
    Support in Kubernetes platforms: Red Hat OpenShift, SUSE Container as a Service
    Pros: Lightweight, with all the features needed by Kubernetes and no more; UNIX-like separation of concerns (client, registry, build).
    Cons: Mainly used within Red Hat platforms; not easy to install on non-Red Hat operating systems; only supported in Windows Server 2019 and later.

  • Kata Containers
    Support in Kubernetes platforms: OpenStack
    Pros: Provides full virtualization based on QEMU; improved security; integrates with Docker, CRI-O, containerd, and Firecracker; supports ARM, x86_64, AMD64.
    Cons: Higher resource utilization; not suitable for lightweight container use cases.

  • AWS Firecracker
    Support in Kubernetes platforms: All AWS services
    Pros: Accessible via a direct API or containerd; tight kernel access using a seccomp jailer.
    Cons: New project, less mature than other runtimes; requires more manual steps, and the developer experience is still in flux.

Does Docker Support CRI?

The short answer is no. In the past, Kubernetes included a bridge called dockershim, which enabled Docker to work with CRI. From v1.20 onwards, dockershim is deprecated and no longer maintained, meaning that Docker as a container runtime is deprecated in Kubernetes. Kubernetes plans to remove support for dockershim entirely in an upcoming version (the removal ultimately shipped in Kubernetes v1.24).

However, Docker images will continue to work in Kubernetes, because they are based on the OCI image specification.
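
As a small illustration, the following sketch pulls an image through the CRI ImageService, the same path the kubelet uses; any OCI-format image works, including images originally built with Docker. The socket path and image reference are assumptions for the example:

    // Pull an image through the CRI ImageService. Because the image
    // follows the OCI image format, containerd (or CRI-O) can pull
    // and run it with no Docker involvement.
    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
        defer cancel()

        images := runtimeapi.NewImageServiceClient(conn)
        resp, err := images.PullImage(ctx, &runtimeapi.PullImageRequest{
            // A Docker-built image on Docker Hub, referenced by its
            // fully qualified name (illustrative).
            Image: &runtimeapi.ImageSpec{Image: "docker.io/library/nginx:1.21"},
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled image ref:", resp.ImageRef)
    }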

Here is what the deprecation of Docker in Kubernetes means for you, depending on your use case:

  • Kubernetes end-users do not need to change their environment, and can continue using Docker in their development processes. However, developers should realize that the images they create will run within Kubernetes using other container runtimes, not Docker.
  • Users of managed Kubernetes services like Google Kubernetes Engine (GKE) or Elastic Kubernetes Service (EKS) need to ensure worker nodes are running a supported container runtime (i.e. not Docker). Customized nodes may need to be updated.
  • Administrators managing clusters on their own infrastructure must install a different container runtime on their nodes (if they are currently running Docker) to prevent clusters from breaking when Docker support is removed. Kubernetes nodes should run another CRI-compliant container runtime, such as containerd or CRI-O.

Container Runtime Security with Aqua

When it comes to workload protection at runtime, prevention and detection are not enough. True runtime security means stopping attacks in progress, with enforcement that happens after the workload has started, not just policy controls applied before it starts. Why does this matter? Because if you believe you are stopping attacks in production, but all you are doing is applying a pre-deployment policy (with OPA, for example), you are not achieving the intended outcome of protecting against real attacker behavior in cloud native environments. Shift-left is prevention only: important, but just one layer of a true defense-in-depth approach. With Aqua, whether the method is mitigating an exploit or stopping command-and-control behavior, workload security policies are granular and can be applied without downtime, rather than being limited to binary actions that only allow or kill an image.