Containers still mean “Docker” to many people. Docker popularised the modern use of containers in software development and deployment, but these days the ecosystem extends well beyond Docker itself. Here’s how containerd, Docker and Kubernetes relate to each other.

The Beginnings

At its release in 2013, Docker was a self-contained project with everything you needed to build and run containers. What it lacked was an easy way to orchestrate container deployments in the cloud.

By the end of 2013, a group of Googlers were already addressing this with a prototype of what would become Kubernetes. Kubernetes is intended to simplify the operation of containerised workloads across large fleets of machines.

Back in those early days, Kubernetes was inextricably linked to Docker. It used Docker directly to interact with containers, even though it only needed a subset of functionality – the parts responsible for actually running containers.

Docker’s developer-centric UI got in the way of Kubernetes, which had to bypass the human-friendly parts of Docker using a dedicated shim component, Dockershim. The issues were compounded by the differing directions in which Docker and Kubernetes were headed. Docker launched Swarm, its own Kubernetes alternative, offering orchestration as a built-in Docker “mode”.

The Rise of Containerd

As Kubernetes grew and more third-party tools arose around Docker, the limitations of Docker’s architecture became clear. At the same time, the Open Container Initiative (OCI) began standardising container formats and runtimes. This resulted in OCI specifications defining containers in a way that multiple runtimes can implement, Docker among them.

Docker then extracted its container runtime into a new project, containerd. Containerd took on Docker’s functionality for executing containers, handling low-level storage and managing image transfers. It was donated to the Cloud Native Computing Foundation (CNCF) to give the container community a shared basis for creating new container solutions.

The emergence of containerd makes it easier for projects like Kubernetes to access the low-level “Docker” elements they need. Instead of actually using Docker, they now have a more accessible interface to the container runtime. The OCI standardisation of container technologies means other runtimes can be used too.

Understanding Containerd’s Role

To fully understand containerd, it’s necessary to look at the nature of containers. Containers are really an abstraction over various Linux kernel features, such as namespaces and cgroups. In order to run a container, something has to make the right syscalls to set up that isolated environment, and the details vary by platform and distribution.
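
You can glimpse this kernel-level machinery without any container tooling at all. As a rough illustration (flag availability depends on your util-linux version, and root privileges are assumed), the unshare utility starts an ordinary process inside fresh kernel namespaces:

    # start a shell in new PID and mount namespaces
    sudo unshare --fork --pid --mount-proc /bin/bash
    # inside, the shell believes it is PID 1 and sees only its own processes
    ps aux

Container runtimes automate exactly this kind of setup, along with cgroups, filesystem layering and networking.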

Containerd drops in to abstract this low-level wiring. It’s intended as a “client layer” that container software then builds on top of. This might be developer-oriented software, like Docker, or cloud-oriented devops tools such as Kubernetes.

Previously, Kubernetes development was left with two bad options: keep writing shims around the hefty Docker interface, or start interacting with Linux kernel features directly. By breaking containerd out of Docker, a third alternative became available: use containerd as a system abstraction layer, without involving Docker.
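
To see what that third option looks like in practice, containerd ships with a bare-bones CLI called ctr. A minimal sketch (the image and the container ID “demo” are arbitrary examples) of pulling and running an image against a local containerd daemon, with no Docker involved:

    # pull an OCI image through containerd, bypassing Docker entirely
    sudo ctr image pull docker.io/library/hello-world:latest
    # create and start a container from it, removing it when it exits
    sudo ctr run --rm docker.io/library/hello-world:latest demo

Tools like Kubernetes use containerd’s API rather than this CLI, but the division of labour is the same: containerd handles images and container execution, while the higher-level tool decides what to run and where.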

Here’s a summary of how the three technologies combine:

Docker – Developer-oriented software with a high-level interface that lets you easily build and run containers from your terminal. It now uses containerd as its container runtime.
Containerd – An abstraction of kernel features that provides a relatively high-level container interface. Other software projects can use it to run containers and manage container images.
Kubernetes – A container orchestrator that works with multiple container runtimes, including containerd. Kubernetes is focused on deploying containers in aggregate across one or more physical “nodes.” Historically, Kubernetes was tied to Docker.

Containerd is only one container backend. Other container runtimes built around the OCI specifications include runC and CRI-O. These runtimes can also be used with Docker and Kubernetes; each has its own focus and trade-offs.

The OCI

The OCI is the body responsible for defining container standards. Its work has been instrumental in facilitating the interoperability between different component technologies.

The OCI’s image specification defines what a container should look like. The runtime specification sets out an interface for running containers. Projects like containerd then implement these specifications.
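
To make the runtime specification concrete, it describes an OCI “bundle”: a root filesystem directory plus a config.json describing how to run it. A rough sketch of running a bundle directly with runC (the busybox image and directory names are placeholder choices), adapted from the usual runC workflow:

    # lay out an OCI bundle: a rootfs directory plus a config.json
    mkdir -p mycontainer/rootfs
    cd mycontainer
    # populate rootfs with a filesystem, e.g. by exporting a busybox container
    docker export $(docker create busybox) | tar -C rootfs -xf -
    # generate a default OCI runtime config.json
    runc spec
    # execute the bundle directly with runC; no Docker or containerd in the path
    sudo runc run mycontainerid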

Importantly, one of the OCI’s priorities is to support the container usage experience popularised by Docker. Its images must be executable on the target platform without any user-defined arguments (e.g. docker run hello-world:latest). OCI images must therefore contain sufficient metadata to enable this automatic configuration.
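
You can see this metadata on images you already have locally. As a small example (the exact output depends on your Docker version), inspecting hello-world reveals the default command stored in the image’s configuration, which is what lets it start with no extra arguments:

    # print the default command recorded in the image's config metadata
    docker image inspect hello-world:latest --format '{{json .Config.Cmd}}'
    # typically prints something like ["/hello"]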

You may also see references to the Container Runtime Interface (CRI). This is a Kubernetes-specific abstraction over the OCI specification. The CRI builds on the OCI specs to enable support for interchangeable container runtimes within Kubernetes.
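
In practice, this means a Kubernetes node’s kubelet talks to a CRI endpoint exposed by the runtime, rather than to the Docker daemon. As a sketch (the socket path shown is containerd’s common default, but distributions differ), the CRI debugging tool crictl can address containerd directly, much like docker ps would address Docker:

    # list containers by talking to containerd over its CRI endpoint
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps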

What About My Docker Images?

Images you build with Docker aren’t really “Docker images” at all. As Docker now uses the containerd runtime, your images are built in the standardised Open Container Initiative (OCI) format.

You shouldn’t need to worry about incompatibilities between your Docker images and the environment they’re used in. Images you build with Docker can still be deployed using Kubernetes. This is because Kubernetes also supports OCI images, through its use of containerd (and other standards-compliant runtimes). It’s up to the runtime to handle the pulling and running of images, not the high level interface which tools like Docker and Kubernetes provide.
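
As a minimal sketch of that workflow (the registry address and image name are placeholders), an image built and pushed with the Docker CLI can be deployed straight onto a cluster whose nodes run containerd:

    # build and push an OCI-format image using the Docker CLI
    docker build -t registry.example.com/my-app:1.0 .
    docker push registry.example.com/my-app:1.0
    # deploy it with Kubernetes; each node's runtime (e.g. containerd) pulls and runs it
    kubectl create deployment my-app --image=registry.example.com/my-app:1.0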

Kubernetes and Docker

Kubernetes deprecated the Docker runtime in late 2020. It will be removed in a future release, currently scheduled for late 2021. After that, Kubernetes will no longer offer Docker runtime support. An alternative runtime compatible with the OCI specs, such as containerd, will need to be used instead.

This announcement prompted concern about the implications for developers. The change shouldn’t impact most existing workflows. As we’ve already seen, Docker produces OCI-compliant images which OCI-compliant runtimes can run. Any images you build with docker build will still work within Kubernetes, even after the Docker runtime is removed.

It helps to separate two different technologies here – the Docker command-line interface you use to create and run containers, and the Docker runtime which that interface wraps around. Only the latter is losing Kubernetes support.
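
If you want to check which runtime your cluster’s nodes are using today, kubectl will show it per node (a sketch; the exact columns vary slightly between versions):

    # the CONTAINER-RUNTIME column reports e.g. docker://... or containerd://...
    kubectl get nodes -o wide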

It’s All Too Confusing!

In just a few short years, containers have transformed how many developers work. The expansion in the surrounding ecosystem has been a natural byproduct of this shift. The Containers === Docker mentality proved too stifling, preventing tools like Kubernetes from reaching their full potential.

The move towards standardisation has resulted in a plethora of new terms, tools and technologies. Nonetheless, nothing has really changed for developers, whether you’re interacting with the Docker CLI on your machine or a Kubernetes cluster in the cloud.

Each high-level user-facing interface (such as Docker and Kubernetes) now benefits from a choice of interchangeable low-level container runtimes (like containerd and runC). This enables a greater degree of flexibility and lets new container-based technologies establish themselves in a standards-aligned manner.