Kubernetes: Why Use It, How It Works, Options and Alternatives

Everything you wanted to know about Kubernetes—use cases, advantages, architecture and components, managed Kubernetes options, and leading K8s alternatives.

November 10, 2020

What Is Kubernetes?

Kubernetes is a powerful open-source orchestration tool, designed to help you manage microservices and containerized applications across a distributed cluster of computing nodes. Kubernetes aims to hide the complexity of managing containers through the use of several key capabilities, such as REST APIs and declarative templates that can manage the entire lifecycle.

Here are notable features of Kubernetes:

  • A highly resilient infrastructure.
  • Zero-downtime deployments and automated rollbacks.
  • Automated scaling of containers.
  • Self-healing capabilities, such as auto-restart, auto-placement, and auto-replication.

Kubernetes was originally developed by Google. It is now supported by the Cloud Native Computing Foundation (CNCF). Kubernetes is portable and can run on any public or private cloud platform, including AWS, Azure, Google Cloud, and OpenStack, as well as on bare metal machines.


Kubernetes Use Cases and Advantages

Here are key Kubernetes use cases:

  • Schedule and run containers on clusters of either physical or virtual machines (VMs).
  • Fully implement and utilize a container-based infrastructure in your production environments. 
  • Automate operational tasks.
  • Create cloud-native applications with Kubernetes as a runtime platform.

Here are key advantages of Kubernetes:

  • Orchestrate your containers across several hosts.
  • Optimize hardware usage to make the most of your resources.
  • Mount and add storage to run stateful apps.
  • Automate and control application deployments and updates.
  • Scale your containerized applications as well as their resources—on the fly.
  • Declaratively manage services to ensure deployed applications run as intended.
  • Perform health checks and enable application self-healing.

Note that Kubernetes relies on other projects in order to fully provide the above orchestration features. To fully realize the power of Kubernetes, most users incorporate other components, including:

  • Registry—you can use open source projects, such as Docker Registry.
  • Networking—notable projects include Cilium and Calico. Learn more in our guide to Kubernetes networking ›
  • Telemetry—notable projects include Prometheus and the Elastic Stack.
  • Security—a wide range of options are available, including LDAP, RBAC, and SELinux, as well as OAuth with multi-tenancy layers.
  • Services—there is a rich catalog of application patterns you can choose from.
  • Package management—the Helm package manager lets you wrap Kubernetes applications as a package and deploy them seamlessly in any cluster.
  • Container Storage Interface (CSI)—a standard interface that allows storage vendors to integrate their products with Kubernetes.
  • Container Networking Interface (CNI)—a framework for dynamically configuring networking resources in Kubernetes clusters.

Kubernetes Architecture and Components

Kubernetes architecture is, at its foundation, a client-server architecture. 

The server side of Kubernetes is known as the control plane. By default, there is a single control plane server that acts as a controlling node and point of contact. This server consists of components including the kube-apiserver, etcd storage, kube-controller-manager, cloud-controller-manager, kube-scheduler, and Kubernetes DNS server. 

The client side of Kubernetes comprises the cluster nodes—these are machines on which Kubernetes can run containers. Node components include the kubelet and kube-proxy.

Learn more in our detailed guide to Kubernetes architecture ›

To get started with Kubernetes components, see our huge list of Kubernetes tutorials ›


Kubernetes Cluster

A Kubernetes cluster is a collection of nodes on which you can run your workloads. A node can be a physical (bare metal) machine, a virtual machine (VM), or an instance managed by a serverless computing service such as Amazon Fargate.

Clusters are managed by the Kubernetes control plane, which coordinates container activity on nodes and moves the cluster towards the user’s desired state.

Learn more in our detailed guide to Kubernetes clusters ›

Kubernetes Control Plane Components

Below are the main components found on the control plane node:

etcd server

A simple, distributed key-value store which is used to store the Kubernetes cluster data (such as the number of pods, their state, namespace, etc.), API objects, and service discovery details. It should only be accessible from the API server for security reasons. etcd enables notifications to the cluster about configuration changes with the help of watchers. Notifications are API requests on each etcd cluster node to trigger the update of information in the node’s storage.


kube-apiserver

The Kubernetes API server is the central management entity that receives all REST requests for modifications (to pods, services, replication sets/controllers, and others), serving as a frontend to the cluster. Also, this is the only component that communicates with the etcd cluster, making sure data is stored in etcd and is in agreement with the service details of the deployed pods.


kube-controller-manager

Runs a number of distinct controller processes in the background (for example, the replication controller controls the number of replicas in a pod; the endpoints controller populates endpoint objects like services and pods) to regulate the shared state of the cluster and perform routine tasks. 

When a change in a service configuration occurs (for example, replacing the image from which the pods are running or changing parameters in the configuration YAML file), the controller spots the change and starts working towards the new desired state.
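As an illustrative sketch (all names here are hypothetical), the desired state the controllers work toward is typically declared in a manifest like this one; the controller manager continually reconciles the cluster against it:

```yaml
# Illustrative Deployment manifest: the controller manager works to keep
# three replicas of this pod running at all times.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app        # hypothetical name
spec:
  replicas: 3              # desired state: three running pods
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: web
        image: nginx:1.21  # replacing this image triggers a rollout to the new state
```

If a pod crashes or a node disappears, the replica count drops below three, and the controllers create a replacement to restore the declared state.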


cloud-controller-manager

Responsible for managing controller processes with dependencies on the underlying cloud provider (if applicable). For example, when a controller needs to check if a node was terminated or set up routes, load balancers or volumes in the cloud infrastructure, all that is handled by the cloud-controller-manager.


kube-scheduler

Helps schedule pods (a co-located group of containers inside which your application processes run) on the various nodes based on resource utilization. It reads the service’s operational requirements and schedules it on the best-fit node. 

For example, if the application needs 1GB of memory and 2 CPU cores, then the pods for that application will be scheduled on a node with at least those resources. The scheduler runs each time there is a need to schedule pods. The scheduler must know the total resources available as well as resources allocated to existing workloads on each node.
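The memory and CPU requirements from the example above can be expressed directly in a pod spec. This is a minimal sketch with hypothetical names:

```yaml
# Illustrative pod spec: the scheduler will only place this pod on a node
# with at least 1Gi of memory and 2 CPU cores unallocated.
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo    # hypothetical name
spec:
  containers:
  - name: app
    image: nginx:1.21
    resources:
      requests:          # what the scheduler uses for placement decisions
        memory: "1Gi"
        cpu: "2"
```

If no node has enough unreserved capacity, the pod remains in the Pending state until resources free up or a new node is added.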


kubectl

kubectl is a command-line tool that interacts with the kube-apiserver and sends commands to the control plane. Each command is converted into an API call.

Kubernetes Nodes

A node is a Kubernetes worker machine managed by the control plane, which can run one or more pods. The Kubernetes control plane automatically handles the scheduling of pods between nodes in the cluster. Automatic scheduling in the control plane takes into account the resources available on each node, and other constraints, such as affinity and taints, which define the desired running environment for different types of pods.
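Constraints such as node affinity and taint tolerations can be expressed directly in a pod spec. The sketch below is illustrative; the labels, taint key, and names are assumptions:

```yaml
# Illustrative pod spec using placement constraints:
# - nodeSelector restricts the pod to nodes carrying a given label
# - tolerations allow the pod onto nodes tainted gpu=true:NoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload        # hypothetical name
spec:
  nodeSelector:
    disktype: ssd           # only nodes labeled disktype=ssd are candidates
  tolerations:
  - key: "gpu"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  containers:
  - name: app
    image: nginx:1.21
```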

Below are the main components found on a Kubernetes worker node:

  • kubelet – the main service on a node, which manages the container runtime (such as containerd or CRI-O). The kubelet regularly takes in new or modified pod specifications (primarily through the kube-apiserver) and ensures that pods and their containers are healthy and running in the desired state. This component also reports to the master on the health of the host where it is running.
  • kube-proxy – a proxy service that runs on each worker node to deal with individual host subnetting and expose services to the external world. It performs request forwarding to the correct pods/containers across the various isolated networks in a cluster.

Learn more in our detailed guide to Kubernetes nodes ›

Kubernetes Pods

A pod is the smallest unit of management in a Kubernetes cluster. It represents one or more containers that constitute a functional component of an application. Pods encapsulate containers, storage resources, unique network IDs, and other configurations defining how containers should run.
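To illustrate, here is a sketch of a two-container pod (names and images are hypothetical): both containers share the pod's network identity and a common volume, which is what makes the pod, rather than the container, the unit of management.

```yaml
# Illustrative multi-container pod: a web server plus a hypothetical
# log-shipping sidecar, sharing one ephemeral volume.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar    # hypothetical name
spec:
  volumes:
  - name: shared-logs
    emptyDir: {}            # ephemeral storage shared by both containers
  containers:
  - name: web
    image: nginx:1.21
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
  - name: log-shipper       # hypothetical sidecar
    image: busybox:1.36
    command: ["sh", "-c", "tail -F /logs/access.log"]
    volumeMounts:
    - name: shared-logs
      mountPath: /logs
```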

Kubernetes Services

In Kubernetes, a service is an abstraction that represents a set of pods making up an application or component, along with access policies for those pods. 

Kubernetes keeps the desired number of replicas of a given pod available, but the actual pod instances running within a service are ephemeral and may be replaced by others. This means that other pods that need to communicate with this application or component cannot rely on the IP address of an underlying pod; instead, they address the service, which provides a stable endpoint.
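As a minimal sketch (names and ports are hypothetical), a Service selects pods by label and gives clients a stable virtual IP and DNS name in front of them:

```yaml
# Illustrative Service: routes traffic to whichever pods currently carry
# the app=example-app label, regardless of which pod instances exist.
apiVersion: v1
kind: Service
metadata:
  name: example-service    # hypothetical name; also becomes the DNS name
spec:
  selector:
    app: example-app       # any pod with this label receives traffic
  ports:
  - port: 80               # port the service exposes
    targetPort: 8080       # port the pods listen on
```

When pods behind the selector are replaced, the service's endpoints update automatically and clients are unaffected.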

Kubernetes Management Consoles

Kubernetes Dashboard

The Kubernetes Dashboard is an official web-based user interface (UI) designed specifically for Kubernetes clusters. You can view all the workloads running on your cluster from the dashboard. It also includes features to help you control and modify your workloads, as well as the ability to view activity logs from your pods. You can also view basic resource usage on Kubernetes nodes.

Learn more in our detailed guide to the Kubernetes dashboard ›


Lens

Lens provides an open-source user interface (UI) for Kubernetes clusters. It displays live updates on the state of cluster objects as well as any collected metrics. 

Lens connects to your clusters using the kubeconfig file, and then displays information about your cluster and all of the objects it contains. You can also use Lens with a Prometheus stack to provide metrics about the cluster, nodes, and the health of cluster components.


Octant

Octant is an open source tool that provides visibility into how applications are running within a Kubernetes cluster. It provides graphic visualizations of Kubernetes object dependencies and lets you forward local ports to a running pod. You can also use this tool to inspect pod logs and navigate through different clusters. 

Octant provides a dashboard that can help you inspect cluster workloads in real-time. It lets you explore Kubernetes objects, such as cron jobs, deployments, daemon sets, jobs, services, and pods. It also provides a resource graph that visualizes the status of objects and shows how objects depend on one another.

Kubernetes Helm

Kubernetes Helm is a Kubernetes package manager similar to Yum or Apt. You can package applications using Helm, including Kubernetes objects configured according to specific requirements. Anyone can download and install a Helm package with one command. In Helm, these packages are called charts (similar in concept to deb or rpm packages). Helm encourages developers to share Kubernetes-based applications and standardize Kubernetes development.
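As a sketch, every Helm package is anchored by a Chart.yaml metadata file, which sits alongside templated Kubernetes manifests and default values. The chart name and versions below are hypothetical:

```yaml
# Illustrative Chart.yaml for a hypothetical chart.
apiVersion: v2        # Helm 3 chart API version
name: example-app
description: A Helm chart for the hypothetical example-app service
type: application
version: 0.1.0        # version of the chart packaging itself
appVersion: "1.2.3"   # version of the application the chart deploys
```

The distinction between `version` and `appVersion` lets you revise the packaging independently of the software it ships.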

Learn more in our detailed guide to Kubernetes Helm ›

Managed Kubernetes

Kubernetes is a complex system and can be challenging to deploy and manage in-house. There are several managed Kubernetes services that take care of Kubernetes management and operations. These services fully manage the Kubernetes control plane, providing ready-to-go clusters you can use to start deploying applications.

Below are a few popular managed Kubernetes platforms.


Kubernetes on AWS

Amazon Web Services (AWS) provides several options that can help simplify Kubernetes workflows, for example:

  • Amazon EKS—eliminates the need to provision or manage Kubernetes master instances and etcd.
  • Amazon ECR—lets you store, manage, and encrypt your container images to facilitate fast deployment.
  • Amazon Fargate—a serverless service that lets you run containers without managing the underlying server infrastructure. 

You can also integrate these services with other AWS offerings, including Amazon Virtual Private Cloud (Amazon VPC), AWS Identity and Access Management (IAM), and service discovery.

Learn more in our detailed guide to Kubernetes on AWS ›

Kubernetes on Azure

Here are several key options for running Kubernetes on Microsoft Azure:

  • Azure Kubernetes Service (AKS)—provides a fully managed, secure, and highly available Kubernetes service.
  • Azure Red Hat OpenShift—offers a fully managed OpenShift service operated by both Red Hat and Azure.
  • Azure Container Instances (ACI)—lets you run containers on Azure without having to manage servers.

Kubernetes on Google Cloud

Google Kubernetes Engine (GKE) is an intuitive cloud-based service developed by Google, the original creator of Kubernetes. It helps you run containerized applications in a cloud environment and build an effective Kubernetes strategy. Anthos enables Google to provide a consistent on-premises and multi-cloud Kubernetes experience, helping ensure that you can run your Kubernetes clusters from anywhere in an efficient and reliable way.

Kubernetes on VMware

VMware has supported Kubernetes since 2019. This means you can use the vSphere virtualization platform, including the ESXi hypervisor, to run containers. 

To manage your standard clusters, you can also use Tanzu, which helps ensure compatibility with your Kubernetes development later on. Tanzu Kubernetes Grid is a platform that lets you run Kubernetes in a production environment and manage multiple Kubernetes clusters across your on-premises servers, public cloud deployments, and VMware infrastructure.

The Tanzu stack is designed to facilitate simpler operations and development. Here are several of its key features:

  • Faster release cycles—Tanzu offers a collection of container images built and maintained by administrators. Using these images can help teams increase the velocity of application development and delivery.
  • Full-stack observability—Tanzu provides a single view for all stakeholders responsible for monitoring and analyzing cluster infrastructure and application metrics.  
  • Compliant with various runtimes—Tanzu supports OCI-compliant and CRI-compliant runtimes to allow teams to leverage containers created using any runtime engine. 
  • Ephemeral and persistent storage—Tanzu uses vSphere to manage storage. vSphere includes a CNS-CSI driver that enables it to support Kubernetes storage solutions for both persistent and ephemeral storage. 
  • Full-stack networking capabilities—Tanzu employs VMware’s NSX Container Networking Solution to provide full-stack networking capabilities. Teams can leverage this feature to implement Kubernetes-native networking solutions.

Learn more in our detailed guide to Kubernetes on VMware ›

Kubernetes Distributions

Several software vendors have created their own distributions of the original Kubernetes project. Each vendor offers their Kubernetes distribution packaged with other technologies that provide added value. Here are a few popular Kubernetes distributions.

OpenShift Container Platform

OpenShift Container Platform is a hybrid cloud platform designed by Red Hat to help organizations build and scale containerized applications. It is built on top of Kubernetes and utilizes several additional technologies, including Docker-style Linux containers and Red Hat Enterprise Linux (RHEL). 

Here are several key technologies employed by OpenShift Container Platform:

  • OKD—a community distribution of Kubernetes that powers OpenShift. It is built around OCI container packaging and Kubernetes clusters. 
  • Red Hat OpenShift Container Storage—provides highly available, dynamic persistent storage for container-based applications. 
  • Software Defined Networking (SDN)—offers plugins you can use to configure overlay networks for your Kubernetes clusters. 


Rancher

Rancher is an open source platform that lets you run containers in production across several environments, including public clouds and on-premises infrastructure. Rancher does that by capturing computing resources from private or public clouds and then seamlessly deploying Kubernetes resources on the captured computing resources. 

Notable features include container load balancing, cross-host networks, persistent storage services, user management, multi-tenancy, built-in security for Kubernetes clusters, and multi-cloud management. 


Mirantis

Mirantis is a cloud computing company that offers various services, including a Kubernetes distribution for the Red Hat OpenStack Platform. OpenStack is an open source platform commonly used to host Infrastructure as a Service (IaaS) operations on physical or virtual machines (VMs). 

The Mirantis Kubernetes Engine is delivered as a stack consisting of custom databases, staging components, orchestration functionality, and message queueing. It enables unified cluster operations for multi-cloud applications, helping organizations reduce complexities in infrastructure and operations.

DevOps teams can leverage the engine to build and ship code to public and private clouds quickly. Here are key features of Mirantis Kubernetes Engine:

  • Complies with OCI.
  • Comes with built-in support for Dockershim, a Kubernetes component that allows it to run Docker containers. 
  • Offers Calico as a default CNI plugin to support highly scalable networks as well as multiple networking models. 
  • Relies mainly on software-defined storage.
  • Offers Ceph for object and block storage.


Kubernetes Alternatives

Kubernetes is a popular container orchestrator, but there are worthy alternatives. We’ll cover two common alternatives, Docker Swarm and Nomad.

In most cases, these Kubernetes alternatives are selected for smaller-scale use cases, where Kubernetes is considered “overkill” and overly complex.

Learn more in our detailed guide to Kubernetes alternatives ›

Docker Swarm

Docker Swarm is an orchestration platform commonly used as an alternative to Kubernetes. Swarm (also known as Swarm mode) is a native Docker orchestration feature designed especially for Docker Engine clusters. 

A Swarm cluster includes the following: 

  • Manager nodes—manage the cluster.
  • Worker nodes—receive instructions from manager nodes and then perform the requested tasks. 

All nodes must be deployed using Docker Engine.

Both Swarm and Kubernetes enable you to automate application management and scaling, by partitioning your application into containers. However, there are key differences between the two, including:

  • Kubernetes focuses on modular orchestration—it is ideal for demanding applications that require complex configurations.
  • Docker Swarm focuses on ease of use—it is ideal for simple applications that do not require advanced workflow automation or complex resource provisioning.

Learn more in our detailed guide to Docker Swarm ›


Nomad

Nomad is a container orchestrator developed by HashiCorp. It enables organizations to deploy and manage containers alongside legacy applications—using the same workflow for all components. The orchestrator is designed especially for ease of use. 

Nomad lets you use infrastructure as code (IaC) to declaratively deploy applications. It then runs workloads, such as Docker, microservices, batch applications, and non-container applications side by side. 

Notable features of Nomad include GPU support, device plugins, multi-region federation, and multi-cloud management. It also integrates with HashiCorp products, including Consul, Vault, and Terraform.

Kubernetes vs. Docker

Docker is not really a Kubernetes alternative, but newcomers to the space often ask about the difference between them. The primary difference is that Docker is a container runtime, while Kubernetes is a platform for running and managing containers across multiple container runtimes. 

Docker is one of many container runtimes supported by Kubernetes. You can think of Kubernetes as an “operating system” and Docker containers as one type of application that can run on the operating system.

Docker is hugely popular and was a major driver for the adoption of containerized architecture. Docker solved the classic “works on my computer” problem, and is extremely useful for developers, but is not sufficient to manage large-scale containerized applications. 

If you need to handle the deployment of a large number of containers, networking, security, and resource provisioning become important concerns. Standalone Docker was not designed to address these concerns, and this is where Kubernetes comes in.

Kubernetes Security

Kubernetes provides numerous security features and capabilities. However, out of the box, Kubernetes is not securely configured. Kubernetes applications also make extensive use of container images, which may contain security vulnerabilities. Due to its complexity and flexibility, it can be difficult to fully secure a Kubernetes cluster and maintain security over time.

Let’s consider three perspectives of Kubernetes security: securing container images during the build process, securing network communication between containers in production environments, and securely configuring Kubernetes infrastructure.

Securing Images in the Build Process

In modern CI/CD development lifecycles, developers write code, which is pushed to a CI/CD pipeline and becomes part of a container image. Application security techniques are critical here—it is important to introduce testing as early as possible during the build process, so any code created or included by developers, or images pulled from repositories, are scanned for security vulnerabilities. 

Keep in mind that container images are built of multiple layers, each of which may contain a large number of components. Automated container scanning technologies are essential to vet images, ensure they do not contain vulnerable components and have not been tampered with. This is the first step to ensuring a secure Kubernetes environment.

Related content: Read our guide to Kubernetes vulnerability scanning ›

Network Security in Production

Networking structures within a Kubernetes cluster are another major attack surface. Most production deployments of Kubernetes use the container network interface (CNI) to create a secure networking layer, supporting network segmentation, with a private subnet for each Kubernetes namespace, and security policies you can use to restrict communication between containers.
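Such restrictions are expressed as NetworkPolicy objects. The sketch below is illustrative; labels and names are hypothetical, and note that policies are only enforced when the cluster's CNI plugin (such as Calico or Cilium) supports them:

```yaml
# Illustrative NetworkPolicy: pods labeled app=db accept ingress traffic
# only from pods labeled app=api; all other ingress to them is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-api       # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: db              # the pods this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: api         # the only pods allowed to connect
```

Once any ingress policy selects a pod, all ingress not explicitly allowed is dropped, which is why a default-deny posture per namespace is a common starting point.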

Secure Kubernetes Configuration

Kubernetes is enormously flexible, but this makes it more difficult to achieve a secure configuration. It is important to understand that Kubernetes is not secure by default and that you must implement a list of secure configuration best practices to prevent exposure.

Security researchers have published formal best practices for securing Kubernetes—possibly the most commonly used is the Kubernetes CIS benchmark. While the benchmark has over 250 pages of instructions for securing Kubernetes, you can use automated tools like kube-bench to scan your cluster according to the CIS benchmark and identify insecure configurations.
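kube-bench is commonly run inside the cluster as a Kubernetes Job. The manifest below is a simplified sketch of that pattern; the image tag and mounted paths are assumptions, so check the kube-bench documentation for the current manifest:

```yaml
# Illustrative Job running kube-bench against a node's configuration.
apiVersion: batch/v1
kind: Job
metadata:
  name: kube-bench
spec:
  template:
    spec:
      hostPID: true                       # kube-bench inspects host processes
      containers:
      - name: kube-bench
        image: aquasec/kube-bench:latest  # image tag is an assumption
        command: ["kube-bench"]
        volumeMounts:
        - name: etc-kubernetes            # node config files the checks read
          mountPath: /etc/kubernetes
          readOnly: true
      volumes:
      - name: etc-kubernetes
        hostPath:
          path: /etc/kubernetes
      restartPolicy: Never
```

The Job's pod logs then list each CIS check as PASS, FAIL, or WARN, along with remediation hints.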

Related content: Read our guide to Kubernetes benchmarks ›