Kubernetes

KUBERNETES IN A NUTSHELL

Learning Kubernetes? This page will introduce you to the architecture, comparisons with other container orchestrators, basic operations, clustering, Kubernetes services, deployment on-premises and in the cloud, Kubernetes networking, and more.

Everything you need to know about Kubernetes - from basic to advanced:

1. WHAT IS KUBERNETES?

Kubernetes is a powerful open source orchestration tool developed by Google for managing microservices or containerized applications across a distributed cluster of nodes. Kubernetes provides highly resilient infrastructure with zero-downtime deployment capabilities, automatic rollback, scaling, and self-healing of containers (auto-placement, auto-restart, auto-replication, and scaling of containers based on CPU usage).

The main objective of Kubernetes is to hide the complexity of managing a fleet of containers by providing REST APIs for the required functionalities. Kubernetes is portable in nature, meaning it can run on various public or private cloud platforms such as AWS, Azure, and OpenStack, as well as on Apache Mesos or bare metal machines.

Kubernetes Advantages and Use Cases

Kubernetes has become popular due to its roots in Google's unparalleled R&D and operations expertise, and the large community that has grown around it since it was released as an open source product. It is used in a variety of scenarios, from simple ones like running WordPress instances on Kubernetes, to scaling Jenkins machines, to secure deployments with thousands of nodes.

See more resources about Kubernetes Advantages and Use Cases

Kubernetes Components and Architecture

Kubernetes follows a client-server architecture. It’s possible to have a multi-master setup (for high availability), but by default there is a single master server which acts as a controlling node and point of contact. The master server consists of various components including a kube-apiserver, an etcd storage, a kube-controller-manager, a cloud-controller-manager, a kube-scheduler, and a DNS server for Kubernetes services. Node components include kubelet and kube-proxy on top of Docker.



High level Kubernetes architecture showing a cluster with a master and two worker nodes (image source)

Kubernetes Master Components

Below are the main components found on the master node:


  • etcd cluster - a simple, distributed key value storage which is used to store the Kubernetes cluster data (such as number of pods, their state, namespace, etc), API objects and service discovery details. It is only accessible from the API server for security reasons. etcd notifies the cluster about configuration changes with the help of watchers: API requests registered on etcd cluster nodes that trigger an update whenever the watched information changes.
  • kube-apiserver - Kubernetes API server is the central management entity that receives all REST requests for modifications (to pods, services, replication sets/controllers and others), serving as frontend to the cluster. Also, this is the only component that communicates with the etcd cluster, making sure data is stored in etcd and is in agreement with the service details of the deployed pods.

  • kube-controller-manager - runs a number of distinct controller processes in the background (for example, the replication controller controls the number of pod replicas, the endpoints controller populates endpoint objects that join services and pods, and others) to regulate the shared state of the cluster and perform routine tasks. When a change in a service configuration occurs (for example, replacing the image from which the pods are running, or changing parameters in the configuration YAML file), the controller spots the change and starts working towards the new desired state.
  • cloud-controller-manager - is responsible for managing controller processes with dependencies on the underlying cloud provider (if applicable). For example, when a controller needs to check if a node was terminated or set up routes, load balancers or volumes in the cloud infrastructure, all that is handled by the cloud-controller-manager.
  • kube-scheduler - helps schedule the pods (a co-located group of containers inside which our application processes are running) on the various nodes based on resource utilization. It reads the service’s operational requirements and schedules it on the best fit node. For example, if the application needs 1GB of memory and 2 CPU cores, then the pods for that application will be scheduled on a node with at least those resources. The scheduler runs each time there is a need to schedule pods. The scheduler must know the total resources available as well as resources allocated to existing workloads on each node.
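
To make the scheduler's role concrete, here is a minimal sketch of how such resource requirements are declared in a pod specification (the pod name and image below are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: my-app               # hypothetical pod name
spec:
  containers:
    - name: my-app
      image: my-app:1.0      # hypothetical image
      resources:
        requests:
          memory: "1Gi"      # the scheduler will only place this pod on a node
          cpu: "2"           # with at least 1GB of memory and 2 cores available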

Node (worker) components

Below are the main components found on a (worker) node:

  • kubelet - the main service on a node, regularly taking in new or modified pod specifications (primarily through the kube-apiserver) and ensuring that pods and their containers are healthy and running in the desired state. This component also reports to the master on the health of the host where it is running.
  • kube-proxy - a proxy service that runs on each worker node to deal with individual host subnetting and expose services to the external world. It performs request forwarding to the correct pods/containers across the various isolated networks in a cluster.

Kubectl

kubectl is a command line tool that interacts with the kube-apiserver and sends commands to the master node. Each command is converted into an API call.
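
For example, a simple kubectl command and (roughly) the API call it is translated into:

kubectl get pods --namespace default
# is converted into a REST call to the API server, roughly:
# GET /api/v1/namespaces/default/pods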

Learn more about Kubernetes Architecture

2. HOW DOES KUBERNETES COMPARE TO DOCKER SWARM?

Docker vs. Kubernetes - which should you use for container orchestration? Docker Swarm is considered the "lightweight" alternative to Kubernetes. Docker swarm mode allows you to manage a cluster of Docker Engines, natively within the Docker platform. You can use the Docker CLI to create a swarm, deploy application services to a swarm, and manage swarm behavior.

Swarm's capabilities include coordination between containers, allocating tasks to groups of containers, health checks and lifecycle management of containers, redundancy and failover, scaling containers up and down based on load, and rolling updates.

Comparing Mindshare: Docker vs. Kubernetes

Mindshare Metric                        Kubernetes    Docker Swarm
Google Monthly Searches                 165,000       33,100
Pages Indexed by Google (Past Year)     1,190,000     135,000
News Stories (Past Year)                36,000        3,610
GitHub Stars                            28,988        4,863
GitHub Commits                          58,029        3,493

These stats are correct as of November 2017, and will be updated every few months

Technical Comparisons from the Community

Mesosphere: Docker Engine vs. Kubernetes vs. Mesos

  • Undated
  • Opinion by:
    Amr Abdelrazik, Director, Product Marketing, Mesosphere
  • Insight: Each technology was designed for a different purpose. Docker provided a standard file format for encapsulating applications. Kubernetes helps orchestrate containers at large scale. Mesos is actually not an orchestrator; it is a cluster management platform that can run any workload, including containers (using the Marathon project). Mesos is infrastructure-agnostic, which gives it higher portability.
  • Bottom line:
    If you are a developer looking for a way to build and package applications, Docker is the best solution. For a DevOps team wanting to build a system dedicated exclusively to Docker containers, Kubernetes is the best fit. For organizations running multiple mission-critical workloads including Docker containers, legacy applications (e.g., Java), and distributed data services (e.g., Spark, Kafka, Cassandra, Elastic), Mesos is the best fit.

Platform9: Kubernetes vs. Docker Swarm

  • Date:
    June 22, 2017
  • Opinion by: 
    Akshai Parthasarathy, Technical Product Marketing Manager, Platform9
  • Bottom line:
    Kubernetes has over 80% of mindshare for news articles, Github popularity, and web searches, and is the default choice for users. However, there is consensus that Kubernetes is more complex to deploy and manage. The Kubernetes community has tried to mitigate this drawback by offering a variety of deployment options, including Minikube and kubeadm.

Rancher: Docker Swarm vs. Kubernetes White Paper

  • Undated
  • Opinion By: 
    Rancher Labs
  • Insight:
    The two platforms have very different constructs and architecture (nodes/tasks vs. pods), so the choice of orchestrator is not a reversible decision. Apps will have to be architected in a way that is tightly coupled with the orchestrator.
  • Bottom line:
    Achieving the same tasks is much more complex in Kubernetes vs. Swarm. But Kubernetes provides a lot of additional functionality like auto-scaling.

Read more industry opinions on Kubernetes vs. Docker

3. BASIC OPERATIONS

Installing Kubernetes

Installation can sometimes be a challenge. Learn how to install Kubernetes on Ubuntu, Windows, CentOS and other platforms.

See our compilation of Kubernetes Installation Resources

Configuration

Kubernetes reads YAML files to configure services, pods and replication controllers. Learn how to use Kubernetes configuration to deploy containers at scale.
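
As a brief sketch, the YAML below defines a replication controller that keeps three replicas of a (hypothetical) application image running; it could be submitted with kubectl create -f my-rc.yaml:

apiVersion: v1
kind: ReplicationController
metadata:
  name: my-app-rc
spec:
  replicas: 3                # Kubernetes keeps exactly 3 copies of this pod running
  selector:
    app: MyApp
  template:
    metadata:
      labels:
        app: MyApp           # must match the selector above
    spec:
      containers:
        - name: my-app
          image: my-app:1.0  # hypothetical image
          ports:
            - containerPort: 9376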

See our compilation of Kubernetes Configuration Resources

Monitoring

At any scale, monitoring Kubernetes as well as the health of application deployments and the underlying containers is essential to ensure good performance. Monitoring Kubernetes requires rethinking existing monitoring strategies, especially if you are used to monitoring traditional hosts such as VMs or physical machines.

Debugging and Troubleshooting

Because of the massively distributed nature of Kubernetes, debugging can be complex. Learn how to troubleshoot problems that arise when creating and managing Kubernetes pods, replication controllers, services, and containers.
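
A few kubectl commands that are typically the first stop when troubleshooting (pod and container names below are placeholders):

kubectl get pods                        # list pods and their current status
kubectl describe pod my-pod             # show events and state transitions for a pod
kubectl logs my-pod -c my-container     # fetch a container's logs
kubectl exec -it my-pod -- /bin/sh      # open a shell inside a running container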

See our compilation of resources about Kubernetes Debugging and Troubleshooting

Load Balancing

Load balancing is a straightforward task in many non-container environments, but it involves a bit of special handling when it comes to containers. There are two types of load balancing in Kubernetes - Internal load balancing across containers of the same type using a label, and external load balancing.
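
For external load balancing on a supported cloud provider, a service of type LoadBalancer can be used; a minimal sketch (names are hypothetical):

kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  type: LoadBalancer         # asks the cloud provider to provision an external load balancer
  selector:
    app: MyApp               # traffic is balanced across pods carrying this label
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376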

See our compilation of resources about configuring and using the Kubernetes load balancer feature

Kubernetes Security

Kubernetes provides many controls that can improve application security. Configuring them requires intimate knowledge of Kubernetes and the specific deployment's security requirements. Learn more about best practices for deployment, sharing data and network security in Kubernetes.

See our compilation of resources about Kubernetes Security

Kubernetes Storage Management

Storage is a critical part of running stateful containers, and Kubernetes offers powerful primitives for managing it. Learn how to work with Kubernetes storage options and provision storage in Kubernetes.
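
For example, a pod can request storage through a persistent volume claim such as this sketch (the claim name and size are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
spec:
  accessModes:
    - ReadWriteOnce          # volume can be mounted read-write by a single node
  resources:
    requests:
      storage: 10Gi          # the claim is bound to a volume of at least this size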

See our compilation of resources about Kubernetes Storage Management

4. KUBERNETES CLUSTERING

A Kubernetes cluster is made up of a master node and a set of worker nodes. In a production environment these run in a distributed setup across multiple nodes. For testing purposes, all the components can run on the same node (physical or virtual) by using minikube. Kubernetes has six main components that form a functioning cluster: the API server, scheduler, controller manager, kubelet, kube-proxy, and etcd.

See our compilation of resources about Kubernetes Cluster Administration

Cluster Policies

For enterprise production deployments of Kubernetes clusters, enforcing cluster-wide policies to restrict what a container is allowed to do is important. Learn about Kubernetes Cluster Policies such as Pod Security Policies, Network Policies and Resource Quotas.
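
As one example, a resource quota restricting a (hypothetical) namespace might look like this:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: dev             # hypothetical namespace
spec:
  hard:
    pods: "10"               # at most 10 pods in this namespace
    requests.cpu: "4"        # total CPU requests may not exceed 4 cores
    requests.memory: 8Gi     # total memory requests may not exceed 8GB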

See our compilation of resources about Kubernetes Cluster Policies

Kubernetes Federation

Kubernetes Federation lets you manage deployments and services across all Kubernetes clusters located in different regions. Learn how to set up a Kubernetes Cluster Federation, including tutorials and examples.

See our compilation of resources about Kubernetes Federation

Kubernetes High Availability Clusters

Kubernetes clusters enable a higher level of abstraction, enabling you to deploy and manage a group of containers that comprises a microservice. Learn about high availability cluster components and how to set up a high availability Kubernetes cluster.

See our compilation of resources about Kubernetes High Availability Clusters

Logging in a Cluster

Application and system logs can help you understand what is happening inside a Kubernetes cluster. Logs are particularly useful for debugging problems and monitoring cluster activity. Kubernetes provides two logging endpoints for applications and cluster logs: Stackdriver Logging for use with Google Cloud Platform, and Elasticsearch. Learn about Kubernetes logging architecture, including tutorials and examples.

See our compilation of resources about Kubernetes Logging

Kubernetes Proxies

There are several different proxies you may encounter when using Kubernetes: the kubectl proxy, the apiserver proxy, kube-proxy, a proxy/load-balancer in front of the apiserver, and a cloud load balancer on external services. The first two types are the most important for Kubernetes users; the cluster admin will ensure that the latter types are set up correctly.

See our compilation of resources about Kubernetes Proxies

Kubernetes in a Serverless Computing Model

With serverless computing, you just upload the code somewhere, and it runs whenever you invoke it. Simply put, serverless computing frees you from the complexities of configuring and maintaining Kubernetes clusters. Learn how to build a Serverless Kubernetes cluster.

See our compilation of resources about Kubernetes Serverless

5. WORKING WITH KUBERNETES SERVICES

For Kubernetes, a service is an abstraction that represents a logical set of pods where an application or component is running, as well as an access policy for those pods. Actual pods are ephemeral: Kubernetes guarantees the availability of the specified number of pods and replicas, but not the liveness of each individual pod. This means that other pods that need to communicate with this application or component cannot rely on the IP addresses of the underlying individual pods.

A service gets allocated a virtual IP address (called a clusterIP in Kubernetes), and lives until explicitly destroyed. Requests to the service get redirected to the appropriate pods, so the service serves as a stable endpoint for inter-component or application communication. For Kubernetes-native applications, requests can also be made via the Kubernetes apiserver, which automatically exposes and maintains the actual pod endpoints at any moment.

Specifying Pods in a Service

A service in Kubernetes can be created via an API request by passing in a service definition such as:

kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
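
Assuming this definition is saved as my-service.yaml, it can be submitted with kubectl create -f my-service.yaml; requests arriving at the service on port 80 are then forwarded to port 9376 of the pods labeled app: MyApp.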

Using Services for External Workloads

A service can apply to an external workload as well, allowing you to use the same abstraction to connect to, for example, a backend or database running outside Kubernetes. Pods can then connect to this service without knowing about specific endpoints for workloads outside of Kubernetes.

In order to do so, the service should be defined as in the previous section but without the label selector. After the service is created, the endpoints for the external workload need to be specified. For example:

kind: Endpoints
apiVersion: v1
metadata:
  name: my-service
subsets:
  - addresses:
      - ip: 62.82.24.195
    ports:
      - port: 9376

Service Discovery

Service discovery in Kubernetes can be achieved via the cluster DNS (recommended) or via environment variables on the nodes.

Kubernetes exports a set of environment variables for each service currently active in the Kubernetes cluster at pod creation time. These variables are exported on the node where the pod gets created, so they become visible to the pod. For example, these variables (and more) would get exported for each of the active services payments and orders:

PAYMENTS_SERVICE_HOST=10.0.0.11
PAYMENTS_SERVICE_PORT=6379
ORDERS_SERVICE_HOST=10.0.171.239
ORDERS_SERVICE_PORT=6379

Multi-Cluster Services with Cluster Federation

Kubernetes Cluster Federation allows a (federated) service to run on multiple Kubernetes clusters simultaneously. The clusters can be spread across different cloud providers, availability zones and even private clouds, as long as the cluster’s API endpoint and credentials are registered with the Federation API server.

A new federated service can be created by calling the Federation API in the same manner as a cluster-specific kube-apiserver would be called for a (non-federated) service (as described in this article up to now). Federation means that the service will be sharded across all the Kubernetes clusters that are part of the federation.


Learn more about working with Kubernetes Services

6. KUBERNETES DEPLOYMENT

While it’s easy to spin up your first container locally, taking containers into production in a cloud environment is a completely different ball game. Numerous aspects like scale, networking, security, high availability, and performance come into play when deploying containers, which makes deployment the most stressful part of running containers in production. Fortunately, Kubernetes is a mature and robust option for running containers in production: it has strong defaults and wide-ranging options, making it a complete package for deploying containerized applications.

Deployment Commands

Create a deployment based on a YAML file                              kubectl create -f <file>
Deploy using a phased rolling update                                  kubectl rollout
Check the status of a rolling update                                  kubectl rollout status
Roll back a recent or ongoing rolling update to a previous version    kubectl rollout undo
Option to clean up old replicas kept for rollback history             .spec.revisionHistoryLimit

Kubernetes Deployment Strategies

  • Recreate Deployment - In this approach all replicas are killed, and are then replaced by new replicas. It involves some downtime for as long as it takes for the system to shut down and boot up again. This works fine for applications that are used infrequently, and where users don’t expect them to be available 24x7. This is rare in today’s cloud-driven world, and hence, this isn’t the most popular deployment method.

  • Rolling Update - when a deployment command is executed, Kubernetes starts to replace existing replicas with new, updated ones one at a time. This scaling up and scaling down of replicas is how Kubernetes manages deployments, and is what makes Kubernetes particularly effective at managing deployments with containers (see the configuration sketch after this list).

  • Blue-green Deployments - not native to Kubernetes, but you can set them up with ease. In this method the ‘blue’ replicas are the existing instances, and they are to be replaced by the ‘green’ replicas. Once the green replicas are deployed and tested, you can use an external load balancer to route traffic from the ‘blue’ replicas to the ‘green’ ones. The biggest advantage of blue-green deployments is that they ensure a smooth transition without any downtime.

  • Canary Releases - releasing a new version of the app to a subset of users, say 5% of all users. Once this version is tested and proven reliable for the initial 5%, it is released to a bigger subset, until eventually all users are updated to the release without experiencing any downtime. Canary releases let you test the app in real-world conditions and with real users; however, they take some upfront planning and management to ensure the release is seamless for the user.
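
As a sketch of how a rolling update is configured, a deployment spec can tune how many replicas may be unavailable or created in excess while the update progresses (the names and image below are hypothetical):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1      # at most one replica may be down during the update
      maxSurge: 1            # at most one extra replica above the desired count
  selector:
    matchLabels:
      app: MyApp
  template:
    metadata:
      labels:
        app: MyApp
    spec:
      containers:
        - name: my-app
          image: my-app:2.0  # the new image version being rolled out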


Learn more about Kubernetes Deployment

Kubernetes in Production

The default configurations for Kubernetes components are not designed for heavy and dynamic production workloads, characteristic of DevOps environments and micro-services based application deployments where containers are quickly created and destroyed. Learn how to create a production-ready Kubernetes cluster, including examples and tutorials.

See our compilation of resources about Kubernetes in Production

Running Kubernetes on AWS

Amazon Web Services (AWS) is a popular cloud provider option for Kubernetes deployments, as it allows virtually unlimited scaling of enterprise containerized application clusters. AWS’ region availability all around the world means Kubernetes clusters can benefit from very low latencies. Additionally, with the wide range of AWS services, like S3 for raw storage or RDS for relational databases, it becomes easy to use Kubernetes for both stateless and stateful workloads integrated with native AWS services.


Running Kubernetes Cluster on AWS Using Kops

Kops is a production grade tool used to install, upgrade, and operate highly available Kubernetes clusters on AWS and other cloud platforms using the command line. Kops is capable of generating Terraform templates with support for multiple CNI networking plugins and custom Kubernetes add-ons.

Note that all kops commands below that include --yes option can be run first without it to just show which changes would take place (for example, which AWS resources will get created or destroyed when running the command with --yes option).

The following command will create a cluster with one master (an m3.medium instance) and two nodes (t2.medium instances) in the us-west-2a availability zone:

kops create cluster \
  --name my-cluster.k8s.local \
  --zones us-west-2a \
  --dns private \
  --master-size=m3.medium \
  --master-count=1 \
  --node-size=t2.medium \
  --node-count=2 \
  --state s3://my-cluster-state \
  --yes
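
Two related kops commands can be useful here (both accept the same --name and --state flags): kops update cluster previews or applies changes to the cluster definition, and kops validate cluster checks that the masters and nodes are actually up:

kops update cluster --name my-cluster.k8s.local --state s3://my-cluster-state
kops validate cluster --name my-cluster.k8s.local --state s3://my-cluster-state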


Using the Kubernetes EKS Managed Service

Amazon Elastic Container Service for Kubernetes (EKS) is a fully managed service that takes care of all the cluster setup and creation, ensuring multi-AZ support on all clusters and automatic replacement of unhealthy instances (master or worker nodes). It also patches and upgrades clusters to the latest recommended Kubernetes release without requiring any intervention.


Worker nodes are launched on the AWS user’s own EC2 instances, and are thus not shared with other tenants. In order to use tools such as kubectl, access to the master instances must be set up via IAM authenticated public endpoints or through AWS PrivateLink. With AWS PrivateLink, the masters appear as an elastic network interface with private IP addresses in the Amazon VPC. This allows access to the masters and the EKS service directly from the Amazon VPC, without using public IP addresses or requiring the traffic to traverse the internet.

Amazon EKS also integrates tightly with other AWS services such as ELB for load balancing, or AWS CloudTrail for logging.

Get more details and additional options for Working with Kubernetes on AWS

7. KUBERNETES NETWORKING

Since a Kubernetes cluster consists of various nodes and pods, understanding how they communicate with each other is essential. The Kubernetes networking model supports different types of open source implementations. Kubernetes provides an IP address to each pod, so there is no need to map host ports to container ports as in the Docker networking model. Pods behave much like VMs or physical hosts with respect to port allocation, naming, load balancing and application configuration.

Kubernetes vs Docker Networking Model

The Docker networking model relies, by default, on a virtual bridge network called docker0. It is a per-host private network where containers get attached (and thus can reach each other) and allocated a private IP address. This means containers running on different machines are not able to communicate with each other (as they are attached to different hosts’ networks). In order to communicate across nodes with Docker, we have to map host ports to container ports and proxy the traffic. In this scenario, it’s up to the Docker operator to avoid port clashes between containers.

The Kubernetes networking model, on the other hand, natively supports multi-host networking in which pods are able to communicate with each other by default, regardless of which host they live on. Kubernetes does not provide an implementation of this model by default; rather, it relies on third-party tools that comply with the following requirements: all containers are able to communicate with each other without NAT; nodes are able to communicate with containers without NAT; and a container’s IP address is the same from inside and outside the container.

How Pods Communicate with Each Other

Because each pod has a unique IP in a flat address space inside the Kubernetes cluster, direct pod-to-pod communication is possible without requiring any kind of proxy or address translation. This also allows using standard ports for most applications as there is no need to route traffic from a host port to a container port, as in Docker. Note that because all containers in a pod share the same IP address, container-private ports are not possible (containers can access each other’s ports via localhost:<port>) and port conflicts are possible. However, the typical use case for a pod is to run a single application service (in a similar fashion to a VM), in which case port conflicts are a rare situation.

How Pods Communicate with Services

Kubernetes services allow grouping pods under a common access policy (e.g. load-balanced). The service gets assigned a virtual IP which pods outside the service can communicate with. Those requests are then transparently proxied (via the kube-proxy component that runs on each node) to the pods inside the service. Different proxy-modes are supported:

  • iptables: kube-proxy installs iptables rules that trap access to service IP addresses and redirect requests to the correct pods.
  • userspace: kube-proxy opens a port (randomly chosen) on the local node. Requests on this “proxy port” get proxied to one of the service’s pods (as retrieved from the Endpoints API).
  • ipvs (from Kubernetes 1.9): calls the netlink interface to create ipvs rules and regularly synchronizes them with the Endpoints API.

Incoming Traffic from the Outside World

Nodes inside a Kubernetes cluster are firewalled from the Internet by default, so service IP addresses are only targetable within the cluster network. In order to allow incoming traffic from outside the cluster, a service specification can map the service to one or more externalIPs (external to the cluster). Requests arriving at an external IP address get routed by the underlying cloud provider to a node in the cluster (usually via a load balancer outside Kubernetes). The node then knows which service is mapped to the external IP, and also which pods are part of the service, and routes the request to an appropriate pod.
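
For instance, a service specification can list external IP addresses on which it accepts traffic (the address below is illustrative):

kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  externalIPs:
    - 80.11.12.10            # traffic arriving at this external IP on port 80 is routed to the service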

A minimal ingress specification might look like this:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  backend:
    serviceName: my-service
    servicePort: 80

DNS for Services and Pods

Kubernetes provides its own DNS service to resolve domain names inside the cluster, so that pods can communicate with each other. This is implemented by deploying a regular Kubernetes service which does name resolution inside the cluster, and configuring individual containers to contact this DNS service to resolve domain names. Note that this “internal DNS” is compatible with, and expected to run alongside, the cloud provider’s DNS service.

Every service gets assigned a DNS name which resolves to the cluster IP of the service. The naming convention includes the service name and its namespace. For example:

my-service.my-namespace.svc.cluster.local
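
A pod in the same namespace can reach the service simply as my-service; the namespace and cluster suffix are appended automatically through the DNS search path. Pods in other namespaces use my-service.my-namespace or the fully qualified name above.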

Network Policies 

By default, pods accept traffic from any pod in the cluster. Network policies can restrict how groups of pods are allowed to communicate with each other and with other network endpoints. A network policy specification uses labels to select pods and defines a set of rules that govern what traffic is allowed to and from those pods.

For example:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: my-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: my-project
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978

Learn more about Kubernetes Networking

MORE TO COME!

In this page we provided brief introductions to critical Kubernetes concepts, with links to in-depth information both on our Container Technology Wiki and provided by hundreds of Kubernetes experts around the world.

Our goal is to make this website the world's biggest repository of resources about Kubernetes and other container-related technologies. Check back soon to see new content, and please let us know in the feedback form below what important Kubernetes information we are still missing.

Further Reading in this Wiki


Top Kubernetes Tutorials from the Community

Tutorial by: Udacity

Length: Long

Can help you learn: Configure and launch an auto-scaling, self-healing Kubernetes cluster, use Kubernetes to manage deploying, scaling, and updating your applications, employ best practices for using containers in general, and specifically Kubernetes, when architecting and developing new microservices.

Tutorial Steps:

  1. Introduction to microservices, Docker, building Docker images, registries.
  2. Introduction and setup of Kubernetes cluster, working with Kubernetes concepts like pods and services.
  3. Create, deploy, update, and manage Kubernetes applications and services.

Tutorial by: Udemy

Length: Long

Can help you learn: Set up Kubernetes on a single node on AWS, be able to run stateless and stateful applications, manage the Kubernetes cluster.

Tutorial Steps:

    • Introduction to Kubernetes, setting up Kubernetes with Kops, setting up Kubernetes locally with minikube.
    • Kubernetes concepts like pods, services, containers, nodes, replication controllers, labels, health checks.
    • Advanced use cases like mounting of volumes, Stateful sets, service discovery, auto-scaling, monitoring.
    • Resource allocation (CPU, memory), namespaces, networking, node maintenance.

Tutorial by: Codefresh

Length: Short

Can help you learn: Creating a Kubernetes cluster on AWS using Kops and deploying apps using Codefresh

Tutorial Steps:

  1. Set up the environment
  2. Deploy Kubernetes on AWS
  3. Manage Kubernetes and deploy containers using Codefresh UI

Tutorial by: Datawire Inc

Length: Short

Can help you learn: How to use Terraform to deploy a production-quality Kubernetes cluster

Tutorial steps:

  1. Technical design of Kubernetes on AWS
  2. AWS EC2 setup for Kubernetes
  3. Using Kops and Terraform to generate the cluster
  4. Reducing cost of AWS instances used in the cluster

Top Kubernetes Architecture Videos
