What is K3s? Architecture, Setup, and Security

Everything you need to know about K3s, the lightweight Kubernetes distribution by SUSE (Rancher Labs).

May 22, 2022

What is K3s? Architecture, Setup, and Security

K3s is a Kubernetes distribution that aims to simplify Kubernetes deployments. It is a lightweight single binary of approximately 45 MB that implements the Kubernetes APIs. It was created by Rancher Labs (now part of SUSE), and is fully certified by the Cloud Native Computing Foundation (CNCF) as a compliant Kubernetes distribution.

K3s includes only core Kubernetes components, such as kube-apiserver, kube-scheduler, kubelet, kube-controller-manager, and kube-proxy. Using only core components keeps the distribution lightweight, but you can easily replace components with external add-ons.

It bundles these components into unified processes presented as a simple server and agent model. Once you run the k3s server, it starts the Kubernetes server and automatically registers the localhost as an agent to create a one-node Kubernetes cluster.


When to Use K3s

Here are key use cases for k3s:

  • Run a lightweight Kubernetes development environment—k3s enables you to easily and quickly set up and run Kubernetes. It does not require deep knowledge of Kubernetes.
  • Run Kubernetes on Raspberry Pi clusters—Raspberry Pi clusters enable lightweight computing. K3s helps you run Kubernetes on Raspberry Pi devices.
  • Run Kubernetes on ARM architecture—k3s runs on ARM-based architectures, making it easier to run and manage Kubernetes on resource-constrained hardware.
  • CI/CD pipelines—k3s can be used to build Continuous Deployment (CD) pipelines in a GitOps paradigm. Users can set up a lightweight k3s cluster and use Argo to redeploy an application whenever declarative configurations are updated.
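The GitOps pattern in the last bullet can be sketched with an Argo CD Application manifest that redeploys an app on the k3s cluster whenever its Git repository changes. This is an illustrative example only; the application name, repository URL, and path are placeholders, not values from this article:

```yaml
# Hypothetical Argo CD Application: continuously syncs manifests from Git
# into the k3s cluster it is running on.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-app                 # placeholder name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/org/demo-app.git   # placeholder repo
    targetRevision: HEAD
    path: manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated: {}                # re-sync when declarative config is updated
```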

K3s vs K8s

Kubernetes (K8s) is a general-purpose container orchestration platform. K3s is a purpose-built distribution designed for resource-constrained environments such as edge devices, IoT hardware, and development machines. Here are key differences between K3s and K8s:

  • Speed—K3s deploys applications and spins up clusters faster than K8s, which takes longer to do both.
  • Footprint—K3s is lightweight; its small size lets it run clusters on resource-constrained IoT devices such as Raspberry Pi. Standard K8s is generally too heavy for edge and IoT hardware.
  • Scope—K3s is best suited to workloads that run in a single environment, while K8s can host workloads that span multiple environments.
  • Workload size—K3s is ideal for small workloads; its single binary of under 100 MB lets it quickly spin up clusters, schedule pods, and perform other tasks. K8s is ideal for larger workloads; it has a rich set of features and you can deploy it across complex infrastructure.

K3s Architecture

A server node is a bare-metal or virtual machine that runs the k3s server command, and a worker node is a machine that runs the k3s agent command. There are various ways you can use K3s for your applications. Here is a diagram visualizing a cluster using a single-node K3s server with an embedded SQLite database:

Image Source: Rancher

This configuration registers each agent node with the same server node. It lets a K3s user call the K3s API on the server node to manipulate Kubernetes resources.

You can configure single-server clusters to meet various use cases. However, environments that require guaranteed uptime of the Kubernetes control plane should run K3s in a high-availability (HA) configuration. An HA K3s cluster consists of two or more server nodes that serve the Kubernetes API and run other control plane services, plus an external datastore.

Image Source: Rancher

In a high-availability configuration, each agent node should register with the Kubernetes API through a fixed registration address, such as a load balancer or DNS name in front of the server nodes. After registration, agent nodes establish a connection directly to one of the server nodes.
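As a sketch, an HA setup with an external datastore could look like the following. The datastore endpoint, shared token, and load-balancer address are placeholders, not values from this article:

```shell
# On each server node: join the same external datastore (placeholder endpoint).
curl -sfL https://get.k3s.io | sh -s - server \
  --datastore-endpoint='mysql://user:pass@tcp(db.example.com:3306)/k3s' \
  --token=SHARED_SECRET

# Agents register through the fixed registration address (e.g. a load
# balancer in front of the servers), not an individual server's IP.
curl -sfL https://get.k3s.io | K3S_URL=https://k3s-lb.example.com:6443 \
  K3S_TOKEN=SHARED_SECRET sh -
```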

Related content: Read our guide to Kubernetes (K8s) architecture ›

K3s Installation

To install the K3s control plane

You can install K3s on systemd- or openrc-based systems using the installation script. The script installs K3s and other required utilities, including kubectl, crictl, ctr, k3s-killall.sh, and k3s-uninstall.sh.

Use the following command to install K3s:

curl -sfL https://get.k3s.io | sh -

When K3s starts on the master node, it automatically restarts by default if the node reboots or if the process crashes or is terminated. A kubeconfig file is created at /etc/rancher/k3s/k3s.yaml.
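To verify the install (a minimal sketch, assuming a default install with root access), you can query the node list through the kubectl bundled with K3s, or point a standalone kubectl at the generated kubeconfig:

```shell
# k3s bundles kubectl; the embedded copy reads the generated kubeconfig itself.
sudo k3s kubectl get nodes

# Or use a standalone kubectl against the generated kubeconfig.
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get nodes
```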

To install K3s on worker nodes

To install on worker nodes and have them join the cluster, run the installation script after setting two environment variables: 

  • K3S_URL—instructs K3s to run in worker mode and register as a node at the URL, which is the address of the K3s server. 
  • K3S_TOKEN—the value of the node token stored at /var/lib/rancher/k3s/server/node-token on the master node.

Here is an example:

curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetoken sh -
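The node token referenced above can be read on the server before running the agent install, and the new worker should then appear in the node list (a sketch assuming the default paths):

```shell
# On the server: print the join token used as K3S_TOKEN.
sudo cat /var/lib/rancher/k3s/server/node-token

# After the agent install completes, the worker appears on the server.
sudo k3s kubectl get nodes
```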

Hardening K3s Clusters

Here are several important best practices for hardening your K3s clusters against cyber attacks. For more details, see the full CIS hardening guide.

Related content: Read our guide to the Kubernetes (K8s) CIS benchmark ›

Ensure protect-kernel-defaults is Set on the Host

protect-kernel-defaults is a kubelet flag that protects against deploying K3s on a non-secured host. It causes the kubelet to exit if the required kernel parameters are not set or their values are different from the kubelet defaults.
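A sketch of preparing a host for this flag, using the kernel parameters recommended in the K3s CIS hardening guide (verify the exact values against the guide for your K3s version):

```shell
# Set the kernel parameters the kubelet expects before starting K3s.
cat <<EOF | sudo tee /etc/sysctl.d/90-kubelet.conf
vm.panic_on_oom=0
vm.overcommit_memory=1
kernel.panic=10
kernel.panic_on_oops=1
EOF
sudo sysctl -p /etc/sysctl.d/90-kubelet.conf

# Start the server with protect-kernel-defaults enabled; the kubelet will
# now exit if the host's kernel parameters differ from its defaults.
curl -sfL https://get.k3s.io | sh -s - server --protect-kernel-defaults=true
```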

NetworkPolicies

Ensure all namespaces have a network policy that limits traffic into namespaces and pods. For example, below is a policy that limits traffic into the kube-system namespace only from pods within that namespace:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: intra-namespace
  namespace: kube-system
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: kube-system

Note that according to the K3s documentation, network policies are currently experimental.
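Assuming the policy above is saved as intra-namespace.yaml (a placeholder filename), it can be applied and inspected like any other namespaced resource:

```shell
# Apply the policy and confirm it is in place.
kubectl apply -f intra-namespace.yaml
kubectl describe networkpolicy intra-namespace -n kube-system
```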

API Server Audit Configuration

By default, K3s does not create a log directory or an audit policy. To meet CIS requirements, enable audit logging for the K3s API server before starting K3s. Apply restrictive access permissions to the log directory to avoid leaking sensitive log data.

Below is an example audit policy that logs all request metadata. You should pass the audit.yaml configuration as an argument to the API Server to activate it.

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
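One way to wire this up (a sketch based on the K3s CIS hardening guide; the paths are conventional, not mandated) is to create a restricted log directory, save the policy, and pass both to the API server via --kube-apiserver-arg:

```shell
# Create the audit log directory with restrictive permissions.
sudo mkdir -p -m 700 /var/lib/rancher/k3s/server/logs

# Save the policy above as audit.yaml, then start K3s with audit flags.
sudo k3s server \
  --kube-apiserver-arg='audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log' \
  --kube-apiserver-arg='audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml'
```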

Kubernetes Security with Aqua

Aqua tames the complexity of Kubernetes security with KSPM (Kubernetes Security Posture Management) and advanced agentless Kubernetes Runtime Protection. 

Aqua provides Kubernetes-native capabilities to achieve policy-driven, full-lifecycle protection and compliance for K8s applications:

  • Kubernetes Security Posture Management (KSPM) – a holistic view of the security posture of your Kubernetes infrastructure for accurate reporting and remediation, helping you identify and remediate security risks.
  • Automate Kubernetes security configuration and compliance – identify and remediate risks through security assessments and automated compliance monitoring, helping you enforce policy-driven security monitoring and governance.
  • Control pod deployment based on K8s risk – determine admission of workloads across the cluster based on pod, node, and cluster attributes. Enable contextual reduction of risk with out-of-the-box best practices and custom Open Policy Agent (OPA) rules.
  • Protect entire clusters with agentless runtime security – runtime protection for Kubernetes workloads with no need for host OS access, for easy, seamless deployment in managed or restricted K8s environments.
  • Open Source Kubernetes Security – Aqua provides the most popular open source tools for securing Kubernetes, including Kube-Bench, which assesses Kubernetes clusters against 100+ tests of the CIS Benchmark, and Kube-Hunter, which performs penetration tests using dozens of known attack vectors.