The idea behind serverless computing is that it lets you, as a developer, focus only on writing your code. With serverless computing, you just upload the code somewhere, and it runs whenever you invoke it. Simply put, serverless computing frees you from the complexities of configuring and maintaining Kubernetes clusters. This page gathers resources about how to build a serverless Kubernetes cluster.
Below we have compiled publicly available sources from around the world that present views on Kubernetes Serverless.
Best Practices for Running Containers and Kubernetes in Production — covering security, governance, monitoring, storage, networking, container life cycle management and container orchestration.
Kubernetes makes the management side of infrastructure easier, but it in no way manages itself. As the application scales up, down, in, or out, the infrastructure needs to change, be replaced, or be upgraded. What serverless enables developers to do is get closer to the end user while still understanding the issues the operations teams face in maintaining the application. For Kubernetes to truly become serverless, it has to shed the assumption that a physical machine is required in order to interact with the Kubernetes API.
Kubernetes and serverless have more than earned their status as platforms that offer organizations tremendous boosts in agility, scalability and computing performance in a number of ways. However, it is easy to forget that Kubernetes offers advantages that serverless alternatives do not have — and vice versa. Learn how to decide whether Kubernetes or serverless offers the best fit.
This guide will show you how to get started with serverless computing using Bitnami’s Kubeless platform, running on top of a Kubernetes cluster in Minikube or Google Container Engine (GKE). This tutorial assumes that you already have a Kubernetes cluster on either Minikube or GKE with kubectl installed.
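As a sketch of what such a serverless function looks like, the snippet below follows the convention of the Kubeless Python runtime, which invokes a handler with an event dict and a context object. The function name, file name, and greeting are illustrative assumptions, not taken from the tutorial.

```python
# hello.py -- a minimal function sketch in the style of the Kubeless
# Python runtime (the handler name and greeting are illustrative)
def handler(event, context):
    # In the Kubeless Python runtime, the request payload arrives
    # under event['data']; echo it back inside a greeting
    return "Hello, " + str(event["data"])
```

Under Kubeless, a file like this would be deployed with the `kubeless function deploy` CLI command, pointing `--from-file` at the source file and `--handler` at the module and function name; invoking the function then runs the handler with the request payload as the event.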
This tutorial will walk you through enabling serverless in your Kubernetes environment. Kubernetes is a popular platform for managing serverless workloads and microservice application containers, and it makes processing those workloads quicker and easier.
The serverless architecture allows developers to focus more on business expansion and innovation, without worrying about traditional server purchase, hardware maintenance, network topology, and resource resizing. The development of containers and container orchestration tools has significantly reduced the development costs of serverless products and facilitated the birth of many excellent open-source serverless products, most of which are Kubernetes-based.
Kubernetes Cluster Policies — For enterprise production deployments of Kubernetes clusters, enforcing cluster-wide policies to restrict what a container is allowed to do is an extremely important requirement. This page gathers resources about Kubernetes Cluster Policies such as Pod Security Policies, Network Policies and Resource Quotas.
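As a minimal sketch of one such cluster-wide policy, the manifest below defines a ResourceQuota; the namespace name and the limit values are illustrative assumptions, not recommended settings.

```yaml
# Caps total CPU/memory requests and limits, and the number of pods,
# for everything running in the team-a namespace (values are examples)
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: team-a
spec:
  hard:
    pods: "20"
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```

Applied with `kubectl apply -f`, a quota like this causes the API server to reject new pods in the namespace once any of the hard limits would be exceeded.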
Kubernetes Federation — Kubernetes Federation gives you the ability to manage deployments and services across all the Kubernetes clusters located in different regions. This page gathers resources on how to set up a Kubernetes Cluster Federation, including tutorials and examples.
Kubernetes High Availability Clusters — Kubernetes clusters enable a higher level of abstraction to deploy and manage a group of containers that comprise the micro-services in a cloud-native application. This page gathers resources about high availability cluster components and how to set up a high availability Kubernetes cluster.
Kubernetes Logging — Application and system logs can help you understand what is happening inside a cluster. Kubernetes provides two logging endpoints for application and cluster logs: Stackdriver Logging (for use with Google Cloud Platform) and Elasticsearch. This page gathers resources about Kubernetes logging architecture, including tutorials and examples.
Kubernetes Proxies — There are several different proxies you may encounter when using Kubernetes: kubectl, apiserver proxy, kube-proxy, a proxy/load-balancer in front of apiserver and a cloud load balancer on external services. This page gathers resources about the different types of Kubernetes proxies.
Kubernetes Serverless — The idea behind serverless computing is that it lets you, as a developer, focus only on writing your code. With serverless computing, you just upload the code somewhere, and it runs whenever you invoke it. Simply put, serverless computing frees you from the complexities of configuring and maintaining Kubernetes clusters. This page gathers resources about how to build a serverless Kubernetes cluster.
Working with Kubernetes Dashboard — Kubernetes Dashboard is a general-purpose, web-based UI for Kubernetes clusters. It allows users to manage applications running in the cluster and troubleshoot them, as well as manage the cluster itself. This page gathers resources on how to install, access and secure the Kubernetes Dashboard.