What are Serverless Containers?
Many IT and DevOps teams are migrating resources from on-premises infrastructure to the cloud. In addition, organizations are moving to container and serverless architectures, collectively known as cloud native technologies. Kubernetes has become the de facto standard for container orchestration. Containerization has compelling benefits, but is also difficult to set up and manage at large scale.
Serverless containers can help organizations leverage the cloud while easily adopting containerized infrastructure. The term “serverless containers” refers to technologies that enable cloud users to run containers, but outsource the effort of managing the actual servers or computing infrastructure they are running on. This can enable more rapid adoption, and easier management and maintenance, of large scale containerized workloads in the cloud.
In this article, you will learn:
- Serverless Containers vs. Serverless Functions
- 3 Leading Serverless Container Platforms Compared
- AWS Fargate
- Azure Container Instances
- Google Cloud Run
Serverless Containers vs. Serverless Functions
The original form of the serverless computing model was serverless functions. Serverless runtime platforms, such as AWS Lambda or Azure Functions, let you provide code in a supported programming language, package it as a serverless function, and have it run automatically whenever an integrated system invokes the function. In this model, you pay according to the number and duration of function invocations, and do not need to manage any underlying infrastructure.
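To make the model concrete, here is a minimal sketch of a serverless function in the style of an AWS Lambda handler. The function name, event shape, and response format are illustrative assumptions for this example, not a specific service contract:

```python
import json

# Minimal Lambda-style handler: the platform calls handler(event, context)
# whenever an integrated system invokes the function. The "name" field in
# the event is a hypothetical input used only for this illustration.
def handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Simulate a single invocation locally:
print(handler({"name": "serverless"}, None)["statusCode"])  # 200
```

You pay only for invocations like the one simulated above; the provider provisions, scales, and tears down the compute that runs the handler.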
There are several important differences between serverless containers and serverless functions:
- Resource limits—serverless functions impose strict limits on the CPU and RAM you can use, so they are best suited to small processes that are not resource intensive. Serverless containers let you run complete containers, with a much larger CPU and RAM allocation.
- Execution time—serverless functions have a maximum execution time, typically around 15 minutes. Serverless containers can run continuously and do not have a limited execution time.
- Containerization—serverless functions do not use containers. This makes it easier to set up a function, because you don’t need to build and test a container image. The serverless container model requires more setup, but gives you more flexibility to package dependencies and libraries together with your core application.
Related content: read our guide to serverless vs containers to understand the difference between serverless functions and “pure” non-managed container infrastructure.
3 Leading Serverless Container Platforms Compared
AWS Fargate
AWS Fargate lets you run containers on AWS without being aware of the underlying Amazon EC2 compute instances. Fargate eliminates the need to configure, provision, or scale the virtual machines (VMs) that run your containers. There is no need to choose a server type, optimize cluster packing, or determine when a cluster should be scaled.
Fargate does have some limitations:
- Maximum resources per pod—up to 30 GB of memory and four virtual CPUs (vCPUs).
- Limited Kubernetes options—for example, Fargate cannot run privileged pods, DaemonSets, or any pods that use HostPort or HostNetwork.
- Load balancer support—Fargate supports only one type of load balancer, the Application Load Balancer.
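As a rough illustration, here is what a minimal ECS task definition targeting the Fargate launch type might look like. The family, container name, and image are placeholder values, and the cpu/memory settings must come from Fargate's supported combinations:

```json
{
  "family": "demo-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:stable",
      "portMappings": [{ "containerPort": 80 }]
    }
  ]
}
```

Because the launch type is Fargate, the task definition never names an EC2 instance type; AWS selects and scales the compute that runs the task.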
Integration with Other Services
AWS Fargate can integrate with Amazon Elastic Kubernetes Service (EKS) as well as Amazon Elastic Container Service (ECS).
Amazon ECS is a lightweight container management service, not based on Kubernetes, that lets you orchestrate Docker containers. Amazon EKS offers fully-managed Kubernetes as a Service (KaaS). ECS and EKS both use Fargate-provisioned containers to automatically scale, load balance, and schedule containers.
AWS provides several native tools you can use to monitor and secure Fargate operations, including:
- Amazon CloudWatch alarms—let you track any single metric over a predefined period of time. If you are using services with tasks that employ the Fargate launch type, CloudWatch alarms can use metrics, like CPU and memory utilization, to scale tasks in and out.
- Amazon CloudWatch logs—let you store, access, and monitor the log files of containers located in ECS tasks. To do this, you need to specify the awslogs log driver in your task definitions.
- AWS Identity and Access Management (IAM)—helps you securely control access to AWS resources. IAM lets you control who can sign in (authentication) and which permissions they have (authorization) to use AWS resources.
- Amazon ECS interface VPC endpoints—can help improve the security on your VPC. You can do this by configuring Amazon ECS to use interface VPC endpoints powered by AWS PrivateLink. This technology enables you to privately access ECS APIs via private IP addresses.
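To send container logs to CloudWatch Logs, you specify the awslogs log driver inside a container definition in your task definition. The log group name, region, and stream prefix below are placeholders:

```json
"logConfiguration": {
  "logDriver": "awslogs",
  "options": {
    "awslogs-group": "/ecs/demo-app",
    "awslogs-region": "us-east-1",
    "awslogs-stream-prefix": "web"
  }
}
```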
Azure Container Instances
Azure Container Instances (ACI) lets you deploy containers in the Azure cloud without having to handle the provisioning and maintenance of the underlying infrastructure. There is no need to provision VMs or set up and manage a container orchestrator to deploy and run your containers, although ACI supports the use of orchestrators like Kubernetes. The service supports both Windows and Linux containers.
ACI lets you use the command-line interface (CLI) or the Azure portal to spin up new containers. The system then automatically provisions and scales the underlying compute resources needed to run the new containers. You can use standard Docker images, including those from container registries like Azure Container Registry and Docker Hub.
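For example, a single Azure CLI command can start a container instance. The resource group and container name below are placeholders, and the command assumes you are already logged in with az login:

```shell
# Spin up a public container instance from a sample image
# (resource group and names are hypothetical).
az container create \
  --resource-group my-rg \
  --name hello-aci \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --ports 80 \
  --ip-address Public
```

Azure allocates the underlying compute automatically; there is no VM to choose and no cluster to join.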
ACI supports only public IPs, and while it runs both Windows and Linux container images, not all Windows container features are supported. Containers are limited to 14 GB of memory and four vCPUs. You can find out more about service limits in the official documentation.
Integration with Other Services
ACI can integrate with a number of other Azure services, including Azure Kubernetes Service (AKS). AKS offers managed, cloud-based container orchestration based on Kubernetes. You can use AKS to manage, scale, and deploy container-based applications and Docker containers across a cluster of container hosts.
ACI can help improve AKS processes by providing fast, isolated compute that meets spikes in traffic. For example, AKS can use a Virtual Kubelet to initiate ACI-provisioned pods that start in a few seconds. This type of use case ensures that AKS runs with the minimum required capacity: once an AKS cluster needs more capacity, you can use ACI to add pods without having to manage additional servers.
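In AKS, this bursting pattern is typically expressed by scheduling pods onto the ACI-backed virtual node. The selectors and tolerations below follow the virtual node convention used in Azure's samples; the pod name and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: burst-pod
spec:
  containers:
    - name: web
      image: nginx:stable
  # Target the ACI-backed virtual node rather than a regular VM node.
  nodeSelector:
    kubernetes.io/role: agent
    type: virtual-kubelet
  tolerations:
    - key: virtual-kubelet.io/provider
      operator: Exists
```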
Here are several aspects to consider when implementing security measures for ACI:
- Monitor and scan container images—you should scan container images for known and potential vulnerabilities, whether the image is from a public or private registry. This can help you detect threats and prevent security issues. You can find various scanning solutions in the Azure Marketplace.
- Use a private registry—public registries are available to everyone, and images pulled from them can contain threats. Instead of a public registry, consider using a private registry, like Docker Trusted Registry, which is generally considered more secure. You can install a private registry on-premises or use it in your virtual private cloud.
- Protect credentials—containers are often scattered across multiple clusters and Azure regions. To protect credentials, you should use measures like tokens or passwords. You should also make sure that only authorized, privileged users can gain access. To keep track of access control, create an inventory of all credential secrets and use a secret management tool.
Google Cloud Run
Cloud Run is a serverless platform that enables you to run stateless, request-driven containers in a fully managed environment. You containerize your application, and Cloud Run automatically provisions and scales it.
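Deploying to Cloud Run is typically a single CLI command. The service name, image path, and region below are placeholders, and the command assumes the gcloud CLI is installed and authenticated:

```shell
# Deploy a container image to fully managed Cloud Run
# (project, service, and region are hypothetical).
gcloud run deploy hello \
  --image gcr.io/my-project/hello \
  --region us-central1
```

Cloud Run then scales the service with request traffic, including down to zero instances when the service is idle.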
Cloud Run is built on the Knative Serving API, which means you can also deploy Cloud Run workloads on top of your own Kubernetes clusters. This portability makes the service significantly more flexible.
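Because the platform follows the Knative Serving API, a Cloud Run service can be described as a Knative Service resource. The service name and image path below are placeholders:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        # Placeholder image path; any stateless HTTP container works.
        - image: gcr.io/my-project/hello
```

The same manifest shape can be applied to any cluster running Knative Serving, which is what makes Cloud Run workloads portable.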
There are limitations to Cloud Run:
- High-latency requests—requests to services with custom domains can experience very high latency when originating from some locations, such as us-east4 and asia-northeast1.
- No HTTP/2 Server Push—Cloud Run supports HTTP/2, but not HTTP/2 Server Push.
- Unsupported HTTP methods—requests that use methods such as CONNECT and TRACE cannot be received by any service running on Cloud Run.
- Limited resources for containers—to learn about exact limitations, see the official documentation.
Integration with Other Services
Cloud Run can integrate with other services, including Google Kubernetes Engine (GKE). GKE is an orchestration and management system for container clusters and Docker containers that run on Google Cloud.
Cloud Run on GKE provides the same developer experience as fully managed Cloud Run, with Cloud Run providing a managed operational layer on top of your GKE cluster. Developers can keep using their existing code, command-line scripts, and tools, and can move containerized workloads seamlessly to any Kubernetes-compliant cluster.
When using Cloud Run without GKE, each container gets a single vCPU, and this cannot be customized. Cloud Run on GKE gives you more flexibility, letting you choose VM sizes with more vCPUs or additional RAM, as well as hardware acceleration options. You can run Cloud Run workloads alongside non-containerized workloads on GKE. Containers can also directly access your existing VPCs.
Here are several aspects to consider when implementing security measures for Cloud Run:
- Use per-service identity—Google recommends giving each service a dedicated identity. You can do this by assigning a user-managed service account instead of the default account.
- Optimize service accounts with Recommender—this feature can automatically provide recommendations and insights about Google Cloud resources, based on machine learning, current resource usage, and heuristic methods.
- Use VPC Service Controls—this Google Cloud feature lets you set up a secure perimeter designed to prevent data exfiltration. You can integrate VPC Service Controls with your Cloud Run services to add an additional security layer.
- Use customer-managed encryption keys—Cloud KMS customer-managed encryption keys (CMEK) can help you secure your Cloud Run services and any related data. This functionality lets you deploy containers whose image content is protected by a CMEK key.
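Following the per-service identity recommendation above, you might create a dedicated service account and attach it at deploy time. The account and service names are placeholders, and the commands assume an authenticated gcloud CLI:

```shell
# Create a dedicated service account for one service (names are hypothetical)
gcloud iam service-accounts create hello-sa

# Deploy the service with its own identity instead of the default account
gcloud run deploy hello \
  --image gcr.io/my-project/hello \
  --service-account hello-sa@my-project.iam.gserviceaccount.com \
  --region us-central1
```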