Kubernetes is a system, originally developed at Google, for managing containerized applications in a clustered environment. It aims to provide better ways of managing related, distributed components across varied infrastructure. This page gathers resources about the advantages and common use cases of Kubernetes.
The architecture of Kubernetes provides a flexible, loosely-coupled mechanism for service discovery. Like most distributed computing platforms, a Kubernetes cluster consists of at least one master and multiple compute nodes. This page gathers resources about the Kubernetes architecture components like Kubernetes Nodes, Kubernetes Pods, Kubernetes Registry and more.
- Kubernetes Nodes — A node is a worker machine in Kubernetes, previously known as a minion. A node may be a VM or physical machine, depending on the cluster. Each node has the services necessary to run pods and is managed by the master components. The services on a node include Docker, kubelet and kube-proxy. This page gathers resources about how to create and manage Kubernetes Nodes.
- Kubernetes Pods — A pod is a group of one or more containers (such as Docker containers), with shared storage/network, and a specification for how to run the containers. A pod’s contents are always co-located and co-scheduled, and run in a shared context. This page gathers resources on what Kubernetes Pods are and how to create and manage them.
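To make the Pod concept concrete, a minimal manifest might look like the following sketch (the name, label and image are illustrative placeholders, not from any specific deployment):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod          # hypothetical name
  labels:
    app: example
spec:
  containers:
  - name: web
    image: nginx:1.25        # any container image will do
    ports:
    - containerPort: 80      # port the container listens on
```

Saved as pod.yaml, a manifest like this would be created with `kubectl apply -f pod.yaml`.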
- Kubernetes Controllers and Control Plane — Kubernetes runs a group of controllers that take care of routine tasks to ensure the desired state of the cluster matches the observed state. Basically, each controller is responsible for a particular resource in the Kubernetes world. This page gathers resources about the Kubernetes controllers including information about replication controllers, node controllers and the Kubernetes controller manager.
- Kubernetes DaemonSets — Running a container on all cluster nodes is a common task. Aggregating service logs, collecting node metrics, or running a networked storage cluster all require a container to be replicated across all nodes. In Kubernetes, this is done with a DaemonSet. A DaemonSet ensures that an instance of a specific pod is running on all (or a selection of) nodes in a cluster. This page gathers resources on how to use DaemonSets and deploy a daemon to all nodes.
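For example, a log-collection agent could be replicated to every node with a DaemonSet manifest along these lines (the names and image are placeholders):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent                     # hypothetical log-collection daemon
spec:
  selector:
    matchLabels:
      name: log-agent
  template:
    metadata:
      labels:
        name: log-agent
    spec:
      containers:
      - name: agent
        image: fluent/fluentd:v1.16   # example log-shipping image
```

The controller schedules one copy of this Pod template onto each eligible node, including nodes added to the cluster later.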
- Container Runtime Interface — The Container Runtime Interface (CRI) is a plugin interface which enables kubelet to use a wide variety of container runtimes, without the need to recompile. CRI consists of a protocol buffers specification, a gRPC API, and libraries, with additional specifications and tools under active development. This page gathers resources about how to use the Container Runtime Interface and how to build a Kubernetes cluster using the CRI.
- Working with Containers in Kubernetes — Container orchestration is most commonly used for clusters that consist of many nodes. It is mainly used to deploy and manage complex containerized applications. Container orchestration can also be employed for simple clusters or for individual containers. This page gathers resources about how to work and orchestrate containers with Kubernetes.
- Working with Images in Kubernetes — This page gathers resources about how to create and work with container images (such as Docker images) in Kubernetes using different environments like Azure, OpenShift and more.
- Workloads in Kubernetes — As more and more enterprises adopt a container-based architecture, a container orchestrator has become necessary in order to provide wide-ranging options to manage containerized workloads. Kubernetes provides many such options. This page gathers resources on how to run workloads in Kubernetes.
- Kubernetes Services — A Kubernetes service is an abstraction which defines a logical set of Pods and a policy by which to access them - sometimes called a micro-service. The set of Pods targeted by a service is usually determined by a Label Selector. This page gathers resources about the Kubernetes service types and how to create and work with them.
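A sketch of a Service that selects Pods by label (all names and ports here are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example          # the Label Selector: targets Pods with this label
  ports:
  - port: 80              # port exposed by the Service
    targetPort: 8080      # port the selected Pods listen on
```

Pods carrying the `app: example` label become endpoints of the Service automatically, so callers address the stable Service name rather than individual Pods.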
- Kubernetes Jobs — A Kubernetes job is a supervisor for pods carrying out batch processes, that is, processes that run for a certain time to completion, for example a calculation or a backup operation. This page gathers resources about Kubernetes Jobs, including an introduction, tutorials, examples and more.
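A run-to-completion batch process could be declared with a Job manifest roughly like this (the image and command are placeholders standing in for a real backup or calculation):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-backup
spec:
  backoffLimit: 3              # retry a failed Pod up to three times
  template:
    spec:
      restartPolicy: Never     # Jobs require Never or OnFailure
      containers:
      - name: backup
        image: busybox:1.36
        command: ["sh", "-c", "echo backing up && sleep 5"]
```

The Job controller tracks successful completions and stops creating Pods once the work is done.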
- Kubernetes and Microservices — Kubernetes supports a microservices architecture through the service construct. It allows developers to abstract away the functionality of a set of Pods and expose it to other developers through a well-defined API. This page gathers resources about how to use Kubernetes to create a continuous delivery configuration for building microservices.
Resources about the process of managing and maintaining production-grade, highly available Kubernetes clusters, including Kubernetes security, Kubernetes networking, Kubernetes load balancing and more.
- Installing Kubernetes — There are many ways to install Kubernetes and the obvious starting point is the setup section, but the installation process can sometimes be a challenge. This page gathers resources about how to install Kubernetes on various environments like Ubuntu, Windows and CentOS.
- Kubernetes Configuration — Kubernetes reads YAML files to configure services, pods and replication controllers. This page gathers resources about working with the Kubernetes configuration to deploy containers.
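As a sketch of that workflow, a Deployment (the modern successor to the replication controller) is declared in YAML and applied with kubectl; every name below is a placeholder:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
spec:
  replicas: 3                # desired number of Pod replicas
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: web
        image: nginx:1.25
```

`kubectl apply -f deployment.yaml` creates the object; re-applying an edited file updates the running state to match.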
- Kubernetes Monitoring — At any scale, monitoring Kubernetes itself, as well as the health of application deployments and the containers running them, is essential to ensure good performance. Monitoring Kubernetes effectively requires rethinking and reorienting your monitoring strategies, especially if you are used to monitoring traditional hosts such as VMs or physical machines. This page gathers resources about how to monitor a Kubernetes cluster with tools like Prometheus and Datadog.
- Kubernetes Debugging and Troubleshooting — Kubernetes is a utility that makes it possible to deploy and manage sets of docker-formatted containers that run applications. This page gathers resources about how to troubleshoot problems that arise when creating and managing Kubernetes pods, replication controllers, services, and containers.
- Kubernetes Load Balancing — Load balancing is a relatively straightforward task in many non-container environments (i.e., balancing between servers), but it involves a bit of special handling when it comes to containers. There are two different types of load balancing in Kubernetes - Internal load balancing across containers of the same type using a label, and external load balancing. This page gathers resources about how to configure and use the Kubernetes load balancer feature.
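Both types can be seen in a single Service sketch: the label selector performs the internal balancing across matching Pods, while `type: LoadBalancer` requests an external load balancer from the cloud provider (names and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-lb
spec:
  type: LoadBalancer     # external: provisions a cloud load balancer
  selector:
    app: example         # internal: balances across Pods with this label
  ports:
  - port: 80
    targetPort: 8080
```

On a cluster without a cloud provider, the Service stays in a pending state for its external IP but internal balancing still works.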
- Kubernetes Security — Kubernetes provides many controls that can improve application security. Configuring them requires intimate knowledge of Kubernetes and of the deployment’s security requirements. This page gathers resources about security best practices for Kubernetes, including best practices for deployment, sharing data and network security.
- Kubernetes Networking — Kubernetes does not provide any default network implementation; rather, it only defines the model and leaves it to other tools to implement. There are many implementations nowadays, like Flannel, Calico and Weave. This page gathers resources about how to set up highly available networked Kubernetes clusters.
- Kubernetes Storage Management — Storage is a critical part of running stateful containers, and Kubernetes offers powerful primitives for managing it. This page gathers resources about managing Kubernetes storage options and how to provision storage in Kubernetes.
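One of those primitives is the PersistentVolumeClaim, by which a Pod requests storage without knowing how it is provisioned; a minimal hypothetical claim:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  accessModes:
  - ReadWriteOnce        # mountable read-write by a single node
  resources:
    requests:
      storage: 5Gi       # amount of storage requested
```

A Pod then references the claim by name in a volume, and Kubernetes binds it to a matching PersistentVolume.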
- Kubernetes in Production — The default configurations for Kubernetes components are not designed for heavy and dynamic production workloads, characteristic of DevOps environments and micro-services based application deployments where containers are quickly created and destroyed. This page gathers resources about how to create a production-ready Kubernetes cluster, including examples and tutorials.
A Kubernetes cluster is made of a master node and a set of worker nodes. In a production environment these run in a distributed setup on multiple nodes. For testing purposes, all the components can run on the same node (physical or virtual) by using minikube. Kubernetes has six main components that form a functioning cluster: API server, Scheduler, Controller manager, kubelet, kube-proxy, etcd. This page gathers resources about Kubernetes administrative procedures such as configuration, resource management and more.
- Kubernetes Cluster Networking
- Kubernetes Cluster Policies — For enterprise production deployments of Kubernetes clusters, enforcing cluster-wide policies to restrict what a container is allowed to do is an extremely important requirement. This page gathers resources about Kubernetes Cluster Policies such as Pod Security Policies, Network Policies and Resource Quotas.
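As one example of such a policy, a ResourceQuota caps what a namespace may consume (the namespace and limits below are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: example-quota
  namespace: dev             # hypothetical namespace being restricted
spec:
  hard:
    pods: "20"               # at most 20 Pods in the namespace
    requests.cpu: "4"        # total CPU requests capped at 4 cores
    requests.memory: 8Gi     # total memory requests capped at 8 GiB
```

Requests that would push the namespace past these limits are rejected by the API server.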
- Kubernetes Federation — Kubernetes Federation gives you the ability to manage deployments and services across all the Kubernetes clusters located in different regions. This page gathers resources on how to set up a Kubernetes Cluster Federation, including tutorials and examples.
- Kubernetes High Availability Clusters — Kubernetes clusters enable a higher level of abstraction to deploy and manage a group of containers that comprise the micro-services in a cloud-native application. This page gathers resources about high availability cluster components and how to set up a high availability Kubernetes cluster.
- Kubernetes Logging — Application and system logs can help you understand what is happening inside a cluster. The logs are particularly useful for debugging problems and monitoring cluster activity. Kubernetes provides two logging end-points for applications and cluster logs: Stackdriver Logging for use with Google Cloud Platform and Elasticsearch. This page gathers resources about Kubernetes logging architecture including tutorials and examples.
- Kubernetes Proxies — There are several different proxies you may encounter when using Kubernetes: kubectl, apiserver proxy, kube-proxy, a proxy/load-balancer in front of apiserver and a cloud load balancer on external services. Kubernetes users will typically not need to worry about anything other than the first two types. The cluster admin will ensure that the latter types are set up correctly. This page gathers resources about the different types of Kubernetes proxies.
- Kubernetes Serverless — The idea behind serverless computing is that it lets you, as a developer, focus only on writing your code. With serverless computing, you just upload the code somewhere, and it runs whenever you invoke it. Simply put, serverless computing frees you from the complexities of configuring and maintaining Kubernetes clusters. This page gathers resources about how to build a Serverless Kubernetes cluster.
Resources on building blocks of a container architecture, and architectural options organizations face when using containers for application development.
- What is a Container — Containers are a method of virtualization that packages an application's code, configurations, and dependencies into building blocks for consistency, efficiency, productivity, and version control. This page gathers resources about containers, including technical definitions and comparisons.
- What is a Container Image — A container image is a self-contained piece of software that has everything in it needed to run – code, tools, and resources. This page gathers resources about container images, including tutorials and container-related conferences.
- What is a Container Image Repository — A container image repository is a collection of related container images, usually providing different versions of the same application or service. This page gathers resources about image repositories, including tutorials and specific environments in which image repositories are used.
- Container Image Registries — A container image registry is a service that stores container images, and is hosted either by a third-party or as a public/private registry such as Docker Hub, Quay, and so on. This page gathers resources about container image registries, including tutorials and specific technologies or tools related to container image registries.
- Containers and Agile Development — Agile software development and delivery via containerization are tightly related. This page includes resources about the benefits of using containers in the agile development cycle.
- Containers and DevOps — DevOps is a set of cultural practices that emphasize collaboration between all parts of the IT organization and the “continuous delivery” of software. This page gathers resources about how containers fit into the DevOps ecosystem and how to implement DevOps with containerization.
- Containers vs Virtual Machines — A virtual machine (VM) is an operating system or application environment installed on software, which imitates dedicated hardware. This page gathers resources about the containers vs virtual machines comparison, including a comparison of strengths and weaknesses, application portability, security and isolation, and more.
- Containers vs Unikernels — Unikernels are application-sized virtualization: lightweight like a container, but with their own kernel and OS like a virtual machine. A unikernel is an image containing a library operating system that can be run directly on a hypervisor. This page gathers resources about containers and unikernels, including a review of their differences.
- Containers vs Traditional Application Model — The traditional application model is a model in which applications are executed directly on virtual machines or on bare-metal servers. This page gathers resources about the difference between containerized infrastructure and the traditional application model.
- Containers and Microservices — Microservices or microservices architecture describes a particular way of designing software applications as suites of independently deployable services. This page gathers resources about using containers to build a microservices architecture and the benefits of combining microservices with containers.
Resources about the advantages of containers for developers and ops, including immutability, utilization, portability, performance and scalability.
- Container Immutability — The principle of container immutability regards an image unchangeable once it is built, and requires creating a new image if changes need to be made. This page gathers resources about the container immutability principle, its benefits and implications.
- Container Resource Utilization — Container resource utilization refers to the process of making the most of the available computing resources, like CPU and memory, in order to achieve the best container performance. This page gathers resources about how to manage resources to get the optimal container performance.
- Container Portability — Container portability means the ability to move an application, in other words, port it from one host environment to another. The new host environment could be a different kind of operating system, different version of the same operating system or a different type of hardware platform. This page gathers resources about the benefits of container portability.
- Container Performance — Container performance refers to speed-related factors such as container startup time, resource distribution, and redundancy (duplication of components), and how these affect the software delivery pipeline. This page gathers resources about container performance, including best practices, performance analysis, and academic papers.
- Container Scalability — Container scalability is the trait where a container application can handle increased loads of work. This can be achieved by reconfiguring the existing architecture of a single machine to increase available resources (Vertical Scalability) or by provisioning additional containers within a cluster of distributed machines (Horizontal Scalability). This page gathers resources about how to orchestrate container applications for high scalability.
- Container Operating Costs — Containers' benefits are not just technical. Containers can also reduce costs - a big reason why companies are now adopting them. This page gathers resources about container operating costs and their influence on overall system costs.
Containers are quickly becoming popular as a way to speed and simplify application deployment. However, while developers often find it fast and easy to deploy containerized applications, experts say that enterprises sometimes run into unexpected challenges when deploying containers in production. This page gathers resources about some of the major challenges in container adoption and how to overcome them.
- Container Storage Best Practices — While a container keeps its bundle of software and dependencies wherever it goes, it doesn’t store data, so it can maintain a light footprint. If a process stops or the container is rebooted, all the data associated with the applications within is lost. This page gathers resources about how to overcome this challenge and achieve persistent storage for containers.
- Container Networking Best Practices — Container systems need networking functionality in order to function properly and to connect distributed applications across the cloud. This page gathers resources about container networking best practices, including challenges and concepts of container networking.
- Containers and OS Compatibility — Most major operating systems have some sort of container compatibility, and since the launch of Docker there has been an explosion of new container-centric operating systems, including CoreOS, Ubuntu Snappy and RancherOS. This page gathers resources about the challenges in hosting containers on different operating systems.
Information technology infrastructure is composed of physical and virtual resources that support the flow, storage, processing and analysis of data. This page gathers resources about the combination of containers and IT Infrastructure like hybrid clouds, private clouds, data center and more.
- Containers and Hybrid Clouds — The growing number of hybrid cloud deployments is accelerating the demand for enterprise container infrastructure as companies seek a consistent application development environment. This page gathers resources about the combination of containers and hybrid clouds including benefits of this combination and tutorials on how to get started.
- Containers and Private Clouds — Private cloud is a type of cloud computing that delivers similar advantages to public cloud, including scalability and self-service, but through a proprietary architecture. Unlike public clouds, which deliver services to multiple organizations, a private cloud is dedicated to a single organization. This page gathers resources about the combination of containers and private clouds and how they can serve as a container management environment.
- Containers in The Data Center — Containers are being adopted in the data center at a rapid pace. Infrastructure managers must embrace this change to address the demands of bimodal IT, but in a controlled and tactical manner. This page gathers resources about the role of containers in data centers and the implications that container adoption will have for data center operators.
- Containers and Virtualization — As virtualization continues to increase in importance, containers will increasingly take center stage. This page gathers resources about running containers on virtual machines.
- Containers and Serverless Computing — Serverless computing is a cloud computing execution model in which the cloud provider dynamically manages the allocation of machine resources. While it seems obvious that containers should have a long shelf life, the serverless boom may actually serve to cut it short. This page gathers resources about containers and serverless computing, the benefits and disadvantages of each one and their impact on application deployment.
- Containers and Hyperconvergence — Hyperconvergence is an IT framework that combines storage, computing and networking into a single system in an effort to reduce data center complexity and increase scalability. This page gathers resources about how the combination of containers and hyperconvergence can help IT to achieve greater efficiency at all layers.
- Containers and Big Data — Until a short while ago, data analysts concentrated on algorithms, and containers were merely there to help. However, in the era of Big Data, the choice of data containers is critical. This page gathers resources about how containers can contribute to improving Big Data workloads.
Securing containers requires a different approach. Since containers run on a shared host and typically use multiple components to deliver a complete solution, there are many considerations required to secure a container environment. This page gathers resources about managing security in containers, including security considerations, security best practices and more.
Container-based deployments have become the preferred approach for managing the build and release of complex applications. Popular container technologies such as Docker enable developer velocity by providing a robust environment closely resembling production that can be constructed in minutes. This page gathers resources about container-based deployments, including overviews, tutorials and more.
Container monitoring is the activity of monitoring the performance of microservice containers in different environments. Monitoring is the first step towards optimizing and improving performance. This page gathers resources about the container monitoring process, tools and important metrics to watch during the process.
Software development groups realize that the only way they can make the development and tooling processes work at scale is by automating as much as possible to reduce the scope of manual process. This page gathers resources about how containers integrate in different aspects of automation like automated builds, automated tests and more.
Multi-tenancy is an architecture in which a single instance of a software application serves multiple customers. Each customer is called a tenant. This page gathers resources about the importance of multi-tenancy in containerized application delivery.
Containers are like any other data source that needs to be protected. As your organization comes to rely on Docker containerization technology for critical IT functions, you need to ensure appropriate safeguards are in place to minimize disruptions to your business operations. This page gathers resources about container backup and disaster recovery methods, tools and guides on how to set it up.
If you're getting started with Docker, or want to go in depth, we have you covered with comprehensive reviews of the most important topics concerning Docker engineers.
- Docker Swarm 101 — Learn Docker Swarm concepts, architecture and basic usage, and go in depth with tutorials and videos from the community.
- Docker Networking 101 — Learn about Docker network types, how containers communicate, common networking operations, and more.
- Docker Images 101 — Learn about Docker images, running images, image registries, common docker image operations, best practices, and more.
- Docker Registries 101 — Learn Docker Registry concepts and basic usage, and go in depth with tutorials and videos from the community.
- Docker vs. Kubernetes - 8 Industry Opinions — Docker Swarm and Kubernetes are two popular choices for container orchestration. We collected 8 industry opinions on which orchestration tool is better and which is more useful for different use cases.
- Docker Alternatives - Rkt, LXD, OpenVZ, Linux VServer, Windows Containers — Learn about Docker alternatives, how each alternative differs from Docker, and discover the road ahead for Docker alternatives.
- Docker Tools — Learn about tools in the Docker ecosystem that support building, orchestrating, monitoring and securing containers.
- Docker vs. Virtual Machines — Docker provides many capabilities of Virtual Machines, with added advantages. Learn how they compare.
- Docker in the Cloud — Learn about alternatives for running Docker in the cloud: Docker Cloud, AWS, AKS, and GKE.
- Docker in Production — Learn about running Docker in a production environment: strategies for scaling up, selecting a cloud-host vendor, orchestrating multiple clusters of containers, and more.
- Docker Deployment — Learn about deploying Docker: microservices architecture, orchestration tools such as Kubernetes, Service Mesh for networking, security concerns, and more.
Resources about the basic docker operations such as running docker containers, working with dockerfiles, creating and sharing docker images, storing data within containers and more.
- Creating Docker Images — There are two ways to create a Docker image: manually using the "docker commit" command, or automatically using a Dockerfile. This page gathers resources on how to create a docker image and how to build and share them.
- Docker Image Registry — A Docker registry is a place to store and distribute Docker images. It serves as a target for "docker push" and "docker pull" commands. A registry is a content delivery and storage system for named Docker images. It can be thought of as a collection of repositories keyed by name. This page gathers resources about general information on Docker image registries and guides on how to use them.
- Docker Image Repositories — A Docker image repository is a place where Docker images are actually stored, as opposed to the image registry, which is a collection of pointers to these images. This page gathers resources about public repositories like Docker Hub and private repositories, and how to set up and manage Docker repositories.
- Working With Dockerfiles — The Dockerfile is essentially the build instructions to build the Docker image. The advantage of a Dockerfile over just storing the binary image is that the automatic builds will ensure you have the latest version available. This page gathers resources about working with Dockerfiles including best practices, Dockerfile commands, how to create Docker images with a Dockerfile and more.
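A minimal Dockerfile for a hypothetical Python service might look like this (the file names and the final command are placeholders):

```dockerfile
# Each instruction produces one image layer
FROM python:3.12-slim
WORKDIR /app
# Copy the dependency list first so this layer is cached across code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

Built with `docker build -t example-image .`, the resulting image can then be run locally or pushed to a registry.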
- Running Docker Containers — All docker containers run one main process. After that process is complete the container stops running. This page gathers resources about how to run docker containers on different operating systems, including useful docker commands.
- Working With Docker Hub — Docker Hub is a cloud-based repository in which Docker users and partners create, test, store and distribute container images. Through Docker Hub, a user can access public, open source image repositories, as well as use a space to create their own private repositories, automated build functions, and work groups. This page gathers resources about Docker Hub and how to push and pull container images to and from Docker Hub.
- Docker Container Management — The true power of Docker container technology lies in its ability to perform complex tasks with minimal resources. If not managed properly, containers will bloat, bogging down the environment and reducing the capabilities they were designed to deliver. This page gathers resources about how to effectively manage Docker and how to pick the right management tool, including a list of recommended tools.
- Storing Data Within Containers — It is possible to store data within the writable layer of a container. Docker offers three different ways to mount data into a container from the Docker host: volumes, bind mounts, or tmpfs mounts. When in doubt, volumes are almost always the right choice. This page gathers resources about the various ways to store data within containers, their downsides with respect to persistent storage, and how to manage data in Docker.
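The three mount types can be sketched in Compose syntax (the service name, image and paths are placeholders):

```yaml
services:
  app:
    image: nginx:1.25
    volumes:
      - app-data:/var/lib/app        # named volume, managed by Docker
      - ./config:/etc/app:ro         # bind mount from the host, read-only
      - type: tmpfs                  # tmpfs mount: in-memory, never persisted
        target: /tmp/cache
volumes:
  app-data: {}                       # declares the named volume
```

Named volumes survive container removal, bind mounts expose a host directory, and tmpfs contents vanish when the container stops.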
Resources about the Docker administrative procedures such as Docker configuration, collecting Docker metrics, Docker logging and more.
- Docker Configuration — After installing and starting Docker, the dockerd daemon runs with its default configuration. This page gathers resources on how to customize the configuration, start the daemon manually, and troubleshoot and debug the daemon if you run into issues.
- Collecting Docker Metrics — In order to get as much efficiency out of Docker as possible, we need to track Docker metrics. Monitoring metrics is also important for troubleshooting problems. This page gathers resources on how to collect Docker metrics with tools like Prometheus, Grafana, InfluxDB and more.
- Starting and Restarting Docker Containers Automatically — Docker provides restart policies to control whether your containers start automatically when they exit, or when Docker restarts. Restart policies ensure that linked containers are started in the correct order. This page gathers resources about how to automatically start Docker containers on boot or after server crash.
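A restart policy is set per container; here it is sketched in Compose syntax (service and image are placeholders), and the same effect is available on the command line with `docker run --restart`:

```yaml
services:
  worker:
    image: redis:7
    restart: unless-stopped   # restart on crash and on daemon restart,
                              # but not after a manual "docker stop"
```

Other values are `no` (the default), `always`, and `on-failure`.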
- Managing Container Resources — Resource management for Docker containers is a huge requirement for production users. It is necessary for running multiple containers on a single host in an efficient way and to ensure that one container does not starve the others in terms of CPU, memory, I/O, or networking. This page gathers resources about how to improve Docker performance by managing its resources.
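Limits can be attached per container; a Compose sketch with illustrative values, equivalent to the `--memory` and `--cpus` flags of `docker run`:

```yaml
services:
  app:
    image: nginx:1.25
    mem_limit: 512m     # hard cap on memory for this container
    cpus: 1.5           # at most one and a half CPU cores
```

A container exceeding its memory limit is killed by the kernel, while the CPU limit simply throttles it.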
- Controlling Docker With systemd — Systemd provides a standard process for controlling programs and processes on Linux hosts. One of the nice things about systemd is that it is a single command that can be used to manage almost all aspects of a process. This page gathers resources about how to use systemd with Docker daemon service.
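For instance, dockerd's startup flags can be overridden with a systemd drop-in file (the flag shown is illustrative):

```ini
# /etc/systemd/system/docker.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --log-level=warn
```

After `systemctl daemon-reload`, `systemctl restart docker` picks up the override; the empty `ExecStart=` line clears the unit's original command before replacing it.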
- Docker CLI Commands — There are a large number of Docker client CLI commands, which provide information relating to various Docker objects on a given Docker host or Swarm cluster. Generally, this output is provided in a tabular format. This page gathers resources about how the Docker CLI Work, CLI Tips and Tricks and basic Docker CLI commands.
- Docker Logging — Logs tell the full story of what is happening, or what happened at every layer of the stack. Whether it’s the application layer, the networking layer, the infrastructure layer, or storage, logs have all the answers. This page gathers resources about working with Docker logs, how to manage and implement Docker logs and more.
- Troubleshooting Docker Engine — Docker makes everything easier. But even with the easiest platforms, sometimes you run into problems. This page gathers resources about how to diagnose and troubleshoot problems, send logs, and communicate with the Docker Engine.
Resources about the basic security considerations of running an application within a Docker container, including security best practices, Docker trusted images, isolating Docker containers and more.
- Docker Security Basics — Docker offers a lot of advantages, simplifying both development and production environments, but there is still uncertainty around the security of containers. This page gathers resources about the Docker Security model, its limitations, and how to maximize Docker's security.
- Docker Repository Security and Certificates — Docker runs via a non-networked Unix socket, and TLS must be enabled in order to have the Docker client and the daemon communicate securely over HTTPS. This page gathers resources about how to ensure the traffic between the Docker registry and the Docker daemon is encrypted and properly authenticated using certificate-based client-server authentication.
- Docker Trusted Image Registry — Docker Trusted Registry (DTR) is the enterprise-grade image storage solution from Docker. It is installed behind a firewall so that Docker images can be securely stored and managed. This page gathers resources about the benefits of Docker trusted registry and how to work with it.
- Docker AppArmor Security Profiles — AppArmor (Application Armor) is a Linux security module that protects an operating system and its applications from security threats. To use it, a system administrator associates an AppArmor security profile with each program. Docker expects to find an AppArmor policy loaded and enforced. This page gathers resources about Docker AppArmor security profiles and how to use them to enhance container security.
- Isolating Docker Containers — Docker container technology increases default security by creating isolation layers between applications, and between applications and the host, and by reducing the host's surface area, which protects both the host and co-located containers by restricting access to the host.
When containerization is implemented with good security practices, containers can offer better application security than a VM-only solution, because the container provides an additional boundary between an application exploit and the attacker gaining access to the host. This page gathers resources about basic tips and best practices as to how to secure containers.
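As a sketch of the certificate-based client-server setup described above (file names and the host are placeholders), daemon and client are both pointed at the same CA:

```shell
# Daemon side: require clients to present a certificate signed by the trusted CA
dockerd --tlsverify \
  --tlscacert=ca.pem \
  --tlscert=server-cert.pem \
  --tlskey=server-key.pem \
  -H=0.0.0.0:2376

# Client side: verify the daemon's certificate and present the client's own
docker --tlsverify \
  --tlscacert=ca.pem \
  --tlscert=cert.pem \
  --tlskey=key.pem \
  -H=myhost.example.com:2376 version
```

Port 2376 is the conventional port for the TLS-protected daemon socket.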
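A minimal hardening sketch combining the AppArmor and isolation points above; the profile and image names are placeholders, and note that Docker applies its own `docker-default` AppArmor profile when none is specified:

```shell
# Load (or reload) a custom AppArmor profile into the kernel
sudo apparmor_parser -r -W /etc/apparmor.d/my-profile

# Run a container confined by that profile, with capabilities dropped,
# a read-only root filesystem, and privilege escalation blocked
docker run --rm \
  --security-opt apparmor=my-profile \
  --security-opt no-new-privileges \
  --cap-drop ALL --cap-add NET_BIND_SERVICE \
  --read-only \
  my-app
```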
DevSecOps is an extension of the DevOps concept that emphasizes the integration of security teams into continuous delivery workflows. This page gathers resources about how DevSecOps makes containers more efficient and secure.
Containers are driving an evolution in the management of networked applications, but, although self-contained, they are still vulnerable. This page gathers resources about container vulnerabilities like 'Dirty Cow' and 'Escape Vulnerability', including tips on how to secure containers from cyber threats.
A big part of any organization’s risk assessment process is to be aware of and gain visibility into vulnerabilities in the software being used. From an attacker's point of view, having known vulnerabilities is akin to leaving the organization’s doors and windows wide open. Vulnerability scans are there to ensure that no such doors or windows are left open by mistake. This page gathers resources about the importance of container vulnerability scanning, including Docker vulnerability scanning and more.
In computing as in real life, a secret is information you want kept private, outside of the people and systems you want or need to share it with. In the application security realm, common examples of secrets are passwords, tokens, and private keys. This page gathers resources about managing secrets in containers including Docker containers, Amazon EC2 Container Service, Kubernetes and more.
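As one concrete example of the pattern, Docker's built-in secrets (available in Swarm mode) deliver a secret to a service as an in-memory file rather than an environment variable or image layer; the names and value here are placeholders:

```shell
# Store the secret in the Swarm's encrypted Raft log
printf 'S3cr3t!' | docker secret create db_password -

# Grant a service access to the secret
docker service create --name api --secret db_password my-api-image

# Inside the container, the value is readable at /run/secrets/db_password
```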
The wide adoption of containers and the ability to retrieve images from different sources impose strict security constraints. Containers leverage Linux kernel security facilities, such as namespaces, cgroups and Mandatory Access Control. This page gathers resources about container access control - deciding and enforcing who gets access to which container resources.
Security and compliance are top of mind for IT organizations. In a technology-first era rife with cyber threats, it is important for enterprises to have the ability to deploy applications on a platform that adheres to stringent security baselines. This page gathers resources about audits and compliance of containers and their relationship to security.
Containers changed the adoption of public and private clouds. With a container image, a common package format can be run on premises as well as on every major cloud provider. This page gathers resources about how containers changed the world of cloud computing and how to run them in the cloud.
- Containers on AWS — This page gathers resources about how to choose a container environment on AWS. AWS offers two fully managed control planes to choose between: Amazon ECS and Amazon EKS. In order to run containers on AWS you need an underlying pool of resources that the control plane can use to launch your containers. There are two options for doing this: Amazon ECS Container Instances or AWS Fargate, which is a new service for running containers without needing to manage the underlying infrastructure.
- Amazon EC2 Container Service — Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast container management service that makes it easy to run, stop, and manage Docker containers on a cluster. This page gathers resources about how to set up and run container images on Amazon EC2 Container Service.
- AWS Fargate — AWS Fargate is a technology for Amazon ECS and EKS that allows you to run containers without having to manage servers or clusters. With AWS Fargate, you no longer have to provision, configure, and scale clusters of virtual machines to run containers. This removes the need to choose server types, decide when to scale clusters, or optimize cluster packing. This page gathers resources about the advantages and key features of AWS Fargate.
- Amazon EKS — Amazon Elastic Container Service for Kubernetes (Amazon EKS) is a managed service that makes it easy for you to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes clusters.
- Containers on Azure — Azure provides a lot of options to run containers in the cloud, each with their own features, pricing and complexity. You can run containers (such as Docker) on Azure in Azure Container Service, Azure Container Instances, Azure Service Fabric and Web App for Containers. This page gathers resources about all the container services of Azure and how to deploy and manage containers with these services.
- Azure Container Service — Azure Container Service (ACS) helps to simplify the management of Docker clusters for running containerized applications. ACS supports 3 orchestrators: DC/OS with Marathon, Docker Swarm, and Kubernetes. This page gathers resources about how to deploy an orchestrator cluster in Azure Container Service.
- Azure Container Instances - ACI — Azure Container Instances makes it easy to create and manage containers in Azure, without having to provision virtual machines or adopt a higher-level service. This page gathers resources about the advantages of Azure Container Instances, including tutorials and examples.
- Containers on Google Cloud Platform — Google Cloud Platform (GCP) provides multiple ways to run container workloads in the cloud depending on how much infrastructure management is desired. The options range from spinning up one’s own VMs to fully managed container orchestration platforms (based on Kubernetes) that are offered as options to the customer. This page gathers resources about the different ways to run a container on Google Cloud Platform.
- Google Container Engine — Google Container Engine (GKE) is a management and orchestration system for Docker containers and container clusters that run within Google's public cloud services. Google Container Engine is based on Kubernetes, Google's open source container management system. This page gathers resources about how to get started and run containers on GKE.
- IBM Cloud Container Service — IBM Cloud Container Service provides a native Kubernetes experience that is secure and easy to use. The service removes the distractions that are related to managing your clusters and extends the power of your apps with IBM Watson and other cloud services by binding them with Kubernetes secrets. It applies pervasive security intelligence to your entire DevOps pipeline by automatically scanning Docker images for vulnerabilities and malware.
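Returning to the AWS Fargate bullet above: a minimal ECS task definition for the Fargate launch type might look like the sketch below (the account ID, image, and names are placeholders); it would be registered with `aws ecs register-task-definition`. Fargate requires the `awsvpc` network mode and CPU/memory values chosen from its supported combinations:

```json
{
  "family": "my-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
      "portMappings": [{ "containerPort": 80 }]
    }
  ]
}
```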
Since containers package so many of the libraries and subsystems that were once part of the operating system, there’s increasingly less need for traditional server operating systems. In their place have sprung up a bevy of lightweight operating systems that significantly reduce the footprint of the operating system. This page gathers resources about lightweight container operating systems such as CoreOS, Rancher OS, Atomic and more.