What Is Cloud Native?
Cloud native is a new development pattern and methodology that leverages cloud computing to build highly scalable and resilient applications.
Cloud native applications are loosely coupled systems that are easier to manage, maintain, and update than traditional applications. A central element of the cloud native approach is automation—the application and its environment are fully automated and can be updated frequently with little effort.
Cloud native applications are typically deployed in environments such as public, private and hybrid clouds. They are based on technologies like containers, Kubernetes, microservices, serverless computing, and immutable infrastructure defined declaratively using code, also called infrastructure as code (IaC).
History of Cloud Native
While container technology emerged in the 1970s, the use of containers for cloud native development began with the addition of jails to FreeBSD in 2000. In 2003, Google began building its internal Borg system, a precursor to Kubernetes, the predominant container orchestration framework for cloud native development. Borg runs Google services such as Gmail, Search, and Maps.
In 2007, the Belgian project manager and consultant Patrick Debois conceived the concept of DevOps, which became central to cloud native development processes. Then in 2011, engineers at Heroku, a platform-as-a-service (PaaS) provider, published the twelve-factor app methodology for building stateless applications. The principles of 12-factor applications are now an established part of cloud native best practices.
The next major contribution to cloud native technologies was the release of the Docker container software in 2013, which made container-based applications easier to build and run. Then in 2014, Google released a public, open source evolution of Borg in the form of Kubernetes. The combination of Docker and Kubernetes drove widespread adoption of containers. Today, over 50% of organizations in the US use containers in production.
Industry leaders including IBM, Google, and Intel launched the Cloud Native Computing Foundation (CNCF) in 2015, laying the groundwork for wider adoption of Kubernetes. In its latest annual survey, the CNCF reports that 55% of surveyed organizations in the US operate Kubernetes in production; the number is even higher in Europe, at 63%.
In 2017, the Cloud Foundry Foundation (consisting of Google, Pivotal, and VMware) launched the Kubo project, enabling the integration of engineering, lifecycle management, and deployment capabilities. Pivotal then released its container service in 2018, further transforming the cloud native landscape and helping organizations secure their container infrastructure.
Cloud Native Principles
Designing a cloud native application requires a comprehensive plan that addresses dynamic, multidimensional systems. Here are some of the design principles for cloud native development:
- Automation-driven design—a cloud native solution handles many tasks automatically, including scaling and fixes. Organizations must design automated processes for deploying, scaling, repairing, and monitoring applications.
- Emphasis on stateless applications—cloud native applications are ideally stateless, even if this is not always achievable. Managing state is harder in distributed apps, so using as many stateless components as possible is recommended.
- Resiliency—developers should incorporate resiliency and redundancy into their cloud native designs. A cloud native application should use disaster recovery services and multi-region or multi-zone deployments to avoid a single point of failure.
- Micro-perimeters—each component should have a micro-perimeter to harden the internal security and enable deployment on public networks.
- Language-agnostic architecture—a cloud native application often includes components using different frameworks and languages. REST APIs allow these components to interact.
- Immutable components—using immutable components makes the infrastructure more flexible and agile. A common way to achieve this is to configure a VM or server to prevent modification after deployment. With immutable servers, it’s easy to replace a server when there is an issue.
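The statelessness principle above can be sketched in a few lines of Python. This is a minimal illustration, not a production pattern: the handler keeps no session data in memory, and a plain dict stands in for an external store such as Redis, so any replica of the service could serve any request.

```python
# Stateless handler sketch: all state lives in an external store
# (a dict stands in for something like Redis), so the function
# itself can be scaled, replaced, or restarted freely.

def handle_request(store, session_id, amount):
    """Process one request using only the external store for state."""
    balance = store.get(session_id, 0)   # read state from outside
    balance += amount
    store[session_id] = balance          # write state back outside
    return balance

# Because the function holds no state of its own, any two replicas
# sharing the same store give identical results for a session.
store = {}
handle_request(store, "user-1", 50)
print(handle_request(store, "user-1", 25))  # 75
```

The same property is what makes it safe for an orchestrator to kill and replace instances at will.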
What Is Cloud Native Architecture?
Cloud native architecture involves the design of an application or service created specifically to run in the cloud. A successful cloud native architecture requires next-generation cloud support that is easy to maintain, cost-effective, and self-healing. A cloud native architecture provides a higher level of flexibility than traditional systems.
Cloud native applications use a microservices architecture: the application is decomposed into several independent services, each performing a specific function, which integrate with each other through APIs and event-based messaging, and also integrate with external components.
Cloud native applications scale dynamically. Container orchestration tools are often used to manage the lifecycle of containers, which can be complex. The container orchestration tool handles resource management, load balancing, scheduling restarts after an internal failure, provisioning and deploying containers to server cluster nodes.
Software companies rely on cloud native technologies and microservices to support DevOps, provide flexibility, and improve scalability.
Related content: Read our guide to cloud native architecture
Microservices Architecture
A microservices architecture is an approach to designing an application as a collection of services. It breaks down applications into lightweight, easily manageable microservices. Each performs a specific business function, is typically owned by a single team, and communicates with other related microservices via APIs.
A microservices architecture is well-suited to the cloud native cultural practices of independent, agile, autonomous teams. It is also highly compatible with cloud native computing paradigms such as container orchestration and serverless computing, because each microservice can easily be deployed as a container or serverless function.
12-Factor Application Design
12-factor or stateless applications do not store historical information—every transaction occurs as if for the first time. 12-factor applications use web servers or CDNs to process short-term requests for each function. The 12-factor methodology allows developers to build cloud-optimized applications, giving special attention to declarative processes, automation, and portability between environments.
12-factor design is important for cloud native applications—it enables systems to scale and deploy rapidly in response to changing demands. The twelve factors are:
- Codebase—each microservice has a dedicated codebase in a separate repository.
- Dependencies—microservices isolate and package their dependencies to enable changes that don’t impact the entire system.
- Configurations—a configuration management tool externalizes config information.
- Decoupled services—an addressable URL exposes additional resources like caches and data stores, decoupling them from the app.
- Separate stages—each release stage (build, run, etc.) must be fully separate, with unique IDs to enable rollbacks.
- Isolated processes—microservices execute processes separately from other services.
- Port binding—self-contained microservices expose their functionality and interfaces on dedicated ports.
- Concurrency (horizontal scaling)—the application scales horizontally to meet increased capacity demands, with multiple copies of the scaled processes.
- Disposable instances—disposability is important for scaling down services and shutting down components without impacting the overall system (Docker helps achieve this).
- Development/Production parity—the dev and prod environments must remain almost identical throughout the application lifecycle (containerization helps achieve this).
- Logs—microservices should generate logs as event streams, which an event aggregator can process.
- Administrative tasks—each admin or management process should run as a one-off task using an independent tool to maintain separation from the app.
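Two of the factors above can be shown concretely: configuration read from the environment (factor III) and logs written to stdout as an event stream (factor XI). This is a hedged sketch—the variable names `APP_DB_URL` and `APP_PORT` are illustrative, not a standard.

```python
# 12-factor sketch: config from the environment, logs to stdout.
import json
import os
import sys

def load_config():
    """Read configuration from environment variables, with defaults."""
    return {
        "db_url": os.environ.get("APP_DB_URL", "postgres://localhost/dev"),
        "port": int(os.environ.get("APP_PORT", "8080")),
    }

def log_event(event, **fields):
    """Emit one structured log event per line to stdout; a log
    aggregator, not the app, decides where the stream goes."""
    sys.stdout.write(json.dumps({"event": event, **fields}) + "\n")

config = load_config()
log_event("startup", port=config["port"])
```

Because nothing is hard-coded, the same image can run unchanged in dev, staging, and production—only the environment differs.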
Stateful Cloud Native Applications
A stateful application is one that saves client data from one session to the next. For example, in an online banking application, it is essential to record the user’s previous activity and persist the current account balance.
Statefulness requires that each client communicates with the same server every time, or with a different server that has an up-to-date version of their data. This breaks many of the assumptions of a microservices architecture, in which each instance of a microservice is supposed to be a decoupled, standalone component.
Cloud native technologies like Kubernetes are evolving to better support stateful application scenarios. For example, Kubernetes introduced the concept of a “persistent volume”, a storage volume that stays alive even when individual Kubernetes pods shut down, and a “StatefulSet”, a scalable group of application instances in which each instance has a sticky, persistent ID. These mechanisms can facilitate stateful processing in a cloud native environment.
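A StatefulSet can be sketched as a short manifest. This is a minimal, illustrative fragment (the names, image, and sizes are placeholders): each of the three replicas gets a sticky identity (db-0, db-1, db-2) and its own PersistentVolumeClaim, so its data survives pod restarts.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:16
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  # Each replica gets its own persistent volume from this template.
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```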
Cloud Native Technologies
Containers
Containers are a basic component of cloud native applications. A container packages application code and all the dependencies and resources needed, running that code as a single piece of software. Containerized applications are more portable and use underlying resources more efficiently than the previous generation of virtual machine (VM) infrastructure. They also have low administrative and operational overhead.
Containerization makes applications easier to manage and deploy, and makes processes easier to automate using other technologies in the cloud native ecosystem. Another important advantage of containers is that they are platform independent, reducing integration issues and the need to test and debug infrastructure and compatibility issues as part of the developer workflow.
Containerized applications enable higher development productivity, higher development velocity, and new solutions for application scalability, security, and reliability.
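The packaging idea can be made concrete with a minimal Dockerfile. This is an illustrative sketch (the file names and base image are assumptions): the image bundles the code together with its dependencies so it runs the same way on any container runtime.

```dockerfile
# Minimal sketch of containerizing a Python service.
FROM python:3.12-slim
WORKDIR /app
# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Run as a single foreground process; logs go to stdout.
CMD ["python", "app.py"]
```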
Container Orchestration
Container orchestration technology automates many of the operations that run containerized services and workloads. These tools can automate container lifecycle management tasks such as provisioning, scaling (up and down), deployment, load balancing, and networking. Container orchestration tools offer a framework for managing microservices architecture and containers at scale. Popular container orchestration tools include Kubernetes, OpenShift, and Docker Swarm.
Kubernetes is a popular container orchestration tool that can help you build application services spanning numerous containers. You can automate many processes, schedule containers across a Kubernetes cluster, scale resources, and manage health over time. It eliminates many manual processes, letting you cluster together collections of hosts, including physical and virtual machines (VMs), and run Linux containers.
Managed Container Platforms
A container platform is a software solution that provides capabilities for managing containerized applications, including automation, governance, orchestration, security, enterprise support for container architectures, and customization.
Containers as a Service (CaaS), or managed container platforms, are cloud-based services that let you upload, run, organize, manage, and scale containers using container-based virtualization. There are many managed container platforms. Popular offerings include:
- Amazon Elastic Container Service (ECS) – this cloud-based service manages containers, letting you run applications in the AWS cloud without having to configure an environment for deployed code.
- Amazon Elastic Kubernetes Service (EKS) – this cloud-based container management service natively integrates with Kubernetes. It works on-premises and in the AWS cloud.
- Amazon Fargate – this serverless service lets you run containers on AWS without having to manage the underlying infrastructure.
- Azure Kubernetes Service (AKS) – provides container orchestration capabilities based on Kubernetes, optimized for deployment in the Azure cloud.
- Google Kubernetes Engine (GKE) – this fully managed orchestration solution implements the full Kubernetes API and provides release channels, multi-cluster support, and four-way auto scaling.
- Rancher – this open source multi-cluster orchestration platform provides tools for deploying, managing, and securing enterprise Kubernetes.
Serverless Computing
In a serverless compute paradigm, there are still servers, but they are abstracted from application developers and operators. Cloud providers handle the day-to-day tasks of configuring, maintaining, and scaling server infrastructure. Developers simply package their code into serverless functions and run them on a serverless platform.
Serverless functions only run when actually needed, triggered by an event-driven execution model, and automatically scaling up and down as needed. Serverless services provided by public cloud providers are typically priced according to the number of code invocations and the actual resources used, so there is no cost when the serverless function is idle.
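The serverless model described above can be sketched as a single event-handling function. The `(event, context)` signature mirrors the common pattern used by platforms such as AWS Lambda, but the event shape here is purely illustrative.

```python
# Serverless sketch: the platform invokes a short-lived function per
# event and scales copies up and down automatically. There is no
# server process or lifecycle for the developer to manage.

def handler(event, context=None):
    """Handle one event and return a response; state, if any,
    would live in an external service, not in this function."""
    name = event.get("name", "world")
    return {"status": 200, "body": f"hello, {name}"}

# The platform, not the developer, decides when and where this runs.
print(handler({"name": "cloud native"})["body"])  # hello, cloud native
```

Because each invocation is independent, the provider can bill per call and scale to zero when no events arrive.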
Service Mesh
Dividing your application into loosely coupled sets of microservices increases the amount of communication between services. Cloud native applications consist of dozens or even hundreds of interconnected microservices.
A service mesh manages this complex network of services, enabling large-scale communication and ensuring it is secure, fast, and reliable. A service mesh works by decoupling the communication protocol from the application code and abstracting it as an infrastructure layer on top of TCP/IP. This reduces overhead for developers, allowing them to focus on building new features without having to manage networking and inter-service communications.
Related content: Read our guide to cloud native infrastructure
Cloud Native vs Other Cloud Models
Cloud Native vs Cloud Enabled
Cloud-enabled applications were originally developed for traditional data center environments, but can be modified to run in a cloud environment. By contrast, cloud native applications can only run in a cloud environment (whether in an on-premises private cloud or in the public cloud), and are built from the ground up to be scalable, resilient, and platform-independent.
Cloud Native vs Cloud Ready
Cloud-ready has several definitions. Originally the term applied to services or software designed to run on the cloud (like the term cloud native today). However, as true cloud native technologies evolved, the term “cloud-ready” came to be used to describe a legacy application reconfigured or reworked to run on cloud infrastructure.
This is similar to the concept of “replatforming”, in which an existing application is retrofitted to operate in a cloud environment, for example by switching out some components, but without completely rebuilding it as a cloud native application.
Cloud Native vs Cloud Based
A cloud-based service or application is one that is delivered over the Internet, usually in a software as a service (SaaS) model. This is a generic term that applies to many types of applications, some of which may not actually run in a public cloud but on the provider’s own infrastructure.
Many “cloud-based applications” are not designed according to cloud native principles. For example, they may be built as monolithic applications, or might not be dynamically scalable.
Cloud native is a more specific term that refers to applications specifically designed to run in cloud environments. Some cloud native applications are not “cloud based” in the traditional sense, because they are not accessible to users over the Internet.
Cloud Native vs Cloud First
Cloud first is a business strategy that aims to put cloud resources first when an organization launches new IT services, updates existing services, or replaces legacy technology. The strategy is typically driven by cost reduction and a desire for operational efficiency.
Cloud native applications are designed to use cloud resources and take advantage of the beneficial properties of cloud architectures, so they can be an important part of a cloud-first strategy. However, in a cloud-first strategy, an organization will typically still have some legacy applications that are not cloud native.
What Are the Benefits of Using Cloud Native Technologies?
Resilience
Resilience is the ability of a system to respond to failures while maintaining functionality. Cloud native applications are designed to be resilient. A well-designed cloud native application will not go offline if the infrastructure fails. This is because cloud platforms can detect and mitigate infrastructure issues, software issues such as crashed microservice instances, unexpected high loads, or connectivity issues. In all these cases, a cloud native environment can repair and restore microservices, or divert traffic to other instances or cloud data centers to avoid downtime.
Automation and Frequent Releases
Cloud native applications use DevOps automation capabilities. Cloud native also allows development teams to release individual applications or software updates when they are ready, without dependency on other components. This enables practices like continuous delivery and continuous deployment. Container orchestrators like Kubernetes also enable development teams to update software without downtime.
The modular nature of cloud native applications allows quick, incremental changes to individual microservices. Domain-driven design specifies boundaries between different microservices, which also defines the work of each development team. Multiple, distributed teams can own and iterate on their own implementations, as long as they honor the API contracts between their microservices. This makes cloud native development much faster and more agile than development of traditional applications.
Reduced Vendor Lock-in
Cloud native environments offer a wide variety of tools and allow organizations to avoid vendor lock-in. Each microservice can use a different technology stack, leveraging either open source or proprietary tooling. This makes cloud native applications more portable and often less expensive to develop than legacy applications. Cloud native applications also support a multi-cloud infrastructure, allowing organizations to run each workload in the most appropriate cloud environment.
In a cloud native paradigm, applications and configurations are source controlled to ensure auditability and repeatability of deployments and configuration changes. Immutable infrastructure is used to ensure that development, staging, test, and production environments are identical. This enables consistent, reliable releases, and prevents misconfigurations or environment issues that lead to production issues.
Cloud Native Security Considerations
Traditional cloud environments based on virtual machines (VMs) have established security practices and tools, including agent-based monitoring, host-based security solutions such as firewalls and anti-malware, and network security tools.
However, cloud native applications are a collection of small, loosely coupled microservices. In this environment, operations and security teams do not have access to security tools and techniques developed for a VM-oriented environment. In addition, rapid and frequent releases mean that the environment is constantly changing.
Before implementing a more effective cloud native security solution, security, operations, and developer teams need to understand the key elements of cloud native security:
- Service discovery and classification—cloud native environments undergo constant change. Accurate inventory and proper classification of all assets is critical for security operations teams to have a clear understanding of potential vulnerabilities. This must be done dynamically using service discovery or similar strategies.
- Compliance management—most compliance standards do not have explicit requirements for a cloud native environment, and it can be difficult to understand how to make a cloud native application compliant. Organizations must build a compliance management framework that defines how to enforce compliance requirements for cloud native components.
- Network security—in a cloud native environment, it is critical to achieve visibility over all network flows, and ensure that all cloud native components have a secure network configuration that follows central security policies.
- Identity and access management (IAM)—IAM systems limit cloud resources to specific individuals or service accounts. It is the foundation of access governance and privilege management in a cloud environment, and can be used for behavioral analysis of user entities.
- Data security—due to the distributed nature of data, cloud native data security requires dynamic data classification to identify sensitive data, and careful control over the security posture of cloud storage resources.
- Vulnerability management—identifying and preventing vulnerabilities throughout the application lifecycle requires continuous scanning and monitoring of container images, hosts, serverless functions, and infrastructure as code (IaC) templates.
- Workload security—it is important to improve workload visibility, identify security misconfigurations, perform vulnerability scanning and runtime monitoring at the workload level.
- Automated investigation and response—due to the complexity of the cloud native environment, automated identification and response to threats is an important complement to human security teams.
Cloud Native Security with Aqua
Aqua Cloud Native Application Protection Platform (CNAPP) is the most comprehensive and deeply integrated enterprise platform for cloud native security. By delivering holistic end-to-end security in a single unified solution, Aqua secures the build process, the underlying infrastructure, and running workloads, whether they are deployed as VMs, containers, or serverless functions. Cloud native applications are protected up and down the stack, all the way from development to production, and across multi-cloud and hybrid environments.