Cloud Native Infrastructure 101

Learn about key cloud native infrastructure components, growing adoption of cloud native technology, and key challenges of cloud native

February 24, 2021

What is Cloud Native Infrastructure?

Cloud native is now becoming the norm for modern IT, and a growing number of organizations are running large-scale cloud native applications in production. 

Cloud native infrastructure relies on the ability to turn resources into a commodity that can be scaled, automated, and manipulated at will. 

Cloud native infrastructure takes many shapes and forms, but has the following common characteristics:

  • Driven by APIs and easy to automate
  • Hardware functions hidden by abstractions
  • Auto scaling, high availability and failover built into the environment
  • Supports CI/CD and continuous deployment cycles
  • Enables repeatable deployment and infrastructure as code (IaC) patterns

Cloud Native Infrastructure Adoption

The CNCF Survey 2020, carried out by the Cloud Native Computing Foundation (CNCF), shows that adoption of cloud native tools and technologies is growing rapidly. Key findings include:

  • 30% of organizations use serverless in production.
  • Use of containers in production increased from 84% in 2019 to 92% in 2020, and 55% of production containers run stateful applications.
  • Kubernetes is used in production by 83% of organizations, up from 78%.
  • Overall utilization of all CNCF projects grew by 50% year over year.
  • 82% of respondents use CI/CD pipelines in production for cloud native applications. 
  • The biggest benefits of cloud native, according to survey respondents, are scalability (51%) and higher availability (44%).

Cloud Native Infrastructure Components

Containers

Containers package software together with its dependencies, letting you consistently deploy software systems. They are more lightweight than full virtual machines (VMs) because they share the underlying operating system kernel. Containers are easy to deploy and configure, and can be managed at large scale using a mature ecosystem of tools.

Container Orchestration

Containers are extremely useful, but can be difficult to manage at scale. Container orchestration platforms, with the de facto standard being Kubernetes, allow teams to manage clusters of containers. Orchestrators handle the container lifecycle, including auto scaling, failover and healing, resource provisioning, storage management, networking, access control, and many other activities critical for an enterprise environment.
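At the heart of an orchestrator is a reconciliation loop: it continuously compares the declared desired state with the observed state of the cluster and acts on the difference. The sketch below illustrates the idea with simplified, hypothetical names; it is not the real Kubernetes API.

```python
# Sketch of desired-state reconciliation, the core pattern behind container
# orchestrators: compare what should be running with what is running, and
# emit the actions needed to converge. Names are illustrative.

def reconcile(desired_replicas: int, running: list[str]) -> list[str]:
    """Return the start/stop actions needed to reach the desired count."""
    actions = []
    if len(running) < desired_replicas:
        # Too few replicas: schedule new ones.
        for i in range(desired_replicas - len(running)):
            actions.append(f"start:replica-{len(running) + i}")
    elif len(running) > desired_replicas:
        # Too many replicas: stop the surplus.
        for name in running[desired_replicas:]:
            actions.append(f"stop:{name}")
    return actions
```

For example, if three replicas are desired but only one is running, the loop emits two start actions; an orchestrator runs this comparison continuously, which is also what makes self-healing possible when a container dies.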

Serverless

Serverless is a computing model that does not require users to manage server infrastructure. Simple applications, structured as one or more functions, can be loaded to a serverless runtime environment and run automatically in response to event streams or user requests.
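A serverless function is typically just a stateless handler that the platform invokes once per event. The sketch below uses a hypothetical event shape and handler signature, loosely modeled on common function-as-a-service conventions; real platforms each define their own.

```python
# Minimal sketch of a serverless function: a stateless handler invoked by
# the platform for each incoming event. The platform, not the developer,
# manages servers, scaling, and concurrency. Event shape is illustrative.

def handler(event: dict, context=None) -> dict:
    """Handle one event and return a response; holds no state between calls."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

Because the handler holds no state between invocations, the platform can run zero, one, or hundreds of copies in parallel as event volume changes.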

Compute Instances

Infrastructure as a Service (IaaS) solutions like Amazon EC2 and Google Compute Engine are still a common choice for cloud native applications. You can spin up compute instances, also called cloud virtual machines, using pre-configured images, and easily manage scalability, load balancing, and storage. A compute instance is a fully fledged operating system, analogous to an on-premises server or virtual machine. 

Platform as a Service (PaaS)

PaaS lets you develop and run applications more easily by leveraging cloud-based services. PaaS solutions can make development easier by providing DevOps and CI/CD infrastructure, data pipelines, analytics and AI capabilities, ready-made development frameworks for complex business applications like CRM, and more.

Infrastructure as Code (IaC)

IaC makes it possible to automate infrastructure on the cloud using simple configuration files, which can be checked into source control and managed just like regular source code. These configurations allow you to stand up complex systems in the cloud in a consistent and repeatable manner.
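Conceptually, an IaC tool treats infrastructure as data: it diffs the declared configuration against what is actually deployed and produces a plan of creates, changes, and destroys. The sketch below illustrates that planning step with hypothetical resource names; real tools like Terraform implement a far richer version of the same idea.

```python
# Sketch of the planning step behind IaC tools: infrastructure is declared
# as data, and the tool computes create/change/destroy actions by diffing
# the declaration against deployed state. Resource shapes are illustrative.

def plan(declared: dict, deployed: dict) -> dict:
    """Compute which resources to create, change, or destroy."""
    return {
        "create": sorted(set(declared) - set(deployed)),
        "destroy": sorted(set(deployed) - set(declared)),
        "change": sorted(
            name for name in set(declared) & set(deployed)
            if declared[name] != deployed[name]
        ),
    }
```

Because the plan is derived purely from the declared configuration, checking that configuration into source control makes every environment it produces repeatable and reviewable.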

Distributed CI/CD

Cloud native applications typically use a microservices architecture. Instead of having one release pipeline for the entire application, there are numerous CI/CD pipelines, one for each microservice. Distributed CI/CD infrastructure makes it possible for small teams working on independent microservices to release their service to production, without dependence on any other part of the larger application.
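One common way to keep pipelines independent is path-based triggering: each microservice owns a directory, and a commit only triggers the pipelines whose paths it touched. The sketch below assumes a hypothetical directory layout and service names.

```python
# Sketch of per-microservice CI/CD triggering: each service owns its own
# pipeline, and only pipelines whose source paths changed are run.
# The directory layout and service names are hypothetical.

SERVICE_PATHS = {
    "payments": "services/payments/",
    "catalog": "services/catalog/",
    "frontend": "web/frontend/",
}

def pipelines_to_run(changed_files: list[str]) -> list[str]:
    """Map a commit's changed files to the service pipelines to trigger."""
    return sorted({
        service
        for service, prefix in SERVICE_PATHS.items()
        for path in changed_files
        if path.startswith(prefix)
    })
```

A change under services/payments/ triggers only the payments pipeline, so that team can ship to production without waiting on, or risking, any other service.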

Auto Scaling

Auto scaling is a technique supported by all types of cloud native infrastructure. It enables infrastructure to grow and shrink automatically, provisioning or releasing resources based on predefined rules or actual application load. Auto scaling allows cloud native applications to provide consistent performance (by scaling up to meet increased loads) and high resilience, since failed nodes can easily be replaced by new ones.
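The scaling decision itself can be a simple ratio of observed load to a target. For example, the Kubernetes Horizontal Pod Autoscaler computes the desired replica count roughly as ceil(current replicas × current metric / target metric); the sketch below shows that calculation in simplified form (the real HPA adds tolerances and stabilization windows), with illustrative min/max bounds.

```python
import math

# Simplified sketch of an auto scaling decision, modeled on the Kubernetes
# HPA formula: desired = ceil(current_replicas * current_metric / target).
# The clamping bounds are illustrative defaults, not from any real system.

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float,
                     min_replicas: int = 1,
                     max_replicas: int = 10) -> int:
    """Compute how many replicas are needed to bring the metric to target."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))
```

So if 4 replicas average 90% CPU against a 60% target, the autoscaler asks for 6 replicas; if load falls to 30%, it pares back to 2.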

Load Balancing

In a microservices application, load balancing is essential to managing client requests. The user of a microservices application is not aware of multiple services running within it. A load balancer is a reverse proxy, responsible for routing application requests to the appropriate microservice, and ensuring load is balanced correctly between instances. Load balancers can also trigger scaling events when there is insufficient capacity to meet incoming requests.
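In miniature, that reverse-proxy role combines two lookups: match the request path to a service, then pick one of that service's instances, for example round-robin. The routes and instance addresses in the sketch below are purely illustrative.

```python
import itertools

# Sketch of a load balancer acting as a reverse proxy for microservices:
# route by path prefix to the right service, then round-robin across that
# service's instances. Routes and backend addresses are illustrative.

ROUTES = {
    "/orders": itertools.cycle(["orders-1:8080", "orders-2:8080"]),
    "/users": itertools.cycle(["users-1:8080"]),
}

def route(path: str) -> str:
    """Pick the backend instance that should serve this request."""
    for prefix, instances in ROUTES.items():
        if path.startswith(prefix):
            return next(instances)  # next instance in round-robin order
    raise LookupError(f"no service registered for {path}")
```

The client only ever sees one endpoint; successive requests to /orders are spread across orders-1 and orders-2 without the caller knowing either exists.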

Application Monitoring

Cloud native applications need new types of monitoring to ensure they are operating correctly in a dynamic environment. Cloud native relies on health checks, which identify failures in infrastructure components and allow them to be automatically replaced. In addition, load and utilization metrics are critical to automating scalability in cloud native environments.
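The health-check pattern boils down to probing each instance and replacing any that fails repeatedly, rather than alerting a human. The probe function and failure threshold in the sketch below are illustrative assumptions.

```python
# Sketch of the health-check pattern: probe each instance and flag for
# replacement any that fails several checks in a row. The probe callable
# and the threshold of 3 are illustrative, not from any specific tool.

def check_fleet(instances: dict, probe, failure_threshold: int = 3) -> list[str]:
    """Probe every instance; return those that crossed the failure
    threshold and should be replaced. `instances` maps each instance
    name to its consecutive-failure count, updated in place."""
    to_replace = []
    for name in list(instances):
        if probe(name):
            instances[name] = 0            # healthy: reset the counter
        else:
            instances[name] += 1           # unhealthy: count the failure
            if instances[name] >= failure_threshold:
                to_replace.append(name)
    return to_replace
```

Requiring several consecutive failures before replacement avoids churning instances over a single transient timeout, while still healing real outages automatically.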

Challenges of Cloud Native Infrastructure

Cloud native infrastructure can be tremendously beneficial, but it also has its challenges. Here are some of the common challenges facing organizations that are making the transition to cloud native development:

  • Persistent storage—containers and serverless functions are immutable infrastructure, meaning they typically cannot persist state locally. Many applications do require persistence, which means integrating cloud native components with external storage volumes. 
  • Cloud lock-in—cloud native applications are heavily reliant on cloud infrastructure. This raises the risk of becoming overly dependent on a specific cloud provider or service. Organizations should avoid developing systems tightly coupled to specific cloud services, and prefer technology that enables easy migration between clouds.
  • Cloud-native delivery pipelines—organizations are finding it is not enough to adopt cloud native infrastructure; they also need to move CI/CD pipelines to a cloud native model. Testing, deployment automation, and other processes can be more complex and require new tools in a distributed microservices environment.
  • Rapid technology evolution—in the cloud native world, technology moves fast, new platforms and frameworks are introduced, and older players are rapidly updated. It can be difficult to find the right technology stack to build your applications, and the stack can frequently change, requiring changes to applications and learning new skills. 
  • Security—cloud native technology requires a new approach to security. Organizations cannot simply build cloud native applications and then start thinking about security when pushing to production. This is why cloud native security has shifted left, and is becoming an integral part of software development, from planning to development, testing and deployment. Specialized tools are needed to automate security and perform security testing and validation at every stage of the cloud native development cycle.

Learn more about cloud native security ›

Cloud Native Security with Aqua

The Aqua Cloud Native Security Platform empowers you to unleash the full potential of your cloud native transformation and accelerate innovation with the confidence that your cloud native applications are secured from start to finish, at any scale.

Aqua’s platform provides prevention, detection, and response automation across the entire application lifecycle to secure the build, secure cloud infrastructure and secure running workloads across VMs, containers, and serverless functions wherever they are deployed, on any cloud.

Secure the cloud native build

Shift left security to nip threats and vulnerabilities in the bud, empowering DevOps to detect issues early and fix them fast. Aqua scans artifacts for vulnerabilities, malware, secrets and other risks during development and staging. It allows you to set flexible, dynamic policies to control deployment into your runtime environments.

Secure cloud native infrastructure

Automate compliance and security posture of your public cloud IaaS and Kubernetes infrastructure according to best practices. Aqua checks your cloud services, Infrastructure-as-Code templates, and Kubernetes setup against best practices and standards, to ensure the infrastructure you run your applications on is securely configured and in compliance.

Cloud Security Posture Management (CSPM) ›

Kubernetes Security ›

Secure cloud native workloads

Protect VM, container and serverless workloads using granular controls that provide real-time detection and granular response, only blocking the specific processes that violate policy. Aqua leverages modern microservices concepts to enforce immutability of your applications in runtime, establish zero-trust networking, and detect and stop suspicious activities, including zero-day attacks.

Container security ›

VM security ›

Serverless security ›