AWS Fargate eliminates the need to provision and manage servers. Instead, you pay for the compute resources each containerized workload requests, and Fargate automatically scales capacity to match application load, making it easier to optimize costs and performance.
Technically, Fargate runs your containers on compute infrastructure that AWS provisions and manages for you; each task or pod gets its own isolated compute environment. You never see or administer these underlying servers, and you are billed per second for the vCPU and memory your tasks request while they run, not for an entire instance.
This is part of a series of articles about container platforms.
How Fargate Works
AWS Fargate uses the following key concepts:
- Clusters: In AWS Fargate, a cluster is a logical grouping of tasks or services. Think of it as an environment where your containers run. While you don’t manage the physical servers in a Fargate cluster, the concept of a cluster helps organize and consolidate your containerized applications.
- Tasks: A task is the basic unit of deployment in Fargate. It represents a running container or a group of containers that are scheduled together. When you launch an application in Fargate, you are essentially running a task. Each task has its own configuration, such as the container image to use, CPU and memory requirements, and network settings. If you run Kubernetes on Fargate via Amazon EKS, the unit of deployment is the Kubernetes pod rather than the task.
- Task definitions: This is a blueprint for your task. A task definition specifies everything your application needs to run, like the container image, CPU and memory allocation, environment variables, and other settings. You can think of it as a recipe that tells Fargate how to run your containerized application.
- Services: Services in Fargate are used to run and maintain a specified number of instances of a task definition simultaneously. If a task in a service stops or fails, the service scheduler launches a new instance of the task to replace it, helping to ensure your application remains available.
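A task definition ties these concepts together. As a minimal sketch, the parameters you might pass to boto3's `ecs.register_task_definition()` for a small Fargate task could look like this; the family name, image URI, account ID, and role ARN are hypothetical placeholders:

```python
# Minimal sketch of an ECS task definition targeting Fargate, expressed as
# the parameter dict for boto3's ecs.register_task_definition().
task_definition = {
    "family": "web-app",                      # hypothetical task family name
    "requiresCompatibilities": ["FARGATE"],   # target the Fargate launch type
    "networkMode": "awsvpc",                  # required network mode for Fargate
    "cpu": "256",                             # 0.25 vCPU, in CPU units
    "memory": "512",                          # MiB; must pair with the CPU value
    "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    "containerDefinitions": [
        {
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web-app:latest",
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
            "essential": True,   # if this container stops, the task stops
        }
    ],
}
```

In a real script you would pass this dict to `boto3.client("ecs").register_task_definition(**task_definition)`.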
Learn more in our detailed guide to AWS containers.
Lifecycle of a Fargate Task or Pod
The process begins when you or an AWS service launches a task or pod and uses Fargate as its launch type. AWS Fargate then schedules the task or pod on a server, provisions the necessary amount of compute, and executes your application.
Once the application has finished executing, AWS Fargate stops the task or pod and cleans up. It’s a streamlined process that ensures your resources are utilized optimally, minimizing waste and maximizing productivity.
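To make the lifecycle concrete, here is a sketch of the parameters for launching a one-off Fargate task with boto3's `ecs.run_task()`; the cluster name, task definition revision, subnet, and security group IDs are hypothetical:

```python
# Sketch of launching a one-off Fargate task. Fargate schedules the task,
# provisions compute for it, runs it, and releases the compute when it stops.
run_task_params = {
    "cluster": "my-cluster",
    "taskDefinition": "web-app:1",       # family:revision registered earlier
    "launchType": "FARGATE",             # let Fargate provision the compute
    "count": 1,
    "networkConfiguration": {
        "awsvpcConfiguration": {
            "subnets": ["subnet-0abc1234"],
            "securityGroups": ["sg-0def5678"],
            "assignPublicIp": "DISABLED",   # keep the task off the public internet
        }
    },
}
# In a real script: boto3.client("ecs").run_task(**run_task_params)
```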
Task and Pod Execution Roles
In AWS Fargate, tasks and pods have execution roles. These roles are IAM roles that you can create and manage in your AWS account. They provide permissions that determine what other AWS service resources your task or pod can access.
Execution roles let you adhere to the principle of least privilege. You can fine-tune the permissions to ensure that your applications have only the access they need.
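The trust relationship on an execution role is what allows ECS tasks to assume it in the first place. A sketch of that standard trust policy, expressed as a Python dict you could serialize with `json.dumps()` when creating the role (the role name is your choice):

```python
# Sketch of the trust policy that lets the ECS tasks service assume an
# execution role. This is the standard principal for ECS/Fargate tasks.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ecs-tasks.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}
# In a real script:
# boto3.client("iam").create_role(
#     RoleName="ecsTaskExecutionRole",
#     AssumeRolePolicyDocument=json.dumps(trust_policy),
# )
```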
Integration with Other AWS Services
AWS Fargate integrates with a variety of other AWS services, enhancing its functionality. Services such as Amazon ECS and EKS, Elastic Load Balancing, and Amazon RDS can be used alongside Fargate to build complete application infrastructures.
AWS Fargate vs. EC2
EC2 is a traditional cloud computing service that offers flexible, scalable compute capacity in the cloud. It allows you to run applications on a virtual server of your choice, providing complete control over the underlying infrastructure. This level of control is essential for specific applications or scenarios where custom configurations, specific types of hardware, or direct access to the server are necessary. However, this also means you’re responsible for server management, including provisioning, scaling, and patching.
AWS Fargate is a serverless compute engine specifically designed for containers. It abstracts the server and cluster management away from the user, offering a more straightforward approach to running containers. With Fargate, you don’t need to select server types, decide when to scale your clusters, or optimize cluster packing. It simplifies the process by allowing you to focus on designing and building your applications rather than managing the infrastructure that runs them.
Options to Run Containers on AWS Fargate
Here are the two main options for running containers on Fargate:
Amazon Elastic Container Service (ECS) with Fargate Launch Type
Amazon Elastic Container Service (ECS) is a highly scalable, high-performance container orchestration service that supports Docker containers and allows you to easily run and scale containerized applications on AWS. Running ECS with the Fargate launch type lets you run containerized applications without the need to manage the underlying server infrastructure. Alternatively, you can run ECS directly on EC2 instances, which provides more control.
With the Fargate launch type in ECS, you define your applications within a Docker container and specify the CPU and memory requirements. Fargate then automatically manages the scaling and provisioning of the compute resources for you. This setup simplifies the process as you no longer need to select server types or monitor server capacity.
Additionally, when using ECS with Fargate, you can easily integrate with other AWS services like Elastic Load Balancing for distributing traffic, AWS Identity and Access Management (IAM) for security, and Amazon CloudWatch for monitoring.
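As an illustration of this integration, the following sketches the parameters for `ecs.create_service()` to run a Fargate service behind an Application Load Balancer target group; all names, IDs, and ARNs are hypothetical:

```python
# Sketch of creating an ECS service on Fargate, fronted by an ALB target
# group. The service scheduler keeps desiredCount tasks running.
service_params = {
    "cluster": "my-cluster",
    "serviceName": "web-service",
    "taskDefinition": "web-app:1",
    "launchType": "FARGATE",
    "desiredCount": 2,                    # maintain two copies of the task
    "networkConfiguration": {
        "awsvpcConfiguration": {
            "subnets": ["subnet-0abc1234", "subnet-0abc5678"],
            "securityGroups": ["sg-0def5678"],
            "assignPublicIp": "DISABLED",
        }
    },
    "loadBalancers": [
        {
            "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:"
                              "123456789012:targetgroup/web/abc123",
            "containerName": "web",       # container in the task definition
            "containerPort": 80,          # port the target group routes to
        }
    ],
}
# In a real script: boto3.client("ecs").create_service(**service_params)
```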
Amazon Elastic Kubernetes Service (EKS) with Fargate
Amazon Elastic Kubernetes Service (EKS) is a managed service that makes it easy to run Kubernetes on AWS without needing to install and operate your own Kubernetes control plane or nodes. Using EKS with Fargate combines the power of Kubernetes with the simplicity of serverless computing. Instead of a launch type, EKS uses Fargate profiles, which declare which pods (by namespace and, optionally, labels) should run on Fargate.
When a pod matches a Fargate profile, EKS automatically provisions and scales the compute for it, abstracting away the underlying servers. This means you can focus on deploying your Kubernetes applications without worrying about the infrastructure. Fargate takes care of ensuring that the compute resources meet the demands of your applications.
EKS with Fargate eliminates the need to choose server types, decide when to scale the cluster, or optimize cluster packing. It is particularly beneficial for applications that have unpredictable workload patterns, as Fargate automatically adjusts the resources to meet the current needs, offering flexibility and cost-efficiency.
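As a sketch, a Fargate profile that schedules every pod in the `default` namespace onto Fargate might be created with parameters like these (cluster name, role ARN, and subnet ID are hypothetical):

```python
# Sketch of eks.create_fargate_profile() parameters. Pods in the "default"
# namespace of this cluster would be scheduled onto Fargate.
fargate_profile = {
    "clusterName": "my-eks-cluster",
    "fargateProfileName": "default-namespace",
    "podExecutionRoleArn": "arn:aws:iam::123456789012:role/eksFargatePodExecutionRole",
    "subnets": ["subnet-0abc1234"],           # private subnets for Fargate pods
    "selectors": [{"namespace": "default"}],  # match pods by namespace (labels optional)
}
# In a real script: boto3.client("eks").create_fargate_profile(**fargate_profile)
```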
Best Practices for AWS Fargate
When using AWS Fargate, there are a few key best practices you should consider to ensure optimal performance and cost-efficiency.
Determine Appropriate CPU and Memory Requirements
One of the first steps when setting up a Fargate task is to specify the CPU and memory values for your containers. It’s crucial to get these values right, as they directly impact your application’s performance and cost. Over-provisioning can lead to unnecessary costs, while under-provisioning can result in poor performance or even application failures.
To determine the appropriate CPU and memory values, start by understanding your application’s resource consumption patterns. Monitor your application under different load conditions and use the data to estimate the necessary resources. It’s usually better to start with a bit more resources than you think you’ll need and adjust downwards as you gather more data.
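Fargate only accepts specific CPU/memory pairings, so a quick validity check can catch sizing mistakes before deployment. The helper below covers the long-standing 0.25-4 vCPU sizes; larger sizes also exist, so treat this as a sketch rather than an exhaustive list:

```python
# Valid Fargate CPU (units) -> memory (MiB) pairings for 0.25-4 vCPU tasks.
VALID_COMBINATIONS = {
    256: [512, 1024, 2048],                   # 0.25 vCPU
    512: list(range(1024, 4097, 1024)),       # 0.5 vCPU: 1-4 GB
    1024: list(range(2048, 8193, 1024)),      # 1 vCPU: 2-8 GB
    2048: list(range(4096, 16385, 1024)),     # 2 vCPU: 4-16 GB
    4096: list(range(8192, 30721, 1024)),     # 4 vCPU: 8-30 GB
}

def is_valid_fargate_size(cpu_units: int, memory_mib: int) -> bool:
    """Return True if the CPU/memory pairing is one Fargate accepts."""
    return memory_mib in VALID_COMBINATIONS.get(cpu_units, [])
```

For example, `is_valid_fargate_size(256, 512)` returns `True`, while `is_valid_fargate_size(256, 4096)` returns `False` because 0.25 vCPU cannot be paired with 4 GB of memory.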
Clearly Define Roles and Permissions for Fargate Tasks
In AWS Fargate, you define IAM roles and permissions at the task level. This provides a granular level of control over what each task can do and which resources it can access. It’s essential to follow the principle of least privilege and grant only the necessary permissions for each task.
When defining roles for your tasks, consider what AWS services the task needs to interact with and what actions it needs to perform. Also, make sure to regularly review and update your IAM policies to ensure they’re still appropriate for your tasks’ requirements.
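A least-privilege task role policy might look like the following sketch: the task can read objects from one specific S3 bucket and do nothing else. The bucket name is a hypothetical placeholder:

```python
# Sketch of a least-privilege IAM permissions policy for a task role.
# Unlike the execution role's trust policy, this defines what the
# application inside the task is allowed to do.
task_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],                 # read-only, one action
            "Resource": "arn:aws:s3:::my-app-config/*"  # one bucket's objects
        }
    ],
}
```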
Use Virtual Private Clouds (VPCs)
VPCs are an essential component of the AWS security and network architecture. They allow you to isolate your resources in a private, secure environment. When using Fargate, it’s important to correctly configure your VPC settings.
Ensure that your tasks are launched in a private subnet, not a public one. This shields your tasks from direct access from the internet. Also, consider using security groups and Network Access Control Lists (NACLs) to further control inbound and outbound traffic to your tasks.
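One common pattern is to lock down the tasks' security group so it only accepts traffic from the load balancer's security group, never from arbitrary internet addresses. A sketch, with hypothetical group IDs:

```python
# Sketch of an ingress rule restricting a Fargate task's security group so
# that only the load balancer's security group can reach it on port 80.
ingress_rule = {
    "GroupId": "sg-0task1234",        # the tasks' security group
    "IpPermissions": [
        {
            "IpProtocol": "tcp",
            "FromPort": 80,
            "ToPort": 80,
            # Source is the ALB's security group, not a CIDR like 0.0.0.0/0:
            "UserIdGroupPairs": [{"GroupId": "sg-0alb5678"}],
        }
    ],
}
# In a real script:
# boto3.client("ec2").authorize_security_group_ingress(**ingress_rule)
```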
Use CloudWatch Metrics and Alarms
AWS CloudWatch provides detailed metrics and alarms for all your Fargate tasks, allowing you to keep an eye on resource utilization and detect any performance issues early.
It’s a good practice to set up alarms for critical metrics like CPU utilization, memory utilization, and task count. This way, you will be notified immediately if something goes wrong, and you can take quick action to mitigate any problems.
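As a sketch, an alarm that fires when a service's average CPU utilization stays above 80% for two consecutive 5-minute periods could be created with parameters like these (cluster, service, and SNS topic names are hypothetical):

```python
# Sketch of cloudwatch.put_metric_alarm() parameters for a high-CPU alarm
# on an ECS/Fargate service, notifying an SNS topic when it fires.
cpu_alarm = {
    "AlarmName": "web-service-high-cpu",
    "Namespace": "AWS/ECS",
    "MetricName": "CPUUtilization",
    "Dimensions": [
        {"Name": "ClusterName", "Value": "my-cluster"},
        {"Name": "ServiceName", "Value": "web-service"},
    ],
    "Statistic": "Average",
    "Period": 300,                    # 5-minute evaluation window
    "EvaluationPeriods": 2,           # must breach twice in a row
    "Threshold": 80.0,
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
}
# In a real script: boto3.client("cloudwatch").put_metric_alarm(**cpu_alarm)
```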
Keep Container Images Lightweight
The size of your container images directly impacts the start-up time of your tasks. Larger images take longer to pull and start, which can slow down your application’s responsiveness. Therefore, it’s crucial to keep your container images as lightweight as possible.
Avoid including unnecessary files and packages in your images. Also, consider using multi-stage builds to separate your build-time and runtime dependencies. This can significantly reduce your image size and improve your start-up times.
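A minimal multi-stage Dockerfile sketch illustrates the idea: dependencies are installed in a full-featured builder image, and only the results are copied into a slim runtime image. The application file and requirements are hypothetical placeholders:

```dockerfile
# Stage 1: install dependencies in a full build image
FROM python:3.12 AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --prefix=/install -r requirements.txt

# Stage 2: copy only the installed packages into a slim runtime image
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /install /usr/local
COPY app.py .
CMD ["python", "app.py"]
```

Because build tools and intermediate artifacts never reach the final stage, the runtime image stays small and pulls faster when Fargate starts a task.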
AWS Container Security with Aqua
Aqua provides the most complete security across the application lifecycle, from development to production, protecting all cloud native applications running on AWS, including Amazon ECS for container orchestration, Amazon EKS for Kubernetes-based deployments, AWS Fargate for serverless containers, AWS Lambda for serverless functions, and Amazon ECR for storing and managing container images.
If you are running cloud native workloads on AWS, Aqua can help with:
- Image vulnerability scanning & assurance
Prevent unauthorized images from running in your AWS environment. Aqua continuously scans images stored in Amazon ECR to ensure that no vulnerabilities, misconfigurations, or secrets are introduced into container images.
- Protecting workloads running on Amazon EKS and ECS
Prevent unvetted containers from running on Amazon ECS, EKS and Fargate environments. Automatically create security policies based on container behavior and ensure that containers only do what they are supposed to do in the application context. Detect and prevent activities that violate policy, and defend against container-specific attack vectors.
- Securing applications on AWS Fargate
Aqua embeds the MicroEnforcer into your containers to ensure that workloads are only performing their intended function, while detecting vulnerable or compromised containers.