Transcript
Good morning, good afternoon, or good evening, depending on where you are in the world, and welcome to this learning experience brought to you by Aqua Security.
My name is Cody, and welcome back to Techstrong Learning.
So our conversation today is about serverless, not defenseless: overcoming Fargate security challenges. And I'm joined today by Nidhi Mohta, container services go-to-market strategy leader at AWS.
And we're also joined by Amit Sheps, Technical Cloud Security Director at Aqua Security. So Nidhi and Amit, thank you both so much for joining us here on Techstrong Learning. Nidhi, do you wanna get us started?
Absolutely. Thank you, Cody. As Cody mentioned, my name is Nidhi Mohta.
I've been at AWS for two years now, and I love talking to customers about AWS Fargate, so I'm kinda excited to be here to talk to you about its capabilities and, you know, what we really mean when we talk about serverless containers with AWS Fargate.
So with Fargate, we really like to describe it as a serverless compute engine for containers. And the real meaning behind "serverless" is really about shifting responsibility from yourself to AWS, so that you can focus on your applications.
Shared Responsibility Model Explained
For portions of your work, that is. So if you're familiar with Lambda functions, or you've used Aurora, or other AWS services, this probably resonates: you do not wanna manage EC2 instances. And it's the same thing here with Fargate, and we're gonna talk more in depth about what that shared responsibility model means. But for you, as the application developer, as the platform operator, we are taking away those EC2 instances, the requirement of running and operating with access to the virtual machines. And instead, with a simple operational model, we let you focus on your containerized application.
One thing I wanna point out is that when we say it's a compute engine, there's no direct interface to the compute; it's not something you have to navigate via, you know, an API.
What we're referring to is that the interface is one of our container orchestrators. So if you're familiar, it could be either ECS, Elastic Container Service, or EKS, Elastic Kubernetes Service.
So Fargate on its own is the serverless compute engine for one of our managed orchestrators; it's a capability that's exposed through both of these, both of which have the capability of deploying, operating, and scaling for you. Alright. What do we mean when we say it's secure? And how much of that responsibility is yours, versus what is AWS's responsibility?
We're gonna talk about that. There's a very unique contract with Fargate, which is a one-to-one mapping between your task, your ECS task, and the microVM, and that helps you achieve compliance easily. And finally, what customers appreciate about AWS Fargate is a simple pricing model that scales beautifully with your application needs.
Common Use Cases for Fargate
Billing begins the minute your container image is downloaded from your favorite repo. It could be Amazon Elastic Container Registry, or maybe Artifactory, or any of the public registries. You pull the image, and you're charged by the compute resources, which is vCPU and memory, per second. Alright.
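The per-second, usage-based billing Nidhi describes can be sketched in a few lines. The rates below are hypothetical placeholders, not real AWS prices; actual rates vary by region and are published on the Fargate pricing page.

```python
# Illustrative sketch of Fargate's usage-based billing model:
# you pay per vCPU-second and per GB-second of memory, from image
# pull until task stop. The rates below are placeholders, not real
# AWS prices -- check the Fargate pricing page for your region.

VCPU_PER_SECOND = 0.0000112  # hypothetical rate, USD
GB_PER_SECOND = 0.0000012    # hypothetical rate, USD

def estimate_task_cost(vcpus: float, memory_gb: float, seconds: int) -> float:
    """Estimate the cost of one Fargate task run."""
    return seconds * (vcpus * VCPU_PER_SECOND + memory_gb * GB_PER_SECOND)

# A 0.5 vCPU / 1 GB task running for one hour (3600 seconds):
cost = estimate_task_cost(0.5, 1.0, 3600)
```

The point of the model is that cost tracks actual task runtime and size, with no idle EC2 capacity to pay for.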
So, onto the next slide: what we see a lot of our customers deploying on Fargate is production workloads.
It's really a general-purpose platform. If you can run it in a container — Fargate uses the same standard OCI containers — it's gonna work just fine on Fargate. The most common use case is elastic or scalable web apps. Right? You need to dynamically change the amount of capacity based on user traffic, and that's where we see a large majority of our workloads running: microservices and web APIs. There's also data processing,
where you wanna utilize a container to read off a queue and do those kinds of compute workloads.
And batch processing: if you're using the RunTask API, you can run several batch processes on Fargate. In addition to that, what we are seeing over the last few years is net-new workloads such as AI/ML, generative AI, and gaming — entire gaming platforms, game servers. As well, Fargate is super popular in your lower environments, for your CI/CD pipelines and dev environments, and combined with On-Demand and Spot, you really start seeing a lot of value being driven by Fargate.
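The batch pattern Nidhi mentions uses the ECS RunTask API. Below is a hedged sketch of the request parameters such a call takes (e.g. via boto3's `ecs.run_task(**params)`); the cluster name, task definition, subnet, and security-group IDs are placeholders for illustration only.

```python
# Sketch of an ECS RunTask request for launching batch workers on
# Fargate. With launchType FARGATE there is no EC2 capacity to
# provision; ECS places each task on AWS-managed compute. The
# identifiers below are placeholders.

run_task_params = {
    "cluster": "my-cluster",             # placeholder cluster name
    "launchType": "FARGATE",             # run on Fargate, no EC2 to manage
    "taskDefinition": "batch-worker:3",  # placeholder family:revision
    "count": 5,                          # fan out several batch workers
    "networkConfiguration": {            # awsvpc mode requires this block
        "awsvpcConfiguration": {
            "subnets": ["subnet-XXXX"],
            "securityGroups": ["sg-XXXX"],
            "assignPublicIp": "DISABLED",
        }
    },
}
```

A queue-draining batch job would typically pass these parameters to `run_task` and let each task exit when its work is done, so billing stops with the task.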
Alright. So I spoke about the common workloads. I did wanna give a feel for the scale at which we see customers adopt Fargate here at AWS. So if you go to the next slide: we see billions of tasks launched every week using this launch type,
and tens of thousands of API requests per second. And I really like to think of Fargate as one of those foundational regional services: wherever there is an AWS Region, you're likely to have AWS Fargate there. We're in thirty-two AWS Regions across six continents, with a hundred and two Availability Zones. So we are very global in our reach.
And interestingly, more than seventy percent of all new ECS customers run their workloads on AWS Fargate. So when you use ECS as your container orchestrator, you have a choice of running on the EC2 launch type or on Fargate, and we see customers more and more gravitate toward the managed offering that Fargate provides, in terms of taking away all of that infrastructure management. Onto the next slide. All of this performance and scale is not gonna be achievable unless
Scaling and Performance Optimization
you have some kind of integration with auto scaling to scale your workloads out as well as in, seamlessly.
So ECS integrates with Application Auto Scaling to automatically scale your service. It captures aggregate metrics, and these metrics trigger a scaling policy; ECS then responds to that policy. There is a whole plethora of policies, including scheduled and target tracking — basically, the ability to fine-tune your containers based on your production traffic is what Fargate makes really easy. And a big benefit of this is that you can either use the Kubernetes Horizontal Pod Autoscaler if you're using EKS,
or you can utilize the ECS service's ability to do a variety of different auto scaling algorithms that I alluded to previously. And there are different ways to aggregate these metrics. You no longer just have to scale on CPU or memory:
you can have fine-grained custom metrics that you publish to CloudWatch,
and, you know, scale your service based on more granular traffic patterns.
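A target-tracking policy of the kind just described can be sketched as an Application Auto Scaling request payload. Field names follow that API; the cluster and service names are placeholders, and CPU is used here just as the simplest metric — a custom CloudWatch metric could take its place.

```python
# Sketch of a target-tracking scaling policy for an ECS service on
# Fargate: Application Auto Scaling adjusts DesiredCount to keep the
# aggregate metric (average CPU here) near the target value.

scaling_policy = {
    "PolicyName": "cpu-target-tracking",
    "PolicyType": "TargetTrackingScaling",
    "ResourceId": "service/my-cluster/my-service",  # placeholder IDs
    "ScalableDimension": "ecs:service:DesiredCount",
    "ServiceNamespace": "ecs",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 60.0,  # keep average CPU utilization near 60%
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
        "ScaleInCooldown": 120,  # wait longer before scaling in
        "ScaleOutCooldown": 60,  # scale out more aggressively
    },
}
```

This payload shape is what a `put_scaling_policy` call would carry; since Fargate supplies the compute, scaling out is just ECS launching more tasks, with no cluster capacity to manage.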
Alright. So if you go to the next slide, I did wanna talk about how Fargate is simplifying the challenge of operating at AWS scale. We like to think of Fargate as that managed service that allows you to do less, so that we can do more of the management.
Simplifying Operations with Fargate
Right? So on that front: patching. Patching and security updates are integral parts of achieving compliance, as you probably know, including
PCI DSS requirements and a plethora of other compliance requirements. So Fargate takes care of that automatically: it deploys security updates and patches based on the platform version. Right?
So you no longer have to deal with patching your OS.
And one thing I really like to emphasize to customers is that Fargate has a tenet where we don't introduce platform versions very often. Right? Unless there is a very, very critical CVE that needs to be addressed, or a very critical security update, we try to keep the version releases as minimal as possible so that customers don't have to deal with managing them; it's seamless, it's under the hood. The second thing, which I had mentioned earlier, is this key concept of task isolation.
So let's go revisit the Fargate data plane stack. Right? You have the EC2 hypervisor, which uses trusted hardware virtualization to isolate instances running on the same physical server. And then that EC2 instance could run, say, AL2, Amazon Linux 2, a Fargate agent, and a container runtime.
Now, the container isolation boundary is typically composed of abstractions like cgroups, namespaces, and seccomp policies — you're probably familiar with those. However, even though they provide some isolation, we, as a tenet of AWS Fargate, decided to avoid colocating tasks. So there is a one-to-one mapping of tasks to instances, and this execution model offers multiple layers of isolation.
So Fargate is never gonna colocate two tasks on the same EC2 instance or microVM, even from the same customer. Each instance or microVM runs only one task.
Right? And what this implies is that each task gets dedicated infrastructure capacity, because Fargate runs each workload in an isolated compute environment, and workloads that run on Fargate don't share network interfaces, ephemeral storage, CPU, or memory with other tasks. Right?
And, of course, you can run multiple containers within the task: you can run your main essential application container and a sidecar container, or several sidecars. So that's one of the key tenets of the task isolation piece, which is different from EC2.
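The "multiple containers within one task" point can be made concrete with a minimal task-definition sketch. Image names are placeholders, and the log-router sidecar is just one illustrative choice; note that on Fargate, CPU and memory are declared at the task level and shared by all containers in the task.

```python
# Minimal sketch of a Fargate task definition with a main application
# container plus a sidecar, illustrating one task holding several
# containers. Image names and sizes are illustrative placeholders.

task_definition = {
    "family": "web-app",
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",  # required for Fargate tasks
    "cpu": "512",             # 0.5 vCPU for the whole task
    "memory": "1024",         # 1 GB shared by all containers in the task
    "containerDefinitions": [
        {
            "name": "app",
            "image": "my-registry/web-app:1.0",  # placeholder image
            "essential": True,                   # task stops if this exits
        },
        {
            "name": "log-router",                # illustrative sidecar
            "image": "my-registry/log-router:1.0",
            "essential": False,
        },
    ],
}
```

Both containers share the task's isolation boundary (its microVM, ENI, and ephemeral storage), which is exactly what makes sidecars a natural extension point on Fargate.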
For networking, Fargate requires the awsvpc network mode, and that provides greater security for your containers by enabling you to use security groups and networking controls at a more granular level within your task. So each task gets its own ENI. Right? And then you can use other EC2 networking features, like VPC Flow Logs,
to monitor traffic from your task. Customers asked us: well, all this is great, but if I have a Fargate container, is it a black box? Can I securely exec into it? And so we released ECS Exec, which integrates with AWS Systems Manager Session Manager, or SSM, and all access is secured and audited using IAM policies.
So not every person in your organization needs to have access to your Fargate containers via ECS Exec.
And finally, we also encrypt the ephemeral storage; currently, Fargate supports up to two hundred gigabytes of ephemeral storage, and it's encrypted at rest.
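The IAM gating of ECS Exec that Nidhi mentions can be illustrated with a policy document that grants `ecs:ExecuteCommand` only on one cluster's tasks. The account ID and cluster name are placeholders, and this is only the ECS side of the permissions; the ECS Exec documentation lists the additional SSM permissions a role needs.

```python
# Hedged sketch of an IAM policy scoping ECS Exec access: only roles
# carrying this policy can open a session into tasks in "my-cluster".
# ARN values are placeholders for illustration.

ecs_exec_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowExecOnOneCluster",
            "Effect": "Allow",
            "Action": "ecs:ExecuteCommand",
            "Resource": "arn:aws:ecs:us-east-1:111122223333:task/my-cluster/*",
        }
    ],
}
```

Scoping the resource ARN this way is how "not every person in your organization" gets shell access: sessions are both gated and audited through IAM and SSM.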
Security Responsibilities in Fargate
Alright. So with that, I kinda wanted to talk about the shared responsibility model and then hand it off to Amit as to where we really see customers find value in Fargate. So with Fargate, AWS manages the security of the underlying instance, the OS, and the runtime that's used to run your task. And as a customer, you're responsible for securing the application code and the container image. If we contrast this picture with what we would see on traditional EC2 instances, right, there AWS takes care of the physical data center, but from that point onward, the VM is your responsibility.
So you've got to select an OS, you will need to be responsible for managing the patches and knowing when it's time to upgrade that OS. And then, specific to containers, you have these different components of the container runtime, like the Docker daemon or containerd, that become your responsibility as well. All of this goes away with Fargate.
Another additional part is the orchestration agents. Whether it's the ECS agent or the Kubernetes kubelet, all of these components are deployed onto the EC2 node; you have to version them, update them, and upgrade them. And you're spending all of these cycles and resources on managing this, and you've not even gotten to your containerized application at the top.
Right? So this is what we hear from customers: running their containerized applications is what they wanna do, and they wanna offload that undifferentiated heavy lifting, as we like to call it here at AWS. So it's a more managed experience where you bring your application, and everything underneath is AWS's responsibility.
And so when you think of deploying all of these containers at scale — it's easy to do ten containers on your laptop, but when you're talking about multiple clusters in different accounts, that's when we really start to see those utilization benefits kick in for AWS Fargate vis-à-vis EC2. And if you have a specific use case where you need very granular control of your EC2 instances, that's great, and that's a good use case for running ECS or EKS on EC2. But when you want all of those benefits — offloading all of that OpEx to AWS — that's where a managed service like Fargate really shines. Right?
And another thing I wanna mention about that entire bottom layer, the infrastructure management: one key part of it is cluster auto scaling, and that completely goes away with Fargate. So with that shared responsibility model in mind, I wanted to hand it over to Amit so that he can talk about how Aqua's solutions support Fargate, providing end-to-end security so that you can securely deploy your containers as well as maintain that security through runtime. So with that, Amit, over to you.
Transitioning to Fargate Security Solutions
Thank you, Nidhi.
As Nidhi said, I will take on the security part.
Just to introduce myself: my name is Amit.
I've also been two years at Aqua, leading technical product marketing. So going with what Nidhi just explained about the shared responsibility model, let's try to simplify it and look at it from the angle of the customer — the cloud engineer, the cloud administrator. So in the beginning, we had full responsibility: we had the orchestration layer, we had the workers, and we had the application, all of which we had to handle. So the first step is actually moving away from the orchestration and going with AWS, whether it's EKS or ECS, and now we have a narrowed security focus.
Going to Fargate, as Nidhi just said, reduces the effort that we need to put into the workers, or reduces the effort that we need to allocate to the infrastructure side. And therefore, we can actually focus only on the security layer. That means that now we don't need to manage all those servers and OSes; that is being done by AWS.
We need to take care only of the security.
So what does that mean, basically? Normally, when we at Aqua look at it, so to say, from coding to running a container, the first stage is actually image scanning. So we take the image, scan the image, and see that we don't have misconfigurations.
Image Scanning and Pre-Deployment Security
We don't have vulnerabilities or any other stuff that we don't want to have in our production environment.
So we can actually prevent it before it goes to production.
The next step is actually to test, or sandbox, the container and see that it behaves as we expect — that we don't see any malware or any unrecognized behavior. This can be done in a staging environment, before it actually becomes, so to say, an application. So we have the chance to test it before it runs.
And when it runs, it runs in production, and therefore we need runtime protection.
So: securing that lifecycle. Now, for a customer running only Fargate, that means we can focus on that side of security, and we don't need to focus on all the other patching work that was required before Fargate.
Runtime Security Challenges in Fargate
So what is the challenge that we see when we are moving into Fargate? Regardless of whether it's Amazon ECS or EKS, when we have an application, runtime security normally requires an agent. The agent is deployed on the host, and it is actually monitoring the traffic to and from the application using the host kernel,
often using eBPF technologies.
And if something bad happens, then we first detect it, and then we can prevent it.
So when we are moving to Fargate, the main challenge is that there is no host. The hosts are not there — we cannot see them; we have only Fargate.
It's, so to say, a serverless abstraction, and agents cannot be installed.
So, therefore, we need a way to secure containers which are using Fargate, allowing the application to be used without compromising security.
So how do we actually secure Fargate containers?
Securing Fargate Containers
So there are two ways to secure containers.
The first one is using a sidecar.
A sidecar is a well-known configuration, or so to say an architecture, in the Kubernetes world, in which for each container we have a brother — a fellow container which actually allows us to inject stuff into the application container.
That means that the sidecar will hold all the stuff that we need; it will have the agent. And once the application container is initiated, the sidecar will be initiated as well and inject the agent into the application container.
The other way is an embedded container agent, which means that the agent is actually embedded within the container image. And once the container is up and running, that means that basically you have an agent which is running.
This is, let's say, a more advanced use case, one which is being used by more advanced users.
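The sidecar-injection pattern Amit describes can be sketched as an ECS task definition in which the application container mounts the sidecar's files via `volumesFrom` and starts the agent as its entry point, which then launches the real application. All container names, images, and paths here are invented for illustration; a vendor's actual integration may work differently.

```python
# Hypothetical sketch of a security sidecar sharing its agent binary
# with the application container in one Fargate task. Names and
# paths are illustrative, not any vendor's real layout.

task_with_sidecar = {
    "family": "protected-app",
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",
    "cpu": "512",
    "memory": "1024",
    "containerDefinitions": [
        {
            "name": "security-sidecar",
            "image": "vendor/agent-sidecar:latest",  # holds the agent binary
            "essential": False,
        },
        {
            "name": "app",
            "image": "my-registry/app:1.0",
            "essential": True,
            # App container sees the sidecar's filesystem contents:
            "volumesFrom": [{"sourceContainer": "security-sidecar"}],
            # The agent wraps the original command (paths illustrative):
            "entryPoint": ["/agent/bin/agent-run"],
            "command": ["/app/start.sh"],
        },
    ],
}
```

With the embedded approach, by contrast, the `entryPoint` wrapping and the agent binary would be baked into `my-registry/app:1.0` itself at build time, which is exactly why it touches the R&D pipeline.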
Evaluating Security Approaches: Sidecar vs. Embedded Agent
So going forward, the main question that we always ask is pros versus cons. Why do I need a sidecar? When do I need a sidecar, and when do I need to use the embedded agent? What are the pros and cons?
So the best thing about the sidecar is that nothing is being changed within your current deployment.
Your process with R&D stays the same; you don't need to change anything.
If you want to upgrade the agent, you can do it easily outside of, so to say, the R&D processes.
It's very isolated and very external to the internal processes of the organization.
However, as always, we need to pay a price for it. And the price is complexity.
When you are actually using a sidecar, for each container you have an additional container. That means it doubles the number of containers within a deployment. So if you have ten containers for an application, that means that now you have twenty. If you have a thousand, you would have two thousand. So that actually increases the complexity of your environment.
In some cases it affects networking, it affects access to data — it affects many issues which concern the architecture of the application, and now we need to take care of them.
On the other hand, working with the embedded agent actually allows you a simplified architecture.
That means that you maintain the same architecture that you had before; you can scale, you can do whatever you wish. There is no change in the application once you are using the embedded agent. So everything stays the same. Nothing is changing —
besides the R&D flows. If you want an embedded agent, that means that you need to change the existing R&D flows. So the agent is now a part of the container image, and there is a need to embed it: you need R&D to take the agent and actually put it within the image. And for each agent upgrade, for each agent update, you now need R&D effort.
So I'm not saying that the sidecar is better or the embedded agent is better. These are the pros and cons, and I think any organization actually makes its own decisions about what it needs, what it wants, and what its journey with Fargate looks like.
The Journey to Adopting Fargate
And for that purpose, I created this slide in order to show that journey. So say I am an enterprise, an organization, that actually decided to adopt Fargate.
That doesn't mean that on the first day all of our applications will run on Fargate. There will be a journey toward being a Fargate company. So on the first day, we will probably be experimenting with Fargate. That means first we will put it in the lab. We will take — I don't wanna say insignificant ones, but we will take a few applications, and we will start to deploy them on Fargate.
We will not do it with our core business. We will not do it with our main application. We will do it gradually.
But for the experimenting, we don't want to break anything. We don't want to make major changes. We still want to understand the technology and be confident with it. So that means that if we want to apply security in those first stages, we will probably use a sidecar.
So now we are confident, and we are moving gradually, application by application, to Fargate. So this is the transition phase. Now we are starting to change not just, let's say, the technology stack, but also our processes; our R&D, our cloud architecture is being transformed as well toward Fargate. So that means that we will take the existing Fargate applications which are using the sidecar, and we'll move them to the embedded agent.
And we will have a transition between Fargate and the Kubernetes that we had before. And the security technologies too — because, again, we are embedding the technologies, we are changing the processes. So it will be a hybrid stage, a hybrid moment, in which multiple technologies coexist.
And therefore we will have two solutions.
And then we will get to the adoption of Fargate, where — I don't wanna say everything, but most of the applications are using Fargate, and our processes are now in place to have security embedded within them.
So again, this is a typical journey that we see with customers.
Each organization has its own journey, but this is what we see from our customers. And from the security requirements side: you need to start small, but you need to be secure from day one — not leaving your application unsecured.
Securing Fargate Containers
So Aqua actually came up with an agent, a tool, to secure Fargate containers.
This is an agent which actually runs within the containers themselves.
We actually got a patent for this one. The patent was submitted four or five years ago, and it was granted just lately.
It doesn't mean that now Aqua is going to conquer the world, but it means that we are recognized for our innovation,
and for the fact that we actually were able to anticipate and predict that Fargate would be a big thing.
And therefore, we actually started to work on it many years ago, which means that it's also deployed at big enterprises and it's actually proven in the field. So it's something which actually works; it's something that Aqua's customers are using.
In order to provide our customers the full journey, so to say, without any compromise on security, Aqua provides support for both deployments, whether it's a sidecar or an embedded agent. Which means that we can actually offer our customers the choice of their own journey, and we will be there for them as they wish to move forward.
And of course, securing Fargate means detection of malicious activity:
we can actually prevent unauthorized behavior from happening, and provide all the data needed in order to investigate what happened within this behavior.
So, securing the journey to Fargate. Now, as I said before, you are actually going to move. You are going to create a situation in which you have applications which are running over your existing Kubernetes deployment. And now you want to move them gradually from EKS and actually use Fargate, as an example.
So the first, let's say, wave of applications, the experimental ones, will be done with sidecars. Now bear in mind that there is no need to change any security policy. There is no need to do anything. You can actually use the same security mechanisms, the same rules that you are using today, and apply them on Fargate, on the new Fargate workloads. So you don't even have to make any extra security effort. Everything is actually seamless.
And then you actually want to move additional applications, and then use the embedded agent. So Aqua actually allows you to do it with the same mechanism, with the same rules. And again, you have a hybrid environment:
within Fargate, you have a mix of solutions — sidecars and embedded agents — and you also have the applications which might not go to Fargate. So in that situation, we're actually covering your journey.
You choose where you start, you can actually choose where you end this journey, and Aqua will secure you along this journey, providing all the rules and all the policies to allow you to secure the deployments at runtime — providing the security, the prevention and detection, that you need.
Demonstrating Security Policies
So in order to illustrate that, I actually created a small demo. This is a recording. What we can see over here is actually the Fargate environment that we created for the demo.
And it has one container, as you can see over here, with the sidecar; we use the sidecar for this demo. And for the first stage, what we did is — I hope that you can see it — we ran an SSH command. It's there to illustrate; this is actually the command. Normally, SSH is not allowed. You cannot SSH from the container.
So what we are going to do now is configure a policy that will prevent the SSH command — or anything which is not allowed to run on that image — from happening. In order to do it, we are using, let's say, the most common rule for that, which is drift prevention. Drift prevention will prevent anything which is not authorized based on the image of the container from running. That means that we will monitor everything that was defined in the image.
And if there is an unauthorized action, something which is not recognized or needed for the container in order to run, then it will not be allowed. So over here, we can actually see drift prevention as a policy. And what we can see over here, if I pause it, is that it's disabled. We can either audit or enforce.
That means that we can either generate an alert — I will stop it for a second — we can generate alerts in the case that something bad happens.
Or, the other way around, we can actually use enforce mode. That means that we can actually block and prevent things from happening. So once we get this policy up and running, what we will do is go back to the crime scene and try to do that command again.
So we will do the SSH command again.
And what we can see is that the permission is denied. So what we did is prevent unauthorized behavior from happening. Now, this is the first phase. Okay? So something happened in my environment.
Someone tried to do something, and now I want actually to investigate and see exactly what happened in my environment. So for that, we have two mechanisms.
One is the audit log, in which we can actually log all the actions that are done in the container.
The second is the incident response tool, which will allow us to investigate and actually see what happened in the container:
what was initiated, what was done before — creating a timeline in order to investigate not only the command itself, but what happened around it and who actually tried to access this container.
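The drift-prevention rule used in the demo, with its audit and enforce modes, can be sketched conceptually as an allowlist check against what the image defined. This is only an illustration of the policy semantics, not Aqua's actual implementation.

```python
# Conceptual sketch of drift prevention: executables present in the
# original image are allowed; anything that "drifted" in at runtime
# (e.g. an ssh binary) is alerted on (audit) or blocked (enforce).
# Illustrative only -- not Aqua's implementation.

IMAGE_EXECUTABLES = {"/app/server", "/bin/sh", "/usr/bin/env"}

def check_exec(path: str, mode: str = "enforce") -> str:
    """Return the action taken for an attempted execution."""
    if path in IMAGE_EXECUTABLES:
        return "allow"
    if mode == "audit":
        return "alert"  # log the event, but let it run
    return "block"      # enforce mode: deny execution

# An SSH client that was never in the image gets blocked:
result = check_exec("/usr/bin/ssh")
```

The demo's "permission denied" corresponds to the `"block"` branch; switching the policy to audit would instead log the same event to the audit screen while letting it run.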
So the first stage is the audit.
Over here, we have the audit screen. And what we can see over here are all the commands that were actually run by a user on the container.
So if we open this one, what we can actually see is all the data that we need in order to identify and investigate: what the container was, what was done — pure audit data, from which we can identify exactly what happened and on which resource. And again, it was blocked.
So we know what happened. Now the second phase is actually the incident.
When we go to the incident, there are two parts. First, the event summary. Over here, we can see that this is an SSH:
someone tried to initiate an SSH session.
Over here, we have the raw data of everything that happened. So if someone tried to run malware or do something which is more complicated, we can actually show the raw data and provide all the relevant information about that command. And over here, again, we show all the relevant data about the resource, which means the container. Let me stop it for a second. So we have the container,
we have the policy, we have the rules — we have everything that we need in order to understand what the source of this was, what was done, and who initiated it: all the chain of events in the system which actually led to this alert.
So we are able to first understand what this alert is and what the meaning of it is.
The second part is actually the timeline.
Over here, we have a very short timeline, because we did only one action. But in some cases, attacks are not only one event. You will have a file which was copied, access to the container — you will have a few steps that actually came before this command that was given.
So the timeline actually provides all these events which happened before. And given all that information, all that data, you actually have the capability —
sorry —
to investigate the whole chain of events.
Again, at the end of the day, to understand where the attack started, what we have here, and where it continued.
So it might actually continue to different containers, and again, using our system, you can actually investigate it more and more.
So over here, we can actually see the entire example; this is the same one. So to conclude: we actually showed that we can stop and prevent attacks as they happen,
and actually prevent them from happening and creating damage when you are using Fargate.
So one thing I would like to add is, actually, again: proven in battle.
One of our customers, Spotana, is actually using Aqua.
They started their journey — and I remember, because I had the conversation with them when they purchased Aqua. So the first step was actually with containers. They started with containers.
And one of their first sentences was: we want to move to Fargate someday, so we are thinking about it.
And with Aqua, they started using containers.
And moving forward, now they are using Aqua to protect their environment — using Aqua for both the containers and for Fargate.
So, again, with Aqua, they have the most up-to-date security.
Our detection capabilities come from our research team, which is Nautilus. Nautilus is a pure cloud native security research team, fully focused on cloud events, cloud security, and cloud attacks. So again, it's dedicated threat detection for cloud only. Of course, best-in-class container security, given the fact that Aqua was built in order to secure container deployments. And of course, the depth of runtime controls, as I just demonstrated. So we are not only detecting:
we can also prevent attacks from happening in Fargate containers. Everybody can do containers today, but Fargate is much more complicated because of the things that I mentioned, and Aqua is also battle-proven within container environments.
Benefits of Using Fargate with Aqua
So to conclude: I'll summarize, and Nidhi can add to this part. AWS Fargate allows you to grow, and allows you to reduce the effort allocated to your infrastructure, to maintenance, to securing it, so you can focus only on your application. Moving to Fargate reduces all that effort invested in the infrastructure.
And with Aqua, you have superior security for the containers running over Fargate, recognized and patented, and again, with no need to change anything. You can take your existing deployment, your existing containers, and simply add the Fargate environment to your existing workload, as I showed before, and that's it.
Your containers are secured. You have the same policies for the same environments, whether it's compliance or runtime. You can maintain both of them using a unified set of tools. So you have one policy for your entire container deployment, for your entire application set, which allows you a good night's sleep.
Q&A Session on Fargate and Aqua
And now, well, can we move to the questions?
Thank you so much, Amit. First of all, thank you; there is a lot of goodness in the questions, and I've been trying to answer them in chat. One thing I did want to address, because I think a lot of folks are familiar with the differences between ECS on EC2 and ECS on Fargate, and one thing that sticks out in their minds is pricing.
It's a long answer that's harder to type in chat, so I'll just touch upon it here.
It's true that if you look at the sticker price of one Fargate vCPU and compare it with your EC2 instance, there's roughly a twenty percent premium. Right? And that's because it's managed; that's a lot of responsibility AWS is assuming for you. That being said, I've spoken to hundreds and hundreds of customers over these two years whose EC2 clusters, as they grow in scale, see their utilization drop, and then there's a lot of wastage.
So the short version of a long answer to this question: think in terms of utilization. You are going to be better utilized by nature on your ECS Fargate and EKS Fargate tasks.
And that's when you start really seeing the price benefits. And I'm not even talking about the OpEx, the managed part: you now spend fewer hours of operator time provisioning infrastructure and managing capacity.
What I've seen work for the serverless champions at organizations, what resonates with their leadership when they bring Fargate in, is using a variety of levers. So use Compute Savings Plans, and look at your net-new workloads today: if your application code doesn't have to be compiled, doesn't have dependencies, and you know you can just move it to Graviton, do that and start on Graviton2.
So that's a long way of saying there are multiple cost levers you have to play with, and just looking at the sticker price gives you only the most obvious answer, that Fargate is more expensive.
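Nidhi's "think in terms of utilization" point can be sketched with some back-of-the-envelope arithmetic. All prices and utilization figures below are hypothetical placeholders, not current AWS rates:

```python
# Effective cost per *utilized* vCPU-hour: sticker price divided by the
# fraction of paid-for capacity that actually does work.
# All numbers are hypothetical, for illustration only.

def effective_cost_per_vcpu_hour(list_price: float, utilization: float) -> float:
    return list_price / utilization

ec2_price = 0.040      # hypothetical EC2 $/vCPU-hour
fargate_price = 0.048  # hypothetical sticker price ~20% higher

# A half-idle EC2 cluster vs right-sized Fargate tasks.
ec2_effective = effective_cost_per_vcpu_hour(ec2_price, 0.45)
fargate_effective = effective_cost_per_vcpu_hour(fargate_price, 0.95)

print(f"EC2 effective:     ${ec2_effective:.3f} per utilized vCPU-hour")
print(f"Fargate effective: ${fargate_effective:.3f} per utilized vCPU-hour")
```

At low cluster utilization the effective EC2 cost can exceed the Fargate sticker premium, which is the utilization argument in a nutshell.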
And I can add to that that we also support Graviton. So if you're deploying on Graviton, Aqua can support that as well.
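For context on the Graviton point: on Fargate, targeting Graviton is a task-definition setting rather than an instance choice. A minimal sketch of the relevant fragment, expressed here as a Python dict whose field names follow the ECS task-definition schema (values are illustrative, and your container image must be built for arm64):

```python
# Fragment of an ECS task definition that runs the task on Graviton
# (ARM64) Fargate capacity. Illustrative values only.
arm64_task_fragment = {
    "requiresCompatibilities": ["FARGATE"],
    "runtimePlatform": {
        "cpuArchitecture": "ARM64",          # Graviton; the default is X86_64
        "operatingSystemFamily": "LINUX",
    },
    "cpu": "256",     # 0.25 vCPU
    "memory": "512",  # MiB
}

print(arm64_task_fragment["runtimePlatform"]["cpuArchitecture"])  # prints "ARM64"
```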
What else? What other questions came through?
Does Aqua provide the same service on other cloud providers?
Yes.
Awesome.
I think I answered the Lambda-versus-Fargate question, and I answered some monitoring questions.
Security and Permissions Discussion
And then Amit kind of built on that. Oh yeah, the security question: how flexibly and securely can permissions and access be delegated for Fargate and containers using IAM roles, security groups, and tokens?
Now, I've referred you to that doc; take a look at it, and yes, there are multiple ways to do it. I will say that currently image scanning is only supported through Inspector. So Aqua does complement the solutions we offer natively, and a lot of customers like that tooling because they use it across their EC2 estates as well. So some things to keep in mind there.
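On the delegation question, the usual split in a Fargate task definition is between the execution role (used by the service itself) and the task role (the credentials the application code receives). A minimal sketch, again as a Python dict following the task-definition schema, with placeholder ARNs:

```python
# Sketch of how access is delegated per task on Fargate.
# ARNs are placeholders, not real resources.
task_definition = {
    "family": "demo-app",
    # Execution role: assumed by ECS/Fargate itself, e.g. to pull the
    # container image from ECR and write logs; the app never sees it.
    "executionRoleArn": "arn:aws:iam::123456789012:role/demoExecutionRole",
    # Task role: the credentials injected into the running containers,
    # scoped to only the AWS APIs this workload needs (least privilege).
    "taskRoleArn": "arn:aws:iam::123456789012:role/demoTaskRole",
    # awsvpc networking gives each task its own ENI, so security groups
    # can be attached per task rather than per host.
    "networkMode": "awsvpc",
}

# Least privilege: the two roles should not be the same.
assert task_definition["taskRoleArn"] != task_definition["executionRoleArn"]
```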
Alright. Well, if we do happen to get any more questions, I'll stop mid-closing and we'll tackle them.
So Amit and Nidhi, thank you both so much for joining us on Tech Strong Learning; it has been such an in-depth session. And I know we had a couple of people who were interested in that demo, so we'll be sending the video of that demo out to them.
Feedback and Future Engagement
I'd like to thank Aqua Security for sponsoring our program today, and to our audience: thank you so much for your time. We really appreciate you spending your afternoon, your evening, or perhaps your morning with us. And we would love to hear your feedback, so at the bottom of the chat you'll see a link to our post-webinar survey. If you wouldn't mind filling that out for us, we would really appreciate it.