The New Container Economy: Vendor Rankings

The container economy has transformed enterprise IT, accelerating adoption and introducing new risks. In this exclusive webinar, Chris Ray, Field CTO and Security and Risk Analyst at GigaOm, unpacks the latest GigaOm Container Security Radar Report.

You’ll gain an inside look at:

• The explosive growth of containers and their role in AI workloads
• Why traditional security tools fail in ephemeral container environments
• Real-world attack scenarios from supply chain compromises to poisoned AI models
• The table-stakes and emerging features defining modern container security solutions
• Where leading vendors, including Aqua, rank in the 2025 GigaOm Radar

Whether you’re evaluating solutions, securing AI workloads, or planning your cloud-native strategy, this session gives you the unbiased analysis you need to make informed decisions.
Presented By:
Chris Ray, Analyst, GigaOm
Transcript
Hello, everybody, and welcome to the webinar. Today, we're gonna talk about the new container economy.

Now before we go too much further, I want to introduce myself and also introduce GigaOm a little bit. So about me, I am a practitioner and a field CTO here at GigaOm. Throughout my career, I've been in the trenches, architecting and implementing systems, leading security operations teams, and, unfortunately, experiencing the chaos and stress of major security incidents firsthand.

This practical background has given me a unique perspective on the gap that often exists between security theory and operational reality.

Having personally felt the three AM pressure of making critical decisions under fire, I bring both battle tested experience and empathy to my analysis of emerging technologies.

My focus is on translating complex security challenges into actionable insights that help organizations build resilience before they face their own security events.

Now to introduce GigaOm, if you're not familiar with us, GigaOm is an independent technology research and analysis firm. We're dedicated to helping organizations navigate the increasingly complex landscape of enterprise IT. Now unlike traditional analyst firms that focus primarily on market forecasting, we take a practitioner-first approach, delivering research and advisory services that address the real-world challenges facing technology implementers and decision makers.

Now let's talk about what we're here to learn today. Let's lay out the agenda. We're gonna first touch on what the container revolution is and how it impacts you as an organization. Then we'll touch on container and AI security concerns.

I'll give some real-world examples, and then I'll give an overview and an introduction to the GigaOm Radar report to help you make sense of what you find in this market. The container revolution has transformed enterprise computing at a breathtaking pace, with organizational adoption soaring from just twenty-three percent a few years ago to an overwhelming eighty-five percent, while Kubernetes has become nearly ubiquitous, jumping from fifty-eight percent to ninety-six percent adoption.

This explosive growth is evident in the staggering eighteen billion container images pulled monthly from Docker Hub and the one hundred thirty-three percent increase in the average number of containers per customer organization.

What makes this ecosystem particularly challenging from a security perspective is its ephemeral nature.

Nearly half of all containers exist for less than an hour. This creates a fundamental visibility challenge for traditional security approaches.

The financial stakes continue to rise dramatically.

The container management market is surging from four hundred sixty-five million dollars to one point six billion dollars, driven significantly by AI innovation.

Perhaps most telling is that ninety-two percent of all AI and ML workloads now run in containers, making them the de facto infrastructure for AI deployment.

This convergence means organizations face a critical reality: you cannot secure AI without first securing the container foundation it's built upon. The explosive growth of container adoption has dramatically amplified security challenges as well, effectively ten-x-ing risk across the enterprise landscape.

As organizations rapidly embrace containers without corresponding security expertise, a dangerous skills and visibility gap is emerging. Security teams cannot keep pace with developers deploying thousands of containers weekly. This adoption surge has created a perfect storm for supply chain vulnerabilities with each container potentially introducing dozens of unvetted dependencies.

Meanwhile, technical security debt continues to accumulate without a clear remediation path as teams prioritize deployment speed over security fundamentals. The situation is further complicated by the overwhelming alert fatigue plaguing security teams who now face thousands of container vulnerability notifications daily, creating a dangerous blind spot as critical issues get lost in the noise.

Many organizations have retreated to checkbox compliance approaches that create a dangerous false sense of security while leaving true risks unidentified.

Perhaps most concerning, though, is how the ephemeral nature of containerized workloads, with nearly half of them existing for less than an hour, fundamentally breaks traditional security monitoring and forensics capabilities, leaving security teams unable to investigate after the containers are terminated.

This perfect storm of rapid adoption, technical complexity, and ephemeral infrastructure demands a fundamentally new approach to container security.

And I'll just put this out there. The way I think about this is most organizations implementing AI security are starting at the wrong end of the problem.

While everyone focuses on prompt engineering guardrails, the container infrastructure running those AI workloads represents the primary attack surface.

Container adoption has far outpaced security knowledge by at least three, four, maybe five years. Most security teams are still securing containers as if they were virtual machines.

Now let's move on to some real world examples.

Let's talk about poisoned base images and how they can make a poisoned model. So imagine your data science team has just deployed a new LLM using a popular CUDA-optimized Docker image they pulled from a public repo.

Unknown to them, that image contained a compromised version of xz Utils with a hidden SSH backdoor.

Within days, attackers silently access your GPU cluster, begin harvesting proprietary training data, and subtly manipulate model weights during fine tuning operations.

By the time anyone notices, your AI has been compromised for weeks, potentially producing biased or harmful outputs while leaking sensitive data.

The business impact of this scenario can be devastating on multiple fronts. Beyond the immediate data breach implications, which could trigger regulatory penalties, there's the harder to quantify damage from a poisoned AI model making flawed decisions across your organization.

Companies have spent millions remediating such incidents, with recovery costs averaging four point two million dollars per AI-related data breach, according to recent studies.

Most concerning still is the erosion of trust. Once customers discover AI outputs were manipulated, rebuilding that confidence can take years.

This entire attack vector could be prevented with proper security controls that verify image integrity and detect unauthorized modifications to base images.
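To make the image-integrity idea concrete, here is a minimal sketch of my own, not from the report and not any particular product's API. Container registries address images by the SHA-256 digest of the manifest, so re-hashing what you pulled and comparing it against a digest you recorded when the image was vetted catches tag hijacking and tampering. The manifest bytes and function names here are illustrative.

```python
import hashlib

def verify_image_digest(manifest_bytes: bytes, pinned_digest: str) -> bool:
    """Return True only if the manifest hashes to the digest we pinned."""
    actual = "sha256:" + hashlib.sha256(manifest_bytes).hexdigest()
    return actual == pinned_digest

# Digest recorded when the image was vetted (illustrative content).
manifest = b'{"schemaVersion": 2}'  # manifest bytes as pulled from the registry
pinned = "sha256:" + hashlib.sha256(manifest).hexdigest()

print(verify_image_digest(manifest, pinned))      # True: image is what we vetted
print(verify_image_digest(b"tampered!", pinned))  # False: contents changed
```

In practice, pulling by digest (`image@sha256:...`) or using signature verification tooling performs this check for you; the sketch only shows the underlying comparison.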

Let's talk now about supply chain attacks, which is a very popular topic today. In today's rapid development environments, CI/CD pipelines automatically build and deploy AI models dozens of times daily, pulling in hundreds of dependencies with each build. Here's the scenario we're increasingly seeing: during an automated build, a seemingly innocent update to a nested dependency, perhaps three or four levels deep, introduces malicious code that activates only during runtime.

This time delayed attack remains completely invisible to static analysis tools that scan images at build time. The code might wait for a specific condition before exfiltrating data or subtly altering model behavior in ways that benefit the attacker while remaining difficult to detect.

The fallout from these sophisticated supply chain attacks extends far beyond the immediate technical compromise.

For AI systems specifically, the damage multiplies because of the black box nature of many models. You may not even realize your AI is compromised until it's been making corrupted decisions for months.

Organizations have faced legal liability when compromised AI systems made discriminatory decisions or exposed sensitive data.

What makes these attacks particularly insidious is their scalability.

A single compromised dependency can potentially affect thousands of downstream AI deployments.

This is precisely why continuous runtime monitoring of containers has become essential. It's the only way to detect behaviors that manifest long after the build process has completed.

Now let's talk about prompt injection and the protections that may exist. Consider this scenario: your organization deploys a customer-facing Gen AI service that interfaces with internal systems. Popular choice, I'm sure. Attackers launch a sophisticated prompt injection campaign using carefully crafted inputs designed to manipulate your model into revealing proprietary information or executing unauthorized commands.

Without a properly configured service mesh or container security integration, the abnormal network communication patterns go completely undetected when the compromised model begins accessing unauthorized resources and exfiltrating data.

The absence of container-level controls means there's nothing to identify or block the suspicious activity, allowing customer records, proprietary algorithms, and internal documentation to steadily leak out of the environment for weeks.

By then, the damage is done. Sensitive data has been exposed, regulatory violations have occurred, and your organization faces both reputational damage and potential fines that could have been prevented with modern security controls.

The financial and reputational consequences of prompt injection attacks against AI can be substantial.

There are cases where a single successful prompt injection led to the exposure of sensitive customer data, resulting in regulatory fines exceeding five million dollars and accelerated customer churn. What's particularly noteworthy is that even organizations investing heavily in AI-specific security measures like prompt filtering or output scanning still remain vulnerable without foundational container security controls.

Container-level security provides the critical last line of defense, stopping attacks that bypass application-level protections.

This layered approach is essential because prompt injection techniques evolve rapidly, and new variants appear almost weekly that can bypass even the most sophisticated prompt validation systems.
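The container-level last line of defense can be pictured as a simple egress allowlist. This sketch uses illustrative hostnames and is far simpler than a real service mesh or network policy, but it shows why a model that has been tricked into exfiltrating data still can't reach an attacker's endpoint when the container itself is constrained.

```python
# Endpoints the Gen AI service container legitimately needs (illustrative).
ALLOWED_EGRESS = {"inference-api.internal", "feature-store.internal"}

def egress_decision(dest_host: str) -> str:
    """Allow only pre-approved destinations; deny and log everything else."""
    if dest_host in ALLOWED_EGRESS:
        return "allow"
    print(f"blocked egress attempt to {dest_host}")  # feeds detection/alerting
    return "deny"

print(egress_decision("feature-store.internal"))  # allow: expected traffic
print(egress_decision("attacker-drop.example"))   # deny: blocked even if the
                                                  # model was tricked into trying
```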

Now that we have talked about the challenges in this space, let's get into the overview of the radar and how to use it to learn more about security solutions.

As I'm building reports here at GigaOm, as all of the analysts are putting the reports together, we need to define what the space is. We need to draw a circle around the technologies and say, either you are or you are not these things. In order to do that, we use table stakes. Table stakes are the features that must exist within a container security solution in order for it to be considered a container security solution. In this case, the table stakes are standalone or add-on container security capabilities, access control and identity management, CI/CD pipeline and DevOps integrations, compliance monitoring and enforcement, image scanning and vulnerability management, runtime protections and threat detection, and then finally, audit logging and forensics.

Now that we've established what the table stakes are, meaning the container security solutions in this space will have these features, we now need a way to differentiate them.

And that's where key features and emerging features come in.

Particularly in this space, there are numerous key features, which are how you can differentiate the solutions and help to select the best capabilities for your organization.

Those key features are AI- and ML-based event correlation, to obviously help with security operations, security event detection, and threat hunting; drift analysis and response, looking at the base images and making sure they are what you expect them to be; contextual risk analysis, which brings in all of the context of the environment that is pertinent to the container and then adds that into the security event to help you make a better decision on what to do; life cycle security management; deep image threat analysis; network segmentation; registry scanning and monitoring; secrets management; and then we move into emerging features. Now the best way to think about an emerging feature compared to a key feature is that key features are proven to be valuable. We score them on a scale of zero to five, zero being the solution doesn't offer it, five being it's exceptional.

These are features that have existed for a while, and vendors are investing in them. Buyers are asking for them. Emerging features are on the cutting edge of what's possible. These are the bleeding edge technologies. They may not be proven yet, but they have the same zero-to-five scoring that we apply. In this space particularly, we're looking at code-to-cloud vulnerability management, so shifting left; context-driven shift-left security, again folding in context from the environment, from sources other than the container itself; integrated threat intelligence to, again, enhance the context and give you better decision-making capabilities; integrated zero trust models; and then finally, policy as code.
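To give a flavor of policy as code, here is a hedged sketch with example rules of my own choosing, not anything scored in the Radar. The point is that the policy is plain data evaluated by code, so it can be versioned, reviewed, and tested like any other artifact; real implementations typically use a dedicated policy engine rather than hand-rolled checks like these.

```python
# Illustrative policy, stored as data so it can live in version control.
POLICY = {
    "deny_privileged": True,
    "require_non_root": True,
}

def evaluate(container_spec: dict) -> list[str]:
    """Return the list of policy violations for a container spec."""
    violations = []
    if POLICY["deny_privileged"] and container_spec.get("privileged", False):
        violations.append("privileged containers are not allowed")
    if POLICY["require_non_root"] and container_spec.get("user", "root") == "root":
        violations.append("container must not run as root")
    return violations

print(evaluate({"privileged": True, "user": "root"}))  # two violations
print(evaluate({"privileged": False, "user": "app"}))  # []: spec passes
```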

Now that information is useful to you to determine what the solutions are capable of doing, but how do you evaluate them? Well, that's where the business criteria come in. These are functional criteria that you use to evaluate the technical solution to determine if it suits your needs. In this case, we've identified configuration capabilities, flexibility, interoperability, manageability, observability, scalability, support, and cost as important business criteria to consider.

Now, again, these are scored on a scale of zero to five. So as you're looking at the Radar report, you can quickly determine how scalable a solution is. Scored zero to five, where does it land on that scale? How interoperable is it with my existing environment?

Again, scored zero to five. So you can work through the radar. You can identify business criteria that are important to you and then find the scores for the vendors that you're looking at. Now although deployment models are not scored, they are very important pieces of information.

And in this report, we have identified on-premises deployment models; cloud-based, private, public, SaaS, and hybrid; and then finally, a completely offline and air-gapped deployment model.
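One way to put those zero-to-five scores to work, sketched here with invented weights, vendor names, and scores rather than anything from the actual report, is to weight the business criteria that matter most to your organization and rank vendors by the weighted total:

```python
# Weights reflect your organization's priorities (illustrative values).
WEIGHTS = {"scalability": 0.5, "interoperability": 0.3, "cost": 0.2}

# Hypothetical 0-5 scores pulled from a Radar-style report.
VENDORS = {
    "Vendor A": {"scalability": 5, "interoperability": 3, "cost": 4},
    "Vendor B": {"scalability": 3, "interoperability": 5, "cost": 5},
}

def weighted_score(scores: dict) -> float:
    """Combine a vendor's criterion scores using the chosen weights."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

# Rank vendors from best weighted fit to worst.
for name, scores in sorted(VENDORS.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores):.1f}")  # Vendor A: 4.2, Vendor B: 4.0
```

Shifting the weights, say toward cost over scalability, can flip the ranking, which is exactly why the report leaves the weighting of business criteria to the reader.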

Now with all of that said, as I mentioned, we have the table stakes. We have the key features. We have the emerging features, the business criteria, and the deployment models.

We take all of the scoring, and what you see is a representation of each vendor's score charted out on a graphic that you see in front of you. This is called the radar graphic.

Let's first talk about the left-to-right and the top-to-bottom axes. So left to right, we're talking about feature versus platform. For a vendor to land on the left side, or the feature side, that indicates that they are focusing on solving specific use cases. Maybe it could be for a specific vertical such as IoT or, you know, gas and electric. It could be a utility. Maybe it's manufacturing. Maybe it's health care.

They focus on those areas almost to the detriment of other areas. Because of that, their solution shifts to focus on the use cases for those verticals. Additionally, it could be that the solution doesn't execute on some key features, which then reduces the quantity of use cases available to them, again pushing them to the left. Now what's unique about the Radar is, and I won't mention any names, it's not like another graphic you may see where up and to the right is where you find the leaders.

That is not the case with the radar graphic. A leader can exist top to bottom, left to right, in any of the quadrants based on how well they execute in the market. So now we've covered feature versus platform. Let's talk about the top to bottom, maturity versus innovation.

When we talk about maturity in the GigaOm Radar report, what we're talking about is how much the solution will change over the contract's life cycle. Whether that's one year, two years, or three years, what we're considering here is how much the solution's look and feel changes over that contract life cycle. For solutions that are mature, it doesn't mean that they're not bringing new features. It doesn't mean that they're not bringing new capabilities to the market. It means that the way you use it today, the training you receive today, will likely apply and still be useful in year three of your contract.

If we look at the bottom half of the Radar, innovative vendors, what we're talking about there are vendors who are bringing change into the market, new capabilities, new features, and it impacts how you use the solution.

So, essentially, when you're considering maturity versus innovation, an organization may choose maturity if they want a solution to operate, look, and feel much the same on day nine hundred as it does today. Whereas with an innovative solution, they are okay with some change, some turnover in features, menus, and UI, in order to capture some of the most cutting-edge features.

So with all of that said, we then look at how quickly the vendors are developing, how quickly they are adding features. And to do that, we bucket them into three categories. We have forward movers, which are the light blue. We have fast movers, which is the dark blue, and then we have outperformers.

A forward mover is a vendor that typically is slower to develop and slower to release features. You can consider a fast mover one that is keeping pace with the market and its peers. And an outperformer is one that is demonstrably releasing features and capabilities. Its roadmap is aggressive, they're executing well on that, and they're delivering on what they say they're going to deliver at a rapid pace as compared to their peers.

Now taking all of that information, and I know that was a lot, talking about feature versus platform, maturity versus innovation, to help you better understand what that means for Aqua in particular: as you can see, Aqua, in the red box there, is close to the dividing line between maturity and innovation, but they land in the innovation quadrant, which means you can expect some change over the contract life cycle as they implement new features, develop new UI, and make other such changes that may impact how you use the solution. Now landing inside of the leader ring means that their scores, those key features, emerging features, and business criteria, indicate that they are going to be a leading solution in this market.

Now, most importantly, or I can't say most importantly, but something important to consider, is their placement on the left-to-right axis. As you can see, they're over into the platform half quite a bit. That means that they are bringing a comprehensive solution that addresses numerous use cases. It's not targeted at any one vertical. It's not targeted at any one geography.

It's going to be broadly applicable to many organizations.

I know that was a bit of a speed run, but I hope that I've helped you learn a little bit about the container security landscape today. I hope I've given you the information that you need to go out into the market and make good decisions for your organization.

With that, I'll say goodbye.