Aqua Blog

From Prompt to Production: Runtime Protection for AI Workloads

From Prompt to Production: Runtime Protection for AI Workloads

The moment your app calls an LLM, it stops being just containerized software, and becomes an entirely new attack surface. Artificial Intelligence is transforming how applications behave and how they need to be secured. As generative AI and large language models (LLMs) become embedded into everything from customer support bots to internal data tools, security teams are being asked to protect an entirely new kind of workload, one that doesn’t follow traditional patterns and often isn’t visible to traditional tools.

At Aqua Security, we have been preparing for this shift. In 2024, we introduced foundational AI security capabilities in the Aqua Platform, detecting LLM usage in code, analyzing how data is handled, and mapping risks to emerging guidance like OWASP’s Top 10 for LLMs. That was our first step into this space. Now, we are unveiling the next major step with Secure AI, which provides runtime protection for AI applications, a capability that sets Aqua apart in a rapidly evolving landscape.

Introducing Runtime Protection for AI Workloads

As organizations scale their use of LLMs and autonomous agents, they expand the runtime attack surface in ways traditional tools were never designed to detect. Threats like prompt injection, insecure output handling, and unauthorized use of AI models may originate earlier, but they often become visible only after deployment, when real inputs and runtime behavior expose risks that static analysis can’t catch. These are dynamic threats that do not live in static layers of code or configuration; they unfold at runtime.

This is where Aqua leads.

With Secure AI, Aqua delivers real-time detection and response for AI workloads using a patent-pending method of container interception. Aqua observes both the application layer and the underlying operating system, giving teams deep insight into how AI models are used, what data they interact with, and whether behavior deviates from organizational policy. This visibility extends across all AI workloads without requiring any changes to application code or the use of SDKs.

At runtime, Secure AI also enables prompt-level defense, analyzing the actual inputs sent to LLMs to identify attacks such as prompt injection, code execution, and jailbreak attempts. By inspecting prompt interactions directly within the containerized environment, Aqua can detect high-risk behaviors before they escalate.

Beyond threat detection, Secure AI provides comprehensive runtime governance. It identifies which AI models are in use, what platforms and versions they belong to, and whether their behavior aligns with security policies. This context is mapped to the OWASP Top 10 for LLMs, giving teams a reliable framework for understanding and controlling AI risk. And as applications evolve to include agentic AI, where agents operate independently and make decisions across services, Aqua helps security teams monitor these autonomous actions and flag activity that falls outside expected patterns.

This is runtime security built for AI-powered applications, purpose-built to provide control, context, and protection at the speed of modern development.

More Than a Feature. A Complete Platform Approach

While runtime protection is the newest addition, it is part of a much broader solution. Aqua is not offering a standalone tool. We are delivering this functionality as part of our complete Platform, securing AI-powered applications from code to cloud to prompt.

It begins in development, where Aqua scans application code to detect LLM usage and validate the secure handling of inputs and outputs. In infrastructure, Aqua continuously monitors the configuration of AI services like Google Vertex and Azure AI Studio to ensure alignment with policy and best practices.

Before workloads even reach production, Aqua adds another critical layer of protection. Aqua detects malicious or unsafe AI model behavior inside container images, evaluating them in a patented sandboxed environment before deployment. This ensures that any models embedded in the application are analyzed for abnormal or dangerous behavior early in the pipeline, preventing them from ever reaching production in the first place.

Once those workloads are live, Aqua’s runtime capabilities take over, monitoring for threats, detecting real-time misuse, and enforcing policy with full context across code, infrastructure, and workload behavior.

All findings are unified in Aqua’s AI security dashboard, which maps activity to the OWASP Top 10 for LLMs and gives teams a single view of AI risk across the organization.

With this platform approach, Aqua enables you to embrace AI innovation without introducing blind spots, governing usage, enforcing policy, and stopping threats at every stage of the application lifecycle.

Watch Aqua AI security in action:

Why This Matters

AI is no longer experimental. It is powering customer experiences, driving internal automation, and reshaping application logic in production. But it also brings risk, especially at runtime, where traditional controls fall short. Secure AI gives security teams the visibility and control they need to manage this risk without slowing innovation. It is a solution designed for modern, fast-moving cloud environments, combining AI-aware protection with the maturity and scale of a trusted Platform.

“The rise of AI is redefining how applications are built, with most of these workloads deployed in containers,” said Amir Jerbi, CTO and co-founder at Aqua Security. “Aqua has spent nearly a decade protecting cloud native applications and this is the natural extension of that leadership. We’re bringing the same deep runtime protection that made Aqua the gold standard in container security to the next generation of AI-powered applications, with AI-first capabilities designed to address the unique risks and complexity introduced by LLMs, autonomous agents, and evolving AI-driven workflows.”

The Road Ahead

Industry analysts predict over a billion new AI applications will be built by 2028. That’s more than ten billion containers running AI workloads, each one a potential risk without the right security controls in place. At the same time, attacks targeting runtime layers have already increased by over 400 percent, often bypassing static defenses entirely.

Secure AI will be showcased at RSA Conference 2025, South Hall Booth 1727. This marks a major evolution in Aqua’s vision, one that reflects how modern software is changing and the role security must play in enabling safe, scalable innovation.

Aqua secures every cloud native application everywhere, that includes GenAI apps.

AI
Erin Stephan
Erin Stephan is the Director of Product Marketing for Aqua's Cloud Security portfolio. Erin has more than 10 years of product marketing experience in data protection and cybersecurity. She enjoys connecting with people, helping to articulate their challenges, and bringing products and solutions to the market that help solve those challenges. In her free time, you can find her catching a flight to a new city, shopping for new home décor, or taking a spin class.
Guy Balzam
Guy Balzam is a Director of Product Management for Data & AI at Aqua Security, focusing on data platforms and AI solutions. With over a decade of experience in product management and consulting, Guy has helped startups scale as both a product leader and architect. As a former security leader at ELAL, he brings valuable cybersecurity expertise to his current work in AI and data platforms.