Securing LLM Apps with Aqua: Beyond the OWASP Checklist

It started with a single LLM API call buried in a product prototype. By the time the team noticed, that call had turned into a full-blown feature. Customers were using it, business logic was being shaped by it, and the security team had not even seen it. That is the pattern we are seeing across industries. Generative AI is moving faster than governance, and security teams are left trying to retrofit guardrails onto systems that think, respond, and learn in ways traditional applications never did.

The OWASP Top 10 for LLM Applications provides a useful way to understand the types of risks that can emerge. But identifying those risks is only the first step. The challenge is knowing how to address them.

Understanding LLM Risks in Production

Modern AI applications share common patterns. They run in containers, scale with Kubernetes, and operate across complex environments that blend public cloud and on-prem infrastructure. These apps either call external LLM APIs or run their own hosted models. In both cases, the container is where everything comes together: prompts, inputs, APIs, logic, and outputs.

Consider an LLM-enabled feature embedded in a cloud native application. It receives a user prompt, builds a response, and then uses that response to trigger actions such as writing to a database or calling an internal API. In development, everything appears to be scoped and controlled. But once deployed, a crafted prompt causes the model to generate a command that slips past validation and reaches a backend service. This is not just a model issue. It is a runtime problem that unfolds inside the container, where the application logic and model responses interact with real systems.
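To make that failure mode concrete, here is a minimal sketch in Python, assuming a hypothetical handler where the model's output names a backend command. The names `ALLOWED_COMMANDS`, `dispatch`, and `handle_model_output` are illustrative, not part of any real API; the point is that validation happens at the boundary where model output meets real systems:

```python
ALLOWED_COMMANDS = {"get_status", "list_items"}  # explicit allowlist

def dispatch(command: str) -> str:
    # Placeholder for the internal service call the model output would trigger.
    return f"executed {command}"

def handle_model_output(model_output: str) -> str:
    """Validate a model-generated command before it reaches a backend service."""
    command = model_output.strip()
    # An explicit allowlist, not a denylist: crafted prompts are good at
    # producing strings a "bad pattern" filter has never seen.
    if command not in ALLOWED_COMMANDS:
        raise ValueError(f"blocked unexpected model command: {command!r}")
    return dispatch(command)
```

An allowlist flips the default: instead of trying to anticipate every malicious string, anything the model produces outside the expected set is rejected before it can reach a backend.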

The OWASP Top 10 provides a structure for recognizing how risks emerge during live interaction, not just during development or testing. It encourages teams to think about model behavior, architecture, and system integration together.

See Aqua Secure AI in action, from visibility to protection across your organization, covering every model, every AI service, and every runtime environment.

Where the OWASP Top 10 Shows Up

The 2025 version of the OWASP Top 10 for LLM Applications reflects the real risks teams are seeing as generative AI becomes part of production systems. These risks are often connected. A model that performs safely during isolated tests might behave very differently when it is connected to production data, user prompts, and live systems. Here are a few examples:

Unexpected actions by the model

An LLM has been given access to internal services in order to automate tasks. A prompt injection or a misleading input causes it to trigger an unintended function. This is a case where excessive autonomy and a lack of runtime validation lead to unexpected consequences.
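A common guardrail here is a registered tool set: the model can request actions, but only functions that engineers have explicitly exposed can run. Below is a minimal sketch of that pattern; the registry and the `create_ticket` tool are hypothetical examples, not a specific framework's API:

```python
from typing import Callable

# Registry of the only functions the model is allowed to trigger.
TOOLS: dict[str, Callable[..., str]] = {}

def tool(name: str):
    def register(fn: Callable[..., str]) -> Callable[..., str]:
        TOOLS[name] = fn
        return fn
    return register

@tool("create_ticket")
def create_ticket(title: str) -> str:
    return f"ticket created: {title}"

def run_tool_call(name: str, **kwargs) -> str:
    # Anything the model "invents" outside the registry is rejected, not executed.
    if name not in TOOLS:
        raise PermissionError(f"model requested unregistered tool: {name!r}")
    return TOOLS[name](**kwargs)
```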

Exposure of sensitive information

The model has access to internal documents or inherited context. A prompt designed to extract information succeeds, and private data is returned. Without appropriate access controls in place at the container level, this kind of disclosure can happen silently.
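One container-level mitigation is a response filter that scans model output for secret-shaped values before anything leaves the service. A minimal sketch follows; the patterns are illustrative, not exhaustive, and a real deployment would tune them to its own data:

```python
import re

# Illustrative secret-shaped patterns; not an exhaustive list.
SENSITIVE_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key IDs
    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),  # PEM key headers
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                 # SSN-shaped values
]

def redact(response: str) -> str:
    """Scrub secret-shaped strings from a model response before it is returned."""
    for pattern in SENSITIVE_PATTERNS:
        response = pattern.sub("[REDACTED]", response)
    return response
```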

Model-generated output causes harm

The model produces a response that seems valid but is actually incorrect, biased, or unsafe. When that output is used by downstream systems, such as in configuration changes or as part of a script, it can result in system level failures.
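Treating model output as untrusted input means parsing and validating it against an expected schema before any downstream system applies it. Here is a sketch under that assumption, with a hypothetical configuration shape (`replicas`, `log_level`) standing in for whatever the downstream system actually consumes:

```python
import json

# Hypothetical expected shape for model-generated configuration.
EXPECTED_KEYS = {"replicas": int, "log_level": str}
ALLOWED_LOG_LEVELS = {"debug", "info", "warning", "error"}

def validate_config(model_output: str) -> dict:
    """Parse model output as JSON and check it against the expected schema."""
    config = json.loads(model_output)  # raises ValueError on malformed output
    for key, expected_type in EXPECTED_KEYS.items():
        if not isinstance(config.get(key), expected_type):
            raise ValueError(f"missing or mistyped field: {key}")
    if not 1 <= config["replicas"] <= 10:
        raise ValueError("replicas outside safe bounds")
    if config["log_level"] not in ALLOWED_LOG_LEVELS:
        raise ValueError(f"unexpected log_level: {config['log_level']!r}")
    return config
```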

These OWASP risk categories are especially relevant to platform, security, and engineering teams responsible for building, deploying, and maintaining LLM powered applications in containerized environments:

  • Prompt injection: Malicious or unexpected inputs that alter model behavior.
  • Sensitive information disclosure: Leaks that occur when models reveal data they have seen or can access at runtime.
  • Improper output handling: When unsafe output is accepted as valid and used without checks.
  • Excessive agency: When models are allowed to act with too much autonomy.
  • Supply chain risk: Hidden vulnerabilities from third-party models, datasets, or tools.
  • Training data poisoning: When compromised data introduces hidden logic or bias into model responses.

These risks take shape inside the container where the model receives inputs, executes logic, and returns results. It is at the application layer where observability and control need to exist.

Why Runtime Visibility and Control Are Essential

Knowing that a risk exists is different from being able to stop it. Many of the most serious LLM-related issues only emerge after deployment, and static analysis and pre-production tests are not enough. Security teams need the ability to observe what models are doing in real time and to intervene when necessary.

This is where Aqua focuses its efforts.

  • During development: Aqua scans code and configurations to identify insecure usage of LLMs, such as risky prompt patterns or unchecked output paths.
  • In production: Aqua monitors containerized environments to detect unsafe model behavior, prevent data leakage, and block harmful interactions.
  • By applying a security policy: Aqua provides a set of controls that align with OWASP’s LLM categories, so teams can apply consistent guardrails and stop unsafe behaviors before they reach users or systems.

Because Aqua operates inside the container at the application layer, it can see how LLMs interact with their environment and apply enforcement based on actual runtime behavior.

Learn More About Securing LLMs in Production

Understanding the risks is a starting point. What matters is being able to take action.

Aqua helps teams apply the OWASP LLM guidance in practical ways, from development through to production. With real-time monitoring, policy enforcement, and visibility into containerized workloads, Aqua supports organizations looking to adopt generative AI without compromising on security. Learn more about Aqua Secure AI.

Guy Balzam
Guy Balzam is a Director of Product Management for Data & AI at Aqua Security, focusing on data platforms and AI solutions. With over a decade of experience in product management and consulting, Guy has helped startups scale as both a product leader and architect. As a former security leader at ELAL, he brings valuable cybersecurity expertise to his current work in AI and data platforms.
Erin Stephan
Erin Stephan is the Director of Product Marketing for Aqua's Cloud Security portfolio. Erin has more than 10 years of product marketing experience in data protection and cybersecurity. She enjoys connecting with people, helping to articulate their challenges, and bringing products and solutions to the market that help solve those challenges. In her free time, you can find her catching a flight to a new city, shopping for new home décor, or taking a spin class.