AI security is often discussed at the edge, through firewalls, proxies, SDKs, or prompt filtering. These approaches serve a purpose, but they overlook where many of the most serious risks actually take place: inside the container.
In our recent webinar, Aqua’s threat research team, Nautilus, shared firsthand insights from running hundreds of honeypots designed to simulate AI environments. These aren’t theoretical conversations. They are based on real-world attacker behavior, observed in live containerized AI workloads. The conclusion is clear: to secure AI, you need to focus where the workload actually runs.
What Honeypots Reveal About AI Threats
Nautilus honeypots span a range of application types, from misconfigured websites to exposed databases to AI orchestration tools like MLflow. All of them run in containers, which have become the default for deploying AI and machine learning workloads due to their portability, speed and scalability.
Despite the benefits, these environments carry familiar risks. Vulnerabilities, misconfigurations and malicious supply chain components continue to be the top entry points for attackers. For example, a poisoned plugin or a compromised model server can introduce malware that operates at runtime. If that component is running inside a container, security controls must be applied there.
Why the Container Layer Matters
Security strategies often begin at the perimeter, but stopping attacks before they reach the model is only one part of the problem. A successful prompt injection might not just return an unintended response. It could trigger deeper issues like privilege escalation, lateral movement or even a rootkit deployment.
These are not risks that can be fully addressed through static scans or proxy layers. Instead, organizations need visibility into the runtime behavior of AI workloads. This is where container-level monitoring proves essential. An agent running inside the container provides the context needed to detect suspicious activity without modifying the model or requiring changes to the codebase.
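To make that concrete, here is a minimal sketch of the kind of telemetry an in-container agent can collect: it periodically reads /proc inside a Linux container and flags any process whose executable falls outside an expected set. The allowlist, polling interval and alerting are illustrative assumptions for this example, not Aqua's implementation.

```python
# Illustrative sketch only (not Aqua's agent): poll /proc inside a Linux
# container and flag processes whose executable is outside an expected set.
import os
import time

# Hypothetical allowlist for a model-serving container (assumption).
EXPECTED_BINARIES = {"/usr/local/bin/python3.11", "/usr/bin/python3"}

def running_binaries():
    """Map each visible PID to the executable it is running."""
    binaries = {}
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            binaries[int(pid)] = os.readlink(f"/proc/{pid}/exe")
        except OSError:
            continue  # process exited or /proc entry is not readable
    return binaries

def watch(poll_seconds=5):
    """Emit an alert for any process running an unexpected binary."""
    while True:
        for pid, exe in running_binaries().items():
            if exe not in EXPECTED_BINARIES:
                print(f"ALERT: unexpected process {pid}: {exe}")
        time.sleep(poll_seconds)

if __name__ == "__main__":
    watch()
```

Even a simple loop like this sees behavior that a proxy in front of the model never will, such as a reverse shell spawned by a compromised plugin.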
This approach is particularly valuable because it works regardless of whether the AI is built in-house, managed through a platform, or sourced from a third-party model provider.
Visibility Is the First Challenge
One of the most striking observations from the webinar is how few organizations can confidently answer a basic question: where are your AI models running?
In many environments, the answer is unclear. Teams often lack a complete inventory of deployed models, do not know which workloads are interacting with external APIs, and are unaware of shadow AI usage initiated by individual developers or business units. This lack of visibility is a growing concern, especially as models become more deeply integrated into production systems.
Containers can obscure this activity, but with the right tooling, they can also reveal it. AI-specific telemetry from inside containers makes it possible to detect outbound calls to services like DeepSeek or Gemini, flag sensitive prompts and map the use of AI across production environments.
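As an illustration of the idea (a sketch, not Aqua's telemetry pipeline), matching egress records against a list of known model-provider endpoints is often enough to surface shadow AI usage. The domain list and record format below are assumptions made for the example.

```python
# Illustrative sketch only: match egress telemetry collected inside containers
# against known AI API endpoints. Domain list and record shape are assumptions.
AI_API_DOMAINS = {
    "api.openai.com": "OpenAI",
    "api.deepseek.com": "DeepSeek",
    "generativelanguage.googleapis.com": "Gemini",
    "api.anthropic.com": "Anthropic",
}

def flag_ai_egress(connections):
    """connections: iterable of (workload, destination_host) pairs,
    e.g. taken from DNS or egress logs gathered in the container."""
    findings = []
    for workload, host in connections:
        provider = AI_API_DOMAINS.get(host.lower())
        if provider:
            findings.append((workload, provider, host))
    return findings

# Hypothetical records: two workloads calling external model providers.
sample = [
    ("chat-frontend", "generativelanguage.googleapis.com"),
    ("batch-etl", "api.deepseek.com"),
    ("payments-api", "internal.registry.local"),
]
for workload, provider, host in flag_ai_egress(sample):
    print(f"Possible shadow AI: {workload} is calling {provider} ({host})")
```

The same records that answer "which workloads talk to external model providers?" also feed the inventory question above: where are your AI models running, and who is calling them?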
Leveraging What the Security Community Already Knows
The good news is that the container security ecosystem is mature. The techniques and tools developed over the last decade for cloud native protection, such as runtime detection, drift prevention and escape detection, are equally effective for securing AI workloads.
Whether it’s a plugin executing malicious code during build time or a model container attempting to connect to an unknown external service, these are behaviors that container-aware security platforms are built to detect and stop.
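Drift prevention is a good example of how directly those techniques carry over. One common approach, sketched below as a simplified illustration rather than any particular platform's implementation, is to record a manifest of file hashes at image build time and flag anything new or modified once the container is running. The paths and manifest location are assumptions.

```python
# Simplified illustration of drift detection, not a vendor implementation:
# record a manifest of file hashes at image build time, then flag anything
# new or modified while the container is running.
import hashlib
import json
import os

def sha256(path):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(root="/usr/local/bin", out="/etc/image-manifest.json"):
    """Run during the image build: hash every file under root."""
    manifest = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.path.isfile(path):
                manifest[path] = sha256(path)
    with open(out, "w") as f:
        json.dump(manifest, f)

def check_drift(root="/usr/local/bin", manifest_path="/etc/image-manifest.json"):
    """Run at runtime: report files added or changed since the build."""
    with open(manifest_path) as f:
        manifest = json.load(f)
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if not os.path.isfile(path):
                continue
            if path not in manifest:
                print(f"DRIFT: new file since build: {path}")
            elif sha256(path) != manifest[path]:
                print(f"DRIFT: file modified since build: {path}")
```

A model container that suddenly contains binaries it was not built with looks exactly like any other drifted container, which is why the existing playbook applies.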
In short, the experience gained from securing containers translates directly into securing AI. What’s needed now is to apply those same principles with an understanding of how AI introduces new patterns of risk.
Aqua Secure AI: Protecting AI Workloads Where They Run
Aqua Secure AI was purpose-built to address the challenges organizations face when deploying AI in containerized environments. It provides deep visibility into how AI is used, which models are running and how they behave, all without requiring changes to code or infrastructure.
From early detection of unsafe logic in development to runtime protection against prompt injection and policy violations, Aqua delivers full lifecycle security for AI workloads. It maps activity to the OWASP Top 10 for LLMs, enforces policies at the application layer, and supports all model types, whether SaaS, managed, or self-hosted.
Most importantly, Aqua Secure AI secures the layer that other tools overlook. While SDKs and edge firewalls help control inputs, Aqua defends the containers where AI applications actually run and interact with sensitive data. That visibility and control closes a critical gap in the AI security stack and gives teams the confidence to move forward without slowing down innovation.
Watch the full webinar: Where AI Security Really Happens: Inside the Container