Aqua Blog

How to Secure AI Workloads from Code to Cloud to Prompt

How to Secure AI Workloads from Code to Cloud to Prompt

Aqua protects AI workloads against new and emerging attacks, such as code injection and jailbreak, with a complete lifecycle approach. It starts with the code, where Aqua embeds AI security directly into your secure coding practices. This includes SAST checks that identify insecure usage of AI/ML packages, all aligned with OWASP Top 10 for Large Language Model Applications. From there, it expands to configuration, where you can set GenAI-specific assurance policies. Aqua establishes guardrails and risk thresholds, preventing the unsafe use of known-risky LLMs and blocking compromised code from being deployed into production.

Step-by-Step Guide

1. Create Assurance Policies

In the Supply Chain Security module, click > Assurance Policies > Choose New Assurance Policy.

2. Name the Policy

Enter the Name of the policy, add a Description, and select either Fail PRs or Fail Builds.

3. Determine Severity 

In the Controls section, check AI and ML Severity and determine the severity level.

Additional Runtime Protection capabilities are coming soon! See the demo below.

Who’s Afraid of Virtual Needles? The Similarities Between SQLi and LLMi

The good news is that there are currently very few real-world attacks targeting AI applications. The bad news? They are coming, and they are coming fast. Just like in apocalyptic zombie movies, a wave of LLM injection attacks is approaching, with the digital equivalent of eerie sounds and ominous warnings.
LLM injection (LLMi) attacks are often compared to SQL injection (SQLi), one of the most common and widely exploited vulnerabilities. Today, no CISO would ignore a SQL injection issue in an organizational app. But was that always the case?

The Rise of SQLi: A Cautionary Blueprint

SQL injection first appeared in the late 1990s within security researcher circles. Articles in publications like Phrack demonstrated how malicious input could manipulate SQL queries. Initially seen as an academic concern, SQL injection eventually spread into real-world environments. The early 2000s marked a turning point, with attacks becoming more common. Early techniques were simple, like injecting payloads such as ‘ OR ‘1’=’1 into form fields or URLs, mainly targeting data-driven sites like login pages and shopping carts. As awareness grew, SQL injection earned a place in the OWASP Top 10 by the late 2000s. Yet despite years of warnings, it remains a serious and damaging threat, often leading to major data breaches.

LLMi in the Wild: What We’re Seeing Now

Now we are beginning to see signs that LLM injection may follow a similar path. A recent academic study showed how real-world AI applications could be exploited. Researchers randomly selected ten AI-powered apps from the Rundown website, which reviews AI-driven tools, and found that applications like Notion AI were vulnerable to LLM injection attacks. These attacks led to information leakage and even remote code execution through chatbots and code-assist tools.
Building on that research, we developed a realistic AI-powered banking chatbot using the same underlying technology. After a brief exploration of open-source tools and dark web forums, we built a complete attack vector using publicly available techniques. The outcome? We were able to exploit the bank’s chatbot application easily. Our exploitation included information disclosure as we fetched data of other customers. Eventually, we were also able to run arbitrary commands and compromise the entire server.

Watch this demo of an AI agent-based attack using jailbreaking techniques and the exploitation of a compromised LLM model.

Don’t wait for an attack to reveal the gaps!

Contact your Aqua Sales Representative or Customer Success Manager today to learn how you can strengthen your container security and prevent real-world attacks.