GenAI is Everywhere: Is Your Security Ready?

Generative AI (GenAI) is driving innovation across cloud-native environments, offering new opportunities for efficiency and personalization. But with this rapid adoption comes new security challenges. How do you make GenAI happen—without breaking security? In this webinar, you’ll learn:

• How GenAI and Large Language Models (LLMs) expand the attack surface in cloud-native environments like containers and microservices.
• Key security risks highlighted in the OWASP Top 10 for LLMs, including Prompt Injection, Insecure LLM Interactions, and Data Access vulnerabilities.
• Practical steps to integrate GenAI into your development process without compromising security.
• How Aqua Security’s solutions secure LLM-powered applications from development to runtime.

Join us to see how security professionals, developers, and DevSecOps teams can work together to secure GenAI applications from code to cloud. Ensuring innovation continues safely and securely.
Duration 48:12
Presented By:
Iheanyi Njeze
Senior Solution Architect, Aqua
Meha Varier
VP Product Marketing, Aqua
Transcript
Good morning and good afternoon, everyone. Thank you for joining our webinar today from Aqua Security on generative AI. We hope to bring you some really interesting and useful content around, understanding and assessing your needs for generative AI security.

Security. I'm Meha, the VP of product marketing at Aqua Security, and I'm joined here today by Iheanyi, who's a solution architect at Aqua Security. I'm based in Toronto, and Iheanyi is in London.

Alright. So getting into the agenda, we have quite a few things to cover today, and we're really excited about it.

Initially, we'll start with just a level set on what generative AI is, what it means to to you potentially.

We'll talk about some trends in the adoption of generative AI applications, some top concerns that organizations have and CISOs have specifically, around the adoption of generative AI as well as some regulation.

And then we'll go into some details around, how we think about at Aqua Security generative AI, how our application how we help protect our customers' applications around gen a Gen AI, And, Ihania will present a short demo on that as well. And then we'll end with some key takeaways, obviously, questions, and present an exclusive offer for all of you that have joined today.

Alright. So with that, let's get right into it.

So as I mentioned, I wanted to level set on the definitions of Gen AI. We've heard this term thrown around quite a bit in the last, twelve to eighteen months, and I wanna make sure that we all have a similar understanding of what it is. So starting with artificial intelligence. Now this is obviously a very popular term. We've all watched the sci fi movies, and we know that the robots are taking over, probably sooner than we expect. So no explanation needed. But I do wanna point out that AI is the umbrella term or superset of all the technologies that we're see seeing and hearing about being launched and widely adopted today.

This covers all kinds of technologies designed to make machines smarter and helping them solve problems and making decisions pretty much like humans do.

Under that, you've got machine learning, and this is about teaching machines to learn from data and get better over time without needing constant instructions. Right? It's about making machines smarter.

And then you have generative AI, which is what we're talking about today, which is the branch of machine learning that is used for content creation mainly. And this is what has exploded in the last, couple years. I don't know about you, but I have started using GenAI quite a bit, as a product marketer.

And, it's not limited to text, but, also, GenAI can be used to generate images, to generate music, as well as videos.

And finally, you have the large language models. And this is a very specific type of generative AI that focuses on understanding and generating generating human like text, perfect for things like chatbots or summarizing information. So you've probably started to encounter some of the LLM, applications as well in your day to day life, or you're probably using it if you're a developer.

So what are the, you know, fairly popular generative AI technologies that are out there, and the LLM technology. So one of them is OpenAI, which, you know, is which is what powers chat GBT.

Gemini is what Google has come out with, which also has gained popularity in the last little while. I personally started using it, on my day to day activities and, you know, automatically get questions answered through gem Gemini when I search on Google.

You might have heard of Mistral AI, which is, more of a cutting edge European start up that focuses on creating advanced language models for AI. And what really sets them apart is their commitment to openness, and they also prioritize ethical AI development, aligning with strict European rules about privacy and safety.

Anthropic is an AI safety and research company.

It was founded by former OpenAI employees, and it focuses on creating AI systems that are interpretable and aligned with human values, and, of course, safe for deployment in real world scenarios. So there's multiple Gen AI technologies out there, and each one of them has a slightly different focus and, of course, different strengths and weaknesses.

Now if we talk about the adoption of AI, right, AI is being adopted so rapidly across so many different industries. It's exploded. Right? Like, you look at the screen here and you see so many different, products that have already adopted AI and across different industries as well. So there are all these horizontal platforms that have applied, AI already. And then for specific verticals such as health care, you see Gen AI being used for diagnosing diseases, personalizing treatment plans, and improving overall operational efficiency.

We also have the use case of retail that leverages AI for personalized recommendations, inventory management, a lot of customer service chatbots as well.

In finance, you see AI powering fraud detection, risk assessment, etcetera. So this widespread adoption of AI really highlights, the versatility of AI and its potential to drive efficiency innovation, across a diverse set of use cases.

And we're seeing that in the real world. Right? So why should you care about generative AI and LLM? Well, more than half of IT leaders expect their organizations to use generative AI to build software.

Right? And and this is used, to, again, to speed up product innovation, to stay competitive. Everybody is adopting AI and moving really, really fast because of it. And then ultimately to build LLM powered business applications, which not only makes these vendors more efficient, from a cost perspective and an execution perspective, but it also adds value for customers, for their customers and makes them, able to access their services more efficiently.

By one estimate, we're hearing that, by twenty twenty seven, forty percent of platform engineering teams will use AI to augment every phase of the software development life cycle compared to just five percent in twenty twenty three.

So that's what the graph looks like. Right? It's it's across different, types of LLM applications. It's different types of, you know, adoption models and use case models of of LLM. It's a huge market. It's growing extremely fast.

And the reality is that these applications, are being used by your clients today, and hopefully by your organization as well. So you need to figure out a way to use LLM applications in a secure way.

By the way, these these LLM apps, these AI apps are being built using cloud native technologies. So from security stand standpoint, the number one thing, you need to be thinking about as an organization is how do I use on and and implement security for these cloud native technologies and therefore keep my LLM applications secure.

Generative AI, despite all of this adoption and rapid growth and and probably because of it, it has become a top concern for CISOs and rightly so. Right? Like, there's all this regulation, that's now coming out. Eight out of ten CISOs worry about the lack of visibility and controls within the use of AI. Right? Where's AI being used in my organization? What is the risk to sensitive data, to my to my security, to to my business?

And as you see on the screen, slightly more than half are worried about regulation and their liability due to AI. This is what is keeping CISOs out at night. Right?

And because of that, almost half of all the CISOs are banning use of all AI in the workplace till they figure out how to use this in a secure manner. Well, there are solutions out there, right, already, to enable the safe use of AI, and we'll get to that in a minute. So this is obviously a lot of uncertainty, and it represents an opportunity, for us as as vendors to help our customers figure this out, but yourselves as practitioners to go out and seek the solutions that you need to be able to continue the use of AI, to be able to adopt AI, and to move quickly on your innovation.

So with that, I'll pass it to Ihanyev to talk a little bit about regulation and then go a little deeper on, the the side of how Aqua protects, Gen AI applications.

Amazing. Thanks, Meha. So some of the regulation that we're seeing in the industry, even though it's a fairly new industry compared to other, industries around cybersecurity, Gen AI security and AI security in general have fairly new regulations, but they are very far reaching, with a lot of consequences for, organizations that violate them.

Some examples include, you know, things like the EU artificial intelligence act or I think it's abbreviated as EAIX as well, where, the EU essentially is mandating organizations that utilize artificial intelligence to prevent the exploitation of these applications or the allowing of the artificial intelligence to exploit any vulnerabilities in the application. It also deals with things like safety and privacy, making sure that your AI and LLM models do not access things like private, you know, data that is private or personally identifiable information.

And, essentially, organizations that fail to adhere to this could face up to thirty five million euros in fines or up to seven percent of the annual turnover from the previous financial year. So this is really, this this is not just having reputational damage anymore. It's now actually, introducing a financial, hit for organizations that don't adhere to this.

Right? Another one, that is an example of that is the executive order fourteen one one ten. So this was given by the White House in twenty twenty three. And, essentially, it mandates the same thing.

Organizations that utilize artificial intelligence must be able to prove, ensure that their applications are secured, that they are not vulnerable, that they don't utilize the applications inherent vulnerabilities. They don't accept private data.

The data that is used to train it is secure. So there's a lot there's a lot on the back of these, but the important thing is that it's now being put in as regulation, which organizations have to comply to. Everybody's probably going to start using AI in the near future, so it's very important to keep an eye on this. So the, the risks of not adhering to this kind of, regulation essentially includes things like, you know, contractual penalties. So you could lose things like contracts with the federal government.

You could face, you know, government audits. It could include which can then reach out to things like fines and possibly other punishments in the judicial system. K. Next system. Next slide, Maya.

Yeah. So, essentially, the reason we're talking about the cloud native landscape is because AI in its nature is run on cloud native applications. Right? So one of the reasons why AI is booming now is because of the growth of cloud. So cloud has allowed organizations essentially to utilize the power and the scalability of the cloud to build and run all these huge large language models, which take up a lot of data, a lot of compute, but the scalability of the cloud allows it. So, essentially, the security of, cloud native applications and the security of generative AI applications are hand in hand because one essentially washes the other. Right?

And this includes things like, you know, if you have a generative AI model that is, exploited, it could have things like Internet access. So for example, when you're using ChatGPT, it's an LLM behind that UI that you're using, but it has Internet access. Right? So you can start to ping it if you wanted to attack it. Make so you have to make sure that these models are essentially secured behind the scenes.

It could have access to things like documents, access to people's private data, health data.

It could have access to classified information as well.

You have to make sure that the data sources where it's accessing are essentially the data sources of the people who are allowing it.

And you could also be allowing access to custom code. So all these organizations that we saw earlier, like OpenAI, like, Mistral AI, Gemini, that is all custom info, you know, code, custom, proprietary information. So you must make sure that these LLMs cannot be used to then access this proprietary information which gives organizations, a business edge over their competitors.

Next slide, please.

So an example is, of LLM risks. This is a real life example.

The security research actually shared this on Twitter or x.

Apologies.

And what, essentially, they did was they were speaking to the chatbot of a specific Chevrolet dealership. And by using assist, a method or a technique called prompt injection, which essentially allows you to kind of trick the large language model, into doing your bidding, which it would not normally do. That's what it's called prompt injection. He actually got the chatbot to actually agree to sell him a Chevrolet Tahoe for one dollar.

Right? And we'll talk about these kinds of techniques and what they mean in terms of the larger context of things like the OWASP, top ten risks for LLM, large language models. But, essentially, this AI was then exploited by the security researcher. I don't think he finally took the Chevrolet for a dollar.

But this was just it just goes to show that, AI applications can be exploited just by simply using natural language. You don't have to be an expert in IT. You just need to be able to talk to the AI and, exploit it. Right?

And then if you skip on forward and essentially, what you can see here is one of the biggest drivers of AI and LLM applications is Kubernetes. Right? So because of the scale that Kubernetes allows, the growth of Kubernetes in the industry, which is so if you don't know right now, Kubernetes is now it's almost the unofficial, operating system of the cloud because it allows organizations to harness the power of the cloud, scale vertically or horizontally as they need.

This has become the preferred option for running AI applications.

So your popular AI applications are run on Kubernetes.

I won't name the names, but your popular ones that you use on a day to day basis are run on Kubernetes.

K?

Next slide.

And the development of Kubernetes applications or journey applications or both essentially have the same risks that you would have to face as part of governance around your software development life cycle. However, you've not added the extra layer of generative AI, which allows more advanced attacks, more advanced exploit techniques, and, ultimately, it can be, more difficult to detect as well. So if you just, click through, to show some of the threats that we see through the SDLC. So I know this is a bit of an eye chart, but I'll try to walk you through it.

So, essentially, if you think about a large language model, the way it goes is someone starts to build out this model. They could either start building it from scratch, or they can use an, a tool on the, marketplace or on the Internet, that they found, and then they build on that. So, essentially, it's a developer who starts to build, out this application, and it's got a few risks, which we'll touch on in a bit. So they build the LLM, or they write the code for the LLM.

They build it into an artifact. So this could be an image. It can be, you know, something like an artifact or a Maven artifact, and then they store this somewhere. Right?

From this storage, they then deploy this application so that they can run the large language model and then essentially utilize the capabilities of AI to either enhance the, capabilities of their product or to actually sell it as a product on its own. Right? But there are risks throughout the software development life cycle. And, essentially, what happens is something like saying code. Right? You need to be able to make sure that the LLM doesn't have any code risks. So are you doing any static, application security testing on the code to make sure that there are no insecure methods?

The structure being used for, the code is secure. There are no misconfigurations. You haven't checked any secrets. So I was actually one of the cloud providers, headquarters last week, and they were talking about about sixty seven percent of the attacks they're seeing on applications, especially JNIA applications, is when people forget to remove the sensitive data like keys or tokens from the application before they actually deploy to production. Right? So that's a large number.

The next bit is when you're building the application. Is it being is it being, is the code that you've pushed in, is it actually the large language model which you're going to get into production? So for example, is someone changing the data sources that you're using to train this application? Is someone manipulating the dependencies of your LLM when you're deploying it?

Is it being tampered with? Is it going to the insecure, environments that you own? And then while it's running, is someone then exploiting the application as it runs? Right?

Just like how we saw the researcher who exploited the, Chevy Tahoe application just by using natural language. It doesn't always have to be an advanced attack. But is someone exploiting this? Is someone going to open a reverse shell? Is someone getting the application to tell it its secrets? What about any passwords it might have stored? K.

Next slide.

So, essentially, some of the attacks that we're seeing on applications like Genya applications in the wild by our Aquanautilus, who are essentially our cloud native research team. We focus solely on cloud native attacks, and what they're seeing on in the wild is actually mind blowing. So for example, they attack they analyze thousands of attacks, and some of these are very unique attacks and very advanced and very sophisticated up to, nation state level.

But the important thing to note is that some of these are super advanced, and they also utilize the capabilities of AI to attack the application itself. So for example, we're seeing things like dynamic code loading into the LLM, asking the LLM, you know, for things like locations of any secrets inside of it. But, ultimately, they're launching things like root kits, right, which is very difficult to get rid of in a Linux environment. But we see that in nearly four percent of the attacks.

Right? So attackers are launching root kits essentially maintaining persistence so you can't get rid of them from your organization very easily. K? And, again, fifty two percent of these attacks don't leave any file system footprint.

So essentially, protection against things like file less, executions in these LLM applications are very important because they do this. There is no footprint. There are no logs. And when they're gone, they're gone.

There will not be a trace of it. So if you think about in, like, an audit, if an auditor comes around and you don't have these logs, that could mean other problems as well. Okay?

Next slide, please.

So, essentially, we're just going to, walk through kind of some of the attacks, or the attack that we just looked at, some of the techniques that were used which map to, the OWASP top ten, risks for LLMs. Okay? I'm not going to go through all of them, but I'll just analyze this as a user.

So the user talking to the application, essentially, what he did was, a method or a technique called prompt injection, which I explained earlier. So it's essentially a way of almost tricking the LLM to do what you want it to do. Okay?

And then the LLM application, because it had supply chain, vulnerabilities, was then taking those prompts, which were exploiting its capabilities to then ask it to do something which it shouldn't do, which is called excessive agency in, LLM applications, which you can see on the top right. So excess excessive agency is an OWASP top ten risk, which means that an AI or an LLM, a large language model, has the ability to make decisions which have consequences. So for example, you wouldn't let AI, you know, vote on, you know, something like where to send children to school or where to, you know, how to direct things like shipments of, you know, ammunition.

You you wouldn't allow it to make those kinds of decisions. It's mostly used for mundane tasks that remove repetitiveness rather than decisions that actually have consequences behind them. So in this case, this, LLM was essentially allowed to then make a decision on selling this car to the researcher. Right?

And that's what is called excessive agency, which should be taken away from AI applications, and it's actually an almost top ten risk for, LLMs.

K? There's other things like, sensitive data disclosure. So for example, an LLM shouldn't be telling you. So if I asked an LLM being used by, for example, the National Health Service in the UK, it shouldn't, I shouldn't be able to ask it to give me the health records of, let's say, my neighbor. Right? And it would just send that back. It should be able to tell that this is sensitive data and it shouldn't disclose it.

There are a few others, but we can also share some more around the, OS top ten, in our call to action, which Megha will mention later.

Okay? Next slide.

So, essentially, there are three basic parts that you need to cover for security for LLMs. The first one is model security. So making sure that the model is secure, and that's what we're talking about today in the session, which is securing your Gen AI applications.

The second one is making sure that the output of the Gen AI application is secure. So that's model output security, making sure that you're not, you know, outputting harmful, misleading, or excessive agency information, and also making sure that the data is secure both in and out. K? Yeah. Next slide.

And there are two ways to do this. The first thing is, number one, making sure that you're securing the application where it runs, which is securing the cloud. And the second part is securing the code. So before this application even gets built and shipped, it needs to be scanned for any risks and it needs to be blocked before it gets to the point where it becomes a production application being exploited. Alright. So you need these two sides to make to ensure that the full SDLC is protected and that you're deploying secure JNI applications.

And I'll show you how to do that in a second. Next slide.

Here at Aqua, there's the way we look at it. For us, it's number one is dev security and then there's cloud security. So there's runtime application security, making sure that this cannot be exploited so that you don't end up, you know, paying a fine of because of violating the EU AI, act.

But also making sure that the code is secure, that you're not putting in code that, for example, tries to access data which isn't, which is not meant for it or that the AI doesn't try to, you know, reach out to an endpoint which it's not meant to. Right? So making sure that the code is secured itself, free from vulnerabilities or commonly known vulnerabilities at least, and then making sure that while the JNI application runs, it behave exactly the way it should without actually outputting any harmful information, misleading information, or being exploited for other sources of things like data.

K. Next slide.

And the benefits of this are essentially, because AI is usually built by, devs, it gives them the a lot of control around what they are putting into production before it becomes a risk, allows them to fix issues really quickly on the LLM before this becomes a production risk, and also allows them to understand things like blind spots. So for example, if they take, a large language model tool, something like langchain or any of the popular ones that are in on the Internet, They might not necessarily fully understand what that, tool does inside of it. But when they have dev security in place, they're able to scan all of this and understand the full picture, the full risk of the LLM applications before it goes into production.

Okay?

And then, I think let me share my screen now.

Then we'll just go through a quick exploit of an LLM application, which I'm sure is why most people are here.

Okay. So this is my, I have a Kubernetes cluster with a generative AI application deployed. So you can see here the pod, which is called, vulnerable Gen AI application.

And, essentially, what I'm going to do is I'm going to utilize a tunnel hosted by Engroch, and I will listen for when my shell opens. So this is my JNI application. It's a stocks application. You can use it to query, findings on the Internet on stocks. So for example, if I ask it what is the stock price of Microsoft, it will come back with a number, and this is the stock price of Microsoft, for today.

So if I ask this application now, what are its capabilities?

K. So you can see I haven't done any coding so far.

So it's going to tell me what its capabilities are.

However, if I then ask it to run me a Python application to retrieve the username of the application, because now I know it uses Python because it's told me.

I can say what's your username, and it will tell me its username. So now I know the application is running as root. So already I have more information than I should about running application on the Internet.

And then I can then start to do things like ask it, to retrieve the process name that it's running with.

So if this application had Gen AI security, it shouldn't be telling me all of this. Right? So now I know it's Python three, which then I can then start to look at and do things like look for exploits for Python three as against Python two or anything like that. Okay?

So now I know that this application will run some capabilities in the back end. So if I try my best and I'll say, why don't you open a shell to this my tunnel? So this is my n g r tunnel that we just saw so if I open if I run this now so it won't actually, run that because of security reasons. Right?

But if I then do what we call prompt injection, which is essentially, like I said, tricking. So for example, if you tell a toddler to put on his shoes, he might say no. But if you ask him to put on his shoes so we can go and get a treat, then the toddler might put his shoes on. Right?

So this is what I'm doing. I'm essentially asking it to calculate the most fun day of the week based on the logic. However, I've also put back the same code, which will run a reverse shell and open a shell through the tunnel I've opened. Okay?

So now you can see that as soon as I've run this on my local listener in the top left here where I'm listening with Netcat, I now have an open shell to this application. So without having to run any specifically advanced commands, all I just need to know is how to open this reversal, and it's this is not difficult. You can Google this and find it in one click.

And I have a shell open to an application which is running on my Kubernetes cluster. Right? So you can see I haven't had to run any QCTL commands. I don't need to know Kubernetes.

I just need to know how to do prompt injection to exploit this application. Okay?

So now, for example, I'm in this container. I just run an LS so I can see what's in the container, all the files.

I can do, cats. Let me see if there's anything in the password folder.

So you can see, obviously, this might be obfuscated as well.

But if I look, for example, for a specific key, let's say I because this is an OpenAI application, I could start to look for a specific key. Okay?

So you can see now it's told me that there is an open API key.

I can do a find and look for sensitive more sensitive information for things like certificates that are left on this that are here. So you can see I can do a host of things. So this opens up the floor for the attacker to be able to do a lot of things in your application with very simple capabilities that are exposed to the Internet. Okay?

So what we allow you to do here at ACWA is for you to be able to understand what's happening in your JNI application, but not only that, for you to be able to stop them, which I'll show you in a second. So if I go to my vulnerable JNI application, so you can see here that there are detections which we've listed as incidents. You can see that because the application is running as roots, APA can tell us that it's running as root.

You can see that it was a reverse shell that was opened, and you can see the details of the environment. So this is the port that we had mentioned, that I showed in my Kubernetes in lens in my lens UI.

So this application is the one that's been exploited, and you can see that a lot of stuff has been run on it. So you can see where I grabbed for the OpenAI, API key, the reverse shell that I ran, printing the environment variables. You can see the connection to the end grok server.

So this is a full, audit of essentially what has happened on the application, but this is because we allowed it to happen, first of all. K? You can see the raw data of all of this. So you can see where it's running.

You can see the full details of who's run it. However, what you can also see what you also have is the ability to audit everything that has happened in this application. Okay? So if I go to my inventory and I go and look for my Von Gen AI application, Gen AI, you'll see the the this is the workload. You can see the compliance status. You can see the vulnerability scan findings.

And if I go to the security graph, it can essentially trace from end to end what's happened for me. So you can see the reverse shell detection, which we just looked at. You can see that this was a runtime incident. This was the container. You can trace it all the way back to the image.

But remember, we talked about having dev security early on in the process so that you can prevent any of these issues from even happening. So this is a running application. What we want to do is to give you the capability to actually stop these from happening.

So if you go to for example, if I go to my the code repository, which is the Gen AI application, and I go to the SAS scans, you can see that there is an AI and ML finding. Right? If I click on this, so this is the one we exploited, and it was AI and ML, detection, lang chain, and, Python.

And you can see that it allows injection, but also has a few others. So things like output integrity attacks, that has also has insecure, output handling. So these are the findings that are shown to a developer at the stage when they're building this application, this this specific application. K? What we can then do is use policies to then say, for example, I want to block out all artificial intelligence and machine learning risks, but I also want to fill the pull request if it doesn't pass this check. K? So with this, you can say block all all those builds, fill the pull requests.

However, you want to block medium, high, and criticals. Right? So this puts that dev security gate in place before it becomes a running application.

But on the flip side, if we then go to the, running application, because we already have the application running right now. Right? And I go to what we call a runtime policy. This allows you to be able to put in a check, and actually block attacks while they happen or before they happen, right, while the application is running. So if I go to my, runtime policy, you can see that I have a few controls enabled. So I've blocked things like crypto mining, fileless executions, reverse shell.

However, what I have done is this is still in audit mode. So if I give an example, if I say I don't want specific executable to run inside of my JENIA application, and I add that. So, for example, I don't want to run LS. I save that. If I then try to run LS in this application, oh, I still audit. I didn't save that.

Fire block LS. Save. Cool. And I run LS now.

Oh, it's that's not saving.

Let me just reload that just to make sure.

Yeah. It keeps staying on there.

Enforce. Sorry.

Yeah. So that's it for enforce now. So now when I run the LS, it's now permission denied because I've switched this to enforce.

However, inside of this policy, if you remember, we also had reverse shell blocked.

So if I exit from this container or from my reverse shell session, Because I now have reverse shell in block mode under this policy, what will happen is if I try to repeat this same command, which will exploit my generative AI application to open a reverse shell, What will happen is I will get so you can see this was successfully executed. I'm just waiting for the previous session to close.

And what will happen here is, if I just give this a sec.

Let me just chase it a bit more.

Let me open a new tab while that exits.

So if I open a new tab and listen this time again, and then I try to run that reverse shell again. So because we now have that policy in enforce, there should be an error. And you can see this hasn't opened. Right?

So the reversal hasn't opened this time around, and the generated AI application has now said that it encountered a permission denied error while trying to run this. Okay? If you go back to the upper UI and go to those incidents, you'll see that if I go to this my JENEA application and go to the timeline, you'll now see that the the block events are now popping up. So you can see that this was detected in block mode.

This was the LS, command that I ran earlier.

And if I go into the hub, which shows the unified risk view of everything, and I go here, go to the timeline, you'll see that the last reverse shell that I tried to run, just right now, which is three forty PM UK time, was actually blocked.

And what happened what happened here is essentially that the policy, which allows us to prevent the application from deviating from the way the LLLM was built to, act, allows us to put in those guardrails and make sure the application stays in the same, states that it's supposed to. Right? So it becomes immutable, which container should be.

So you can see here the reverse shell that was blocked, and you get a full timeline of exactly everything that's happened. Okay?

And, hopefully, that was clear enough, and then we didn't do too much technical work, for late on a Monday afternoon.

But if we go back to the slides, we can then carry on with that.

Sorry. There's just one slide. I'll go one slide back. I know, I know you wanted to summarize the key takeaways.

Yeah. Exactly. So, so what some of the things that we've seen, in the previous demo, I know it's a bit technical, but essentially what it showed us is essentially that for application life size security, it's absolutely important for journey applications. Right? You need to enforce with control gates what the applications the way your promotional applications from developer to production.

You need to make sure that you're looking for LLM specific risks.

I need to enforce these guardrails on the application because LLMs have a wide variation of things they can do, and utilizing both the scalability of the cloud, and the intelligence of a an artificial intelligence, application makes the attack, surface endless, essentially.

And the idea is to put in cloud native security rather than trying to retrofit things like traditional security methods for AI applications because that doesn't work.

Because a lot of these applications are running on cloud native kind of, you know, infrastructure like Amazon, ECS or Kubernetes, whether it managed or unmanaged or the highly scalable resources, it allows for a very large attack surface.

So making sure that you have cloud native security in place focused on LLMs and all the way throughout the software development life cycle is imperative for you to have a secure AI application.

Perfect.

Yep.

Thank you, Inania.

Alright. So with that, that sort of concludes the the core part of the presentation, but we're incredibly excited to present to you an opportunity. It's a very, very exclusive opportunity with limited seats, where you can connect, with our field CTO, Benjie Portnoy, who's an expert in generative AI and cloud native security.

We're setting up a very exclusive session with him with a handful of, individuals that sign up today.

So if you could scan the QR code, if you're interested in getting more of a, you know, very focused, session with Benji to talk about your generative AI, maturity as well as security needs, then, hurry up and and scan this QR code. So we will leave this QR code up on the screen at the end of the presentation as well.

Now just to go into a few questions that we have received in the chat here. I'm just gonna come out of this view and go to the chat.

Alright. So one of the questions we have is, for Yihanyi. What from the OWASP top ten you show the OWASP top ten. What exactly does ACWA cover in terms of security?

Yep. So, essentially, what ACWA covers is everything that relates to, the model security itself. Right? So if you remember in those quadrants, there was the model security, there was the model output security, and there was the data security.

So some because it's a multifaceted approach, you need to secure some things like the data sources that you're training the data from, separately. But what we focus on is essentially the model security. So how is the code built? Does it allow for prompt injection?

Does it access, unwanted environments?

Does it have things like excessive agency? Are there SaaS weaknesses? Does it, you know, does your LLM not have security for, things like data handling, validating inputs, and outputs as well? So everything relating to the model security, we can cover.

Perfect. Thank you.

Alright. We have another question here.

Alright. So question is, are developers use LLM in code?

Do you have any tips on how to identify this and make sure it's secure?

Yep. Sure. So that's what we we do here. At our security, we help you, first of all, have an overview of everything that your developers are using. So in terms of code, whether it's LLM or not, but the idea is scanning all through all the code consistently and on every code check-in.

What this gives you is the understanding of what is going in on every single code check-in. Right? So nothing gets missed.

And as part of that scan, making sure that you have policy gates in place. So an LLM, at the end of the day, is really just code. Right?

It's just a large language model, which is different from a web application.

But at the end of the day, it's code. So making sure that you have code security that covers both vulnerabilities, scans for resources, whether it's LLM or not, and making sure that you're looking for LLM specific risks. Right? Because developers will be taking some of these, experimenting with them. Sometimes they forget them in part as part of the application, and then they check that in. So making sure that you have that dev security in place as a control gate so that developers cannot check-in code without going through that control gate. That's a really good way of making sure that they don't check-in, number one, bad LLM, and number two, any unwanted or, unverified large language models into your master branches.

Got it. Thank you. Let's see. We have another question.

Alright. So how many LLMs are supported? There might be thousands.

Yes.

That's a good question.

There are thousands.

And the idea is we we take away the, should I say, the structure of this particular LLM or that particular LLM. We're just looking for AI and ML, number one, resources. We're looking at for AI and ML, methods or code structures inside of the application.

So where it's we're agnostic of what specific LLM it is and what we're looking for are things like tools which developers will use. So for example, developers might use, specific Python packages the most to build LLMs. We'll be scanning those for any risks, and then we'll be looking for AI and ML structures inside of there to then alert you that, you know, this is an AI and ML risk, which can lead to exploitation inside, of the, AI application.

So I think we have one more. Does Aqua protect against AI driven attacks?

Yes.

So, essentially, because AI driven attacks at the end of the day are just attacks. They're just now supercharged and on steroids, driven by AI.

So these are just attacks. So if, you know, an an AI, like you just saw, allowing someone to create a shell into a running application, which then gives the attacker the ability to listen and operate from inside of the workload, it's still an attack. It's just now supercharged by AI because I don't have to be technical, and it can have way more capabilities or it can run things at more pace than usual. And a lot of the attackers that we do see in per in the wild from our research are actually using AI quite a lot. So, you know, teams like, you know, the Kinsing, ACTA or team eight two twenty, they all use AI, and we can see those and we can stop those attacks. Right?

And so at the end of the day, an AI attack is really an attack, but the important thing is for you to be able to see the attack and also detect that there's an AI risk inside of the application before it goes into production.

Got it. Alright. I think that's all the questions we have today.

So thank you all for your time and attention, and for hanging on for a bit for this webinar.

So we will be sending you a a recording, after after this webinar, as well as a link to that QR code and landing page where, you know, we've set up that very exclusive session with Aqua's field CTO, Benjie Portnoy.

If you have any specific things that you'd like to discuss about your GenAI security posture, please do sign up and and join that session. It's it's a rare moment to get, Benjie talking to you one on one.

And then, finally, we will also send you some materials so that you can learn more about Aqua's generative AI, security capabilities. So thank you once again, everybody, and we will see you on the next webinar.
Watch Next