Operationalizing Zero Trust: Securing Containers in Runtime

In the fast-moving world of cybersecurity, AI-driven threats are reshaping how organizations defend their environments. The Zero Trust model ensures no user, process, or workload is trusted by default and that verification is continuous. This ATARC webinar explores how to apply Zero Trust principles to live container workloads, where vulnerabilities and active attack vectors often emerge. Experts from NIST, CISA, TSA, and Aqua Security share practical strategies for securing containerized applications in government environments, including runtime controls, policy enforcement, image signing and validation, and operationalizing immutable infrastructure while meeting federal requirements.
Transcript
Hello, and welcome back to ATARC's webinar series. Today, topic experts will share strategies that agencies can implement to alleviate the concern of the unknown and make operations more efficient. My name is Kirsten Kapsaruba, and I'll be moderating today's panel discussion.
ATARC stands for the Advanced Technology Academic Research Center, a nonprofit organization that facilitates collaboration between government, industry, and academia in order to accelerate technology modernization initiatives.
ATARC provides ongoing opportunities for cross-agency collaboration through on-site interaction, learning, and market research.
I would like to welcome all of our attendees who are joining us right now, and a special thanks to Lisa Ragusa Fadala and the entire Aqua Security team.
This afternoon on the agenda, we are going to hear from each of our panelists. We will have time for some Q&A, and we're gonna pop in a few poll questions. So that being said, don't forget to submit questions and comments of your own in the Q&A section. We'd love for these webinars to be as interactive as possible.
And be sure to answer the poll questions in order to receive your CPE credit for attending today's event.
So with that all said, I'd like to welcome our panelists. Come on camera. Looks like they're all here. And we're gonna begin with a quick round of introductions from each of you: who are you, what's your role, and what agency or organization do you work at?
Alright. Let's start with Martin, please.
Hi, everyone. Thanks, Kirsten, and thanks again for having me on one of these. So I'm Martin Stanley. I'm the emerging technology branch chief at CISA, and I'm currently on detail to NIST, where I'm working on the trustworthy and responsible AI program. So looking forward to chatting with you today.
Thank you. And, Philip, let's go to you next.
Yeah. Hi. Hi, everyone. Phil Pearson. I am from Aqua Security. I'm Aqua's field CISO.
I've been developing zero trust frameworks since around twenty seventeen, mostly in the private sector, but some public as well. It's also noteworthy that I work part time on the implementation content from the CSA.
Today, I'm eager to share my lessons and insights on how Zero Trust offers practical solutions for protecting your modern applications, obviously, today with a specific focus on public sector.
Thank you. Trevor, would you like to go next?
Certainly.
Hi, all. My name is Trevor Bryant. I'm the system security officer for the dot gov top-level domain, or TLD, program here at CISA.
I'm really excited about this topic because I was fortunate to be on some of the very first teams in the government to adopt container technologies, as an early adopter. In my personal capacity, I coorganized the DevOps DC meetup and the annual DevOpsDays DC conference, where we invited our government to come and learn about the kinds of technologies we're gonna talk about today and to share their own adoption stories. In my professional capacity, I contributed language to the DOD DevSecOps reference guide and to the Defense Security/Cybersecurity Authorization Working Group, or DSAWG as it was commonly acronymed, where I coauthored the continuous ATO papers.
And, again, about this particular technology and this topic, I'm really excited to be here.
Fantastic. Thank you. Let's go to Adam next.
Thank you, Kirsten. Good afternoon, everyone. My name is Adam Colon, and I work at TSA as a senior cybersecurity specialist.
A lot of what I do here at TSA is focused on operational technology, the technology that's actually at the checkpoints.
But we're also doing a lot with our development life cycles. We're building out environments, and so we're getting more into containers. We're trying to figure out how we can containerize things while building out our zero trust framework internally. TSA actually recently had a zero trust day to provide more information to our people here at TSA, explain what we're doing, and kinda set the framework and the pathway for how we're gonna ultimately get to our final goals. So I'm really excited to be here today to talk more about how we're starting to implement our containerization process.
Fantastic. Thank you. And, Edmund, let's hear from you.
Good afternoon, folks. Edmund Cucco, Naval Information Warfare Center Atlantic. I am a software developer at heart; I've done DevSecOps for quite a while now. I am detailed to the Department of the Navy chief information officer, as well as OPNAV N2/N6, in all things related to DevSecOps and cybersecurity.
I'm also the process owner for the Rapid Assess and Incorporate Software Engineering, or RAISE, process for leveraging containerized applications, taking a more rapid approach to how we authorize those containerized applications into an operational environment. I also helped develop the zero trust implementation guide at the Department of the Navy, and I support the DOD Zero Trust Portfolio Management Office (ZT PfMO) in reviewing the ZTA implementation plans for all of the programs and DOD components. Over to you.
Thank you so much.
So jumping right into our first question. Trevor, I'd love it if you could kinda lead us on this first one here just to start off.
I wanna talk about what strategies should be employed to secure containerized applications from runtime threats, while also adhering to the zero trust model, especially in dynamic and scalable cloud environments.
Sure.
In previous roles, I was actually on the infrastructure and platforms team as a provider of capabilities and technologies. But if you don't mind, I'd like to kind of flip it around to where today in my capacity, I'm actually an application owner. So it's the very first time where I actually get to be a potential customer of zero trust optimal environments.
And so I'd like to offer my perspective on that, of what the value is and some of the strategies I'd like to see as a customer.
So, of course, the model is gonna be similar across organizations, but the implementation may differ. We wanna design these environments to be hyperscaled architectures, while also emphasizing the importance of our general basics these days: authentication, encryption, microsegmentation, and continuous monitoring.
I think these elements of an overall strategy can help with the creation of our system security plans at a much more expedited rate for continuous authorizations, all while delivering actual value to the business or mission objectives.
And from my customer perspective, while I don't typically see environments having a sort of common shared infrastructure, organizations will typically have multiple cloud account organizations and generally lots of virtual machines that aren't necessarily leveraging the cloud models or the cloud characteristics.
I like to remind us all that cloud computing technologies can be multi-cloud, public, private, or hybrid.
And while meeting these specific characteristics, it's actually the type of technology it is that makes it cloud computing, not necessarily where it lives. So as part of some of our strategies, I would like to see a lot more of these reminders, and sort of the user- or customer-centered experience of: am I going to be part of a zero trust optimal environment?
So a little bit of a flip side, but I wanted to share that perspective there.
Yeah. Thank you. I appreciate that. And based on what Trevor just shared, would any of our other panelists like to weigh in on how they would respond to the same question?
You can just jump off mute if you feel like sharing your thoughts.
Martin, you look like you wanna share some thoughts.
Okay. Sure. So, first of all, I think that was a great initial answer. And I think one of the things that I'm here for is to talk to the zero trust community about the fact that CISA feels very strongly about, and has really jumped behind, the administration priority on secure by design. And I think we believe that zero trust is really integral in establishing and maintaining these environments.
In particular, the project that I'm currently working on, which is trustworthy and responsible use of artificial intelligence in the federal government: these infrastructures that we're gonna be building AI systems on top of require the zero trust model. They're gonna leverage all those controls in order to assure these services and help to protect against some of the risks, not all, but some of the risks that are gonna be taken in order to realize the benefits of artificial intelligence. So hopefully we'll talk a little bit about that. But, again, I think understanding just the pervasive nature of these practices and how far and wide they reach, as far as their potential impact, is really important for the community to understand.
Thank you, Martin. And we have time for one more panelist comment before we move on to our next. Philip, do you wanna weigh in?
Yeah. So there's a couple of things. I think as a CISO, when you're building these environments, you have to be able to answer a number of questions.
For me, the first question is, are my workloads and resources free from vulnerabilities?
Are they free from misconfigurations?
And can I defend against a live attack in production?
And I think if you can answer in the positive to all three of those questions, then you're on your way to zero trust.
They're fairly simple questions, but very complicated and difficult to answer in the positive.
And that's where the frameworks come in. So the frameworks are where we can start to piece this together.
And, again, I think great answers so far, but I'm really interested specifically in the dynamic side, the automation, everybody's ability to create baselines which they can build on and enforce policy against.
I think that's really key. The ability to enforce a policy is true security. Right? If you're not enforcing, then it's simply information. It's not security.
That's my opinion.
Thank you. Thank you for sharing that. Anyone else, or should we move on to our next question in our set?
Okay.
Great. Well, with that said, Philip, you're actually the person I wanted to start with on this next question, which is: considering the principle of least privilege in zero trust architectures, how can agencies implement effective access controls and segmentation for containerized workloads to minimize the attack surface?
That actually follows on from what I just mentioned. I talked there about policy and policy enforcement. I think most of what we're talking about is the cloud, or at the very least, digital.
So in this space, everything is in code. Right? And if it's in code, then it stands to reason that you should be able to defend in code. And the way to do that is through policies.
So policy enforcement, I think, is the first thing that I wanna mention there. And, actually, modern applications, and we're talking about the orchestration capabilities that we get from technologies like Kubernetes, for example, give us a really good starting point. So within Kubernetes, there is something called the Pod Security Standards.
And the Pod Security Standards have a number of levels: privileged is one level, for example, then baseline, and then also restricted.
In zero trust, you can ignore all of them but restricted.
Now it's a little bit technical in terms of the answers, but it's actually a really important point, and I have a nice mnemonic to remember this by.
But first things first, when you look at restricted, what does that mean? It means that, first of all, all of your deployments will be required to be introspected via an admission controller.
Okay? These things can be free, or you can have a paid version.
So Aqua has an enforcer, KubeEnforcer, that acts as an admission controller, or you can use OPA. There's lots of options out there for this.
But what the admission controller does is introspect the POST request.
So if you ran kubectl apply with the deployment file, that will be introspected, and it will be looking for policy violations. Now the requirements for restricted that meet all of the principles of zero trust are quite stringent. So, for example, you're required to turn on seccomp.
Now seccomp is all about system calls. It's all about minimizing the system calls that your application will or will not use. So, for example, if you run the ls command in Linux out of the box, that uses around two hundred system calls. You turn on seccomp, and it's down to eleven.
You know, open, close, print to the screen, etcetera.
So you want to ensure that you're meeting principles like least privilege within zero trust, to ensure that you're only using the system calls that you need. Secondly, it's going to enforce things like a UID. So in restricted mode for zero trust, you want a UID greater than or equal to one thousand, meaning there is a nonprivileged Unix user. So even if the code itself can't be trusted, if there's no access to the code, or the user has no permissions to be able to do anything with that code, then that gives you a better chance of being able to defend against that attack in production.
And then there are other configurations as well. For example, read-only root file systems; it enforces that. It enforces that you drop all Linux capabilities.
And then there are other principles as well, like minimalism in a container deployment. This is the idea that you remove all unnecessary capabilities, say, curl, for example, which could be used as a mechanism to download a payload from a command and control server. If you remove that capability, then there's no way to do that, or no easy way to do that. So being able to remove all of those capabilities, reduce the permissions down to the absolute minimum, and allow no access to the underlying code that's being protected in this container configuration is the way that you want to architect towards zero trust.
So just to reiterate, because that was quite a long answer: you want to look at something like the Pod Security Standards in restricted mode, enable all of those configuration items via a policy, and then enforce.
And you'd be surprised how many of your applications will operate just the same, but there will be no over-permissive access to the underlying code that's being carried by that container.
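As a sketch of what Phil describes, the restricted profile can be both enforced at admission (here via the built-in Pod Security Admission controller) and satisfied in the workload spec. This is illustrative only, not an agency baseline; the namespace, pod, and registry names are hypothetical, and `readOnlyRootFilesystem` is included because Phil calls it out rather than because the standard mandates it.

```yaml
# Enforce the restricted Pod Security Standard on everything in this namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: workloads
  labels:
    pod-security.kubernetes.io/enforce: restricted
---
# A pod that satisfies restricted mode: seccomp on, non-root UID >= 1000,
# read-only root filesystem, no privilege escalation, all capabilities dropped.
apiVersion: v1
kind: Pod
metadata:
  name: app
  namespace: workloads
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: registry.example.gov/app:1.0   # hypothetical internal registry
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
```

A deployment that violates any of these settings would be rejected at the `kubectl apply` step, which is the enforcement-not-just-information point Phil makes above.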
Thank you for leading us with that question.
Are there any other comments from the other panelists? We probably have time for a couple more. Adam, we haven't heard from you yet. Where does your head go when you hear this?
I could probably easily just say whatever Philip said. Right? Because he answered it perfectly. But my mind typically goes to an even simpler explanation. It's role-based access control: like you said, defining the role specifically to what it needs to be.
Not just allowing by design, but allowing as an exception. Right? So that's kinda the idea we think about. And then, you mentioned it earlier, but segmentation of the containers. Right? If you can actually segment them and really isolate them, that microsegmentation strategy, which is really key to zero trust, goes in line with that least-privilege mindset to prevent lateral movement, to prevent the adversary, if they were to have access to the code, or if there were malicious code in there, from being able to propagate throughout. That's part of what we see a lot of times in environments that don't implement either proper role-based access or microsegmentation.
You know, once access is obtained, you can never fully prevent unauthorized access, but you can limit the exposure once it's been obtained. So have the mindset of role-based access and containerization, and utilize measures to isolate and segment what an adversary could do. Assume the code can be compromised.
And if it's compromised, look at what the secondary and tertiary effects of that are. Right? And if you can go further down the rabbit hole, think as an adversary, and look at what I can do after I've compromised it, now you can start looking at how to control that and isolate it from further spreading.
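Adam's deny-by-default, allow-by-exception posture maps naturally to a Kubernetes default-deny NetworkPolicy, with later, narrower policies adding explicit exceptions. A minimal sketch, with a hypothetical namespace name:

```yaml
# Default-deny microsegmentation: no ingress or egress for any pod in the
# namespace unless a later, more specific policy allows it as an exception.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: workloads
spec:
  podSelector: {}        # empty selector matches every pod in the namespace
  policyTypes:
  - Ingress
  - Egress
```

This is what blunts lateral movement: even if one container is compromised, it cannot reach its neighbors unless a policy explicitly says so.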
Thank you, Adam.
And, Edmond, do you want to weigh in on this as well? Or we also have an audience question specifically?
I just wanted to add a little bit of a government flavor to it. So during our reviews at the DOD level with the DOD ZT PfMO group, there is this construct of activities with which a program, a project, environment, infrastructure, whatever we wanna call it, must align under the DOD zero trust guidance. They call them zero trust activities. There is this construct of target activities, which is about ninety-one of them, and there is the advanced sixty-one.
So in total, we're looking at about a hundred and fifty-two zero trust activities. And to align how these activities are applicable to some of these platforms, environments, products, applications, any flavor for that matter, these products need to align with these expectations. Right? So the requirement from DOD is that by FY twenty-seven, we need to be in alignment with the DOD ZTA activities requirement, which, as I mentioned earlier, at the target level, at the minimum to be compliant with the policy, is ninety-one of these zero trust activities.
Some of you folks may not have seen what some of these activities are. Just do note, for example, that 1.7.1 talks about deny-user-by-default rules, right, or enterprise identity life cycle management, or having a monitoring capability, which is one six three, I think, for all of the users. Right? From a zero trust perspective, everything is considered a node or an edge device. Right?
Even from a user perspective. Right? You need to have that relationship from the moment a user accesses some trusted resource to the moment that connectivity to that resource is terminated, and how you terminate, how you get rid of nonpersistent accounts. So there is a lot of work on our side from a government perspective.
So in building those relationships with the partners from industry when we develop these solutions, we need to keep in mind that for us to align with DOD requirements, we need to look at these zero trust activities that DOD has published.
Thank you, Edmund.
So unless we have any other comments, flag me down if we do. I'm going to ask for our first poll question to please come on screen.
For our audience, again, this is for you to receive your CPE credit, so we want you to respond to this.
To what extent is your agency adopting immutable infrastructure practices to enhance cyber resilience in containerized environments? So please take a moment to respond to that.
And for our panelists, moving right along to our third question.
Martin, if we could start with you, I'd love to ask how can agencies harden their container workloads against vulnerabilities and ensure configurations are validated against strict security policies before deployment?
Boy, what a great question.
Thank you so much. So I think one of the things that CISA has done to make this easier, working obviously across civilian government and with industry, is to refine, and we actually issued, the second version of the zero trust maturity model. It's available on the CISA website, and it's actually about to celebrate its one-year birthday.
And within that maturity model, it talks about the different ways to make determinations about the application of all the different kinds of protections we've been talking about, based on the risks that you're trying to prevent and the impacts that you're concerned with. Obviously, one size doesn't fit all, so we wanna make sure that we don't just apply the exact same structure to every single workload. It gives you an opportunity for flexibility, but without omitting the kinds of controls that you wanna have in place.
So I think the first part is: please check out this really important work that was put together by my good colleagues, Sean Connolly and John Sims. And then, more broadly, I think folks have to recognize and understand that this is something that's expected.
From the vendor side, when you're trying to sell products to agencies, understand that they're gonna want answers to questions that provide assurance that they're acquiring solutions that have these kinds of protections in place, and they're gonna want evidence of that as well. And then, for those of us that are still maintaining our own infrastructure, we're gonna need to be able to demonstrate these capabilities. So it's kind of a long, roundabout answer, but there's a lot of resources out there, and there's a lot of guidance on how to do this.
And hopefully your agencies are getting the right resources in order to implement these capabilities. I know the CDM program has absolutely been uplifted to assist agencies and to adapt the control sets that they've been provided through that program. But hopefully agencies are also aware, and we're looking into our budgets to make sure that we augment that with all the things that we need in order to meet our obligations.
Thank you, Martin. Adam, would you like to respond to that question as well? Sure.
Kinda picking up right where I left off with the CDM. You know, having a process in place to continuously scan and monitor the containers, checking for misconfigurations, making sure they're adhering to strict security standards: that's what you need to continuously do, and that's the role the CDM process is gonna play. It's continuous monitoring, a continuous effort.
A lot of times when we're doing evaluations of containers, or evaluations of code, it's a snapshot in time as it's built. So if an organization does not have a process in place to continuously check as that code is updated, as perhaps new code is released on, say, GitHub or something like that, monitoring for that, checking vulnerabilities, having a process or pipeline in place so that whenever something's gonna get deployed, you're looking at it holistically as well. Right?
You don't want to just look at it piece by piece. How does it affect everything altogether?
Making sure the dependencies are also kept in line, to avoid any mistakes as well. So that's what you wanna see for hardening containers. You also wanna minimize the attack surface.
Right? Your code needs to be in place to perform a function. You don't want excessive code. Right?
But you need just enough for it to function. If it shouldn't be there, take it out. Right? The more you have, the more you need to fix.
The more you need to monitor, the more chances there are that some mistake is made. And while we do wanna get to where perhaps AI could help us with that, until organizations, especially government agencies, start really adopting these AI models to do a lot of this automated validation of code, sometimes we're having to use certain tools, and still do manual checks with those tools, and that can lead to mistakes. So minimizing the attack surface is key to really helping improve container security as a whole.
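The continuous scan-before-deploy pipeline Adam describes might look like the following CI fragment. This is a hedged sketch in GitHub Actions syntax; the job names are hypothetical, and Trivy is shown only as one example of a scanner that can fail a build on findings.

```yaml
# Hypothetical CI gate: every build is scanned, and high/critical
# findings fail the pipeline before anything can be deployed.
name: container-scan
on: [push]
jobs:
  build-and-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t app:${{ github.sha }} .
      - name: Scan image (Trivy shown; tool choice is illustrative)
        run: trivy image --exit-code 1 --severity HIGH,CRITICAL app:${{ github.sha }}
```

Running this on every push, rather than once at authorization time, is what turns the "snapshot in time" evaluation into continuous monitoring.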
Absolutely. Thank you. Trevor. Yes.
Yeah.
I love this question. Thank you for asking it.
To complement Martin's and Philip's points: it's not just policies in place at the technical level and system level, but also the wonderful guidance out there that our colleagues have put together.
An example I'd like to give is something fantastic that Platform One did, and that is layering container images. Containers are very good at being layered, and are meant to be layered. And what I mean by this is, the zero trust motto is: no trust, no problem. And I hold that very closely against what I call YOLO internet pulls, you only live once, that acronym, because pulling from the internet does scare me. So what Platform One does very well is pulling from its trusted container registries.
And then it pulls down a base image, and what it does in there is further layer it down: it adds its DOD-specific CAs, specific packages for common network services, and some other things potentially required by organizational policy. And then it further layers down an image. So say I have a Java application and I need Java eighteen. I know that I can pull down an image and trust that it's been layered multiple times for Java eighteen.
But Philip may have a dot net six application, and he can do the same thing and have that layer of trust pulled down so that he can develop and deploy his dot net six application.
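The layering flow Trevor describes might look like the following Dockerfile sketch. The registry, image, and package names are all hypothetical; the point is only that each layer builds FROM a trusted internal image, never from the public internet.

```dockerfile
# Layer maintained by a platform team, built FROM an org-approved base
# that already carries the DOD CAs and org-required packages.
FROM registry.example.gov/hardened/base:latest

# Add the Java 18 runtime for application teams (package name illustrative).
RUN dnf install -y java-18-openjdk-headless && dnf clean all

# Run as a nonprivileged user, matching the restricted-mode discussion above.
USER 1001
```

An application team would then build `FROM registry.example.gov/hardened/java18`, and a .NET team would layer a .NET 6 runtime the same way, each inheriting the trust of the layers beneath it.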
And then something else that I don't actually see spoken about often: sign the container images.
Containers are simply just a packaging format, and like all other software, they do need to be cryptographically validated before use.
So thanks for letting me share that.
Of course. Thank you. Edmund, go ahead.
Yeah. I just wanted to emphasize Trevor's point, which is spot on. Right? It's this necessity for auditability. Meaning, great, you've built a container image and you've deployed it as a container.
But then how do you connect what's in production to what was developed? Right? With the RAISE process, in order for us to leverage automation for risk assessment, one of our security gates is exactly what Trevor said: in order for this containerized application to deploy in an operational environment, it must be signed, to enable that zero trust principle that it was developed by who said it was developed, and that what's operating in production is exactly that signed container which was produced for that given deployed target audience.
Over. Thank you. And Phil, go ahead.
Yeah. I'm just going to add to what's just been said. It's a really important point. In fact, one of the first things I do when I do threat assessments is look for the signing of images.
And you won't be shocked to know that I almost never see it. Almost never.
And the second point I wanted to make there is that once you've signed it, then you have to have another job that validates the signature.
Okay? So it's not enough just to sign it. You have to validate the signature.
Now the benefit as well is that you can then use that hash as a label or a tag on the container itself. So when you push the image to your production ECR, if it's immutable, you'll be given that hash. That very same hash can then be used as a tag on the image, so that when you pull the image as part of your CI/CD process, you're gonna be pulling it based on that tag, and then you can track and trace that image with that unique ID throughout its entire life cycle.
So it's more than just validating and gaining trust that the image is the image that you built.
You can also then use that as a technique as part of your continuous monitoring and validation in production, which is very much a zero trust principle.
Thank you.
Okay. So we are going to call for our second poll question before we jump into our next question. So if we could get the poll up on the screen. There we go.
For our audience, this is your poll. Your second poll, which aspect of operationalizing zero trust poses the most significant challenge for your agency? So please take a moment to respond to that. And then moving right along to my next question, I'd love to start with Edmund on this one, and then we can hear from a few other panelists.
Edmond, in the pursuit of a zero trust architecture, what role does immutable infrastructure play in enhancing the security of containerized environments?
And along with that, how can agencies implement it to prevent drift and unauthorized changes?
Oh, that's a very good question. And specifically when it comes to the Department of the Navy, right, we got airplanes, we got ships afloat, as in weapon systems.
We got mobile units, like MRAPs, and garrisons.
And when it comes to how you make your infrastructure immutable, the challenge we have right now is that collectively, from a DOD-level perspective, we need to start thinking about a reference architecture as to what constitutes immutable infrastructure in this multi-dynamic ecosystem of different environments. Right? An afloat environment for a ship is different than a shore environment. On the shore environment, you also have things like business enterprise applications.
You also have information environments. You have weapon systems from both the ship and the shore perspective. So it is important to have immutable infrastructure in the context that if there is something bad in production, you should be able to provision new infrastructure, right, with, as Trevor mentioned earlier, a signed immutable infrastructure which can be deployed with some level of security associated with it. And whatever threat actor had access to your infrastructure in an operational environment, you have a high level of confidence that whatever vulnerabilities or issues were associated with your previous deployment, what you're deploying right now is a clean environment in support of that operational mission.
But again, it's really challenging because from a, let's say, using, you know, playbooks or Terraform or infrastructure as code to deploy these environments, we have this dynamic ecosystem. And, you know, we also have to start thinking about, you know, the the the sizing, you know, component of it, like and the connectivity perspective. And I'll give an example.
Within a couple of the last couple of years, we had to deploy, a Red Hat OpenShift infrastructure, right, from an infrastructure as code using the Red Hat OpenShift platform as a service solution. Well, if you deploy that into a, you know, high performance network, if I may, The whole thing was with, you know, complete in forty five minutes. We wanted to make sure that was signed, you know, was the right configuration model and everything. Then we go into, like, a DDL environment, which is the disconnected, you know, you know, environments or in a low bandwidth environment.
And the whole freaking thing would fall apart, because we would go from six megabytes downstream in our execution model to something like twenty-five kilobytes. And now the containers that were, like, two gigabytes, some of these large containers, trying to pull them over the network, it would fail. So the whole process failed. So again, I'm bringing this up as an example to emphasize that even though immutability is important for ensuring a more rigorous security process in an operational environment, when we create these infrastructure-as-code models, we also need to keep in mind the connectivity, the networking challenges, and the access controls associated with different environments.
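To make the scale of that problem concrete, here is a rough back-of-the-envelope sketch based on the bandwidth figures Edmund cites; the idealized formula ignores latency, retries, and registry protocol overhead, so real transfers would be even worse:

```python
def transfer_time_hours(image_bytes: int, bytes_per_second: float) -> float:
    """Idealized transfer time; ignores latency, retries, and protocol overhead."""
    return image_bytes / bytes_per_second / 3600

GiB = 1024 ** 3

# Shore-side, high-performance network: roughly 6 MB/s downstream
shore_hours = transfer_time_hours(2 * GiB, 6 * 1024 ** 2)

# DDIL / low-bandwidth link: roughly 25 kB/s
ddil_hours = transfer_time_hours(2 * GiB, 25 * 1024)

print(f"2 GiB image at ~6 MB/s:  {shore_hours:.2f} hours")  # minutes, not hours
print(f"2 GiB image at ~25 kB/s: {ddil_hours:.1f} hours")   # nearly a full day
```

Even one such image makes a cold deployment impractical over a disadvantaged link, which is why pre-staged local registries and smaller images matter so much in these environments.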
But again, there are different conversations that can take place around the immutability of an infrastructure. As a goal, it's definitely what we are pushing towards: to ensure we can successfully do a just-in-time deployment of this infrastructure when someone identifies that there is some vulnerability or threat actor within your existing infrastructure.
Thank you, Edmund. Phil, go ahead.
Yeah. I think I wanna give some guidance from the field here on this one, particularly in the private sector. We've been doing this for a long time in the private sector, specifically in banking, mostly because we don't want to forward fix.
Okay? No imperative changes. So no changes in production.
Everything needs to be redeployed, subject to scrutiny, pipelines, policy as code.
So even minor changes should be subject to that level of scrutiny, specifically in zero trust.
So we've had to figure out how to do this. Now, along comes the cloud and microservices, where we have replicas. We can do rolling upgrades.
It gives us the ability for the first time to use load balancers to point to live working resources whilst we upgrade small single executable binaries.
I think what Edmund's talking about there is larger infrastructure with complex monolithic applications, where actually having an engineer able to get access to them live is part and parcel of operating in that environment.
The trouble with that is that you can never really meet zero trust in that environment. So one of the single biggest challenges I fear for the government, specifically for the DoD and the environments they work in, those hostile environments, is: how do you build that with the current mothballed infrastructure?
I mean, the answer is that you've got to move to digital, to the cloud, to microservices.
All of the advantages that come with that will enable us to get to zero trust cleaner and much quicker.
But it's a complicated answer, and one that I fear has not been solved yet.
Thank you, Phil. And, Trevor, go ahead.
Yeah. I'd really like to touch a little deeper on the challenges Edmund brought up. Some of these Kubernetes-based distributions are intentionally resource greedy. We're not gonna be able to use our older hardware, like spinning-rust disks, and expect high performance, or even a level of performance that's just okay, out of these types of technologies. We need the ability to have fast reads and writes, fast caching, and fast distribution of these environments.
My experience with the type of environment Edmund touched upon is that those day-zero installation operations are super easy.
When we start implementing and trying to communicate in networks that are intentionally disadvantaged by some sort of design, we kinda remind ourselves: oh, maybe we didn't think about that. These three-gigabyte container images, we need to slim them down, maybe layer these containers further, make them smaller, so that these environments can pull and mesh and deploy and handle all these different applications.
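One way to catch this before day-zero operations, sketched below with hypothetical image names and a made-up size budget, is a pre-flight check that flags any image too large to pull over the disadvantaged link:

```python
def flag_oversized_images(images: dict[str, int], budget_bytes: int) -> list[str]:
    """Return the names of images whose size exceeds the budget we can
    realistically pull over a disadvantaged network link."""
    return sorted(name for name, size in images.items() if size > budget_bytes)

MiB = 1024 ** 2
manifest = {                        # hypothetical deployment manifest
    "mission-app": 3 * 1024 * MiB,  # 3 GiB: the kind of image that fails to pull
    "log-sidecar": 50 * MiB,
}
print(flag_oversized_images(manifest, budget_bytes=500 * MiB))  # ['mission-app']
```

Anything flagged here becomes a candidate for re-layering, slimming, or pre-staging before the deployment ever leaves the high-bandwidth environment.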
But also, because they're disconnected, have them in a state where they can be self-healing and self-correcting, and whenever they do connect back, they're able to pick up where they left off while also maintaining independence from their managing infrastructure.
So it's really following the principles of Linux, right, which is: do one thing and do it well. Then store that one thing in a container, run it as a microservice, and connect them via the network.
And that really does work. Right? And that's how you create immutability. I think the challenge that maybe even some of us on here can't conceive is how you refactor that monolithic application.
All those bloated resources that you talk about, into those single microservices, in the timescales, which are just two short years away. It really is a challenge, and we need more think tanks to discuss this. Because I think right now, if you've got a clean, new digital cloud application built with modern programming languages, it's easily achievable.
And as you say, it's super easy. But it's those old applications that worry me.
Philip, it brings me so much joy that you brought up the principle of Linux: do one thing and do it well. It's my favorite part of today.
Kirsten, what I like to think about for immutable infrastructure is that these are sort of designed secure by default, and I do believe the operating system of the future is immutable. I'll close it right there.
Edmund, I see your hand up. Go for it.
Yeah. I just wanted to kinda high-five Philip on that. Right? Self-healing is really important.
But a lot of times, and this is not a negative on the industry by any means, we have these tools that industry develops, which is great. Right?
We need the help. But a lot of times I feel there is a lack, to Philip's comment, of outside-the-box thinking from a ship perspective. I also have to think about whether my server vibrates enough, because there is vibration. Right?
Now there is some kind of disruption over the Wi-Fi, or a cable being rattled in the networking. So there are so many different aspects that, when a lot of these architectures for the new tools are being designed and developed, I'm encouraging the industry to start testing some of those capabilities with low bandwidth. Right? Like, if I'm doing infrastructure as code for deploying a Kubernetes orchestrator and a bunch of different tools to build my entire ecosystem for supporting the mission, can I deploy this entire infrastructure-as-code capability into a low-bandwidth environment, or over the air, pulling these images?
So, you know, put another angle on it, versus having a one-gigabit Ethernet plugged into my server and no problem with my Internet. So again, to Philip's comment, from a think-tank perspective, we need to start thinking that in support to the DoD, it's not just about zero trust and immutable infrastructure.
I don't need immutable infrastructure if I have an active hack in my production system and I need somebody to just block that port right there, right then, because I'm hot in the mission and I can't reach the shore. So again, these are some of the conversations that we collectively need to start thinking about.
Yes. Thank you.
Okay. So we probably have time for one more question before we end with takeaways. So, Adam, I'd love to hear from you on this one: given that some vulnerabilities may not manifest until runtime, what are the most effective strategies for runtime protection and policy enforcement to maintain a secure containerized environment?
Good question.
You know, some of the strategies for that kind of runtime protection involve behavioral monitoring, detecting and checking for unusual activity. It really falls into a whole line of heuristics: understanding what should be there and what shouldn't.
It's kinda one of those unfortunate situations where you may not know what's bad; you just know what's not good. So you have to do a little more investigation, get into that mindset, and deploy real-time security tools to actively block something that's happening, isolating containers. Automation is key with runtime.
Right? You can't rely on a human to deal with a runtime attack or runtime protection. You wanna have some automation in place, some tools in place, to quarantine and isolate and then notify. That way it can be dealt with at a more reasonable pace, rather than trying to be reactive as an environment continuously gets compromised, or until you can do some investigation.
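As a sketch of that automation-first pattern (the names and event types here are hypothetical, not any particular product's API): isolate immediately and automatically, then notify the humans, who investigate at their own pace.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class RuntimeEvent:
    container: str
    behavior: str  # e.g. "unexpected_exec", "outbound_to_unknown_host"

@dataclass
class AutoResponder:
    quarantined: set = field(default_factory=set)
    notifications: list = field(default_factory=list)

    def handle(self, event: RuntimeEvent) -> None:
        # Step 1 (automated, immediate): isolate the workload so the
        # compromise can't spread while no human is watching.
        self.quarantined.add(event.container)
        # Step 2: notify the security team, who can then investigate
        # at a reasonable pace instead of reacting to a live compromise.
        self.notifications.append(f"{event.container}: {event.behavior}")

responder = AutoResponder()
responder.handle(RuntimeEvent("payments-7f9c", "unexpected_exec"))
print(responder.quarantined)  # {'payments-7f9c'}
```

The design point is the ordering: containment is the automated step, and human judgment comes after, once the blast radius is bounded.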
So that's what you really need to do. And just make sure your policies are updated, establishing a feedback loop with the team, making sure that as things get detected, as things are found out, you reach back to the security team and say: hey, what can we do to make this better? We found this.
Can we block this in the future? What should we have done differently? Having that communication between the developers and the security team really helps prevent future threats like that.
Thank you. Phil, I think I saw your hand up next. Go ahead.
Yeah. I think runtime is super important. It's the last bastion. Right? It's the last line of defense if all of your other layers of controls fail.
Yeah. Yeah.
It's just a matter of persistence sometimes.
And even with every conceivable control, you've also got the insider threat. Right? So there is always a way; someone will find a way. And when they do, then you need to ensure you have exactly what Adam was saying there: you must have policies, you must have behavioral detection, and that behavioral detection must be automated. So it needs to be able to identify what an attack vector looks like.
You know? So use things like MITRE ATT&CK when you're looking at architecting your environments, to see what a typical attack looks like. Right? In most cases, the attacker needs access. They need to be able to get the payload onto your system, and they need to be able to execute a binary.
At any one of those points, you need to have a layer of control that would help you defend against that. But in the absence of absolutely anything else, you need drift prevention.
So that would be blocking a binary that executes but wasn't part of the original image you deployed. You need a way to identify that and then a way to block it, because that's how you stop a zero-day.
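A toy sketch of that drift-prevention idea, assuming we can hash every executable in the signed image at build time. In practice enforcement happens in the kernel or container runtime, not in application code like this:

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def build_allowlist(image_binaries: dict[str, bytes]) -> set[str]:
    """Digest every executable shipped in the signed, immutable image."""
    return {sha256(blob) for blob in image_binaries.values()}

def allow_exec(binary: bytes, allowlist: set[str]) -> bool:
    """Drift prevention: only binaries from the original image may run.
    Anything else, including a dropped zero-day payload, is blocked."""
    return sha256(binary) in allowlist

allowlist = build_allowlist({"/usr/local/bin/app": b"app-binary-bytes"})
print(allow_exec(b"app-binary-bytes", allowlist))  # True: part of the image
print(allow_exec(b"dropped-payload", allowlist))   # False: drift, block it
```

Note that this check needs no prior knowledge of what the attack looks like, which is exactly why it works against a zero-day: it only asks whether the code was there by design.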
So that's going one level beyond behavioral detection.
That's looking for code execution that wasn't there by design.
Mhmm. Trevor, go ahead.
Yeah. I really enjoyed that point, Philip. I think that sets up well for what I wanna jump into.
And I do believe runtime protection begins at the beginning of your SDLC, or your CI/CD. Right? Do we have the mechanisms in place? It's been mentioned: some sort of behavioral detection, some sort of policies in place. But for the security testing that's gonna scrutinize the change all along the way throughout the SDLC: are we tackling network isolation that prevents inter-container network traffic? Something to consider is: are we mixing our workload sensitivities across each other? Is my low application sharing a space with my high application?
Should it be that way? Do we have unbound network access across containers?
We touched upon untrusted images and unbounded administrative access, and a bit earlier on: do we trust the registries, do we trust where we're pulling our container images from? And insufficient authentication and authorization restrictions can really come back and haunt us.
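A minimal sketch of a registry-trust gate. The trusted registry names below are hypothetical placeholders, and the sketch assumes fully qualified image references, since bare names like `nginx` imply a default registry:

```python
TRUSTED_REGISTRIES = {
    "registry.internal.example",    # hypothetical in-house registry
    "registry.gov-mirror.example",  # hypothetical approved mirror
}

def registry_of(image_ref: str) -> str:
    """First path component of a fully qualified image reference."""
    return image_ref.split("/", 1)[0]

def is_trusted(image_ref: str) -> bool:
    return registry_of(image_ref) in TRUSTED_REGISTRIES

print(is_trusted("registry.internal.example/mission/app:1.4"))  # True
print(is_trusted("docker.io/library/nginx:latest"))             # False: untrusted pull
```

In a real pipeline this kind of check sits in an admission or policy layer, so an untrusted pull is rejected before the workload ever schedules.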
Thank you. And, Edmund, I couldn't tell if your hand was up from the last question or if you wanted to respond.
I'm so sorry. I'll add just very quickly, because I know the time is limited, and I apologize for that. But I just want to emphasize, to both Trevor and Philip's comments, that even from a container perspective, as a developer myself: just two days ago, I had to define my network layer to be custom, but then somehow, when I was trying to reach outside of it, I was having one of those moments as a developer. I spent three hours on it, the problem right in front of my face, and I couldn't figure out what was wrong with it until I involved someone else.
And I said, hey. What the heck am I doing wrong? I don't see it. Right?
So they were able to help me out. I brought that up as an example because even though containers sound cool when we deploy applications, there is a level of complexity associated with them. And then you introduce an orchestrator like Kubernetes, and then you deploy it into a disconnected environment, and then you apply upgrades, and there's backward compatibility.
So there are a lot of aspects that need to be considered through the process. And with humans, there's always a chance of errors, not intentional, but unintentional. So that's why leveraging things like immutable infrastructure, and configuration scripts that can be validated prior to deployment, will help us reach better results and outcomes at the end.
Absolutely. Thank you. And time for one more comment. Martin, would you like to weigh in?
Well, I mean, those are great answers.
Nothing on this one, but just a general thought: my day job at CISA is maintaining all of our enterprise technology requirements and working with all the programs, and there's a mission impact to all of these different security capabilities. Obviously we need to deploy them, but we also have to think about how they impact our legacy applications.
And oftentimes in these conversations with vendors and technology developers, you hear: well, the government doesn't articulate their requirements very well, and we don't know what we want.
Listening to the speakers today, I think we actually articulate them pretty well, and we know exactly what we want. One of the things I would encourage vendors to do is think about ways to deliver these capabilities to us in flexible ways, where they can be applied based on the variances across the infrastructure that we've been referring to.
And I think it's really interesting to hear the different ways folks like to deploy technology, the impacts those security controls would potentially have, but also the kinds of advantages they're trying to achieve. So in general, I think this is just a really interesting conversation. I hope folks are listening.
Yeah. Absolutely. Thank you for that. Okay. We are going to post our third and final poll question for our audience.
How is your agency incorporating security tooling and practices to support a shift left approach in the development life cycle for federal applications? So please be sure to respond to that. And there's one more poll at the very end, so please stick around.
That being said, we are coming to the end of our hour here, and I would love to wrap up with a round of key takeaways from this discussion. So, each of you, just one takeaway, and you have about two minutes to respond. Trevor, do you mind if we start with you?
Sure.
Could you remind me what the question was, or was this the overall takeaways?
Yes. Your key takeaway from today's discussion. What you think is the most important concept or idea that you thought about throughout this last hour here?
Oh, sure. Sorry. I was reading the Q&A poll and getting back to responding.
There's a good deal to take away from this. Everybody has shared a wealth of knowledge and a great deal of things to consider: what does our architecture look like? Are we not just shifting left, but designing with secure-by-default principles, while also meeting our zero trust maturity models? And policies: it always comes back to not just the technical, but the organizational and administrative policies in place.
My takeaway is: these are cool and these are fun, but not having the support, commercially or internally, can come back to haunt us. We may be building and designing really cool things, but we may not always spot the weaknesses we create. So having those types of assessments, having those different perspectives as part of the planning and design phases, is just as important, if not crucial.
Yeah. Wonderful. Thank you. Adam, what is your key takeaway from the discussion?
One of my key takeaways actually goes back to the very beginning, which was: sign your stuff. Sign the containers. Right?
That way you know things are the way they were designed to be. But one thing I'll add to that: don't just sign it with any code-signing certificate. Have something that's trusted in your environment: a third-party CA, or a CA of your own design. Everybody self-signing certificates kind of defeats the purpose. So it goes back to my mentality of zero trust: having a centralized trust platform and designing securely from the beginning.
That way you can trust the deployment of your containers; you can trust the deployment of the code. Because if you can't trust it from the very beginning, from its development, how can you trust what's in production? So that's one of my key takeaways here.
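As a deliberately simplified illustration of that trust decision (real X.509 chain validation is far more involved, and the CA name below is hypothetical): accept a signature only if its certificate was issued by a trust anchor the environment recognizes, and reject self-signed certificates outright.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Cert:
    subject: str
    issuer: str

TRUSTED_CAS = {"Agency-Root-CA"}  # hypothetical centralized trust anchor

def is_self_signed(cert: Cert) -> bool:
    return cert.subject == cert.issuer

def accept_signer(cert: Cert) -> bool:
    # Anyone can mint a self-signed cert, which defeats the purpose of
    # signing; require issuance by a CA the environment actually trusts.
    return not is_self_signed(cert) and cert.issuer in TRUSTED_CAS

print(accept_signer(Cert("build-pipeline", "Agency-Root-CA")))  # True
print(accept_signer(Cert("unknown-dev", "unknown-dev")))        # False: self-signed
```

The same gate, applied at admission time, is what lets a cluster refuse any container image whose signature doesn't chain back to the centralized trust platform Adam describes.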
Thank you for that. Martin, what about you? What's your one key takeaway?
Well, no, I have two key takeaways. The first is that I think Trevor might have suggested a new session for you on assessing your zero trust implementations. But generally, just to go back to what I've said a couple times, particularly with the focus I have on artificial intelligence risk management:
These infrastructures that underlie all of these applications are leveraged in ways that the designers and the maintainers of those infrastructures don't always understand. So there needs to be a lot of communication with those folks throughout this stack, if you will, of higher applications, to ensure that the security controls being deployed in this fashion are complementary and not having impacts, but also are meeting the requirements that those particular stakeholders have. We have these incredibly complex hierarchies and ecosystems within relatively small organizations at this point as a result. And so it requires zero trust professionals to be able to have those kinds of conversations really across the entire spectrum.
Well said. Thank you. Edmund, let's hear from you next.
Oh, thank you for that. It's great to meet some of you folks, and I'm hoping we will continue these conversations going forward. But I have almost a plea to the industry: as I mentioned earlier, containers are great, but we also have challenges when we're trying to develop a container that can deploy on something like Red Hat OpenShift, which requires a specific configuration for the user association, versus a traditional Kubernetes. So, in order for us to have this holistic view of designing and developing in one place and then deploying across multiple dynamic environments, i.e., tactical systems, weapon systems, shore and afloat, as well as edge devices, whatever there may be, like on subs and ships:
At some point, I'd like us collectively, and I think Philip mentioned this earlier, a coalition of the willing, to have a working group to start thinking about a reference architecture that focuses on the high-level definitions. We talk about OCI compliance and CNCF compliance, but then we have different vendors with different Kubernetes interfaces or configurations, so not all containers can deploy the same way across these different Kubernetes orchestrators. I'm hoping we collectively come up with this reference architecture to help us, not just from the government perspective but the industry, to align those expectations and find a cohesive way for us to work together and share some more lessons learned.
Thank you. And, Philip, we will end with you.
Yeah. Okay. So I've got a couple of things. First of all, I need to address that, and then I'll give you my final overall takeaway.
In the UK, I used to do AWS migration readiness assessments. This is where you'd go to a government agency and determine, from all of their application stacks, what would be suitable for public cloud and what wouldn't. What needs rearchitecting? What needs refactoring?
What needs rewriting?
So I have a lot of experience in that space, a lot of out-of-the-box thinking.
And sometimes when you look at zero trust, and I mentioned I started zero trust in twenty seventeen for a large UK bank, we thought about this a lot.
And part of the problem is if you look at the individual parts. This really concerned me, Edmund, earlier in the presentation today, when you talked about those ninety-something tasks.
That really concerns me, because I think if you try to meet every single one of them, you're just gonna be chasing errors, and I worry about that a lot. What you need to do is come up with an overall zero trust strategy. To get down to that niche level would be virtually impossible. Actually, it would be impossible. It's just too complex, too much.
And you'll fix it in one place, and you'll break it somewhere else. I absolutely guarantee it. What we need to do is come up with overarching capabilities that, as a generalism, meet the principles of zero trust.
And where you can't explicitly meet zero trust, there has to be mitigation that can be risk-accepted.
Otherwise, we're never gonna get there. And that's the worry for the deadlines, and that's the worry when someone has over-architected what we consider zero trust to be. So that's my real concern there, and I think we have to address it. I definitely wanna help with that, because I think I can.
In terms of a final takeaway, I wanted to bring up the whole software supply chain.
So, for example, you have to consider all of the components within an application, not just your own code. In modern development, we assemble code.
We use third party packages.
We have centers of excellence.
We use packages from colleagues and other teams.
We don't always understand the provenance, and we don't always use all of the packages. Right? We see this in our data: a significant portion of the packages that get imported remain on disk, but they never get loaded into memory.
Those functions are seldom called, and yet those vulnerabilities are carried through into production, even though they're inert and inactive.
If something happens to the code, or that function does get called, then that vulnerability is now live in your production environment, and you didn't know about it when you deployed.
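A crude way to see this effect, sketched with a hypothetical declared dependency; real tooling inspects the running process rather than a single interpreter's module table:

```python
import sys
import json  # a dependency this script actually loads into memory

def never_loaded(declared: set[str]) -> set[str]:
    """Declared dependencies that were never imported in this run:
    on disk (and carrying their CVEs), but inert in memory."""
    loaded_top_level = {name.split(".")[0] for name in sys.modules}
    return declared - loaded_top_level

# 'leftpad' stands in for a declared-but-unused package
print(never_loaded({"json", "leftpad"}))  # {'leftpad'}
```

The gap between what's declared and what's loaded is exactly the set of inert vulnerabilities Philip describes, and it's why runtime context matters when prioritizing CVE findings.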
It's critical that we apply continuous monitoring and validation in production, and that we have strategies like the ability to rotate workloads on a regular basis. You know, in banking, I got this down to every day: we rotated all of our clusters every single day. We started off by only rotating once every nine months, which coincided with the Kubernetes upgrade, every third upgrade.
So, you know, we have to get to those levels of practice.
But before we can get to that level, I need to go back to that first point I made: we have to address some of those concerns, because they are actually concerning.
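The rotation cadence Philip describes can be driven by a simple age check, sketched here with hypothetical cluster names; the actual rotation (draining and reprovisioning from the signed image) would be handled by orchestration tooling, not this script:

```python
from datetime import date, timedelta

def clusters_due_for_rotation(last_rotated: dict[str, date],
                              today: date,
                              max_age: timedelta = timedelta(days=1)) -> list[str]:
    """Clusters whose age exceeds the rotation budget (daily, in Philip's
    account; his team started at once every nine months and tightened)."""
    return sorted(name for name, when in last_rotated.items()
                  if today - when >= max_age)

fleet = {"payments": date(2024, 3, 1), "reporting": date(2024, 3, 2)}
print(clusters_due_for_rotation(fleet, today=date(2024, 3, 2)))  # ['payments']
```

Frequent rotation pairs naturally with immutable infrastructure: because every replacement comes from a signed image, rotation doubles as eviction of anything an attacker may have planted.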
Yes. Well, thank you so much for sharing that, and I wanna thank our partners at Aqua Security for making today possible. This is the last poll question: would you like to receive your CPE credit?
So be sure to respond to that.