Open Policy Agent: Authorization in a Cloud Native World

Learn about Open Policy Agent (OPA) and how you can use it to control authorization, admission, and other policies in cloud native environments, with a focus on K8s.

What is Open Policy Agent (OPA)?

The Open Policy Agent (OPA) is a policy engine that automates and unifies the implementation of policies across IT environments, especially in cloud native applications. OPA was originally created by Styra and has since been accepted by the Cloud Native Computing Foundation (CNCF). OPA is offered under an open source license.

Organizations use OPA to automatically enforce, monitor, and remediate policies across all relevant components. You can use OPA to centralize operational, security, and compliance policies across Kubernetes, application programming interface (API) gateways, continuous integration / continuous delivery (CI/CD) pipelines, data protection, and more.

How Does Open Policy Agent Work?

OPA transfers management of service-level policies from individual applications to a centralized policy manager. The basic tenet of OPA is that access policies should be decoupled from business logic, and should never be hard-coded in a service, so that policy decisions are always separated from policy enforcement.

OPA policy rules

OPA policy rules can be defined according to the specific context of each service, including:

  • User level access and authorization
  • Infrastructure configuration
  • Auditing and testing
  • Application specific authorization solutions

Central policy management

OPA provides central policy management, which can be accessed through RESTful APIs with JSON over HTTP. OPA runs alongside application services. When a service needs to make a decision on a policy question, it contacts the relevant OPA API, receives a response, and applies the policy accordingly.

Technically, OPA can be deployed as an operating system daemon, a sidecar container, or a library. An important design principle of OPA is that both policies and the data they need are stored in memory on the relevant host. Because OPA is colocated with the service it protects, policy decisions are returned with minimal latency, without a round trip to a remote server.
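For illustration, here is a sketch of the JSON request a service might POST to OPA's Data API (for example, POST /v1/data/httpapi/authz/allow). The package path and input fields below are hypothetical, not taken from a real deployment:

```python
import json

# Build the JSON body a service would POST to OPA's Data API.
# The input fields (user, method, path) are illustrative assumptions.
def build_opa_query(user, method, path):
    return json.dumps({"input": {"user": user, "method": method, "path": path}})

body = build_opa_query("alice", "GET", "/finance/salary/alice")
print(body)

# OPA replies with a JSON document such as {"result": true};
# the service parses it and enforces the decision itself.
decision = json.loads('{"result": true}')
print(decision["result"])
```

The service, not OPA, remains responsible for enforcing the decision it receives.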

Rego query language

OPA provides a query language called Rego, based on Datalog. Rego provides a convenient way to define assertions and get clear, unambiguous decisions about policy questions.

Because Rego is a declarative language, policy authors can define what queries should return, instead of having to specify how queries are executed. This declarative style also allows OPA to optimize query evaluation to improve performance.

Using OPA in a Cloud Native Environment

Cloud-native environments are complex. There are many components to deploy, terminate, update, monitor, and secure. OPA can support your operation by providing capabilities for enforcing policies. 

Here are key tasks OPA can help you perform in a cloud native environment:

  • Application authorization—OPA uses a declarative policy language that lets you write and enforce authorization rules. It comes with a stack of tools that help you integrate policies into applications, and you can even let end users contribute policies for their own tenants. 
  • Kubernetes admission control—Kubernetes comes with a built-in mechanism for enforcing admission control policies. OPA can help you extend these basic capabilities. To do that, you deploy OPA as a validating or mutating admission controller. OPA can then ensure container images come only from corporate image registries, inject sidecar containers into pods, add specific annotations to your resources, and more. 
  • Service mesh authorization—OPA can help you regulate and control service mesh architectures. With OPA, you can add authorization policies directly into your service mesh. This can help you limit lateral movement across a microservice architecture and enforce compliance regulations.

Related content: read our guide to cloud native infrastructure

Open Policy Agent Examples

Examples are useful to understand how to use OPA in practice. Below are two examples showing how OPA can be used to secure Kubernetes.

Using OPA to Require Specific Kubernetes Labels

In Kubernetes, labels are used to determine how objects are grouped, and by extension:

  • Where workloads can run
  • How resources can communicate with others
  • Which users have permission to access which resources

Because labels are such a critical mechanism, it is important to have access controls, to prevent unauthorized parties or outsiders from manipulating labels. It is also important to avoid manual label entry, which is error-prone, and can lead to security issues and operational problems.

Here is a sample OPA policy that requires every Kubernetes resource to have a specific label in a specified format. The code in this example and the next was provided by Torin Sandall in his post on Kubernetes Admission Control policies.

package kubernetes.validating.existence

deny[msg] {
	not input.request.object.metadata.labels.department
	msg := "Every resource must have a department label"
}

deny[msg] {
	value := input.request.object.metadata.labels.department
	not startswith(value, "dcode-")
	msg := sprintf("Department code must start with `dcode-`; found `%v`", [value])
}
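To see what this policy accepts and rejects, here is a Python sketch that mirrors the two deny rules. It is an illustration of the logic only, not how OPA evaluates Rego:

```python
# Python re-implementation of the two deny rules above, for illustration.
# Takes a simplified admission request dict and returns the deny messages.
def deny_messages(request):
    labels = (request.get("object", {})
                     .get("metadata", {})
                     .get("labels", {}))
    dept = labels.get("department")
    if dept is None:
        # First rule: the department label must exist.
        return ["Every resource must have a department label"]
    if not dept.startswith("dcode-"):
        # Second rule: the label must use the required prefix.
        return [f"Department code must start with `dcode-`; found `{dept}`"]
    return []

print(deny_messages({"object": {"metadata": {"labels": {}}}}))
print(deny_messages({"object": {"metadata": {"labels": {"department": "dcode-42"}}}}))
```

A resource with no department label, or with a label that lacks the required prefix, is denied; a correctly labeled resource produces no deny messages.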

Using OPA to Secure Kubernetes Ingress

Kubernetes Ingress lets you expose specific services to traffic from outside the cluster, or deny access to them. Misconfigured Ingress resources can have several negative consequences:

  • Ingress settings allow administrators to expose services to the public Internet, and this often happens accidentally
  • Having excessive permissions on an Ingress can cause Kubernetes to spin up a large number of load balancers, increasing cloud costs
  • Incorrect Ingress configuration can result in new workloads sharing the same hostname as existing workloads—the new workloads can unintentionally take over existing traffic, causing service disruption and risking exposure of sensitive data 

Below is a sample OPA policy that ensures a new host does not use the same Ingress host as an existing host.

package kubernetes.validating.ingress

deny[msg] {
	is_ingress
	input_host := input.request.object.spec.rules[_].host
	some other_ns, other_name
	other_host := data.kubernetes.ingresses[other_ns][other_name].spec.rules[_].host
	[input_ns, input_name] != [other_ns, other_name]
	input_host == other_host
	msg := sprintf("Ingress host conflicts with ingress %v/%v", [other_ns, other_name])
}

input_ns = input.request.object.metadata.namespace

input_name = input.request.object.metadata.name

is_ingress {
	input.request.kind.kind == "Ingress"
	input.request.kind.group == "extensions"
	input.request.kind.version == "v1beta1"
}
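The conflict check can be sketched in Python for illustration. The data shapes below are hypothetical stand-ins for the ingress data OPA would see via data replication:

```python
# Illustrative host-conflict check mirroring the Rego policy above.
# `existing` maps (namespace, name) of each known ingress to its hosts,
# the same information OPA would receive through data replication.
def ingress_conflicts(new_ns, new_name, new_hosts, existing):
    msgs = []
    for (ns, name), hosts in existing.items():
        if (ns, name) == (new_ns, new_name):
            continue  # an ingress never conflicts with itself
        for host in new_hosts:
            if host in hosts:
                msgs.append(f"Ingress host conflicts with ingress {ns}/{name}")
    return msgs

existing = {("prod", "payments"): ["pay.example.com"]}
print(ingress_conflicts("dev", "test-app", ["pay.example.com"], existing))
```

A new ingress reusing a host already claimed in another namespace is reported as a conflict, while a fresh hostname passes cleanly.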

Open Policy Agent Gatekeeper: First Class Integration Between OPA and Kubernetes

OPA Gatekeeper is an open source project that provides first-class integration between Kubernetes and OPA. It provides the following capabilities:

  • OPA constraint framework—constraints are policy declarations written in Rego, a declarative query language. OPA uses them to determine whether instances of data violate an expected system state; resources that violate a constraint are rejected. You define a constraint by creating a constraint template, and then deploy constraints based on that template in a cluster. 
  • Audit functionality—OPA Gatekeeper lets you evaluate how replicated resources measure up against the constraints enforced in each cluster. Audits can be performed periodically to detect existing misconfigurations. OPA Gatekeeper stores violations, which are the result of audit operations, in the status field of each relevant constraint.
  • Data replication—in order to run audits, you need to replicate Kubernetes resources into OPA, where the resources can be evaluated against constraints. Constraints themselves may also require data replication in order to access cluster objects beyond the one under evaluation. To enable data replication, you create a sync configuration resource that specifies the resources you want to replicate into OPA.
  • Admission control validation—after you install all OPA Gatekeeper components in your cluster, the API server can trigger the Gatekeeper admission webhook. This webhook is responsible for processing admission requests when cluster resources are created, updated, or deleted.
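To show how constraint templates and constraints fit together, here is a sketch along the lines of the canonical required-labels example from the Gatekeeper documentation. The template defines a reusable K8sRequiredLabels kind; the constraint then applies it to namespaces (the names and schema come from that example, not this article):

```yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        # Parameters accepted by constraints of this kind
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("you must provide labels: %v", [missing])
        }
---
# A constraint instantiating the template: every Namespace must carry
# a "gatekeeper" label.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-gk
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["gatekeeper"]
```

The same template can back many constraints with different parameters, which is what makes the constraint framework reusable across clusters.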

Cloud Native Security with Aqua

The Aqua Cloud Native Security Platform empowers you to unleash the full potential of your cloud native transformation and accelerate innovation with the confidence that your cloud native applications are secured from start to finish, at any scale.

Aqua’s platform provides prevention, detection, and response automation across the entire application lifecycle to secure the build, secure cloud infrastructure and secure running workloads across VMs, containers, and serverless functions wherever they are deployed, on any cloud.

Secure the Cloud Native Build

Shift left security to nip threats and vulnerabilities in the bud, empowering DevOps to detect issues early and fix them fast. Aqua scans artifacts for vulnerabilities, malware, secrets and other risks during development and staging. It allows you to set flexible, dynamic policies to control deployment into your runtime environments.

Secure Cloud Native Infrastructure

Automate compliance and security posture of your public cloud IaaS and Kubernetes infrastructure according to best practices. Aqua checks your cloud services, Infrastructure-as-Code templates, and Kubernetes setup against best practices and standards, to ensure the infrastructure you run your applications on is securely configured and in compliance.

Secure Cloud Native Workloads

Protect VM, container and serverless workloads using granular controls that provide real-time detection and granular response, blocking only the specific processes that violate policy. Aqua leverages modern microservices concepts to enforce immutability of your applications in runtime, establishing zero-trust networking, and detecting and stopping suspicious activities, including zero-day attacks.

The Cloud Native Experts
"The Cloud Native Experts" at Aqua Security specialize in cloud technology and cybersecurity. They focus on advancing cloud-native applications, offering insights into containers, Kubernetes, and cloud infrastructure. Their work revolves around enhancing security in cloud environments and developing solutions to new challenges.