Open Policy Agent (OPA)
It is an extension we can install in Kubernetes that allows us to create custom policies.
As we saw earlier, a request first needs to be authenticated to identify who is making it, then it goes through RBAC authorization to determine whether the requester has permission to perform the requested action, and only then does it reach admission control, which is where OPA operates.
Open Policy Agent (OPA) is an open-source policy control mechanism that allows defining, managing, and applying policies centrally across various systems, including Kubernetes. In the Kubernetes context, OPA is generally used as a Policy-as-Code solution to define rules that control the behavior of resources within the cluster, such as Pods, Deployments, ConfigMaps, etc.
How OPA works in Kubernetes
- OPA can be implemented as a dynamic Admission Controller in Kubernetes. This means it intercepts API requests arriving at the Kubernetes API server, evaluating them against the defined policies before resources are actually created or modified in the cluster.
- OPA receives the API request, evaluates whether it complies with the policies (for example, security restrictions, naming standards, resource limits), and then allows or blocks the operation, returning a response to Kubernetes.
- Policies in OPA are written in a declarative language called Rego. With Rego, you can define logical expressions to determine whether a particular operation is allowed or denied.
- OPA works with JSON/YAML. It doesn't know what Pods, Deployments, or any other Kubernetes resources are; it only operates on JSON or YAML data.
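For illustration, here is an abbreviated sketch (with hypothetical values) of the kind of document the API server hands to OPA for a Pod creation, shown as YAML. Under Gatekeeper, the request portion is what Rego sees as `input.review`:

```yaml
# Abbreviated, illustrative sketch of an admission request (hypothetical values).
# OPA only sees this structured data, not "a Pod" as a Kubernetes concept.
kind: AdmissionReview
request:
  operation: CREATE
  object:              # exposed to Gatekeeper policies as input.review.object
    kind: Pod
    metadata:
      name: nginx
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.27
```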
Policy examples include:
- Blocking the creation of Pods that don't have a defined CPU/memory request.
- Restricting the use of container images that aren't from an approved private registry.
- Ensuring that all network configurations follow specific security rules.
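As a taste of Rego, here is a hedged sketch of the image-registry example above, in the violation shape Gatekeeper uses (the package name and registry prefix are made up for illustration):

```rego
package k8sallowedrepos  # hypothetical package name

# Sketch: deny any container whose image does not come from the approved
# registry. The prefix "registry.example.com/" is a placeholder.
violation[{"msg": msg}] {
  container := input.review.object.spec.containers[_]
  not startswith(container.image, "registry.example.com/")
  msg := sprintf("image %v is not from the approved registry", [container.image])
}
```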
OPA Gatekeeper vs Kyverno
OPA Gatekeeper uses OPA under the hood but makes it easier to use with Kubernetes: it installs custom resource definitions in the cluster so that we can create OPA policies as Kubernetes objects.
OPA Gatekeeper and Kyverno are two popular Policy-as-Code tools for Kubernetes, used to define, apply, and audit policies within Kubernetes clusters. While both aim to manage security and compliance policies, they differ in their approach, policy definition language, ease of use, and specific use cases.
- Policy Language:
  - OPA Gatekeeper: uses the Rego language to define policies. Rego is a declarative, expressive, and powerful language, but it has a learning curve, especially for those unfamiliar with logic programming.
  - Kyverno: uses YAML, the same format used to define resources in Kubernetes. Since most Kubernetes operators are already familiar with YAML, creating policies in Kyverno is generally more intuitive and faster.
- Policy Approach:
  - OPA Gatekeeper: is a generic policy mechanism, not specific to Kubernetes, that can be extended to other systems and contexts. Its policies are defined in Rego, and it can be used anywhere that supports OPA.
  - Kyverno: was designed specifically for Kubernetes. It focuses directly on the needs of Kubernetes operators, providing YAML-friendly syntax and Kubernetes-specific features like configuration copying, mutations, and resource generation.
- Features and Functionality:
  - OPA Gatekeeper:
    - Policy Validation: checks requests against policies before applying them.
    - Policy Enforcement: rejects or allows requests according to the defined rules.
    - Auditing: supports policy auditing, allowing identification of which existing resources violate policies.
  - Kyverno:
    - Policy Validation: similar to Gatekeeper, validates requests based on the defined rules.
    - Mutations: allows automatic modification of resources (e.g., adding labels or annotations, setting defaults).
    - Resource Generation: can automatically generate Kubernetes resources, such as ConfigMaps or Secrets, in response to events.
    - Auditing: supports auditing and generating policy compliance reports.
- Ease of Use and Integration:
  - OPA Gatekeeper: requires learning the Rego language and can be more complex to configure and use for new users. It's more flexible and generic, but this flexibility comes with greater complexity.
  - Kyverno: focused on Kubernetes, it's easier for teams already accustomed to YAML and Kubernetes. Integration with Kubernetes is straightforward, and policy development can be faster due to familiarity with the YAML format.
- Use Cases:
  - OPA Gatekeeper: ideal for organizations that want a more generic policy solution that can be applied to systems beyond Kubernetes, or that need complex and custom logic where the expressiveness of Rego is an advantage.
  - Kyverno: ideal for teams that want to focus directly on Kubernetes and need a simple, easy-to-implement solution. Excellent for scenarios where mutation or automatic resource generation is needed in addition to validation.
It's possible to use both at the same time, leveraging the best of each, but doing so needs to be very well executed:
- Performance: each tool adds some overhead to request processing in the cluster. If you use both, it's important to monitor the performance impact and adjust policies as needed.
- Maintenance: having two policy tools can increase maintenance and operational complexity, since you'll need to monitor and manage policies in two different systems.
- Policy Conflict: there's potential for conflicts if policies defined in the two tools overlap or contradict each other, so it's important to coordinate policies between them to avoid unwanted results.
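To make the YAML-based contrast concrete, here is a minimal sketch of a Kyverno ClusterPolicy that requires a `team` label on Pods (the policy and label names are hypothetical). Compare its shape with the Rego-based Gatekeeper templates shown later:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label   # hypothetical policy name
spec:
  validationFailureAction: Enforce   # deny requests that fail validation
  rules:
    - name: check-team-label
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "The label `team` is required."
        pattern:
          metadata:
            labels:
              team: "?*"   # any non-empty value
```

Notice that the whole policy is ordinary Kubernetes YAML: no separate template/constraint pair and no Rego are needed for this kind of check.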
Installing OPA Gatekeeper
The prerequisites for OPA Gatekeeper are that the ValidatingAdmissionWebhook and MutatingAdmissionWebhook admission plugins are enabled. In clusters created with kubeadm they are enabled by default (see the documentation), and the vast majority of clusters offered by cloud providers also have these plugins enabled.
Default plugins:
- CertificateApproval
- CertificateSigning
- CertificateSubjectRestriction
- DefaultIngressClass
- DefaultStorageClass
- DefaultTolerationSeconds
- LimitRanger
- MutatingAdmissionWebhook
- NamespaceLifecycle
- PersistentVolumeClaimResize
- PodSecurity
- Priority
- ResourceQuota
- RuntimeClass
- ServiceAccount
- StorageObjectInUseProtection
- TaintNodesByCondition
- ValidatingAdmissionPolicy
- ValidatingAdmissionWebhook
# Here only extra plugins are shown.
root@cks-master:~# k get pod -n kube-system kube-apiserver-cks-master -o yaml | grep admission
- --enable-admission-plugins=NodeRestriction
# Although not shown above, they were loaded.
root@cks-master:~# kubectl logs -n kube-system kube-apiserver-kind-control-plane| grep ValidatingAdmissionWebhook
I0918 19:16:46.978587 1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
https://open-policy-agent.github.io/gatekeeper/website/docs/install
We can install using helm or by applying the available manifest set.
# Release version
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/v3.17.0/deploy/gatekeeper.yaml
# Or using helm
helm repo add gatekeeper https://open-policy-agent.github.io/gatekeeper/charts
helm install gatekeeper/gatekeeper --name-template=gatekeeper --namespace gatekeeper-system --create-namespace
root@cks-master:~# k get ns
NAME STATUS AGE
default Active 14d
gatekeeper-system Active 2m12s
ingress-nginx Active 11d
kube-node-lease Active 14d
kube-public Active 14d
kube-system Active 14d
kubernetes-dashboard Active 11d
root@cks-master:~# k get -n gatekeeper-system pod,svc,deploy
NAME READY STATUS RESTARTS AGE
pod/gatekeeper-audit-5cf8bcb8f7-pq9tp 1/1 Running 0 3m7s
pod/gatekeeper-controller-manager-5dbdb9b595-2kprv 1/1 Running 0 3m7s
pod/gatekeeper-controller-manager-5dbdb9b595-846m8 1/1 Running 0 3m7s
pod/gatekeeper-controller-manager-5dbdb9b595-vch4t 1/1 Running 0 3m7s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/gatekeeper-webhook-service ClusterIP 10.103.64.162 <none> 443/TCP 3m7s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/gatekeeper-audit 1/1 1 1 3m7s
deployment.apps/gatekeeper-controller-manager 3/3 3 3 3m7s
# Gatekeeper custom resources at namespace level
root@cks-master:~# k api-resources --namespaced | grep gatekeeper
configs config.gatekeeper.sh/v1alpha1 true Config
constraintpodstatuses status.gatekeeper.sh/v1beta1 true ConstraintPodStatus
constrainttemplatepodstatuses status.gatekeeper.sh/v1beta1 true ConstraintTemplatePodStatus
expansiontemplatepodstatuses status.gatekeeper.sh/v1beta1 true ExpansionTemplatePodStatus
mutatorpodstatuses status.gatekeeper.sh/v1beta1 true MutatorPodStatus
# Gatekeeper custom resources at cluster level
root@cks-master:~# k api-resources --namespaced=false | grep gatekeeper
expansiontemplate expansion.gatekeeper.sh/v1beta1 false ExpansionTemplate
providers externaldata.gatekeeper.sh/v1beta1 false Provider
assign mutations.gatekeeper.sh/v1 false Assign
assignimage mutations.gatekeeper.sh/v1alpha1 false AssignImage
assignmetadata mutations.gatekeeper.sh/v1 false AssignMetadata
modifyset mutations.gatekeeper.sh/v1 false ModifySet
syncsets syncset.gatekeeper.sh/v1alpha1 false SyncSet
constrainttemplates templates.gatekeeper.sh/v1 false ConstraintTemplate
It's necessary to understand a bit about Dynamic Admission Control. An admission webhook works like an admission controller: if a custom webhook is registered, as OPA Gatekeeper does for us, every request to the API server passes through it.
Kubernetes offers two ways to hook this into the system:
- Validating Admission Webhook: only validates the request; it can approve or deny it.
- Mutating Admission Webhook: can modify the request, for example adding labels or annotations, or setting a minimum number of replicas for a deployment.
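Under the hood, Gatekeeper registers itself with the API server through such a webhook. Below is a trimmed, illustrative sketch of what that registration can look like (fields abbreviated; inspect the real object with `kubectl get validatingwebhookconfigurations`):

```yaml
# Trimmed sketch of a Gatekeeper-style webhook registration (not the full object).
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: gatekeeper-validating-webhook-configuration
webhooks:
  - name: validation.gatekeeper.sh    # the name seen in the denial messages
    clientConfig:
      service:
        name: gatekeeper-webhook-service   # the Service installed earlier
        namespace: gatekeeper-system
        path: /v1/admit                    # path assumed for illustration
    rules:
      - apiGroups: ["*"]
        apiVersions: ["*"]
        operations: ["CREATE", "UPDATE"]
        resources: ["*"]
    sideEffects: None
    admissionReviewVersions: ["v1"]
```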
Let's create an initial DENY ALL rule for everything and see what happens.
We need to create a template that defines constraints. Constraints depend on a template.
First we create the template; in this template we'll create the custom resource definition that will give us the kind to be used to create a constraint, as well as the parameters that will be defined in the constraint. If we try to create the constraint directly, we'll get an error because the CRD for this kind doesn't exist. Let's test.
Defining a constraint.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAlwaysDeny # Where does this come from? For now we're making it up
metadata:
  name: pod-always-deny
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    message: "ACCESS DENIED!"
root@cks-master:~# vim constraint.yaml
root@cks-master:~# k apply -f constraint.yaml
error: resource mapping not found for name: "pod-always-deny" namespace: "" from "constraint.yaml": no matches for kind "K8sAlwaysDeny" in version "constraints.gatekeeper.sh/v1beta1"
ensure CRDs are installed first
A template creates the custom resource definition for constraints. Constraints just tell us which resource we're acting on and some other things, but the violation logic is in the template.
Now let's create a template that defines a constraint.
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8salwaysdeny # The same name will be used in Rego.
spec:
  crd:
    spec:
      names:
        kind: K8sAlwaysDeny # The constraint kind we're defining in this template
      validation:
        # Schema for the `parameters` field
        openAPIV3Schema:
          properties:
            message:
              type: string # Note that in the constraint we have parameters.message, which is defined here.
  # This block uses the Rego language, used by OPA. Here we define the violation rules.
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8salwaysdeny
        violation[{"msg": msg}] {
          1 > 0
          msg := input.parameters.message
        }
In the targets block we use the Rego language (a language designed for writing policies), and the package name references the template's own metadata.name.
In the violation rule we build a msg value that will be displayed if the conditions are met. The violation is thrown only if all conditions in the rule body are true. At the moment we have just one simple condition, 1 > 0, which is always true; since there are no other conditions, the violation always fires, every Pod is denied, and the message is returned.
If you didn't understand, that's okay, let's create and verify.
root@cks-master:~# vim template.yaml
root@cks-master:~# cat template.yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8salwaysdeny
spec:
  crd:
    spec:
      names:
        kind: K8sAlwaysDeny
      validation:
        openAPIV3Schema:
          properties:
            message:
              type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8salwaysdeny
        violation[{"msg": msg}] {
          1 > 0
          msg := input.parameters.message
        }
root@cks-master:~# k apply -f template.yaml
constrainttemplate.templates.gatekeeper.sh/k8salwaysdeny created
root@cks-master:~# k get constrainttemplates
NAME AGE
k8salwaysdeny 22s
# Searching for the created resource
root@cks-master:~# k get K8sAlwaysDeny
No resources found
# Creating the resource we couldn't create before.
root@cks-master:~# k apply -f constraint.yaml
k8salwaysdeny.constraints.gatekeeper.sh/pod-always-deny created
root@cks-master:~# k get K8sAlwaysDeny
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
pod-always-deny deny
# Let's try to create a pod now.
root@cks-master:~# k run nginx --image nginx
Error from server (Forbidden): admission webhook "validation.gatekeeper.sh" denied the request: [pod-always-deny] ACCESS DENIED!
Only new Pods are affected: the rule applies from the moment it's created and doesn't touch what already exists. But if a ReplicaSet scales up and needs to create new Pods for a Deployment, or a Pod is recreated in any other way, the request will be denied.
root@cks-master:~# k get pods
NAME READY STATUS RESTARTS AGE
app 2/2 Running 0 14h
root@cks-master:~# k get pod app -o yaml | kubectl replace -f - --force
pod "app" deleted
Error from server (Forbidden): admission webhook "validation.gatekeeper.sh" denied the request: [pod-always-deny] ACCESS DENIED!
root@cks-master:~# k get pods
No resources found in default namespace.
Knowing this, we can observe in the status block that there are 21 violations, corresponding to Pods that already exist in the cluster; if they were recreated, they would not pass the rule.
# Total pods we have in the cluster
root@cks-master:~# k get pod -A --no-headers | wc -l
21
root@cks-master:~# k describe k8salwaysdeny pod-always-deny
Name: pod-always-deny
Namespace:
Labels: <none>
Annotations: <none>
API Version: constraints.gatekeeper.sh/v1beta1
Kind: K8sAlwaysDeny
Metadata:
Creation Timestamp: 2024-08-30T14:26:51Z
Generation: 1
Resource Version: 1402472
UID: aaca7120-6206-47f0-9144-6c5dd17561cd
Spec:
Enforcement Action: deny
Match:
Kinds:
API Groups:
Kinds:
Pod
Parameters:
Message: ACCESS DENIED!
Status:
Audit Timestamp: 2024-08-30T14:41:11Z
By Pod:
Constraint UID: aaca7120-6206-47f0-9144-6c5dd17561cd
Enforced: true
Id: gatekeeper-audit-5cf8bcb8f7-pq9tp
Observed Generation: 1
Operations:
audit
mutation-status
status
Constraint UID: aaca7120-6206-47f0-9144-6c5dd17561cd
Enforced: true
Id: gatekeeper-controller-manager-5dbdb9b595-2kprv
Observed Generation: 1
Operations:
mutation-webhook
webhook
Constraint UID: aaca7120-6206-47f0-9144-6c5dd17561cd
Enforced: true
Id: gatekeeper-controller-manager-5dbdb9b595-846m8
Observed Generation: 1
Operations:
mutation-webhook
webhook
Constraint UID: aaca7120-6206-47f0-9144-6c5dd17561cd
Enforced: true
Id: gatekeeper-controller-manager-5dbdb9b595-vch4t
Observed Generation: 1
Operations:
mutation-webhook
webhook
Total Violations: 21
Violations:
Enforcement Action: deny
Group:
Kind: Pod
Message: ACCESS DENIED!
Name: kubernetes-dashboard-metrics-scraper-5485b64c47-8jsxc
Namespace: kubernetes-dashboard
Version: v1
Enforcement Action: deny
Group:
Kind: Pod
Message: ACCESS DENIED!
Name: kubernetes-dashboard-kong-7696bb8c88-kw4s7
Namespace: kubernetes-dashboard
Version: v1
Enforcement Action: deny
Group:
Kind: Pod
Message: ACCESS DENIED!
Name: kubernetes-dashboard-auth-784d848dcb-zg8wq
Namespace: kubernetes-dashboard
Version: v1
Enforcement Action: deny
Group:
Kind: Pod
Message: ACCESS DENIED!
Name: kubernetes-dashboard-api-9567bc759-zfnk6
Namespace: kubernetes-dashboard
Version: v1
Enforcement Action: deny
Group:
Kind: Pod
Message: ACCESS DENIED!
Name: kube-scheduler-cks-master
Namespace: kube-system
Version: v1
Enforcement Action: deny
Group:
Kind: Pod
Message: ACCESS DENIED!
Name: kube-proxy-w2xzr
Namespace: kube-system
Version: v1
Enforcement Action: deny
Group:
Kind: Pod
Message: ACCESS DENIED!
Name: kube-proxy-5mx5m
Namespace: kube-system
Version: v1
Enforcement Action: deny
Group:
Kind: Pod
Message: ACCESS DENIED!
Name: kube-controller-manager-cks-master
Namespace: kube-system
Version: v1
Enforcement Action: deny
Group:
Kind: Pod
Message: ACCESS DENIED!
Name: kube-apiserver-cks-master
Namespace: kube-system
Version: v1
Enforcement Action: deny
Group:
Kind: Pod
Message: ACCESS DENIED!
Name: etcd-cks-master
Namespace: kube-system
Version: v1
Enforcement Action: deny
Group:
Kind: Pod
Message: ACCESS DENIED!
Name: coredns-7db6d8ff4d-hsmkr
Namespace: kube-system
Version: v1
Enforcement Action: deny
Group:
Kind: Pod
Message: ACCESS DENIED!
Name: coredns-7db6d8ff4d-7ktqv
Namespace: kube-system
Version: v1
Enforcement Action: deny
Group:
Kind: Pod
Message: ACCESS DENIED!
Name: canal-8nn2f
Namespace: kube-system
Version: v1
Enforcement Action: deny
Group:
Kind: Pod
Message: ACCESS DENIED!
Name: canal-665nh
Namespace: kube-system
Version: v1
Enforcement Action: deny
Group:
Kind: Pod
Message: ACCESS DENIED!
Name: calico-kube-controllers-75bdb5b75d-wfr98
Namespace: kube-system
Version: v1
Enforcement Action: deny
Group:
Kind: Pod
Message: ACCESS DENIED!
Name: ingress-nginx-controller-7d4db76476-xxqvt
Namespace: ingress-nginx
Version: v1
Enforcement Action: deny
Group:
Kind: Pod
Message: ACCESS DENIED!
Name: gatekeeper-controller-manager-5dbdb9b595-vch4t
Namespace: gatekeeper-system
Version: v1
Enforcement Action: deny
Group:
Kind: Pod
Message: ACCESS DENIED!
Name: gatekeeper-controller-manager-5dbdb9b595-846m8
Namespace: gatekeeper-system
Version: v1
Enforcement Action: deny
Group:
Kind: Pod
Message: ACCESS DENIED!
Name: gatekeeper-controller-manager-5dbdb9b595-2kprv
Namespace: gatekeeper-system
Version: v1
Enforcement Action: deny
Group:
Kind: Pod
Message: ACCESS DENIED!
Name: gatekeeper-audit-5cf8bcb8f7-pq9tp
Namespace: gatekeeper-system
Version: v1
Events: <none>
We can use this audit to check whether we'd run into problems when defining these policies in a cluster that's already operating.
If we change the condition 1 > 0 (true) to 1 > 2 (false), the rule body never succeeds, so the violation doesn't happen and everything is allowed.
root@cks-master:~# vim template.yaml
root@cks-master:~# cat template.yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8salwaysdeny
spec:
  crd:
    spec:
      names:
        kind: K8sAlwaysDeny
      validation:
        openAPIV3Schema:
          properties:
            message:
              type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8salwaysdeny
        violation[{"msg": msg}] {
          1 > 2
          msg := input.parameters.message
        }
# See above we changed the condition to always be false
root@cks-master:~# k run nginx --image nginx
pod/nginx created
# We have no more violations
root@cks-master:~# k describe k8salwaysdeny pod-always-deny
Name: pod-always-deny
Namespace:
Labels: <none>
Annotations: <none>
API Version: constraints.gatekeeper.sh/v1beta1
Kind: K8sAlwaysDeny
Metadata:
Creation Timestamp: 2024-08-30T14:26:51Z
Generation: 1
Resource Version: 1403247
UID: aaca7120-6206-47f0-9144-6c5dd17561cd
Spec:
Enforcement Action: deny
Match:
Kinds:
API Groups:
Kinds:
Pod
Parameters:
Message: ACCESS DENIED!
Status:
Audit Timestamp: 2024-08-30T14:49:11Z
By Pod:
Constraint UID: aaca7120-6206-47f0-9144-6c5dd17561cd
Enforced: true
Id: gatekeeper-audit-5cf8bcb8f7-pq9tp
Observed Generation: 1
Operations:
audit
mutation-status
status
Constraint UID: aaca7120-6206-47f0-9144-6c5dd17561cd
Enforced: true
Id: gatekeeper-controller-manager-5dbdb9b595-2kprv
Observed Generation: 1
Operations:
mutation-webhook
webhook
Constraint UID: aaca7120-6206-47f0-9144-6c5dd17561cd
Enforced: true
Id: gatekeeper-controller-manager-5dbdb9b595-846m8
Observed Generation: 1
Operations:
mutation-webhook
webhook
Constraint UID: aaca7120-6206-47f0-9144-6c5dd17561cd
Enforced: true
Id: gatekeeper-controller-manager-5dbdb9b595-vch4t
Observed Generation: 1
Operations:
mutation-webhook
webhook
Total Violations: 0 # <<<<<
Events: <none>
Clean up the constraints for the next scenario.
root@cks-master:~# k delete -f constraint.yaml
root@cks-master:~# k delete -f template.yaml
root@cks-master:~# rm -rf constraint.yaml template.yaml
Let's create a policy where all namespaces need to have the cks label. Let's follow the steps and explain along the way.
root@cks-master:~# vim template.yaml
# The idea is to create a general template that requires labels from objects and use this template for different things.
root@cks-master:~# cat template.yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels # The name of the kind to be created
      validation:
        # Schema for the `parameters` field
        openAPIV3Schema:
          properties:
            # labels is an array of strings that will need to exist in the constraint.
            labels:
              type: array
              items:
                type: string
  # We'll analyze this a bit further down, after the explanation.
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels
        violation[{"msg": msg, "details": {"missing_labels": missing}}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("you must provide labels: %v", [missing])
        }
# Here we'll create two constraints with the same kind, one for namespaces and one for pods that implement the same thing but for different Kubernetes objects.
root@cks-master:~# vim constraint-ns.yaml
root@cks-master:~# cat constraint-ns.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-cks
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["cks"]
root@cks-master:~#
root@cks-master:~# vim constraint-pod.yaml
root@cks-master:~# cat constraint-pod.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: pod-must-have-cks
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    labels: ["cks"]
Let's analyze the template in the target part.
targets:
  - target: admission.k8s.gatekeeper.sh
    rego: |
      package k8srequiredlabels
      violation[{"msg": msg, "details": {"missing_labels": missing}}] {
        provided := {label | input.review.object.metadata.labels[label]}
        required := {label | label := input.parameters.labels[_]}
        missing := required - provided
        count(missing) > 0
        msg := sprintf("you must provide labels: %v", [missing])
      }
Again, violation produces an object with msg and details, where msg is defined by msg := sprintf("you must provide labels: %v", [missing]) and details carries missing_labels: missing, with missing being the set of missing labels.
- input: the global variable in Rego that contains the data being evaluated.
- input.review: within the Gatekeeper context, the standard path to the request being reviewed, i.e., the object submitted to the admission controller.
- input.review.object: the Kubernetes resource being created, modified, or deleted. It can be a Pod, a Namespace, or any other resource being sent to the Kubernetes API.
- input.review.object.metadata.labels: the label map we're inspecting; iterating with [label] yields the label keys.
- {label | ...}: a Rego set comprehension, building a set from the input.
- input.parameters: fetches the list defined in the constraint.

Now looking at how the variables get their values:
- provided: the set of label keys from the request. If it's a Namespace, they come from the Namespace object; if it's a Pod, from the Pod's labels.
- required: the set built from the list of labels in the constraint for that object type.
- missing: the set difference required - provided; if every required label was provided, this set is empty.
- Finally, count(missing) > 0 checks whether any label is still missing; if so, the request is denied and the parameterized message is sent saying which labels are missing.
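To trace the set logic by hand, here is a small Rego sketch with hypothetical label values (you can paste it into the Rego Playground):

```rego
package example

# Hypothetical data to trace the set logic by hand.
provided := {"app", "env"}       # label keys present on the object
required := {"cks", "env"}       # label keys demanded by the constraint
missing := required - provided   # set difference: only "cks" is missing

# count(missing) is 1, which is > 0, so the violation rule would fire
# and report that the "cks" label must be provided.
```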
Let's create and check violations in a simpler way.
root@cks-master:~# k create -f template.yaml
constrainttemplate.templates.gatekeeper.sh/k8srequiredlabels created
root@cks-master:~# k create -f constraint-ns.yaml
k8srequiredlabels.constraints.gatekeeper.sh/ns-must-have-cks created
root@cks-master:~# k create -f constraint-pod.yaml
k8srequiredlabels.constraints.gatekeeper.sh/pod-must-have-cks created
# Wait a bit and TOTAL-VIOLATIONS will appear.
root@cks-master:~# k get k8srequiredlabels
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
ns-must-have-cks deny 7
pod-must-have-cks deny 22
# Violations are in all pods and all namespaces because none of them have this label defined.
root@cks-master:~# k get pod -A --no-headers | wc -l
22
root@cks-master:~# k get ns --no-headers | wc -l
7
# If we put a label on a namespace will it be reduced? Yes. It can be any value for the label, as the rule only expects the label to exist.
root@cks-master:~# k label namespaces default cks=true
namespace/default labeled
root@cks-master:~# k describe ns default
Name: default
Labels: cks=true
kubernetes.io/metadata.name=default
Annotations: <none>
Status: Active
No resource quota.
No LimitRange resource.
root@cks-master:~# k get k8srequiredlabels
ns-must-have-cks pod-must-have-cks
root@cks-master:~# k get k8srequiredlabels ns-must-have-cks
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
ns-must-have-cks deny
# Let's force the namespace to be required to have two labels, cks and team.
root@cks-master:~# vim constraint-ns.yaml
root@cks-master:~# cat constraint-ns.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
name: ns-must-have-cks
spec:
match:
kinds:
- apiGroups: [""]
kinds: ["Namespace"]
parameters:
labels: ["cks","team"]
root@cks-master:~# k replace -f constraint-ns.yaml
k8srequiredlabels.constraints.gatekeeper.sh/ns-must-have-cks replaced
root@cks-master:~# k create namespace test
Error from server (Forbidden): admission webhook "validation.gatekeeper.sh" denied the request: [ns-must-have-cks] you must provide labels: {"cks", "team"}
# Now let's create a namespace with these labels.
root@cks-master:~# k create ns test -oyaml --dry-run=client > ns.yaml
root@cks-master:~# vim ns.yaml
root@cks-master:~# cat ns.yaml
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: test
  labels:
    cks: isnothard
    team: devops
spec: {}
status: {}
root@cks-master:~# k apply -f ns.yaml
namespace/test created
# Testing for pod
root@cks-master:~# k run pod --image=nginx
Error from server (Forbidden): admission webhook "validation.gatekeeper.sh" denied the request: [pod-must-have-cks] you must provide labels: {"cks"}
root@cks-master:~# k run pod --image=nginx --labels cks=amazing
pod/pod created
Delete the templates and constraints, and let's build another policy, this time checking whether a Deployment has a minimum replica count.
root@cks-master:~# k delete -f constraint-ns.yaml
k8srequiredlabels.constraints.gatekeeper.sh "ns-must-have-cks" deleted
root@cks-master:~# k delete -f constraint-pod.yaml
k8srequiredlabels.constraints.gatekeeper.sh "pod-must-have-cks" deleted
root@cks-master:~# k delete -f template.yaml
constrainttemplate.templates.gatekeeper.sh "k8srequiredlabels" deleted
root@cks-master:~# rm -rf template.yaml constraint-*
Here are the files we'll need.
root@cks-master:~# vim template.yaml
root@cks-master:~# cat template.yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8sminreplicacount
spec:
  crd:
    spec:
      names:
        kind: K8sMinReplicaCount
      validation:
        # Schema for the `parameters` field
        openAPIV3Schema:
          properties:
            min:
              type: integer
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sminreplicacount
        violation[{"msg": msg, "details": {"missing_replicas": missing}}] {
          provided := input.review.object.spec.replicas
          required := input.parameters.min
          missing := required - provided
          missing > 0
          msg := sprintf("you must provide %v more replicas", [missing])
        }
root@cks-master:~# vim constraint-rc.yaml
root@cks-master:~# cat constraint-rc.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sMinReplicaCount
metadata:
  name: deployment-must-have-min-replicas
spec:
  match:
    kinds:
      - apiGroups: ["apps"]
        kinds: ["Deployment"]
  parameters:
    min: 2
root@cks-master:~#
It's practically the same thing; the only difference is that instead of looking at metadata, we look at input.review.object.spec.replicas. Since it's not a list, we can compare this value directly against the constraint value, and we don't need the count function because we already have a number.
If the number of replicas is 3, missing = 2 - 3 = -1, which is not greater than zero, so no violation fires. If replicas is 2, missing = 2 - 2 = 0; since 0 > 0 is false, the rule body fails, no violation fires, and the Deployment is allowed. Only when replicas is below the minimum (for example 1, giving missing = 1) is the condition true and the request denied.
root@cks-master:~# k apply -f template.yaml
constrainttemplate.templates.gatekeeper.sh/k8sminreplicacount created
root@cks-master:~# k apply -f constraint-rc.yaml
k8sminreplicacount.constraints.gatekeeper.sh/deployment-must-have-min-replicas created
root@cks-master:~# k get k8sminreplicacount deployment-must-have-min-replicas
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
deployment-must-have-min-replicas deny 8
root@cks-master:~# k create deployment nginx --image=nginx
error: failed to create deployment: admission webhook "validation.gatekeeper.sh" denied the request: [deployment-must-have-min-replicas] you must provide 1 more replicas
root@cks-master:~# k create deployment nginx --image=nginx --replicas 2
deployment.apps/nginx created
It's worth mentioning this GitHub repository with templates and policies that are worth studying; it's a good starting point.
Rego Playground
When creating policies, this playground can be a great help, and it includes some examples for Kubernetes.