Question 17 | Audit Log Policy
Use context: kubectl config use-context infra-prod
Audit Logging has been enabled in the cluster with an Audit Policy located at /etc/kubernetes/audit/policy.yaml on cluster2-controlplane1.
Change the configuration so that only one backup of the logs is stored.
Alter the Policy so that it only stores logs:
From Secret resources, level Metadata
From "system:nodes" userGroups, level RequestResponse
After you've altered the Policy, make sure to empty the log file so it only contains entries according to your changes, for example using truncate -s 0 /etc/kubernetes/audit/logs/audit.log.
NOTE: You can use jq to render json more readable. cat data.json | jq
Answer:
First we check the apiserver configuration and change it as requested:
➜ ssh cluster2-controlplane1
➜ root@cluster2-controlplane1:~# cp /etc/kubernetes/manifests/kube-apiserver.yaml ~/17_kube-apiserver.yaml # backup
➜ root@cluster2-controlplane1:~# vim /etc/kubernetes/manifests/kube-apiserver.yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --audit-policy-file=/etc/kubernetes/audit/policy.yaml
    - --audit-log-path=/etc/kubernetes/audit/logs/audit.log
    - --audit-log-maxsize=5
    - --audit-log-maxbackup=1            # CHANGE
    - --advertise-address=192.168.100.21
    - --allow-privileged=true
...
NOTE: You should know how to enable Audit Logging completely by yourself, as described in the docs. Feel free to try this in another cluster in this environment.
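If you enable Audit Logging from scratch, the apiserver Pod also needs the policy file and the log directory mounted from the host. A minimal sketch of the extra manifest sections (volume names are just examples; the paths follow this question's setup):

# sketch: additional sections in kube-apiserver.yaml (assumed names/paths)
  volumeMounts:                              # under the kube-apiserver container
  - mountPath: /etc/kubernetes/audit/policy.yaml
    name: audit
    readOnly: true
  - mountPath: /etc/kubernetes/audit/logs
    name: audit-log
    readOnly: false
  volumes:                                   # at Pod spec level
  - name: audit
    hostPath:
      path: /etc/kubernetes/audit/policy.yaml
      type: File
  - name: audit-log
    hostPath:
      path: /etc/kubernetes/audit/logs
      type: DirectoryOrCreate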
Now we look at the existing Policy:
➜ root@cluster2-controlplane1:~# vim /etc/kubernetes/audit/policy.yaml
# /etc/kubernetes/audit/policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
We can see that this simple Policy logs everything on Metadata level. Audit rules are evaluated top-down and the first matching rule decides an event's level, so we put our specific rules first and a catch-all None rule last:
# /etc/kubernetes/audit/policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# log Secret resources audits, level Metadata
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets"]
# log node related audits, level RequestResponse
- level: RequestResponse
  userGroups: ["system:nodes"]
# for everything else don't log anything
- level: None
After saving the changes we have to restart the apiserver:
➜ root@cluster2-controlplane1:~# cd /etc/kubernetes/manifests/
➜ root@cluster2-controlplane1:/etc/kubernetes/manifests# mv kube-apiserver.yaml ..
➜ root@cluster2-controlplane1:/etc/kubernetes/manifests# watch crictl ps # wait for apiserver gone
➜ root@cluster2-controlplane1:/etc/kubernetes/manifests# truncate -s 0 /etc/kubernetes/audit/logs/audit.log
➜ root@cluster2-controlplane1:/etc/kubernetes/manifests# mv ../kube-apiserver.yaml .
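If the apiserver doesn't come back after moving the manifest in again, the container logs on the node usually show why (for example a typo in the manifest or the policy file). A quick check, assuming the standard kubeadm log locations:

➜ root@cluster2-controlplane1:/etc/kubernetes/manifests# watch crictl ps   # wait for apiserver to come back
➜ root@cluster2-controlplane1:/etc/kubernetes/manifests# crictl ps -a | grep kube-apiserver
➜ root@cluster2-controlplane1:/etc/kubernetes/manifests# tail /var/log/pods/kube-system_kube-apiserver*/kube-apiserver/*.log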
Once the apiserver is running again we can check the new logs and scroll through some entries:
➜ root@cluster2-controlplane1:/etc/kubernetes/manifests# cd /etc/kubernetes/audit/logs
➜ root@cluster2-controlplane1:/etc/kubernetes/audit/logs# cat audit.log | tail | jq
{
  "kind": "Event",
  "apiVersion": "audit.k8s.io/v1",
  "level": "Metadata",
  "auditID": "e598dc9e-fc8b-4213-aee3-0719499ab1bd",
  "stage": "RequestReceived",
  "requestURI": "...",
  "verb": "watch",
  "user": {
    "username": "system:serviceaccount:gatekeeper-system:gatekeeper-admin",
    "uid": "79870838-75a8-479b-ad42-4b7b75bd17a3",
    "groups": [
      "system:serviceaccounts",
      "system:serviceaccounts:gatekeeper-system",
      "system:authenticated"
    ]
  },
  "sourceIPs": [
    "192.168.102.21"
  ],
  "userAgent": "manager/v0.0.0 (linux/amd64) kubernetes/$Format",
  "objectRef": {
    "resource": "secrets",
    "apiVersion": "v1"
  },
  "requestReceivedTimestamp": "2020-09-27T20:01:36.238911Z",
  "stageTimestamp": "2020-09-27T20:01:36.238911Z",
  "annotations": {
    "authentication.k8s.io/legacy-token": "..."
  }
}
# Above we logged a watch action by OPA Gatekeeper for Secrets, level Metadata.
{
  "kind": "Event",
  "apiVersion": "audit.k8s.io/v1",
  "level": "RequestResponse",
  "auditID": "c90e53ed-b0cf-4cc4-889a-f1204dd39267",
  "stage": "ResponseComplete",
  "requestURI": "...",
  "verb": "list",
  "user": {
    "username": "system:node:cluster2-controlplane1",
    "groups": [
      "system:nodes",
      "system:authenticated"
    ]
  },
  "sourceIPs": [
    "192.168.100.21"
  ],
  "userAgent": "kubelet/v1.19.1 (linux/amd64) kubernetes/206bcad",
  "objectRef": {
    "resource": "configmaps",
    "namespace": "kube-system",
    "name": "kube-proxy",
    "apiVersion": "v1"
  },
  "responseStatus": {
    "metadata": {},
    "code": 200
  },
  "responseObject": {
    "kind": "ConfigMapList",
    "apiVersion": "v1",
    "metadata": {
      "selfLink": "/api/v1/namespaces/kube-system/configmaps",
      "resourceVersion": "83409"
    },
    "items": [
      {
        "metadata": {
          "name": "kube-proxy",
          "namespace": "kube-system",
          "selfLink": "/api/v1/namespaces/kube-system/configmaps/kube-proxy",
          "uid": "0f1c3950-430a-4543-83e4-3f9c87a478b8",
          "resourceVersion": "232",
          "creationTimestamp": "2020-09-26T20:59:50Z",
          "labels": {
            "app": "kube-proxy"
          },
          "annotations": {
            "kubeadm.kubernetes.io/component-config.hash": "..."
          },
          "managedFields": [
            {
              ...
            }
          ]
        },
        ...
      }
    ]
  },
  "requestReceivedTimestamp": "2020-09-27T20:01:36.223781Z",
  "stageTimestamp": "2020-09-27T20:01:36.225470Z",
  "annotations": {
    "authorization.k8s.io/decision": "allow",
    "authorization.k8s.io/reason": ""
  }
}
And in the one above we logged a list action by system:nodes for ConfigMaps, level RequestResponse.
Because each JSON entry is written on a single line in the file, we can also run some simple verifications on our Policy:
# shows Secret entries
cat audit.log | grep '"resource":"secrets"' | wc -l
# confirms Secret entries are only of level Metadata
cat audit.log | grep '"resource":"secrets"' | grep -v '"level":"Metadata"' | wc -l
# shows RequestResponse level entries
cat audit.log | grep '"level":"RequestResponse"' | wc -l
# shows RequestResponse level entries are only for system:nodes
cat audit.log | grep '"level":"RequestResponse"' | grep -v "system:nodes" | wc -l
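The same checks can also be written with jq instead of grep; a short sketch, assuming the field layout shown in the entries above:

# count Secret entries that are not of level Metadata (should be 0)
cat audit.log | jq -c 'select(.objectRef.resource == "secrets" and .level != "Metadata")' | wc -l
# count RequestResponse entries not caused by system:nodes (should be 0)
cat audit.log | jq -c 'select(.level == "RequestResponse" and (.user.groups | index("system:nodes") | not))' | wc -l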