Question 43 | CKS Challenge 3 - Kube-bench Cluster Hardening
Please note that the competition status for CKS Challenges is ended. Please do not submit a solution. It will not be scored.
This is a two-node Kubernetes cluster. Using the kube-bench utility, identify and fix all the issues reported as FAIL for the controlplane and worker node components.
Inspect the issues in detail by clicking on the icons of the interactive architecture diagram on the right and complete the tasks to secure the cluster. Once done click on the Check button to validate your work.
Do the tasks in this order:
- Download kube-bench from AquaSec and extract it under the /opt filesystem. Use the appropriate steps from the kube-bench docs to complete this task.
- Run kube-bench with the config directory set to /opt/cfg and /opt/cfg/config.yaml as the config file. Redirect the result to the /var/www/html/index.html file.
When this challenge was created, v0.6.2 of kube-bench was current, so we will download that version for best compatibility.
Download and extract under /opt
curl -L https://github.com/aquasecurity/kube-bench/releases/download/v0.6.2/kube-bench_0.6.2_linux_amd64.tar.gz | tar -xz -C /opt
Create directory for report
mkdir -p /var/www/html
Execute with given configuration instructions
/opt/kube-bench --config-dir /opt/cfg --config /opt/cfg/config.yaml > /var/www/html/index.html
Although we redirected the output to index.html, the file content is plain text and can be inspected like this
less /var/www/html/index.html
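Since only the failed checks need fixing, it can help to filter the report down to just those. This is a hedged helper, not part of the lab spec; the REPORT variable is only for illustration, and the path is the one used above.

```shell
# List only the failed checks from the kube-bench report.
REPORT=${REPORT:-/var/www/html/index.html}
if [ -f "$REPORT" ]; then
  grep '\[FAIL\]' "$REPORT" || echo "no FAIL entries"
else
  echo "report not found at $REPORT"
fi
```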
kubelet (node)
- Ensure that the --protect-kernel-defaults argument is set to true (node01)
ssh to node01
ssh node01
Edit the kubelet configuration
vi /var/lib/kubelet/config.yaml
Add the following line to the end of the file
protectKernelDefaults: true
Save and exit vi, then restart kubelet
systemctl restart kubelet
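Before leaving the node, it is worth confirming the change took effect. A hedged verification sketch — the paths assume a kubeadm-provisioned node, and the KUBELET_CFG variable is only for illustration:

```shell
# The setting should now be present in the kubelet config, and the
# kubelet service should be active again after the restart.
KUBELET_CFG=${KUBELET_CFG:-/var/lib/kubelet/config.yaml}
if [ -f "$KUBELET_CFG" ]; then
  grep -n 'protectKernelDefaults' "$KUBELET_CFG"
else
  echo "kubelet config not found at $KUBELET_CFG"
fi
systemctl is-active kubelet 2>/dev/null || echo "kubelet not active on this host"
```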
Return to controlplane node
kubelet (controlplane)
- Ensure that the --protect-kernel-defaults argument is set to true (controlplane)

Do exactly the same as above, but this time you don't need to ssh anywhere first.
kube-controller-manager
- Ensure that the --profiling argument is set to false

Edit the manifest
vi /etc/kubernetes/manifests/kube-controller-manager.yaml
Add the following to the list of arguments in the command section of the pod spec:
- --profiling=false
Save and exit from vi. The controller manager pod will restart in a minute or so
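A quick way to confirm the flag landed in the manifest — a hedged sketch, assuming a kubeadm setup; the CM_MANIFEST variable is only for illustration:

```shell
# The kubelet restarts the static pod with the new arguments shortly
# after the manifest file changes, so checking the file is enough.
CM_MANIFEST=${CM_MANIFEST:-/etc/kubernetes/manifests/kube-controller-manager.yaml}
if [ -f "$CM_MANIFEST" ]; then
  grep -n -- '--profiling=false' "$CM_MANIFEST"
else
  echo "manifest not found at $CM_MANIFEST"
fi
```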
kube-scheduler
- Ensure that the --profiling argument is set to false

Do the exact same steps as above, but with /etc/kubernetes/manifests/kube-scheduler.yaml
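For reference, a minimal sketch of how the edited command section might look. This is abridged and illustrative: a real kubeadm scheduler manifest carries more flags than shown here.

```yaml
# /etc/kubernetes/manifests/kube-scheduler.yaml (abridged sketch)
spec:
  containers:
  - command:
    - kube-scheduler
    - --kubeconfig=/etc/kubernetes/scheduler.conf  # existing flag, shown for context
    - --profiling=false                            # the added flag
```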
etcd
- Correct the etcd data directory ownership

1. View the report as discussed in the kube-bench section above, and find the FAIL at section 1.1.12.
2. Verify the data directory by checking the volumes section of the etcd pod's static manifest for the hostPath.
3. Correct the ownership as directed:
chown -R etcd:etcd /var/lib/etcd
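A hedged verification sketch: confirm the new ownership. The path is the hostPath taken from the etcd static pod manifest, and the ETCD_DIR variable is only for illustration.

```shell
# Print the owner and group of the etcd data directory; after the chown
# this should report etcd:etcd.
ETCD_DIR=${ETCD_DIR:-/var/lib/etcd}
if [ -d "$ETCD_DIR" ]; then
  stat -c '%U:%G' "$ETCD_DIR"
else
  echo "directory $ETCD_DIR not found on this host"
fi
```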
kube-apiserver
- Ensure that the --profiling argument is set to false
- Ensure PodSecurityPolicy admission controller is enabled
- Ensure that the --insecure-port argument is set to 0
- Ensure that the --audit-log-path argument is set to /var/log/apiserver/audit.log
- Ensure that the --audit-log-maxage argument is set to 30
- Ensure that the --audit-log-maxbackup argument is set to 10
- Ensure that the --audit-log-maxsize argument is set to 100
So this looks like a bunch of argument changes. It is, but there's a bit more work than that. If we point the apiserver at an audit log file, that file is expected to end up on the host machine, i.e. controlplane itself, so that it survives pod restarts. This means we also need to create a volume and volumeMount to satisfy this criterion, and the host directory must exist first.
The directory into which the log file will go needs to exist first
mkdir -p /var/log/apiserver
Edit the manifest file
vi /etc/kubernetes/manifests/kube-apiserver.yaml
Put in all the new arguments
- --profiling=false
- --insecure-port=0
- --audit-log-maxage=30
- --audit-log-maxbackup=10
- --audit-log-path=/var/log/apiserver/audit.log
- --audit-log-maxsize=100
Enable the admission controller by appending PodSecurityPolicy to the --enable-admission-plugins argument so it looks like
- --enable-admission-plugins=NodeRestriction,PodSecurityPolicy
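Putting the argument changes together, the edited command section might look roughly like this. An abridged sketch only: a real kubeadm apiserver manifest carries many more flags, which stay unchanged.

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (abridged sketch)
spec:
  containers:
  - command:
    - kube-apiserver
    - --enable-admission-plugins=NodeRestriction,PodSecurityPolicy
    - --profiling=false
    - --insecure-port=0
    - --audit-log-path=/var/log/apiserver/audit.log
    - --audit-log-maxage=30
    - --audit-log-maxbackup=10
    - --audit-log-maxsize=100
```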
Create a volume for the log file (add to existing volumes)
volumes:
- hostPath:
path: /var/log/apiserver/audit.log
type: FileOrCreate
name: audit-log
Create a volumeMount for this volume (add to existing volumeMounts)
volumeMounts:
- mountPath: /var/log/apiserver/audit.log
name: audit-log
Save and exit vi. Wait up to a minute for the API server to restart. Be aware of how to debug a crashed apiserver if you muck it up!
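If the apiserver does not come back, remember that kubectl is down along with it, so look at the files on the host instead. A rough troubleshooting sketch, assuming a kubeadm host:

```shell
# Static pod container logs survive on disk even when the apiserver
# (and therefore kubectl) is unavailable.
ls /var/log/pods 2>/dev/null | grep kube-apiserver || echo "no apiserver pod logs found"
# Once the apiserver is healthy, audit events should start appearing here:
tail -n 5 /var/log/apiserver/audit.log 2>/dev/null || echo "audit log not written yet"
```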