KODE CLOUD CKS Challenge 3

Take me to the lab!

Please note that the competition status for CKS Challenges is ended. Please do not submit a solution. It will not be scored.

This is a two node kubernetes cluster. Using the kube-bench utility, identify and fix all the issues that were reported as failed for the controlplane and the worker node components.

Inspect the issues in detail by clicking on the icons of the interactive architecture diagram on the right and complete the tasks to secure the cluster. Once done click on the Check button to validate your work.


Do the tasks in this order:

  • Download kube-bench from AquaSec and extract it under the /opt filesystem. Use the appropriate steps from the kube-bench docs to complete this task.
  • Run kube-bench with config directory set to /opt/cfg and /opt/cfg/config.yaml as the config file. Redirect the result to /var/www/html/index.html file.

When this challenge was created, v0.6.2 of kube-bench was current, so we will download that version for best compatibility.

Download and extract under /opt

curl -L https://github.com/aquasecurity/kube-bench/releases/download/v0.6.2/kube-bench_0.6.2_linux_amd64.tar.gz | tar -xz -C /opt

Create directory for report

mkdir -p /var/www/html

Run kube-bench with the given configuration options

/opt/kube-bench --config-dir /opt/cfg --config /opt/cfg/config.yaml > /var/www/html/index.html

Although we redirected the output to index.html, the file content is plain text and can be inspected like this

less /var/www/html/index.html
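The full report is long, so a quick way to focus on what needs fixing is to filter for the failed checks. A minimal sketch, demonstrated on a sample file here (on the lab node, point grep at /var/www/html/index.html instead):

```shell
# Demo on a sample report; on the lab node, grep /var/www/html/index.html.
cat > /tmp/sample-report.txt <<'EOF'
[PASS] 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive
[FAIL] 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd
[FAIL] 4.2.6 Ensure that the --protect-kernel-defaults argument is set to true
EOF

# Show only the failed checks
grep '\[FAIL\]' /tmp/sample-report.txt
```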

kubelet (node)

  • Ensure that the --protect-kernel-defaults argument is set to true (node01)

ssh to node01

ssh node01

Edit the kubelet configuration

vi /var/lib/kubelet/config.yaml

Add the following line to the end of the file

protectKernelDefaults: true

Save and exit vi, then restart kubelet

systemctl restart kubelet
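To confirm the change took effect, check that the setting is present in the file and that the kubelet came back up. A sketch using a scratch copy of the config (on node01, grep the real /var/lib/kubelet/config.yaml and check `systemctl status kubelet`):

```shell
# Scratch copy standing in for /var/lib/kubelet/config.yaml on node01.
cat > /tmp/kubelet-config.yaml <<'EOF'
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
protectKernelDefaults: true
EOF

# The setting must appear at the top level of the file
grep 'protectKernelDefaults' /tmp/kubelet-config.yaml
```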

Return to controlplane node

kubelet (controlplane)

  • Ensure that the --protect-kernel-defaults argument is set to true (controlplane)

    Do exactly the same as above, but this time you don't need to ssh to anywhere first.

kube-controller-manager

  • Ensure that the --profiling argument is set to false

Edit the manifest

vi /etc/kubernetes/manifests/kube-controller-manager.yaml

Add the following to the list of arguments in the command section of the pod spec:

    - --profiling=false

Save and exit vi. The controller manager pod will restart in a minute or so

kube-scheduler

  • Ensure that the --profiling argument is set to false

Do the exact same steps as above, but with /etc/kubernetes/manifests/kube-scheduler.yaml

etcd

  • Correct the etcd data directory ownership
    • View the report as discussed in the kube-bench section above, and find the FAIL at section 1.1.12

    • Verify the data directory by checking the volumes section of the etcd pod static manifest for the hostPath.

    • Correct the ownership as directed

      chown -R etcd:etcd /var/lib/etcd
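To double-check that /var/lib/etcd really is the data directory, the hostPath can be read straight out of the etcd static manifest. A sketch against a sample of the relevant section (on the controlplane, grep /etc/kubernetes/manifests/etcd.yaml itself):

```shell
# Sample mirroring the volumes section of /etc/kubernetes/manifests/etcd.yaml.
cat > /tmp/etcd-volumes.yaml <<'EOF'
volumes:
- hostPath:
    path: /var/lib/etcd
    type: DirectoryOrCreate
  name: etcd-data
EOF

# The hostPath of the etcd-data volume is the directory to chown
grep 'path:' /tmp/etcd-volumes.yaml
```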

kube-apiserver

  • Ensure that the --profiling argument is set to false
  • Ensure PodSecurityPolicy admission controller is enabled
  • Ensure that the --insecure-port argument is set to 0
  • Ensure that the --audit-log-path argument is set to /var/log/apiserver/audit.log
  • Ensure that the --audit-log-maxage argument is set to 30
  • Ensure that the --audit-log-maxbackup argument is set to 10
  • Ensure that the --audit-log-maxsize argument is set to 100

So this looks like a bunch of argument changes. Well it is, but there's a bit more work than that. If we tell the apiserver to open a log at a given directory, then that directory is expected to be on the host machine, i.e. controlplane itself. This means we also need to create a volume and volumeMount to satisfy this criterion, and also the host directory must exist.

The directory into which the log file will go needs to exist first

mkdir -p /var/log/apiserver

Edit the manifest file

vi /etc/kubernetes/manifests/kube-apiserver.yaml

Put in all the new arguments

    - --profiling=false
    - --insecure-port=0
    - --audit-log-maxage=30
    - --audit-log-maxbackup=10
    - --audit-log-path=/var/log/apiserver/audit.log
    - --audit-log-maxsize=100

Enable the admission controller, by appending PodSecurityPolicy to the --enable-admission-plugins argument so it looks like

    - --enable-admission-plugins=NodeRestriction,PodSecurityPolicy

Create a volume for the log file (add to existing volumes)

    volumes:
    - hostPath:
        path: /var/log/apiserver/audit.log
        type: FileOrCreate
      name: audit-log

Create a volumeMount for this volume (add to existing volumeMounts)

    volumeMounts:
    - mountPath: /var/log/apiserver/audit.log
      name: audit-log

Save and exit vi. Wait up to a minute for the API server to restart. Be aware of how to debug a crashed apiserver if you muck it up!