KodeKloud CKS Challenge 2
Please note that the CKS Challenges competition has ended. Please do not submit a solution; it will not be scored.
A number of applications have been deployed in the `dev`, `staging` and `prod` namespaces. There are a few security issues with these applications.
Inspect the issues in detail by clicking on the icons of the interactive architecture diagram in the lab and complete the tasks to secure the applications. Once done, click on the `Check` button to validate your work.
Do the tasks in this order:
dockerfile
- Run as non-root (instead, use the correct application user)
- Avoid exposing unnecessary ports
- Avoid copying the `Dockerfile` and other unnecessary files and directories into the image. Move the required files and directories (`app.py`, `requirements.txt` and the `templates` directory) to a subdirectory called `app` under `webapp` and update the `COPY` instruction in the `Dockerfile` accordingly.
- Once the security issues are fixed, rebuild this image locally with the tag `kodekloud/webapp-color:stable`
The first three subtasks all involve cleaning up the Dockerfile.
```
cd /root/webapp
vi Dockerfile
```
Make the following changes to the Dockerfile.

We are asked to move the application to a subdirectory `app`, so change the `COPY` instruction in anticipation of this:

```
COPY ./app /opt
```
Run as non-root: a user has been created with `RUN adduser -D worker`, but we are not switching to it; instead the image switches to `root`. Change the line `USER root` to `USER worker`.
Avoid exposing unnecessary ports: we don't need port 22 (SSH) for the app, so delete the following lines:

```
## Expose port 22
EXPOSE 22
```
Now save and exit vi
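For reference, the cleaned-up Dockerfile should now look roughly like the sketch below. The base image, dependency installation and final command are assumptions (the lab's actual file may differ); only the three fixes above matter:

```
FROM python:alpine              # assumption: the lab's base image may differ

RUN pip install flask           # assumption: dependencies may be installed another way

# Application user (already present in the lab's Dockerfile)
RUN adduser -D worker

# Fix: copy only the app subdirectory instead of the whole build context
COPY ./app /opt

WORKDIR /opt

# Fix: run as the non-root application user (was: USER root)
USER worker

# Fix: only the application port remains; the EXPOSE 22 lines are deleted
EXPOSE 8080

CMD ["python", "app.py"]        # assumption: the actual start command may differ
```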
Move the app and related files to a new subdirectory `app`:

```
mkdir app
mv app.py app/
mv requirements.txt app/
mv templates app/
```
Rebuild the image:

```
docker build -t kodekloud/webapp-color:stable .
```
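Optionally, confirm the new tag is present locally before moving on:

```
# Lists the image if the build succeeded
docker image ls kodekloud/webapp-color:stable
```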
Return to the home directory:

```
cd ~
```
kubesec
- Fix issues with the `/root/dev-webapp.yaml` file which was used to deploy the `dev-webapp` pod in the `dev` namespace.
- Redeploy the `dev-webapp` pod once issues are fixed, using the image `kodekloud/webapp-color:stable`.
- Fix issues with the `/root/staging-webapp.yaml` file which was used to deploy the `staging-webapp` pod in the `staging` namespace.
- Redeploy the `staging-webapp` pod once issues are fixed, using the image `kodekloud/webapp-color:stable`.

When running `kubesec` we can use `jq` to extract the part of the JSON output that's relevant to identifying critical issues with the scanned manifest. Run the command without `| jq` and everything after it to see the whole report.
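For orientation, `kubesec` returns a JSON array with one entry per scanned object, and `.scoring.critical` is the list we select below. The snippet here is illustrative of the shape, not the lab's verbatim output:

```
[
  {
    "object": "Pod/dev-webapp.dev",
    "valid": true,
    "score": -30,
    "scoring": {
      "critical": [
        {
          "id": "CapSysAdmin",
          "selector": "containers[] .securityContext .capabilities .add == SYS_ADMIN",
          "reason": "CAP_SYS_ADMIN is the most privileged capability and should always be avoided"
        }
      ]
    }
  }
]
```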
dev-webapp.yaml
```
kubesec scan /root/dev-webapp.yaml | jq '.[] | .scoring.critical'
```
Note that `CapSysAdmin` and `AllowPrivilegeEscalation` are called out.

Edit the manifest:

- Remove the `SYS_ADMIN` capability
- Set `allowPrivilegeEscalation` to `false`
- Set the container's image to `kodekloud/webapp-color:stable` (which we built earlier)
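After these edits, the relevant part of the container spec should look something like this sketch (the container name and any remaining capabilities are assumptions; only `SYS_ADMIN` must be removed):

```
containers:
- name: webapp-color                    # assumption: the lab's container name may differ
  image: kodekloud/webapp-color:stable  # the image we built earlier
  securityContext:
    allowPrivilegeEscalation: false
    capabilities:
      add: ["NET_ADMIN"]                # assumption: keep any other listed capabilities, drop SYS_ADMIN
```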
Don't recreate the pod yet. There's more to do in the next stage.
staging-webapp.yaml
```
kubesec scan /root/staging-webapp.yaml | jq '.[] | .scoring.critical'
```
Note that this has exactly the same issues as `dev-webapp.yaml`. Perform exactly the same steps as for `dev-webapp.yaml`, this time editing `/root/staging-webapp.yaml`.
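Once both manifests are fixed, re-running the scans should report no critical issues:

```
# Should now print "null" (no critical entries) for both manifests
kubesec scan /root/dev-webapp.yaml | jq '.[] | .scoring.critical'
kubesec scan /root/staging-webapp.yaml | jq '.[] | .scoring.critical'
```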
dev-webapp
Ensure that the pod `dev-webapp` is immutable:

- This pod can be accessed using the `kubectl exec` command. We want to make sure that this does not happen. Use a startupProbe to remove all shells before the container startup. Use `initialDelaySeconds` and `periodSeconds` of `5`. Hint: For this to work you would have to run the container as root!
- Image used: `kodekloud/webapp-color:stable` (we have already done this above)
- Redeploy the pod as per the above recommendations and make sure that the application is up.
Check what shells are present in the container. Shell commands are found in the `/bin` directory and usually end with `sh`, e.g. `sh` itself, `bash` etc.

```
kubectl exec -n dev dev-webapp -- ls /bin | grep sh
```
Output:
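The lab's verbatim listing isn't captured here, but on a stock Alpine/BusyBox image the matches are typically:

```
ash
fdflush
sh
```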
`fdflush` isn't a shell, but the other two are. `ash` is a shell normally packaged with Alpine Linux.
Create a startup probe according to the specification, and ensure it runs as root so it has permission to delete the shells. Exec probes run as the container's user, and setting `runAsUser: 0` in the pod's securityContext overrides the `USER` instruction in the Dockerfile.
Edit `dev-webapp.yaml`. Add the following under the container's `securityContext`, if it is not already there:

```
runAsUser: 0
```
Insert the probe into the container spec:

```
startupProbe:
  exec:
    command:
    - rm
    - /bin/sh
    - /bin/ash
  initialDelaySeconds: 5
  periodSeconds: 5
```
Now recreate the running pod with everything we changed in the kubesec step and this one:

```
kubectl replace -f dev-webapp.yaml --force
```
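To confirm the pod is up and now effectively immutable, check its status and try to open a shell in it; the exec should fail because the shells have been removed:

```
kubectl get pod -n dev dev-webapp       # should show Running and READY 1/1
kubectl exec -n dev dev-webapp -- sh    # should now fail with an exec error
```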
staging-webapp
Ensure that the pod `staging-webapp` is immutable:

- This pod can be accessed using the `kubectl exec` command. We want to make sure that this does not happen. Use a startupProbe to remove all shells before the container startup. Use `initialDelaySeconds` and `periodSeconds` of `5`. Hint: For this to work you would have to run the container as root!
- Image used: `kodekloud/webapp-color:stable` (we have already done this above)
- Redeploy the pod as per the above recommendations and make sure that the application is up.
Follow the same steps as for `dev-webapp` above, adjusting `staging-webapp.yaml`, and recreate the pod.
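The recreate command is the same apart from the filename:

```
kubectl replace -f staging-webapp.yaml --force
```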
prod-web
- The deployment has a secret hardcoded. Instead, create a secret called `prod-db` for all the hardcoded values and consume the secret values as environment variables within the deployment.
Examine the deployment manifest to see what these hardcoded values are:

```
kubectl get deployment -n prod prod-web -o yaml
```
We can see there are 3 environment variables with hardcoded values. Create a secret for these:

```
kubectl create secret generic prod-db -n prod \
    --from-literal DB_Host=prod-db \
    --from-literal DB_User=root \
    --from-literal DB_Password=paswrd
```
Edit the deployment and change the `env` section to get the values from the secret:

```
kubectl edit deployment -n prod prod-web
```

Replace the variables under the `env` block with:
```
- name: DB_User
  valueFrom:
    secretKeyRef:
      key: DB_User
      name: prod-db
- name: DB_Host
  valueFrom:
    secretKeyRef:
      key: DB_Host
      name: prod-db
- name: DB_Password
  valueFrom:
    secretKeyRef:
      key: DB_Password
      name: prod-db
```
Test this by pressing the `prod-web` button above the terminal. After you apply the network policy in the next step, this will no longer work.
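You can also sanity-check the change from the terminal (an optional verification, not required by the lab):

```
kubectl get secret -n prod prod-db                  # the secret exists with 3 data keys
kubectl rollout status deployment -n prod prod-web  # the edited deployment rolled out cleanly
```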
prod-netpol
- Use a network policy called `prod-netpol` that will allow traffic only within the `prod` namespace. All traffic from other namespaces should be denied.
Note that all namespaces have a predefined label `kubernetes.io/metadata.name`, which is very useful when creating namespace-restricted network policies.
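You can see this label on every namespace with:

```
kubectl get ns --show-labels
```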
```
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: prod-netpol
  namespace: prod
spec:
  podSelector: {}       # apply to all pods in the prod namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}   # any pod...
      namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: prod   # ...as long as it is in the prod namespace
```
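Save the manifest to a file (the filename below is arbitrary) and apply it:

```
kubectl apply -f prod-netpol.yaml
```

Pressing the `prod-web` button again should now fail, since that traffic originates outside the `prod` namespace.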
Once all the above tasks are completed, click the `Check` button.