
Question 10 | Container Runtime Sandbox gVisor

Use context: kubectl config use-context workload-prod

Team purple wants to run some of their workloads more securely. Worker node cluster1-node2 has the container engine containerd already installed, and it's configured to support the runsc/gVisor runtime.

Create a RuntimeClass named gvisor with handler runsc.

Create a Pod that uses the RuntimeClass. The Pod should be in Namespace team-purple, named gvisor-test and of image nginx:1.19.2. Make sure the Pod runs on cluster1-node2.

Write the dmesg output of the successfully started Pod into /opt/course/10/gvisor-test-dmesg.


Answer:

We check the nodes and we can see that all are using containerd:

➜ k get node -o wide
NAME                     STATUS   ROLES           ...   CONTAINER-RUNTIME
cluster1-controlplane1   Ready    control-plane   ...   containerd://1.5.2
cluster1-node1           Ready    <none>          ...   containerd://1.5.2
cluster1-node2           Ready    <none>          ...   containerd://1.5.2

But only one of them, cluster1-node2, has containerd configured to work with the runsc/gVisor runtime.

Optionally, we can ssh into the worker node and check that containerd+runsc is configured:

ssh cluster1-node2

➜ root@cluster1-node2:~# runsc --version
runsc version release-20201130.0
spec: 1.0.1-dev

➜ root@cluster1-node2:~# grep runsc /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]
  runtime_type = "io.containerd.runsc.v1"

Now we head to the Kubernetes docs for RuntimeClass (https://kubernetes.io/docs/concepts/containers/runtime-class), grab an example and create the gvisor one:

vim 10_rtc.yaml

# 10_rtc.yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc

k -f 10_rtc.yaml create
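As an aside, a RuntimeClass can also carry a scheduling section so that Pods using it are automatically placed on matching nodes, which would be an alternative to pinning the Pod manually. A sketch (the runtime: gvisor node label here is an assumption, not something set in this cluster):

```yaml
# hypothetical: RuntimeClass with a built-in scheduling constraint
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
scheduling:
  nodeSelector:
    runtime: gvisor   # assumed label on nodes that support runsc
```

With this in place, every Pod referencing the RuntimeClass would only be scheduled onto nodes carrying that label.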

And the required Pod:
k -n team-purple run gvisor-test --image=nginx:1.19.2 --dry-run=client -o yaml > 10_pod.yaml

vim 10_pod.yaml

# 10_pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: gvisor-test
  name: gvisor-test
  namespace: team-purple
spec:
  nodeName: cluster1-node2   # add
  runtimeClassName: gvisor   # add
  containers:
  - image: nginx:1.19.2
    name: gvisor-test
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

k -f 10_pod.yaml create
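Instead of hard-coding nodeName (which bypasses the scheduler entirely), the Pod could also be pinned via a nodeSelector on the node's default hostname label; the result here is the same, but placement still goes through the scheduler:

```yaml
# alternative to nodeName in the Pod spec
spec:
  runtimeClassName: gvisor
  nodeSelector:
    kubernetes.io/hostname: cluster1-node2
```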

After creating the pod we should check if it's running and if it uses the gvisor sandbox:

➜ k -n team-purple get pod gvisor-test
NAME          READY   STATUS    RESTARTS   AGE
gvisor-test   1/1     Running   0          30s
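Since the task requires the Pod to run on cluster1-node2, we can also confirm its placement (this runs against the live cluster):

```shell
# the NODE column should show cluster1-node2
k -n team-purple get pod gvisor-test -o wide
```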

➜ k -n team-purple exec gvisor-test -- dmesg
[ 0.000000] Starting gVisor...
[ 0.417740] Checking naughty and nice process list...
[ 0.623721] Waiting for children...
[ 0.902192] Gathering forks...
[ 1.258087] Committing treasure map to memory...
[ 1.653149] Generating random numbers by fair dice roll...
[ 1.918386] Creating cloned children...
[ 2.137450] Digging up root...
[ 2.369841] Forking spaghetti code...
[ 2.840216] Rewriting operating system in Javascript...
[ 2.956226] Creating bureaucratic processes...
[ 3.329981] Ready!

Looking good. Finally, as required, we write the dmesg output into the file:

k -n team-purple exec gvisor-test > /opt/course/10/gvisor-test-dmesg -- dmesg