Question 12 - Deployment with Anti-Affinity
Use context: kubectl config use-context k8s-c1-H
Use Namespace project-tiger for the following. Create a Deployment named deploy-important with label id=very-important (the Pods should also have this label) and 3 replicas. It should contain two containers, the first named container1 with image nginx:1.17.6-alpine and the second one named container2 with image google/pause.
There should only ever be one Pod of that Deployment running on one worker node. We have two worker nodes: cluster1-node1 and cluster1-node2. Because the Deployment has three replicas, the result should be that one Pod runs on each node. The third Pod won't be scheduled unless a new worker node is added. Use topologyKey: kubernetes.io/hostname for this.
In a way we simulate the behaviour of a DaemonSet here, but using a Deployment with a fixed number of replicas.
We need to ensure that a Pod of this Deployment is never scheduled onto a node where another one is already running; for this we can use podAntiAffinity.
kubectl config use-context k8s-c1-H
k create deployment -n project-tiger --image=nginx:1.17.6-alpine deploy-important --replicas 3 --dry-run=client -o yaml > opt/course/12/deploy-important.yaml
vim opt/course/12/deploy-important.yaml
# opt/course/12/deploy-important.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    #app: deploy-important     # remove
    id: very-important         # add
  name: deploy-important
  namespace: project-tiger     # important
spec:
  replicas: 3
  selector:
    matchLabels:
      #app: deploy-important   # remove
      id: very-important       # add
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        #app: deploy-important # remove
        id: very-important     # add
    spec:
      containers:
      - image: nginx:1.17.6-alpine
        name: container1       # change
        resources: {}
      - image: google/pause    # add
        name: container2       # add
      affinity:                # add
        podAntiAffinity:       # add
          requiredDuringSchedulingIgnoredDuringExecution: # add
          - labelSelector:     # add
              matchExpressions: # add
              - key: id         # add
                operator: In    # add
                values:         # add
                - very-important # add
            topologyKey: kubernetes.io/hostname # add
status: {}
Now let's apply it:
kubectl apply -f opt/course/12/deploy-important.yaml
# We requested 3 replicas, but only two nodes are available for scheduling: the control-plane node has a taint the Pods don't tolerate, and the anti-affinity allows at most one Pod per node
➜ k get deploy -n project-tiger -l id=very-important
NAME READY UP-TO-DATE AVAILABLE AGE
deploy-important 2/3 3 2 2m35s
➜ k get pod -n project-tiger -o wide -l id=very-important
NAME READY STATUS ... NODE
deploy-important-58db9db6fc-9ljpw 2/2 Running ... cluster1-node1
deploy-important-58db9db6fc-lnxdb 0/2 Pending ... <none>
deploy-important-58db9db6fc-p2rz8 2/2 Running ... cluster1-node2
Describe the Pod that didn't come up to check the reason:

k describe pod -n project-tiger deploy-important-58db9db6fc-lnxdb

The Events section will show something like:
Warning FailedScheduling 63s (x3 over 65s) default-scheduler 0/3 nodes are available: 1 node(s) had taint
{node-role.kubernetes.io/control-plane: }, that the pod didn't tolerate, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules.
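
As an aside (not part of the required solution): Pods can also be spread across nodes with topologySpreadConstraints. Note that the behaviour differs from the required anti-affinity above: with maxSkew: 1, the third replica would still be scheduled onto one of the two workers instead of staying Pending. A sketch of the relevant part of the Pod template, assuming the same id=very-important Pod label:

# sketch: topologySpreadConstraints go under the Pod template's spec, next to containers
      topologySpreadConstraints:
      - maxSkew: 1                          # node Pod counts may differ by at most 1
        topologyKey: kubernetes.io/hostname # spread across nodes
        whenUnsatisfiable: DoNotSchedule    # a Pod that would violate the skew stays Pending
        labelSelector:
          matchLabels:
            id: very-important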