
Question 12 | Deployment on all Nodes (Anti-Affinity)

Use context: kubectl config use-context k8s-c1-H

Use Namespace project-tiger for the following. Create a Deployment named deploy-important with label id=very-important (the Pods should also have this label) and 3 replicas. It should contain two containers, the first named container1 with image nginx:1.17.6-alpine and the second one named container2 with image google/pause.

There should be only ever one Pod of that Deployment running on one worker node. We have two worker nodes: cluster1-node1 and cluster1-node2. Because the Deployment has three replicas the result should be that on both nodes one Pod is running. The third Pod won't be scheduled, unless a new worker node will be added. Use topologyKey: kubernetes.io/hostname for this.

In a way we kind of simulate the behaviour of a DaemonSet here, but using a Deployment and a fixed number of replicas.


What we need is to guarantee that no Pod of this Deployment is scheduled onto a node where another one is already running. For this we can use podAntiAffinity with topologyKey: kubernetes.io/hostname, so the scheduler treats each node hostname as a separate topology domain.

kubectl config use-context k8s-c1-H

k create deployment -n project-tiger --image=nginx:1.17.6-alpine deploy-important --replicas 3 --dry-run=client -o yaml > opt/course/12/deploy-important.yaml

vim opt/course/12/deploy-important.yaml
# opt/course/12/deploy-important.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    #app: deploy-important                 # remove
    id: very-important                     # add
  name: deploy-important
  namespace: project-tiger                 # important
spec:
  replicas: 3
  selector:
    matchLabels:
      #app: deploy-important               # remove
      id: very-important                   # add
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        #app: deploy-important             # remove
        id: very-important                 # add
    spec:
      containers:
      - image: nginx:1.17.6-alpine
        name: container1                   # change
        resources: {}
      - image: google/pause                # add
        name: container2                   # add
      affinity:                            # add
        podAntiAffinity:                   # add
          requiredDuringSchedulingIgnoredDuringExecution:  # add
          - labelSelector:                 # add
              matchExpressions:            # add
              - key: id                    # add
                operator: In               # add
                values:                    # add
                - very-important           # add
            topologyKey: kubernetes.io/hostname  # add
status: {}

Now apply it:

kubectl apply -f opt/course/12/deploy-important.yaml

# We requested 3 replicas but only two nodes are schedulable, since we didn't add a toleration for the control-plane
➜ k get deploy -n project-tiger -l id=very-important
NAME READY UP-TO-DATE AVAILABLE AGE
deploy-important 2/3 3 2 2m35s
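The comment above explains the 2/3 READY state: the third replica stays Pending because the control-plane node is tainted. The task does not ask for this, but as an illustration, a toleration like the following sketch under spec.template.spec would let that Pod schedule onto the control-plane node (assuming the default node-role.kubernetes.io/control-plane taint):

```yaml
# Hypothetical addition under spec.template.spec -- NOT part of the
# required solution, shown only to illustrate why the third Pod is Pending.
tolerations:
- key: node-role.kubernetes.io/control-plane
  operator: Exists
  effect: NoSchedule
```

With this toleration in place the anti-affinity rule would still apply, so the Pod would only land on the control-plane node if no matching Pod runs there yet.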

➜ k get pod -n project-tiger -o wide -l id=very-important
NAME READY STATUS ... NODE
deploy-important-58db9db6fc-9ljpw 2/2 Running ... cluster1-node1
deploy-important-58db9db6fc-lnxdb 0/2 Pending ... <none>
deploy-important-58db9db6fc-p2rz8 2/2 Running ... cluster1-node2

Describe the Pending Pod to verify the reason; the events will show something like:

Warning FailedScheduling 63s (x3 over 65s) default-scheduler 0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/control-plane: }, that the pod didn't tolerate, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules.
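As an aside, the same one-Pod-per-node behaviour can also be expressed with topologySpreadConstraints instead of podAntiAffinity (stable since Kubernetes 1.19). A sketch of the equivalent stanza under spec.template.spec:

```yaml
# Alternative to the podAntiAffinity block above -- a sketch, not the
# solution the walkthrough uses. maxSkew: 1 with DoNotSchedule forbids
# any node from running two matching Pods while another node has zero.
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: kubernetes.io/hostname
  whenUnsatisfiable: DoNotSchedule
  labelSelector:
    matchLabels:
      id: very-important
```

Either way, the third replica remains Pending until a new worker node joins the cluster.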