Question 26 - Find Pods First to be Terminated
Use context: kubectl config use-context k8s-c1-H
Check all available Pods in the Namespace project-c13 and find the names of those that would probably be terminated first if the nodes run out of resources (cpu or memory) to schedule all Pods. Write the Pod names into /opt/course/26/pods-not-stable.txt.
When a node runs out of CPU or memory, Kubernetes starts evicting Pods. The eviction order is driven by the Pod's QoS (Quality of Service) class: Pods using more than their requested resources are candidates before Pods staying within their requests, and Pods that define no requests or limits at all are the very first to go.
A Pod that defines no resources gets the QoS class BestEffort. This is the lowest class and such Pods are evicted first. The class is visible under status.qosClass.
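The mapping from resource settings to QoS class can be sketched as a small shell function. This is a simplified model (assumption: the real rule is evaluated per container across the whole Pod; the function name qos_class and its flag arguments are made up for illustration):

```shell
# Simplified QoS classification rule:
#   - no requests and no limits anywhere         -> BestEffort
#   - requests == limits for every container     -> Guaranteed
#   - anything in between                        -> Burstable
qos_class() {
  has_resources=$1       # "yes" if any container sets requests or limits
  requests_eq_limits=$2  # "yes" if every container sets requests == limits
  if [ "$has_resources" = no ]; then
    echo BestEffort
  elif [ "$requests_eq_limits" = yes ]; then
    echo Guaranteed
  else
    echo Burstable
  fi
}

qos_class no  no   # -> BestEffort
qos_class yes yes  # -> Guaranteed
qos_class yes no   # -> Burstable
```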
kubectl config use-context k8s-c1-H
# List each Pod together with its resources map. Pods with an empty map define no requests/limits and are the first eviction candidates.
kubectl get pods -n project-c13 -o jsonpath="{range .items[*]}{.metadata.name} {.spec.containers[*].resources}{'\n'}{end}"
c13-2x3-api-86784557bd-cgs8g map[requests:map[cpu:50m memory:20Mi]]
c13-2x3-api-86784557bd-lnxvj map[requests:map[cpu:50m memory:20Mi]]
c13-2x3-api-86784557bd-mnp77 map[requests:map[cpu:50m memory:20Mi]]
c13-2x3-web-769c989898-6hbgt map[requests:map[cpu:50m memory:10Mi]]
c13-2x3-web-769c989898-g57nq map[requests:map[cpu:50m memory:10Mi]]
c13-2x3-web-769c989898-hfd5v map[requests:map[cpu:50m memory:10Mi]]
c13-2x3-web-769c989898-jfx64 map[requests:map[cpu:50m memory:10Mi]]
c13-2x3-web-769c989898-r89mg map[requests:map[cpu:50m memory:10Mi]]
c13-2x3-web-769c989898-wtgxl map[requests:map[cpu:50m memory:10Mi]]
c13-3cc-runner-98c8b5469-dzqhr map[requests:map[cpu:30m memory:10Mi]]
c13-3cc-runner-98c8b5469-hbtdv map[requests:map[cpu:30m memory:10Mi]]
c13-3cc-runner-98c8b5469-n9lsw map[requests:map[cpu:30m memory:10Mi]]
c13-3cc-runner-heavy-65588d7d6-djtv9 map[]
c13-3cc-runner-heavy-65588d7d6-v8kf5 map[]
c13-3cc-runner-heavy-65588d7d6-wwpb4 map[]
c13-3cc-web-675456bcd-glpq6 map[requests:map[cpu:50m memory:10Mi]]
c13-3cc-web-675456bcd-knlpx map[requests:map[cpu:50m memory:10Mi]]
c13-3cc-web-675456bcd-nfhp9 map[requests:map[cpu:50m memory:10Mi]]
c13-3cc-web-675456bcd-twn7m map[requests:map[cpu:50m memory:10Mi]]
o3db-0 {}
# A more elegant and faster solution is to filter by the QoS class directly:
kubectl get pods -n project-c13 -o custom-columns=NAME:.metadata.name,QOS:.status.qosClass | grep BestEffort
c13-3cc-runner-heavy-65588d7d6-djtv9 BestEffort
c13-3cc-runner-heavy-65588d7d6-v8kf5 BestEffort
c13-3cc-runner-heavy-65588d7d6-wwpb4 BestEffort
o3db-0 BestEffort
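The grep step can be extended with awk to strip the QOS column, so the names could be piped straight into the answer file. A runnable sketch (the printf below merely stands in for the kubectl custom-columns output shown above; the sample Burstable line is invented to show the filter working):

```shell
# Stand-in for:
#   kubectl get pods -n project-c13 \
#     -o custom-columns=NAME:.metadata.name,QOS:.status.qosClass
printf '%s\n' \
  'NAME                                   QOS' \
  'c13-3cc-runner-heavy-65588d7d6-djtv9   BestEffort' \
  'c13-3cc-web-675456bcd-glpq6            Burstable' \
  'o3db-0                                 BestEffort' \
| awk '$2 == "BestEffort" {print $1}'
# Against a real cluster, append: > /opt/course/26/pods-not-stable.txt
```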
vim /opt/course/26/pods-not-stable.txt
Write the Pod names into the file:
# /opt/course/26/pods-not-stable.txt
c13-3cc-runner-heavy-65588d7d6-djtv9
c13-3cc-runner-heavy-65588d7d6-v8kf5
c13-3cc-runner-heavy-65588d7d6-wwpb4
o3db-0