Question 9 - Stop Scheduler and Manual Scheduling

Use context: kubectl config use-context k8s-c2-AC

SSH into the controlplane node with ssh cluster2-controlplane1. Temporarily stop the kube-scheduler, in a way that lets you start it again afterwards.

Create a single Pod named manual-schedule of image httpd:2.4-alpine, confirm it's created but not scheduled on any node.

Now you're the scheduler and have all its power, manually schedule that Pod on node cluster2-controlplane1. Make sure it's running.

Start the kube-scheduler again and confirm it's working correctly by creating a second Pod named manual-schedule2 of image httpd:2.4-alpine and checking that it's running on cluster2-node1.


kubectl config use-context k8s-c2-AC

k get node
NAME                     STATUS   ROLES           AGE   VERSION
cluster2-controlplane1   Ready    control-plane   26h   v1.29.0
cluster2-node1           Ready    <none>          26h   v1.29.0

ssh cluster2-controlplane1

# Removing the manifest will stop the scheduler. Let's just move it to another folder
root@cluster2-controlplane1:~# cd /etc/kubernetes/manifests/
root@cluster2-controlplane1:~# mv kube-scheduler.yaml ..
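The kubelet watches its staticPodPath (by default /etc/kubernetes/manifests) and stops a static Pod as soon as its manifest leaves that directory. A local sketch of why this stop/start trick is safely reversible — it's only a file move, so the manifest itself is never modified. The paths here are mktemp stand-ins, not the real node paths:

```shell
manifests=$(mktemp -d)   # stand-in for /etc/kubernetes/manifests
parent=$(mktemp -d)      # stand-in for /etc/kubernetes
printf 'kind: Pod\n' > "$manifests/kube-scheduler.yaml"

# Moving the manifest out: the kubelet would stop the scheduler Pod
mv "$manifests/kube-scheduler.yaml" "$parent/"
[ ! -e "$manifests/kube-scheduler.yaml" ] && echo "scheduler manifest gone"

# Moving it back: the kubelet would start the scheduler Pod again
mv "$parent/kube-scheduler.yaml" "$manifests/"
[ -e "$manifests/kube-scheduler.yaml" ] && echo "scheduler manifest restored"
```

On the real cluster you can confirm the stop with kubectl -n kube-system get pod and watch the kube-scheduler Pod disappear.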

# Back in the default terminal, create the Pod and confirm it stays unscheduled
k run manual-schedule --image=httpd:2.4-alpine
k get pod manual-schedule -o wide
NAME              READY   STATUS    ...   NODE     NOMINATED NODE
manual-schedule   0/1     Pending   ...   <none>   <none>

# Export the Pod's YAML so we can set spec.nodeName on it
k get pod manual-schedule -o yaml > /opt/course/9/manualpod.yaml

vim /opt/course/9/manualpod.yaml

Edit the file and add spec.nodeName to assign the Pod to a node directly, bypassing the scheduler.

...
spec:
  nodeName: cluster2-controlplane1 # add the controlplane node name
  containers:
  - image: httpd:2.4-alpine
    imagePullPolicy: IfNotPresent
    name: manual-schedule
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-nxnc7
      readOnly: true
  dnsPolicy: ClusterFirst
...
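If you prefer scripting the edit instead of using vim, the nodeName line can be inserted with sed. A sketch, assuming GNU sed and the two-space indentation that kubectl -o yaml produces; /tmp/manualpod.yaml here is a minimal stand-in for the exported file:

```shell
# Minimal stand-in for the exported Pod YAML (illustration only)
cat > /tmp/manualpod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: manual-schedule
spec:
  containers:
  - image: httpd:2.4-alpine
    name: manual-schedule
EOF

# Insert nodeName directly under "spec:" (GNU sed allows \n in replacements)
sed -i 's/^spec:$/spec:\n  nodeName: cluster2-controlplane1/' /tmp/manualpod.yaml

grep 'nodeName' /tmp/manualpod.yaml
```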

Done. Now recreate the Pod using replace --force: spec.nodeName can't be changed on an existing Pod, so it has to be deleted and recreated.

kubectl replace -f /opt/course/9/manualpod.yaml --force
# And check if it's working
k get pod manual-schedule -o wide
NAME              READY   STATUS    ...   NODE
manual-schedule   1/1     Running   ...   cluster2-controlplane1
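Instead of exporting and editing the live Pod, writing a minimal manifest from scratch achieves the same result. A sketch — only the name, image and nodeName matter for this task, the rest is defaulted by the API server:

```yaml
# Minimal alternative manifest: the kubelet on cluster2-controlplane1
# picks this Pod up directly, with no scheduler involvement.
apiVersion: v1
kind: Pod
metadata:
  name: manual-schedule
spec:
  nodeName: cluster2-controlplane1   # direct node assignment
  containers:
  - name: manual-schedule
    image: httpd:2.4-alpine
```

Apply it by deleting the Pending Pod and creating it from this file, or with kubectl replace --force as above.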

Go back to the ssh session and move the manifest back into place.

ssh cluster2-controlplane1

# Moving the manifest back into the watched folder starts the scheduler again
root@cluster2-controlplane1:~# cd /etc/kubernetes/
root@cluster2-controlplane1:~# mv kube-scheduler.yaml manifests/

# Back in the default terminal, launch another Pod to test the scheduler
k run manual-schedule2 --image=httpd:2.4-alpine
k get pod -o wide | grep schedule
manual-schedule    1/1   Running   ...   cluster2-controlplane1
manual-schedule2   1/1   Running   ...   cluster2-node1

It's worth noting that cluster2-controlplane1 happens to have no taints here, but even a NoSchedule taint would not have prevented this: setting spec.nodeName bypasses the scheduler, and NoSchedule taints are only enforced by the scheduler. Only a NoExecute taint would have gotten the Pod evicted from the node.