Node Failure Troubleshooting Sequence
When a node fails, there are a few situations to keep in mind.
- Check the status of the nodes with kubectl get nodes.
- Look for events on nodes that are NotReady by running describe against each of them, and check the Conditions section.
kubectl describe nodes kind-cluster-worker
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
# If MemoryPressure is True, the node is short on memory to run its pods, and the pods are probably crashing.
MemoryPressure False Mon, 26 Feb 2024 08:57:01 -0300 Thu, 08 Feb 2024 20:02:46 -0300 KubeletHasSufficientMemory kubelet has sufficient memory available
# If DiskPressure is True, the node is running out of disk capacity.
DiskPressure False Mon, 26 Feb 2024 08:57:01 -0300 Thu, 08 Feb 2024 20:02:46 -0300 KubeletHasNoDiskPressure kubelet has no disk pressure
# PIDPressure is set to True when too many processes are running on the node (PID exhaustion).
PIDPressure False Mon, 26 Feb 2024 08:57:01 -0300 Thu, 08 Feb 2024 20:02:46 -0300 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 26 Feb 2024 08:57:01 -0300 Thu, 08 Feb 2024 20:02:49 -0300 KubeletReady kubelet is posting ready status
If any of these pressure conditions is True, we already know some resource is running out. If a condition shows Unknown, something most likely went wrong (e.g. the kubelet crashed or lost connectivity) and the status was lost.
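The two checks above can be scripted. A minimal sketch (the node name is the one from this example; the jsonpath expression is standard kubectl):

```shell
# List nodes whose STATUS column is not "Ready" (assumes a reachable cluster).
kubectl get nodes --no-headers | awk '$2 != "Ready" {print $1, $2}'

# Print each condition of a node as TYPE=STATUS, e.g. "MemoryPressure=False".
kubectl get node kind-cluster-worker \
  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
```

The awk filter works on the plain `kubectl get nodes` columns, so it also catches intermediate states such as "NotReady,SchedulingDisabled".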
- Check the processes and resource consumption on the node with top and df -h.
top - 12:11:32 up 22:41, 0 user, load average: 3.25, 2.79, 2.56
Tasks: 17 total, 1 running, 16 sleeping, 0 stopped, 0 zombie
%Cpu(s): 8.0 us, 0.5 sy, 0.0 ni, 91.2 id, 0.1 wa, 0.0 hi, 0.2 si, 0.0 st
MiB Mem : 64001.3 total, 41471.9 free, 11956.6 used, 13210.8 buff/cache
MiB Swap: 1952.0 total, 1952.0 free, 0.0 used. 52044.7 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
116 root 20 0 2470452 66056 36480 S 0.7 0.1 13:23.44 containerd
223 root 20 0 2999176 86524 53376 S 0.3 0.1 8:36.64 kubelet
1 root 20 0 20392 11648 8704 S 0.0 0.0 0:01.13 systemd
97 root 20 0 24792 11008 10240 S 0.0 0.0 0:00.08 systemd-journal
271 root 20 0 722648 13824 9856 S 0.0 0.0 0:07.79 containerd-shim
287 root 20 0 722648 13852 9856 S 0.0 0.0 0:07.95 containerd-shim
317 65535 20 0 996 512 512 S 0.0 0.0 0:00.00 pause
324 65535 20 0 996 512 512 S 0.0 0.0 0:00.01 pause
358 root 20 0 1284848 49360 36608 S 0.0 0.1 0:07.92 kube-proxy
446 root 20 0 743928 27448 19072 S 0.0 0.0 0:15.96 kindnetd
14316 root 20 0 722392 13184 9600 S 0.0 0.0 0:00.01 containerd-shim
14336 65535 20 0 996 512 512 S 0.0 0.0 0:00.00 pause
14373 root 20 0 2484 1280 1280 S 0.0 0.0 0:00.01 sleep
14400 root 20 0 2576 1408 1408 S 0.0 0.0 0:00.00 sh
14406 root 20 0 2576 128 128 S 0.0 0.0 0:00.00 sh
14407 root 20 0 4192 3328 2816 S 0.0 0.0 0:00.00 bash
14412 root 20 0 8568 4736 2688 R 0.0 0.0 0:00.00 top
root@kind-cluster-worker:/# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 1.8T 571G 1.2T 33% /
tmpfs 64M 0 64M 0% /dev
shm 64M 0 64M 0% /dev/shm
/dev/mapper/vgubuntu-root 1.8T 571G 1.2T 33% /var
tmpfs 32G 8.6M 32G 1% /run
tmpfs 32G 0 32G 0% /tmp
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 63G 12K 63G 1% /var/lib/kubelet/pods/5a2bf15d-36fa-4c73-94a3-b491f4774e72/volumes/kubernetes.io~projected/kube-api-access-tpjjt
tmpfs 50M 12K 50M 1% /var/lib/kubelet/pods/92c3fe67-ccb9-437c-8d18-c16008dfa93b/volumes/kubernetes.io~projected/kube-api-access-cxt56
shm 64M 0 64M 0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/b6087483482422390d1fad0ec6726dfd98aba0d990b3f7f5a6d8224c15c4a4a3/shm
shm 64M 0 64M 0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/c13daa616a7ee7a7144b2acf39476a6e36fd454c1ebf345c26d3834703d11756/shm
overlay 1.8T 571G 1.2T 33% /run/containerd/io.containerd.runtime.v2.task/k8s.io/b6087483482422390d1fad0ec6726dfd98aba0d990b3f7f5a6d8224c15c4a4a3/rootfs
overlay 1.8T 571G 1.2T 33% /run/containerd/io.containerd.runtime.v2.task/k8s.io/c13daa616a7ee7a7144b2acf39476a6e36fd454c1ebf345c26d3834703d11756/rootfs
overlay 1.8T 571G 1.2T 33% /run/containerd/io.containerd.runtime.v2.task/k8s.io/0e6c2021f2b349bb0a16e5e5ecedb44a364566413ddfac25d09dd0538bf1de3b/rootfs
overlay 1.8T 571G 1.2T 33% /run/containerd/io.containerd.runtime.v2.task/k8s.io/970340bd3152b21a503b9e8fbc0b6af4948bed0bc9581f03f7140cbad18b8015/rootfs
tmpfs 63G 12K 63G 1% /var/lib/kubelet/pods/1af287b4-b519-4956-995a-5cf7403e0699/volumes/kubernetes.io~projected/kube-api-access-h9vz9
overlay 1.8T 571G 1.2T 33% /run/containerd/io.containerd.runtime.v2.task/k8s.io/36a2786c5693be823e1cd178341a794744583fe1d67548132f4a364933d54967/rootfs
overlay 1.8T 571G 1.2T 33% /run/containerd/io.containerd.runtime.v2.task/k8s.io/27c42f370c029cf965538366fd6f310cb9408ef6b305442ce21f32ec8947e2a6/rootfs
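Instead of reading the whole top/df dump by eye, a small filter can flag pressure directly (the 85% threshold here is an illustrative choice, not a kubelet eviction default):

```shell
# Warn when root filesystem usage crosses an (illustrative) 85% threshold.
df -h / | awk 'NR==2 {gsub("%","",$5); if ($5+0 > 85) print "disk usage high: " $5 "%"; else print "disk ok: " $5 "%"}'

# Show overall memory usage as a percentage of total.
free -m | awk '/^Mem:/ {printf "memory used: %.0f%%\n", $3*100/$2}'
```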
- Check the kubelet status and its logs with systemctl status kubelet.service and journalctl -xeu kubelet.
root@kind-cluster-worker:/# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf, 11-kind.conf
Active: active (running) since Sun 2024-02-25 13:30:16 UTC; 22h ago
Docs: http://kubernetes.io/docs/
Process: 214 ExecStartPre=/bin/sh -euc if [ -f /sys/fs/cgroup/cgroup.controllers ]; then /kind/bin/create-kubelet-cgroup-v2.sh; fi (code=exited, status=0/SUCCESS)
Process: 222 ExecStartPre=/bin/sh -euc if [ ! -f /sys/fs/cgroup/cgroup.controllers ] && [ ! -d /sys/fs/cgroup/systemd/kubelet ]; then mkdir -p /sys/fs/cgroup/systemd/kubelet; fi (code=exited, status=0/SUCCESS)
Main PID: 223 (kubelet)
Tasks: 24 (limit: 11496)
Memory: 35.9M
CPU: 8min 38.699s
CGroup: /kubelet.slice/kubelet.service
└─223 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --node-ip=172.18.0.4 --node-labels= --pod-infra-container-image=registry.k8s.io/pause:3.9 --provider-id=kind://docker/kind-cluster/kind-cluster-worker --runtime-cgroups=/system.slice/containerd.service
Feb 25 13:30:19 kind-cluster-worker kubelet[223]: I0225 13:30:19.211689 223 topology_manager.go:215] "Topology Admit Handler" podUID="5a2bf15d-36fa-4c73-94a3-b491f4774e72" podNamespace="kube-system" podName="kube-proxy-9zhh2"
Feb 25 13:30:19 kind-cluster-worker kubelet[223]: I0225 13:30:19.307285 223 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 25 13:30:19 kind-cluster-worker kubelet[223]: I0225 13:30:19.340272 223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a2bf15d-36fa-4c73-94a3-b491f4774e72-xtables-lock\") pod \"kube-proxy-9zhh2\" (UID: \"5a2bf15d-36fa-4c73-94a3-b491f4774e72\") " pod="kube-system/kube-proxy-9zhh2"
Feb 25 13:30:19 kind-cluster-worker kubelet[223]: I0225 13:30:19.340288 223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a2bf15d-36fa-4c73-94a3-b491f4774e72-lib-modules\") pod \"kube-proxy-9zhh2\" (UID: \"5a2bf15d-36fa-4c73-94a3-b491f4774e72\") " pod="kube-system/kube-proxy-9zhh2"
Feb 25 13:30:19 kind-cluster-worker kubelet[223]: I0225 13:30:19.340304 223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/92c3fe67-ccb9-437c-8d18-c16008dfa93b-cni-cfg\") pod \"kindnet-wnzds\" (UID: \"92c3fe67-ccb9-437c-8d18-c16008dfa93b\") " pod="kube-system/kindnet-wnzds"
Feb 25 13:30:19 kind-cluster-worker kubelet[223]: I0225 13:30:19.340316 223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92c3fe67-ccb9-437c-8d18-c16008dfa93b-xtables-lock\") pod \"kindnet-wnzds\" (UID: \"92c3fe67-ccb9-437c-8d18-c16008dfa93b\") " pod="kube-system/kindnet-wnzds"
Feb 25 13:30:19 kind-cluster-worker kubelet[223]: I0225 13:30:19.340476 223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92c3fe67-ccb9-437c-8d18-c16008dfa93b-lib-modules\") pod \"kindnet-wnzds\" (UID: \"92c3fe67-ccb9-437c-8d18-c16008dfa93b\") " pod="kube-system/kindnet-wnzds"
Feb 26 12:11:12 kind-cluster-worker kubelet[223]: I0226 12:11:12.480201 223 topology_manager.go:215] "Topology Admit Handler" podUID="1af287b4-b519-4956-995a-5cf7403e0699" podNamespace="kube-system" podName="node-shell-2a728d18-d3d7-4c59-ad22-3a763f34b1c9"
Feb 26 12:11:12 kind-cluster-worker kubelet[223]: I0226 12:11:12.567123 223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9vz9\" (UniqueName: \"kubernetes.io/projected/1af287b4-b519-4956-995a-5cf7403e0699-kube-api-access-h9vz9\") pod \"node-shell-2a728d18-d3d7-4c59-ad22-3a763f34b1c9\" (UID: \"1af287b4-b519-4956-995a-5cf7403e0699\") " pod="kube-system/node-shell-2a728d18-d3d7-4c59-ad22-3a763f34b1c9"
Feb 26 12:11:16 kind-cluster-worker kubelet[223]: I0226 12:11:16.957965 223 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/node-shell-2a728d18-d3d7-4c59-ad22-3a763f34b1c9" podStartSLOduration=1.652541497 podStartE2EDuration="4.957928607s" podCreationTimestamp="2024-02-26 12:11:12 +0000 UTC" firstStartedPulling="2024-02-26 12:11:12.853842258 +0000 UTC m=+81656.677152456" lastFinishedPulling="2024-02-26 12:11:16.159229367 +0000 UTC m=+81659.982539566" observedRunningTime="2024-02-26 12:11:16.957817629 +0000 UTC m=+81660.781127839" watchObservedRunningTime="2024-02-26 12:11:16.957928607 +0000 UTC m=+81660.781238814"
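A `systemctl status` dump this long buries the lines that matter; a grep like this keeps only the high-signal ones (load state, active state, main PID, and any failure result):

```shell
# Filter a systemctl status dump down to its high-signal lines.
systemctl status kubelet --no-pager | grep -E 'Loaded:|Active:|Main PID:|Failed'
```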
# And if the command above is not enough to diagnose the problem, let's look at the logs in more detail
root@kind-cluster-worker:/# journalctl -u kubelet
Feb 25 13:30:13 kind-cluster-worker systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 25 13:30:13 kind-cluster-worker systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 25 13:30:13 kind-cluster-worker kubelet[129]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 25 13:30:13 kind-cluster-worker kubelet[129]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 25 13:30:13 kind-cluster-worker kubelet[129]: I0225 13:30:13.340627 129 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 25 13:30:13 kind-cluster-worker kubelet[129]: I0225 13:30:13.345671 129 server.go:487] "Kubelet version" kubeletVersion="v1.29.1"
Feb 25 13:30:13 kind-cluster-worker kubelet[129]: I0225 13:30:13.345685 129 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 25 13:30:13 kind-cluster-worker kubelet[129]: I0225 13:30:13.345830 129 server.go:919] "Client rotation is on, will bootstrap in background"
Feb 25 13:30:13 kind-cluster-worker kubelet[129]: I0225 13:30:13.347215 129 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 25 13:30:13 kind-cluster-worker kubelet[129]: I0225 13:30:13.352315 129 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 25 13:30:13 kind-cluster-worker kubelet[129]: E0225 13:30:13.358390 129 run.go:74] "command failed" err="failed to run Kubelet: validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unknown desc = server is not initialized yet"
Feb 25 13:30:13 kind-cluster-worker systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 25 13:30:13 kind-cluster-worker systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 25 13:30:14 kind-cluster-worker systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 25 13:30:14 kind-cluster-worker systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 25 13:30:14 kind-cluster-worker systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 25 13:30:14 kind-cluster-worker systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.687895 176 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.690188 176 server.go:487] "Kubelet version" kubeletVersion="v1.29.1"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.690200 176 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.690295 176 server.go:919] "Client rotation is on, will bootstrap in background"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.690971 176 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.691686 176 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.703590 176 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=["kubelet"]
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.703681 176 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"/system.slice/containerd.service","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/kubelet","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.703690 176 topology_manager.go:138] "Creating topology manager with none policy"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.703695 176 container_manager_linux.go:301] "Creating device plugin manager"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.703719 176 state_mem.go:36] "Initialized new in-memory state store"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.703752 176 kubelet.go:396] "Attempting to sync node with API server"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.703759 176 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.703778 176 kubelet.go:312] "Adding apiserver pod source"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.703785 176 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.704063 176 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.704141 176 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: W0225 13:30:14.704493 176 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://kind-cluster-control-plane:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: W0225 13:30:14.704491 176 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://kind-cluster-control-plane:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkind-cluster-worker&limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: E0225 13:30:14.704534 176 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://kind-cluster-control-plane:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: E0225 13:30:14.704535 176 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://kind-cluster-control-plane:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkind-cluster-worker&limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.704960 176 server.go:1256] "Started kubelet"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.704988 176 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.705000 176 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.705140 176 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.705452 176 server.go:461] "Adding debug handlers to kubelet server"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.705708 176 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.705989 176 scope.go:117] "RemoveContainer" containerID="276f1c5059b79322ab4819fc1b3ab0eb274566f5e04515c395567d120ae2433c"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.706509 176 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: E0225 13:30:14.708181 176 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"kind-cluster-worker\" not found"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.708572 176 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.710194 176 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: W0225 13:30:14.710541 176 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://kind-cluster-control-plane:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: E0225 13:30:14.710609 176 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://kind-cluster-control-plane:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kind-cluster-worker?timeout=10s\": dial tcp 172.18.0.3:6443: connect: connection refused" interval="200ms"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: E0225 13:30:14.710614 176 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://kind-cluster-control-plane:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: E0225 13:30:14.711361 176 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://kind-cluster-control-plane:6443/api/v1/namespaces/default/events\": dial tcp 172.18.0.3:6443: connect: connection refused" event="&Event{ObjectMeta:{kind-cluster-worker.17b71e3937963104 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:kind-cluster-worker,UID:kind-cluster-worker,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:kind-cluster-worker,},FirstTimestamp:2024-02-25 13:30:14.704951556 +0000 UTC m=+0.040446163,LastTimestamp:2024-02-25 13:30:14.704951556 +0000 UTC m=+0.040446163,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:kind-cluster-worker,}"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.711708 176 reconciler_new.go:29] "Reconciler: start to sync state"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.712134 176 factory.go:221] Registration of the containerd container factory successfully
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.712151 176 factory.go:221] Registration of the systemd container factory successfully
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.715249 176 scope.go:117] "RemoveContainer" containerID="1b3fdc8062f6983feb33f50bbbd58d4ab98aed71a71afda107df96b990edf634"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.717461 176 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.717478 176 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.717494 176 state_mem.go:36] "Initialized new in-memory state store"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.717742 176 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.717774 176 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.717782 176 policy_none.go:49] "None policy: Start"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.718440 176 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.718474 176 state_mem.go:35] "Initializing new in-memory state store"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.718878 176 state_mem.go:75] "Updated machine memory state"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.725935 176 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.726712 176 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.726729 176 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.726741 176 kubelet.go:2329] "Starting kubelet main sync loop"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: E0225 13:30:14.726767 176 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: W0225 13:30:14.727967 176 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://kind-cluster-control-plane:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: E0225 13:30:14.728079 176 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://kind-cluster-control-plane:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: I0225 13:30:14.811124 176 kubelet_node_status.go:73] "Attempting to register node" node="kind-cluster-worker"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: E0225 13:30:14.813167 176 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://kind-cluster-control-plane:6443/api/v1/nodes\": dial tcp 172.18.0.3:6443: connect: connection refused" node="kind-cluster-worker"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: E0225 13:30:14.827382 176 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 25 13:30:14 kind-cluster-worker kubelet[176]: E0225 13:30:14.859071 176 kubelet.go:1542] "Failed to start ContainerManager" err="failed to initialize top level QOS containers: root container [kubelet kubepods] doesn't exist"
Feb 25 13:30:14 kind-cluster-worker systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 25 13:30:14 kind-cluster-worker systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 25 13:30:16 kind-cluster-worker systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 25 13:30:16 kind-cluster-worker systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 25 13:30:16 kind-cluster-worker systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 25 13:30:16 kind-cluster-worker systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.192645 223 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.194746 223 server.go:487] "Kubelet version" kubeletVersion="v1.29.1"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.194761 223 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.194935 223 server.go:919] "Client rotation is on, will bootstrap in background"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.195935 223 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.197389 223 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.204654 223 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=["kubelet"]
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.204739 223 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"/system.slice/containerd.service","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/kubelet","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.204749 223 topology_manager.go:138] "Creating topology manager with none policy"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.204754 223 container_manager_linux.go:301] "Creating device plugin manager"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.204765 223 state_mem.go:36] "Initialized new in-memory state store"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.204805 223 kubelet.go:396] "Attempting to sync node with API server"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.204811 223 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.204825 223 kubelet.go:312] "Adding apiserver pod source"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.204831 223 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.205105 223 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.205235 223 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.205536 223 server.go:1256] "Started kubelet"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.205579 223 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.205581 223 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: W0225 13:30:16.205708 223 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://kind-cluster-control-plane:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.205739 223 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: W0225 13:30:16.205723 223 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://kind-cluster-control-plane:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkind-cluster-worker&limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: E0225 13:30:16.205745 223 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://kind-cluster-control-plane:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: E0225 13:30:16.205764 223 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://kind-cluster-control-plane:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkind-cluster-worker&limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: E0225 13:30:16.205922 223 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://kind-cluster-control-plane:6443/api/v1/namespaces/default/events\": dial tcp 172.18.0.3:6443: connect: connection refused" event="&Event{ObjectMeta:{kind-cluster-worker.17b71e399107118e default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:kind-cluster-worker,UID:kind-cluster-worker,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:kind-cluster-worker,},FirstTimestamp:2024-02-25 13:30:16.205521294 +0000 UTC m=+0.028831493,LastTimestamp:2024-02-25 13:30:16.205521294 +0000 UTC m=+0.028831493,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:kind-cluster-worker,}"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.206062 223 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.206102 223 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: E0225 13:30:16.206204 223 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"kind-cluster-worker\" not found"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.206238 223 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.206240 223 server.go:461] "Adding debug handlers to kubelet server"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.206401 223 reconciler_new.go:29] "Reconciler: start to sync state"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.206765 223 factory.go:221] Registration of the systemd container factory successfully
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.206926 223 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.208702 223 factory.go:221] Registration of the containerd container factory successfully
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: E0225 13:30:16.209087 223 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://kind-cluster-control-plane:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kind-cluster-worker?timeout=10s\": dial tcp 172.18.0.3:6443: connect: connection refused" interval="200ms"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: W0225 13:30:16.209270 223 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://kind-cluster-control-plane:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: E0225 13:30:16.209400 223 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://kind-cluster-control-plane:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.212946 223 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.212958 223 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.212967 223 state_mem.go:36] "Initialized new in-memory state store"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.213039 223 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.213054 223 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.213057 223 policy_none.go:49] "None policy: Start"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.213385 223 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.213404 223 state_mem.go:35] "Initializing new in-memory state store"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.213528 223 state_mem.go:75] "Updated machine memory state"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.213811 223 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.214383 223 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.214402 223 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.214414 223 kubelet.go:2329] "Starting kubelet main sync loop"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: E0225 13:30:16.214452 223 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: W0225 13:30:16.215231 223 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://kind-cluster-control-plane:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: E0225 13:30:16.215343 223 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://kind-cluster-control-plane:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.273047 223 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.273482 223 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: E0225 13:30:16.274636 223 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"kind-cluster-worker\" not found"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.308149 223 kubelet_node_status.go:73] "Attempting to register node" node="kind-cluster-worker"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: E0225 13:30:16.310949 223 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://kind-cluster-control-plane:6443/api/v1/nodes\": dial tcp 172.18.0.3:6443: connect: connection refused" node="kind-cluster-worker"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.315266 223 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d16aa4a4a851809974b3a6fdc2afdd51bd321d191e6bc28153dce14541719c6"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.315297 223 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72d998d5e31a55a8e2acc054a57543a3b64092cd3044e1cf885cf7f80314e4ed"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: E0225 13:30:16.411166 223 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://kind-cluster-control-plane:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kind-cluster-worker?timeout=10s\": dial tcp 172.18.0.3:6443: connect: connection refused" interval="400ms"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.512240 223 kubelet_node_status.go:73] "Attempting to register node" node="kind-cluster-worker"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: E0225 13:30:16.513242 223 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://kind-cluster-control-plane:6443/api/v1/nodes\": dial tcp 172.18.0.3:6443: connect: connection refused" node="kind-cluster-worker"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: E0225 13:30:16.814010 223 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://kind-cluster-control-plane:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kind-cluster-worker?timeout=10s\": dial tcp 172.18.0.3:6443: connect: connection refused" interval="800ms"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: I0225 13:30:16.916038 223 kubelet_node_status.go:73] "Attempting to register node" node="kind-cluster-worker"
Feb 25 13:30:16 kind-cluster-worker kubelet[223]: E0225 13:30:16.918669 223 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://kind-cluster-control-plane:6443/api/v1/nodes\": dial tcp 172.18.0.3:6443: connect: connection refused" node="kind-cluster-worker"
Feb 25 13:30:17 kind-cluster-worker kubelet[223]: I0225 13:30:17.719763 223 kubelet_node_status.go:73] "Attempting to register node" node="kind-cluster-worker"
Feb 25 13:30:18 kind-cluster-worker kubelet[223]: I0225 13:30:18.840088 223 kubelet_node_status.go:112] "Node was previously registered" node="kind-cluster-worker"
Feb 25 13:30:18 kind-cluster-worker kubelet[223]: I0225 13:30:18.840147 223 kubelet_node_status.go:76] "Successfully registered node" node="kind-cluster-worker"
Feb 25 13:30:18 kind-cluster-worker kubelet[223]: I0225 13:30:18.853499 223 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="10.244.2.0/24"
Feb 25 13:30:18 kind-cluster-worker kubelet[223]: I0225 13:30:18.853953 223 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.2.0/24"
Feb 25 13:30:19 kind-cluster-worker kubelet[223]: I0225 13:30:19.206854 223 apiserver.go:52] "Watching apiserver"
Feb 25 13:30:19 kind-cluster-worker kubelet[223]: I0225 13:30:19.211523 223 topology_manager.go:215] "Topology Admit Handler" podUID="92c3fe67-ccb9-437c-8d18-c16008dfa93b" podNamespace="kube-system" podName="kindnet-wnzds"
Feb 25 13:30:19 kind-cluster-worker kubelet[223]: I0225 13:30:19.211689 223 topology_manager.go:215] "Topology Admit Handler" podUID="5a2bf15d-36fa-4c73-94a3-b491f4774e72" podNamespace="kube-system" podName="kube-proxy-9zhh2"
Feb 25 13:30:19 kind-cluster-worker kubelet[223]: I0225 13:30:19.307285 223 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 25 13:30:19 kind-cluster-worker kubelet[223]: I0225 13:30:19.340272 223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a2bf15d-36fa-4c73-94a3-b491f4774e72-xtables-lock\") pod \"kube-proxy-9zhh2\" (UID: \"5a2bf15d-36fa-4c73-94a3-b491f4774e72\") " pod="kube-system/kube-proxy-9zhh2"
Feb 25 13:30:19 kind-cluster-worker kubelet[223]: I0225 13:30:19.340288 223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a2bf15d-36fa-4c73-94a3-b491f4774e72-lib-modules\") pod \"kube-proxy-9zhh2\" (UID: \"5a2bf15d-36fa-4c73-94a3-b491f4774e72\") " pod="kube-system/kube-proxy-9zhh2"
Feb 25 13:30:19 kind-cluster-worker kubelet[223]: I0225 13:30:19.340304 223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/92c3fe67-ccb9-437c-8d18-c16008dfa93b-cni-cfg\") pod \"kindnet-wnzds\" (UID: \"92c3fe67-ccb9-437c-8d18-c16008dfa93b\") " pod="kube-system/kindnet-wnzds"
Feb 25 13:30:19 kind-cluster-worker kubelet[223]: I0225 13:30:19.340316 223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92c3fe67-ccb9-437c-8d18-c16008dfa93b-xtables-lock\") pod \"kindnet-wnzds\" (UID: \"92c3fe67-ccb9-437c-8d18-c16008dfa93b\") " pod="kube-system/kindnet-wnzds"
Feb 25 13:30:19 kind-cluster-worker kubelet[223]: I0225 13:30:19.340476 223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92c3fe67-ccb9-437c-8d18-c16008dfa93b-lib-modules\") pod \"kindnet-wnzds\" (UID: \"92c3fe67-ccb9-437c-8d18-c16008dfa93b\") " pod="kube-system/kindnet-wnzds"
Feb 26 12:11:12 kind-cluster-worker kubelet[223]: I0226 12:11:12.480201 223 topology_manager.go:215] "Topology Admit Handler" podUID="1af287b4-b519-4956-995a-5cf7403e0699" podNamespace="kube-system" podName="node-shell-2a728d18-d3d7-4c59-ad22-3a763f34b1c9"
Feb 26 12:11:12 kind-cluster-worker kubelet[223]: I0226 12:11:12.567123 223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9vz9\" (UniqueName: \"kubernetes.io/projected/1af287b4-b519-4956-995a-5cf7403e0699-kube-api-access-h9vz9\") pod \"node-shell-2a728d18-d3d7-4c59-ad22-3a763f34b1c9\" (UID: \"1af287b4-b519-4956-995a-5cf7403e0699\") " pod="kube-system/node-shell-2a728d18-d3d7-4c59-ad22-3a763f34b1c9"
Feb 26 12:11:16 kind-cluster-worker kubelet[223]: I0226 12:11:16.957965 223 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/node-shell-2a728d18-d3d7-4c59-ad22-3a763f34b1c9" podStartSLOduration=1.652541497 podStartE2EDuration="4.957928607s" podCreationTimestamp="2024-02-26 12:11:12 +0000 UTC" firstStartedPulling="2024-02-26 12:11:12.853842258 +0000 UTC m=+81656.677152456" lastFinishedPulling="2024-02-26 12:11:16.159229367 +0000 UTC m=+81659.982539566" observedRunningTime="2024-02-26 12:11:16.957817629 +0000 UTC m=+81660.781127839" watchObservedRunningTime="2024-02-26 12:11:16.957928607 +0000 UTC m=+81660.781238814"
root@kind-cluster-worker:/#
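A journal dump this long is hard to read in full. A minimal sketch for narrowing it down, assuming the kubelet runs as a systemd unit (as it does inside a kind node), that keeps only error-level klog entries (lines starting with `E....`) and connection failures:

```shell
# Filter the kubelet journal for error-level klog lines and refused
# connections; --no-pager makes the output script-friendly.
journalctl -u kubelet --no-pager --since "1 hour ago" \
  | grep -E 'E[0-9]{4} |connection refused' \
  | tail -n 20
```

In the log above, this would surface exactly the repeated `dial tcp 172.18.0.3:6443: connect: connection refused` lines, pointing at an unreachable kube-apiserver rather than a kubelet fault.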
- Also check the kubelet certificates and make sure they have not expired:
root@kind-cluster-worker:/# openssl x509 -in /var/lib/kubelet/pki/kubelet.crt
-----BEGIN CERTIFICATE-----
MIIDTTCCAjWgAwIBAgIIdyIAO9Z5gVAwDQYJKoZIhvcNAQELBQAwLDEqMCgGA1UE
Awwha2luZC1jbHVzdGVyLXdvcmtlci1jYUAxNzA3NDMzMzY1MB4XDTI0MDIwODIy
MDI0NVoXDTI1MDIwNzIyMDI0NVowKTEnMCUGA1UEAwwea2luZC1jbHVzdGVyLXdv
cmtlckAxNzA3NDMzMzY1MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA
nXOMHMXoQiRePWMKnI6NN0VI7lhy6Te2Ia2y+QZ+qeDfMM9mi62kwbHcnCnFsptJ
8CBqv1mYpzNJaCDDiOrtB9Fv6gs6k0xARF+Tdw+CC2Mo7UJEVh4S5A1BnYTJUctm
tWA9jzUqbh3cxaubmN2AmzlmTk2+A6FZX+fR/bdNs9Gh+zrrkhF2irfs8Sxbp68f
KMB6HsgZOSdt014Dz9J5xB37Hh0R3KS0FYLcJ4TVaPGJrCypL26GezfZWjCRFm7q
wB/t7vbSNV/gFNt533Vdr6AxF8IZEVzdB2fxJ6/ofNDbsioFQ1iDhv4wQECu6jCH
6NkbzCZrPDF4KJrLXGjkNwIDAQABo3YwdDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l
BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBT53Jnk+X3i
R7lLud6Q3HnbydB0azAeBgNVHREEFzAVghNraW5kLWNsdXN0ZXItd29ya2VyMA0G
CSqGSIb3DQEBCwUAA4IBAQAnNioBu6agqKH/kDgjGfut865x8ufWw2wlmyunx5CS
njAdP/csErsSrVXlzlYhdNaXHvCYZcwXCjUpL8wNYHJqT5aRhuMr4w6ZYACWY50o
jyepzZFA8BNxA7FH5SnQbr+JZP1y+bXlF3JbfYPNAEHZBRSuayw3WdU9iSuGghnG
pQA0OjOjZ7MwYXF3NKPuS/rPi6NERjykT8VYW6G2kIJDgPf4EaJ5lEKM3ifxjW+n
vu7XpnjG+Ff48Gq47BBwxhE9p/YTFLzyGZnbArx+u6V2yui3Q3agi7f0oJT1fqkp
RfbxkFBrCCuiVbswcaf4eBFwyMNqyg9mhn8r4Wo4N2z8
-----END CERTIFICATE-----
root@kind-cluster-worker:/# openssl x509 -in /var/lib/kubelet/pki/kubelet.crt --text
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 8584424096722944336 (0x7722003bd6798150)
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN = kind-cluster-worker-ca@1707433365
Validity
Not Before: Feb 8 22:02:45 2024 GMT
Not After : Feb 7 22:02:45 2025 GMT  # OK, expiry is still in the future
Subject: CN = kind-cluster-worker@1707433365
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:9d:73:8c:1c:c5:e8:42:24:5e:3d:63:0a:9c:8e:
8d:37:45:48:ee:58:72:e9:37:b6:21:ad:b2:f9:06:
7e:a9:e0:df:30:cf:66:8b:ad:a4:c1:b1:dc:9c:29:
c5:b2:9b:49:f0:20:6a:bf:59:98:a7:33:49:68:20:
c3:88:ea:ed:07:d1:6f:ea:0b:3a:93:4c:40:44:5f:
93:77:0f:82:0b:63:28:ed:42:44:56:1e:12:e4:0d:
41:9d:84:c9:51:cb:66:b5:60:3d:8f:35:2a:6e:1d:
dc:c5:ab:9b:98:dd:80:9b:39:66:4e:4d:be:03:a1:
59:5f:e7:d1:fd:b7:4d:b3:d1:a1:fb:3a:eb:92:11:
76:8a:b7:ec:f1:2c:5b:a7:af:1f:28:c0:7a:1e:c8:
19:39:27:6d:d3:5e:03:cf:d2:79:c4:1d:fb:1e:1d:
11:dc:a4:b4:15:82:dc:27:84:d5:68:f1:89:ac:2c:
a9:2f:6e:86:7b:37:d9:5a:30:91:16:6e:ea:c0:1f:
ed:ee:f6:d2:35:5f:e0:14:db:79:df:75:5d:af:a0:
31:17:c2:19:11:5c:dd:07:67:f1:27:af:e8:7c:d0:
db:b2:2a:05:43:58:83:86:fe:30:40:40:ae:ea:30:
87:e8:d9:1b:cc:26:6b:3c:31:78:28:9a:cb:5c:68:
e4:37
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Key Usage: critical
Digital Signature, Key Encipherment
X509v3 Extended Key Usage:
TLS Web Server Authentication
X509v3 Basic Constraints: critical
CA:FALSE
X509v3 Authority Key Identifier:
F9:DC:99:E4:F9:7D:E2:47:B9:4B:B9:DE:90:DC:79:DB:C9:D0:74:6B
X509v3 Subject Alternative Name:
DNS:kind-cluster-worker
Signature Algorithm: sha256WithRSAEncryption
Signature Value:
27:36:2a:01:bb:a6:a0:a8:a1:ff:90:38:23:19:fb:ad:f3:ae:
71:f2:e7:d6:c3:6c:25:9b:2b:a7:c7:90:92:9e:30:1d:3f:f7:
2c:12:bb:12:ad:55:e5:ce:56:21:74:d6:97:1e:f0:98:65:cc:
17:0a:35:29:2f:cc:0d:60:72:6a:4f:96:91:86:e3:2b:e3:0e:
99:60:00:96:63:9d:28:8f:27:a9:cd:91:40:f0:13:71:03:b1:
47:e5:29:d0:6e:bf:89:64:fd:72:f9:b5:e5:17:72:5b:7d:83:
cd:00:41:d9:05:14:ae:6b:2c:37:59:d5:3d:89:2b:86:82:19:
c6:a5:00:34:3a:33:a3:67:b3:30:61:71:77:34:a3:ee:4b:fa:
cf:8b:a3:44:46:3c:a4:4f:c5:58:5b:a1:b6:90:82:43:80:f7:
f8:11:a2:79:94:42:8c:de:27:f1:8d:6f:a7:be:ee:d7:a6:78:
c6:f8:57:f8:f0:6a:b8:ec:10:70:c6:11:3d:a7:f6:13:14:bc:
f2:19:99:db:02:bc:7e:bb:a5:76:ca:e8:b7:43:76:a0:8b:b7:
f4:a0:94:f5:7e:a9:29:45:f6:f1:90:50:6b:08:2b:a2:55:bb:
30:71:a7:f8:78:11:70:c8:c3:6a:ca:0f:66:86:7f:2b:e1:6a:
38:37:6c:fc
-----BEGIN CERTIFICATE-----
MIIDTTCCAjWgAwIBAgIIdyIAO9Z5gVAwDQYJKoZIhvcNAQELBQAwLDEqMCgGA1UE
Awwha2luZC1jbHVzdGVyLXdvcmtlci1jYUAxNzA3NDMzMzY1MB4XDTI0MDIwODIy
MDI0NVoXDTI1MDIwNzIyMDI0NVowKTEnMCUGA1UEAwwea2luZC1jbHVzdGVyLXdv
cmtlckAxNzA3NDMzMzY1MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA
nXOMHMXoQiRePWMKnI6NN0VI7lhy6Te2Ia2y+QZ+qeDfMM9mi62kwbHcnCnFsptJ
8CBqv1mYpzNJaCDDiOrtB9Fv6gs6k0xARF+Tdw+CC2Mo7UJEVh4S5A1BnYTJUctm
tWA9jzUqbh3cxaubmN2AmzlmTk2+A6FZX+fR/bdNs9Gh+zrrkhF2irfs8Sxbp68f
KMB6HsgZOSdt014Dz9J5xB37Hh0R3KS0FYLcJ4TVaPGJrCypL26GezfZWjCRFm7q
wB/t7vbSNV/gFNt533Vdr6AxF8IZEVzdB2fxJ6/ofNDbsioFQ1iDhv4wQECu6jCH
6NkbzCZrPDF4KJrLXGjkNwIDAQABo3YwdDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l
BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBT53Jnk+X3i
R7lLud6Q3HnbydB0azAeBgNVHREEFzAVghNraW5kLWNsdXN0ZXItd29ya2VyMA0G
CSqGSIb3DQEBCwUAA4IBAQAnNioBu6agqKH/kDgjGfut865x8ufWw2wlmyunx5CS
njAdP/csErsSrVXlzlYhdNaXHvCYZcwXCjUpL8wNYHJqT5aRhuMr4w6ZYACWY50o
jyepzZFA8BNxA7FH5SnQbr+JZP1y+bXlF3JbfYPNAEHZBRSuayw3WdU9iSuGghnG
pQA0OjOjZ7MwYXF3NKPuS/rPi6NERjykT8VYW6G2kIJDgPf4EaJ5lEKM3ifxjW+n
vu7XpnjG+Ff48Gq47BBwxhE9p/YTFLzyGZnbArx+u6V2yui3Q3agi7f0oJT1fqkp
RfbxkFBrCCuiVbswcaf4eBFwyMNqyg9mhn8r4Wo4N2z8
-----END CERTIFICATE-----

- Also check which endpoint the kubelet is pointing at to reach the kube-apiserver.
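Both the expiry check and the endpoint check can be scripted instead of read by eye. A sketch, assuming the kubeadm-style default paths used by kind (`/var/lib/kubelet/pki/kubelet.crt` for the serving certificate and `/etc/kubernetes/kubelet.conf` for the kubelet kubeconfig):

```shell
# Print the expiry date and exit non-zero if the certificate
# expires within the next 24 hours (86400 seconds).
openssl x509 -in /var/lib/kubelet/pki/kubelet.crt -noout -enddate -checkend 86400

# The kubelet's kubeconfig records which apiserver endpoint it talks to.
grep 'server:' /etc/kubernetes/kubelet.conf

# Probe that endpoint directly (-k skips TLS verification for a quick test);
# the address below is the one seen in the kubelet logs above.
curl -k https://kind-cluster-control-plane:6443/healthz
```

`-checkend` makes the expiry test usable in scripts or health checks: a zero exit status means the certificate is still valid past the given window.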