Component Bootstraps
Masters, workers and ETCD nodes run different components, but they also share some. Let's start with the shared steps, which must be executed on all masters and workers.
We can call these scripts during the Vagrant machine provisioning process; all of them must be available in the shared_files folder.
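For reference, this is roughly what the provisioning call boils down to on each guest. A minimal sketch, assuming the default /vagrant synced folder; bootstrap_nodes.sh is defined below, while the other two names are placeholders for the master-only scripts shown later in this section:
# Executed on every master and worker (the Vagrant shell provisioner runs the equivalent):
bash /vagrant/shared_files/bootstrap_nodes.sh
# Executed on the masters only (placeholder names for the ETCD and control-plane scripts below):
bash /vagrant/shared_files/bootstrap_etcd.sh
bash /vagrant/shared_files/bootstrap_controlplane.sh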
Kubernetes nodes need swap and the firewall disabled and some kernel modules loaded. We also need to install the container runtime, the CNI plugins, kubectl, and a few other packages.
Create the script below with the name bootstrap_nodes.sh and have it available in the shared_files folder.
echo -e "\n##### Disabling swap #####"
sudo sed -i '/swap/d' /etc/fstab
sudo swapoff -a
echo -e "\n##### Disabling firewall #####"
sudo systemctl disable --now ufw >/dev/null 2>&1
echo -e "\n##### Enabling kernel modules required for Containerd #####"
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Apply sysctl params without reboot
sudo sysctl --system
echo -e "\n##### Check Modules #####"
lsmod | grep br_netfilter
lsmod | grep overlay
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
sudo mkdir -p /var/lib/kubernetes/
echo -e "\n##### Installing Container Runtime Interface #####"
echo -e "\n##### 1 - Adding kubernetes repository #####"
sudo mkdir -p /etc/apt/keyrings
# Latest stable Kubernetes minor version (e.g. v1.30), used in the pkgs.k8s.io repository path
KUBE_LATEST=$(curl -L -s https://dl.k8s.io/release/stable.txt | awk 'BEGIN { FS="." } { printf "%s.%s", $1, $2 }')
curl -fsSL https://pkgs.k8s.io/core:/stable:/${KUBE_LATEST}/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/${KUBE_LATEST}/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
echo -e "\n##### 2 - Installing packages #####"
sudo apt update
sudo apt install -y containerd kubernetes-cni kubectl ipvsadm ipset
echo -e "\n##### 3 - Enabling systemd Cgroups in containerd #####"
sudo cp /etc/containerd/config.toml /etc/containerd/config.toml.default
containerd config default | sed 's/SystemdCgroup = false/SystemdCgroup = true/' | sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd.service
systemctl status containerd.service
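If you want an extra sanity check once the script finishes, the commands below confirm that containerd picked up the systemd cgroup setting and that the CNI plugins and kubectl landed where expected. A minimal sketch; nothing here is required by the rest of the guide:
# Confirm containerd is now using the systemd cgroup driver
sudo containerd config dump | grep SystemdCgroup
# CNI plugins installed by the kubernetes-cni package
ls /opt/cni/bin
# kubectl is installed, but there is no API server to talk to yet
kubectl version --client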
A cluster doesn't work without ETCD: it's the key-value store where all cluster state is kept. Let's create a bootstrap script for it that will be executed on each of the masters. For now ETCD runs on the masters themselves (a stacked topology), but it could also live on dedicated machines; in a future update to this repository I'll adjust things to make that possible and make the exercise even harder.
#!/bin/bash
ETCD_VERSION="v3.5.12"
INTERNAL_IP="$(ip addr show enp0s8 | grep "inet " | awk '{print $2}' | cut -d / -f 1)" # IP OF THE MACHINE ITSELF INSTALLING ETCD
MASTER1_IP=$(dig +short master1)
MASTER2_IP=$(dig +short master2)
MASTER3_IP=$(dig +short master3)
ETCD_NAME=$(hostname -s)
CA_CERTIFICATE_KUBERNETES=ca
CA_CERTIFICATE_ETCD=ca
ETCD_SERVER_CERTIFICATE=etcd-server
echo -e "\n##### Downloading ETCD binaries #####"
wget -q --show-progress --https-only --timestamping \
"https://github.com/coreos/etcd/releases/download/${ETCD_VERSION}/etcd-${ETCD_VERSION}-linux-amd64.tar.gz"
tar -xvf etcd-${ETCD_VERSION}-linux-amd64.tar.gz
sudo mv etcd-${ETCD_VERSION}-linux-amd64/etcd* /usr/local/bin/
rm -rf etcd-${ETCD_VERSION}-linux-amd64*
echo -e "\n##### Creating directories used by ETCD #####"
sudo mkdir -p /etc/etcd /var/lib/etcd /var/lib/kubernetes/pki
cd /vagrant/shared_files/pki
echo -e "\n##### Copying certificates used by ETCD #####"
sudo cp ${ETCD_SERVER_CERTIFICATE}.key ${ETCD_SERVER_CERTIFICATE}.crt /etc/etcd/
sudo cp ${CA_CERTIFICATE_KUBERNETES}.crt /var/lib/kubernetes/pki/
sudo chown root:root /etc/etcd/*
sudo chmod 600 /etc/etcd/*
sudo chown root:root /var/lib/kubernetes/pki/*
sudo chmod 600 /var/lib/kubernetes/pki/*
sudo ln -sf /var/lib/kubernetes/pki/${CA_CERTIFICATE_KUBERNETES}.crt /etc/etcd/${CA_CERTIFICATE_KUBERNETES}.crt
echo -e "\n##### Creating service for ETCD #####"
cat <<EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos
[Service]
ExecStart=/usr/local/bin/etcd \\
--name ${ETCD_NAME} \\
--cert-file=/etc/etcd/${ETCD_SERVER_CERTIFICATE}.crt \\
--key-file=/etc/etcd/${ETCD_SERVER_CERTIFICATE}.key \\
--peer-cert-file=/etc/etcd/${ETCD_SERVER_CERTIFICATE}.crt \\
--peer-key-file=/etc/etcd/${ETCD_SERVER_CERTIFICATE}.key \\
--trusted-ca-file=/etc/etcd/${CA_CERTIFICATE_KUBERNETES}.crt \\
--peer-trusted-ca-file=/etc/etcd/${CA_CERTIFICATE_ETCD}.crt \\
--peer-client-cert-auth \\
--client-cert-auth \\
--initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
--advertise-client-urls https://${INTERNAL_IP}:2379 \\
--initial-cluster-token etcd-cluster-0 \\
--initial-cluster master1=https://${MASTER1_IP}:2380,master2=https://${MASTER2_IP}:2380,master3=https://${MASTER3_IP}:2380 \\
--initial-cluster-state new \\
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
echo -e "\n##### Verifying ETCD #####"
sudo ETCDCTL_API=3 etcdctl member list \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/etcd/ca.crt \
--cert=/etc/etcd/etcd-server.crt \
--key=/etc/etcd/etcd-server.key || true
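Besides listing the members, it's worth asking the local endpoint whether it is healthy and which member is currently the leader. A minimal sketch reusing the same local endpoint and certificates as the check above:
sudo ETCDCTL_API=3 etcdctl endpoint health \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.crt \
  --cert=/etc/etcd/etcd-server.crt \
  --key=/etc/etcd/etcd-server.key
# --write-out=table also shows the DB size, raft term and a leader flag per endpoint
sudo ETCDCTL_API=3 etcdctl endpoint status --write-out=table \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.crt \
  --cert=/etc/etcd/etcd-server.crt \
  --key=/etc/etcd/etcd-server.key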
Now let's move on to bootstrapping the control-plane components that run on the masters. The control-plane components will run as systemd services instead of Pods; in the future I may improve this documentation to install them as Pods.
Before defining the services we copy all certificates and kubeconfigs to the right places, then create the services that reference them. Finally we check that everything is running as expected.
#!/bin/bash
echo -e "\n##### Downloading control-plane binaries #####"
KUBE_VERSION=$(curl -L -s https://dl.k8s.io/release/stable.txt)
#https://kubernetes.io/releases/download/#binaries
wget -q --show-progress --https-only --timestamping \
"https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/kube-apiserver" \
"https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/kube-controller-manager" \
"https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/kube-scheduler" \
"https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/kubectl"
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
CA_CERT_NAME=ca
KUBE_APISERVER_CERT_NAME=kube-apiserver
KUBE_APISERVER_CLIENT_CERT_NAME=apiserver-kubelet-client
KUBE_CONTROLLER_MANAGER_CERT_NAME=kube-controller-manager
KUBE_SCHEDULER_CERT_NAME=kube-scheduler
ETCD_CERT_NAME=etcd-server
SERVICE_ACCOUNT=service-account
echo -e "\n##### Putting certificates in the correct places #####"
sudo mkdir -p /var/lib/kubernetes/pki
cd /vagrant/shared_files/pki
sudo cp ${CA_CERT_NAME}.* /var/lib/kubernetes/pki/
sudo cp ${KUBE_APISERVER_CERT_NAME}.* /var/lib/kubernetes/pki/
sudo cp ${KUBE_APISERVER_CLIENT_CERT_NAME}.* /var/lib/kubernetes/pki/
sudo cp ${KUBE_CONTROLLER_MANAGER_CERT_NAME}.* /var/lib/kubernetes/pki/
sudo cp ${KUBE_SCHEDULER_CERT_NAME}.* /var/lib/kubernetes/pki/
sudo cp ${ETCD_CERT_NAME}.* /var/lib/kubernetes/pki/
sudo cp ${SERVICE_ACCOUNT}.* /var/lib/kubernetes/pki/
sudo chown root:root /var/lib/kubernetes/pki/*
sudo chmod 600 /var/lib/kubernetes/pki/*
echo -e "\n##### Putting kubeconfigs in the correct places #####"
sudo mkdir -p /var/lib/kubernetes/
cd /vagrant/shared_files/kubeconfigs
sudo cp ${CA_CERT_NAME}.kubeconfig /var/lib/kubernetes/
sudo cp ${KUBE_APISERVER_CERT_NAME}.kubeconfig /var/lib/kubernetes/
sudo cp ${KUBE_APISERVER_CLIENT_CERT_NAME}.kubeconfig /var/lib/kubernetes/
sudo cp ${KUBE_CONTROLLER_MANAGER_CERT_NAME}.kubeconfig /var/lib/kubernetes/
sudo cp ${KUBE_SCHEDULER_CERT_NAME}.kubeconfig /var/lib/kubernetes/
sudo cp ${ETCD_CERT_NAME}.kubeconfig /var/lib/kubernetes/
sudo cp ${SERVICE_ACCOUNT}.kubeconfig /var/lib/kubernetes/
sudo chmod 600 /var/lib/kubernetes/*.kubeconfig
INTERNAL_IP=$(ip addr show enp0s8 | grep "inet " | awk '{print $2}' | cut -d / -f 1)
LOADBALANCER=$(dig +short loadbalancer)
MASTER1_IP=$(dig +short master1)
MASTER2_IP=$(dig +short master2)
MASTER3_IP=$(dig +short master3)
POD_CIDR=10.244.0.0/16
SERVICE_CIDR=10.96.0.0/16
############################ KUBE-APISERVER ##########################################
echo -e "\n##### Creating service for kube-apiserver #####"
echo -e "\n##### Copying encryption-config.yaml to the right place #####"
cd /vagrant/shared_files/
sudo cp encryption-config.yaml /var/lib/kubernetes/
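# For reference, encryption-config.yaml (created earlier and copied above from shared_files) is an
# EncryptionConfiguration resource; a typical minimal version encrypts Secrets with aescbc and
# keeps identity as a plaintext fallback, roughly:
#   apiVersion: apiserver.config.k8s.io/v1
#   kind: EncryptionConfiguration
#   resources:
#     - resources: [secrets]
#       providers:
#         - aescbc:
#             keys:
#               - name: key1
#                 secret: <base64-encoded 32-byte key>
#         - identity: {}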
cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--advertise-address=${INTERNAL_IP} \\
--allow-privileged=true \\
--apiserver-count=3 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/audit.log \\
--authorization-mode=Node,RBAC \\
--bind-address=0.0.0.0 \\
--client-ca-file=/var/lib/kubernetes/pki/${CA_CERT_NAME}.crt \\
--enable-admission-plugins=NodeRestriction,ServiceAccount \\
--enable-bootstrap-token-auth=true \\
--etcd-cafile=/var/lib/kubernetes/pki/${CA_CERT_NAME}.crt \\
--etcd-certfile=/var/lib/kubernetes/pki/${ETCD_CERT_NAME}.crt \\
--etcd-keyfile=/var/lib/kubernetes/pki/${ETCD_CERT_NAME}.key \\
--etcd-servers=https://${MASTER1_IP}:2379,https://${MASTER2_IP}:2379,https://${MASTER3_IP}:2379 \\
--event-ttl=1h \\
--encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
--kubelet-certificate-authority=/var/lib/kubernetes/pki/${CA_CERT_NAME}.crt \\
--kubelet-client-certificate=/var/lib/kubernetes/pki/${KUBE_APISERVER_CLIENT_CERT_NAME}.crt \\
--kubelet-client-key=/var/lib/kubernetes/pki/${KUBE_APISERVER_CLIENT_CERT_NAME}.key \\
--runtime-config=api/all=true \\
--service-account-key-file=/var/lib/kubernetes/pki/${SERVICE_ACCOUNT}.crt \\
--service-account-signing-key-file=/var/lib/kubernetes/pki/${SERVICE_ACCOUNT}.key \\
--service-account-issuer=https://${LOADBALANCER}:6443 \\
--service-cluster-ip-range=${SERVICE_CIDR} \\
--service-node-port-range=30000-32767 \\
--tls-cert-file=/var/lib/kubernetes/pki/${KUBE_APISERVER_CERT_NAME}.crt \\
--tls-private-key-file=/var/lib/kubernetes/pki/${KUBE_APISERVER_CERT_NAME}.key \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
############################ KUBE-CONTROLLER-MANAGER ##########################################
echo -e "\n##### Creating service for kube-controller-manager #####"
cd /vagrant/shared_files/kubeconfigs
sudo cp ${KUBE_CONTROLLER_MANAGER_CERT_NAME}.kubeconfig /var/lib/kubernetes/
cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
--allocate-node-cidrs=true \\
--authentication-kubeconfig=/var/lib/kubernetes/${KUBE_CONTROLLER_MANAGER_CERT_NAME}.kubeconfig \\
--authorization-kubeconfig=/var/lib/kubernetes/${KUBE_CONTROLLER_MANAGER_CERT_NAME}.kubeconfig \\
--bind-address=127.0.0.1 \\
--client-ca-file=/var/lib/kubernetes/pki/${CA_CERT_NAME}.crt \\
--cluster-cidr=${POD_CIDR} \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/var/lib/kubernetes/pki/${CA_CERT_NAME}.crt \\
--cluster-signing-key-file=/var/lib/kubernetes/pki/${CA_CERT_NAME}.key \\
--controllers=*,bootstrapsigner,tokencleaner \\
--kubeconfig=/var/lib/kubernetes/${KUBE_CONTROLLER_MANAGER_CERT_NAME}.kubeconfig \\
--leader-elect=true \\
--node-cidr-mask-size=24 \\
--requestheader-client-ca-file=/var/lib/kubernetes/pki/${CA_CERT_NAME}.crt \\
--root-ca-file=/var/lib/kubernetes/pki/${CA_CERT_NAME}.crt \\
--service-account-private-key-file=/var/lib/kubernetes/pki/${SERVICE_ACCOUNT}.key \\
--service-cluster-ip-range=${SERVICE_CIDR} \\
--use-service-account-credentials=true \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
############################### KUBE-SCHEDULER #############################################
echo -e "\n##### Creating service for kube-scheduler #####"
cd /vagrant/shared_files/kubeconfigs
sudo cp ${KUBE_SCHEDULER_CERT_NAME}.kubeconfig /var/lib/kubernetes/
cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
--kubeconfig=/var/lib/kubernetes/${KUBE_SCHEDULER_CERT_NAME}.kubeconfig \\
--leader-elect=true \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
############################### START COMPONENTS #############################################
echo -e "\n##### Starting APISERVER CONTROLLER-MANAGER SCHEDULER components #####"
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
echo -e "\n##### Checking component status #####"
cd ~
cp /vagrant/shared_files/kubeconfigs/admin.kubeconfig .
kubectl get componentstatuses --kubeconfig admin.kubeconfig
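componentstatuses is deprecated but still convenient here. As an extra check you can also query the API server's own health endpoint and the cluster info through the same kubeconfig; a minimal sketch:
kubectl get --raw='/readyz?verbose' --kubeconfig admin.kubeconfig
kubectl cluster-info --kubeconfig admin.kubeconfig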
In the case of the workers we can work in a different way, instead of creating certificates by hand.