Bootstrapping the Components
Masters, workers, and etcd nodes run different components, but some are shared. Let's start with the shared ones, which must be run on every master and worker.
These scripts can be called during the Vagrant provisioning of the machines. All of them must be available in shared_files.
Kubernetes nodes need swap and the firewall disabled and a few kernel modules loaded. We also need to install the container runtime, the CNI plugins, kubectl, and some other tools.
Create the script below as bootstrap_nodes.sh and make it available in the shared_files folder.
#!/bin/bash
echo -e "\n##### Disabling swap #####"
sudo sed -i '/swap/d' /etc/fstab
sudo swapoff -a
echo -e "\n##### Disabling the firewall #####"
sudo systemctl disable --now ufw >/dev/null 2>&1
echo -e "\n##### Enabling kernel modules required by containerd #####"
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# sysctl parameters required by the setup; these persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Apply sysctl parameters without rebooting
sudo sysctl --system
echo -e "\n##### Verifying the modules #####"
lsmod | grep br_netfilter
lsmod | grep overlay
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
sudo mkdir -p /var/lib/kubernetes/
echo -e "\n##### Installing the container runtime (containerd) #####"
echo -e "\n##### 1 - Adding the Kubernetes apt repository #####"
sudo mkdir -p /etc/apt/keyrings
KUBE_LATEST=$(curl -L -s https://dl.k8s.io/release/stable.txt | awk 'BEGIN { FS="." } { printf "%s.%s", $1, $2 }')
curl -fsSL https://pkgs.k8s.io/core:/stable:/${KUBE_LATEST}/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/${KUBE_LATEST}/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
echo -e "\n##### 2 - Installing the packages #####"
sudo apt update
sudo apt install -y containerd kubernetes-cni kubectl ipvsadm ipset
echo -e "\n##### 3 - Enabling systemd cgroups in containerd #####"
sudo cp /etc/containerd/config.toml /etc/containerd/config.toml.default
containerd config default | sed 's/SystemdCgroup = false/SystemdCgroup = true/' | sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd.service
systemctl status containerd.service --no-pager
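Two of the one-liners in the script above are easy to sanity-check offline: the awk expression that trims the patch component off the version reported by dl.k8s.io (to build the apt repository path), and the sed that flips containerd's SystemdCgroup flag. A minimal sketch with canned inputs (the version string and the config line are made-up samples, not live output):

```shell
#!/bin/sh
# Canned stand-in for the output of `curl -L -s https://dl.k8s.io/release/stable.txt`
ver="v1.29.3"
# Same awk as the script: split on "." and keep only major.minor
repo=$(printf '%s' "$ver" | awk 'BEGIN { FS="." } { printf "%s.%s", $1, $2 }')
echo "$repo"    # prints v1.29

# Canned stand-in for the relevant line of `containerd config default`
line='            SystemdCgroup = false'
printf '%s\n' "$line" | sed 's/SystemdCgroup = false/SystemdCgroup = true/'
# prints the same line with SystemdCgroup = true
```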
A cluster does not work without etcd: it is the key piece where everything gets stored. Let's create a bootstrap script for it, to be run on each of the masters. For now we run etcd on the masters themselves, but we could have dedicated machines for this purpose; in a future update of this repository I will change a few things to make that possible and make it even more "hard".
#!/bin/bash
ETCD_VERSION="v3.5.12"
INTERNAL_IP="$(ip addr show enp0s8 | grep "inet " | awk '{print $2}' | cut -d / -f 1)" # IP of the machine this etcd instance runs on
MASTER1_IP=$(dig +short master1)
MASTER2_IP=$(dig +short master2)
MASTER3_IP=$(dig +short master3)
ETCD_NAME=$(hostname -s)
CA_CERTIFICATE_KUBERNETES=ca
CA_CERTIFICATE_ETCD=ca
ETCD_SERVER_CERTIFICATE=etcd-server
echo -e "\n##### Downloading the etcd binaries #####"
wget -q --show-progress --https-only --timestamping \
"https://github.com/etcd-io/etcd/releases/download/${ETCD_VERSION}/etcd-${ETCD_VERSION}-linux-amd64.tar.gz"
tar -xvf etcd-${ETCD_VERSION}-linux-amd64.tar.gz
sudo mv etcd-${ETCD_VERSION}-linux-amd64/etcd* /usr/local/bin/
rm -rf etcd-${ETCD_VERSION}-linux-amd64*
echo -e "\n##### Creating the directories used by etcd #####"
sudo mkdir -p /etc/etcd /var/lib/etcd /var/lib/kubernetes/pki
cd /vagrant/shared_files/pki
echo -e "\n##### Copying the certificates used by etcd #####"
sudo cp ${ETCD_SERVER_CERTIFICATE}.key ${ETCD_SERVER_CERTIFICATE}.crt /etc/etcd/
sudo cp ${CA_CERTIFICATE_KUBERNETES}.crt /var/lib/kubernetes/pki/
sudo chown root:root /etc/etcd/*
sudo chmod 600 /etc/etcd/*
sudo chown root:root /var/lib/kubernetes/pki/*
sudo chmod 600 /var/lib/kubernetes/pki/*
sudo ln -s /var/lib/kubernetes/pki/${CA_CERTIFICATE_KUBERNETES}.crt /etc/etcd/${CA_CERTIFICATE_KUBERNETES}.crt
echo -e "\n##### Creating the systemd service for etcd #####"
cat <<EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/etcd-io/etcd
[Service]
ExecStart=/usr/local/bin/etcd \\
--name ${ETCD_NAME} \\
--cert-file=/etc/etcd/${ETCD_SERVER_CERTIFICATE}.crt \\
--key-file=/etc/etcd/${ETCD_SERVER_CERTIFICATE}.key \\
--peer-cert-file=/etc/etcd/${ETCD_SERVER_CERTIFICATE}.crt \\
--peer-key-file=/etc/etcd/${ETCD_SERVER_CERTIFICATE}.key \\
--trusted-ca-file=/etc/etcd/${CA_CERTIFICATE_KUBERNETES}.crt \\
--peer-trusted-ca-file=/etc/etcd/${CA_CERTIFICATE_ETCD}.crt \\
--peer-client-cert-auth \\
--client-cert-auth \\
--initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
--advertise-client-urls https://${INTERNAL_IP}:2379 \\
--initial-cluster-token etcd-cluster-0 \\
--initial-cluster master1=https://${MASTER1_IP}:2380,master2=https://${MASTER2_IP}:2380,master3=https://${MASTER3_IP}:2380 \\
--initial-cluster-state new \\
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
echo -e "\n##### Verifying etcd #####"
sudo ETCDCTL_API=3 etcdctl member list \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/etcd/ca.crt \
--cert=/etc/etcd/etcd-server.crt \
--key=/etc/etcd/etcd-server.key || true
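Why three etcd members? etcd only stays writable while a majority (quorum) of members is reachable: with n members, quorum is n/2 + 1 (integer division), so the cluster tolerates n minus quorum failures. A quick sketch of that arithmetic:

```shell
#!/bin/sh
# Quorum and fault tolerance for etcd clusters of various sizes
for n in 1 3 5; do
  quorum=$(( n / 2 + 1 ))
  tolerated=$(( n - quorum ))
  echo "members=$n quorum=$quorum tolerated_failures=$tolerated"
done
# members=3 gives quorum=2 and tolerates 1 failure, hence the three masters
```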
Now let's move on to bootstrapping the control-plane components, which must live on the masters. The control-plane components will run as systemd services rather than as pods. In the future I may improve this documentation to install them as pods.
Before creating the services we copy all the certificates to the right places, then create the services that use those certificates. Finally, we check that everything is running as expected.
#!/bin/bash
echo -e "\n##### Downloading the control-plane binaries #####"
KUBE_VERSION=$(curl -L -s https://dl.k8s.io/release/stable.txt)
#https://kubernetes.io/releases/download/#binaries
wget -q --show-progress --https-only --timestamping \
"https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/kube-apiserver" \
"https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/kube-controller-manager" \
"https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/kube-scheduler" \
"https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/kubectl"
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
CA_CERT_NAME=ca
KUBE_APISERVER_CERT_NAME=kube-apiserver
KUBE_APISERVER_CLIENT_CERT_NAME=apiserver-kubelet-client
KUBE_CONTROLLER_MANAGER_CERT_NAME=kube-controller-manager
KUBE_SCHEDULER_CERT_NAME=kube-scheduler
ETCD_CERT_NAME=etcd-server
SERVICE_ACCOUNT=service-account
echo -e "\n##### Putting the certificates in the right places #####"
sudo mkdir -p /var/lib/kubernetes/pki
cd /vagrant/shared_files/pki
sudo cp ${CA_CERT_NAME}.* /var/lib/kubernetes/pki/
sudo cp ${KUBE_APISERVER_CERT_NAME}.* /var/lib/kubernetes/pki/
sudo cp ${KUBE_APISERVER_CLIENT_CERT_NAME}.* /var/lib/kubernetes/pki/
sudo cp ${KUBE_CONTROLLER_MANAGER_CERT_NAME}.* /var/lib/kubernetes/pki/
sudo cp ${KUBE_SCHEDULER_CERT_NAME}.* /var/lib/kubernetes/pki/
sudo cp ${ETCD_CERT_NAME}.* /var/lib/kubernetes/pki/
sudo cp ${SERVICE_ACCOUNT}.* /var/lib/kubernetes/pki/
sudo chown root:root /var/lib/kubernetes/pki/*
sudo chmod 600 /var/lib/kubernetes/pki/*
echo -e "\n##### Putting the kubeconfigs in the right places #####"
sudo mkdir -p /var/lib/kubernetes/
cd /vagrant/shared_files/kubeconfigs
sudo cp ${CA_CERT_NAME}.kubeconfig /var/lib/kubernetes/
sudo cp ${KUBE_APISERVER_CERT_NAME}.kubeconfig /var/lib/kubernetes/
sudo cp ${KUBE_APISERVER_CLIENT_CERT_NAME}.kubeconfig /var/lib/kubernetes/
sudo cp ${KUBE_CONTROLLER_MANAGER_CERT_NAME}.kubeconfig /var/lib/kubernetes/
sudo cp ${KUBE_SCHEDULER_CERT_NAME}.kubeconfig /var/lib/kubernetes/
sudo cp ${ETCD_CERT_NAME}.kubeconfig /var/lib/kubernetes/
sudo cp ${SERVICE_ACCOUNT}.kubeconfig /var/lib/kubernetes/
sudo chmod 600 /var/lib/kubernetes/*.kubeconfig
INTERNAL_IP=$(ip addr show enp0s8 | grep "inet " | awk '{print $2}' | cut -d / -f 1)
LOADBALANCER=$(dig +short loadbalancer)
MASTER1_IP=$(dig +short master1)
MASTER2_IP=$(dig +short master2)
MASTER3_IP=$(dig +short master3)
POD_CIDR=10.244.0.0/16
SERVICE_CIDR=10.96.0.0/16
############################ KUBE-APISERVER ##########################################
echo -e "\n##### Creating the systemd service for kube-apiserver #####"
echo -e "\n##### Copying encryption-config.yaml to the right place #####"
cd /vagrant/shared_files/
sudo cp encryption-config.yaml /var/lib/kubernetes/
cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--advertise-address=${INTERNAL_IP} \\
--allow-privileged=true \\
--apiserver-count=3 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/audit.log \\
--authorization-mode=Node,RBAC \\
--bind-address=0.0.0.0 \\
--client-ca-file=/var/lib/kubernetes/pki/${CA_CERT_NAME}.crt \\
--enable-admission-plugins=NodeRestriction,ServiceAccount \\
--enable-bootstrap-token-auth=true \\
--etcd-cafile=/var/lib/kubernetes/pki/${CA_CERT_NAME}.crt \\
--etcd-certfile=/var/lib/kubernetes/pki/${ETCD_CERT_NAME}.crt \\
--etcd-keyfile=/var/lib/kubernetes/pki/${ETCD_CERT_NAME}.key \\
--etcd-servers=https://${MASTER1_IP}:2379,https://${MASTER2_IP}:2379,https://${MASTER3_IP}:2379 \\
--event-ttl=1h \\
--encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
--kubelet-certificate-authority=/var/lib/kubernetes/pki/${CA_CERT_NAME}.crt \\
--kubelet-client-certificate=/var/lib/kubernetes/pki/${KUBE_APISERVER_CLIENT_CERT_NAME}.crt \\
--kubelet-client-key=/var/lib/kubernetes/pki/${KUBE_APISERVER_CLIENT_CERT_NAME}.key \\
--runtime-config=api/all=true \\
--service-account-key-file=/var/lib/kubernetes/pki/${SERVICE_ACCOUNT}.crt \\
--service-account-signing-key-file=/var/lib/kubernetes/pki/${SERVICE_ACCOUNT}.key \\
--service-account-issuer=https://${LOADBALANCER}:6443 \\
--service-cluster-ip-range=${SERVICE_CIDR} \\
--service-node-port-range=30000-32767 \\
--tls-cert-file=/var/lib/kubernetes/pki/${KUBE_APISERVER_CERT_NAME}.crt \\
--tls-private-key-file=/var/lib/kubernetes/pki/${KUBE_APISERVER_CERT_NAME}.key \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
############################ KUBE-CONTROLLER-MANAGER ##########################################
echo -e "\n##### Creating the systemd service for kube-controller-manager #####"
cd /vagrant/shared_files/kubeconfigs
sudo cp ${KUBE_CONTROLLER_MANAGER_CERT_NAME}.kubeconfig /var/lib/kubernetes/
cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
--allocate-node-cidrs=true \\
--authentication-kubeconfig=/var/lib/kubernetes/${KUBE_CONTROLLER_MANAGER_CERT_NAME}.kubeconfig \\
--authorization-kubeconfig=/var/lib/kubernetes/${KUBE_CONTROLLER_MANAGER_CERT_NAME}.kubeconfig \\
--bind-address=127.0.0.1 \\
--client-ca-file=/var/lib/kubernetes/pki/${CA_CERT_NAME}.crt \\
--cluster-cidr=${POD_CIDR} \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/var/lib/kubernetes/pki/${CA_CERT_NAME}.crt \\
--cluster-signing-key-file=/var/lib/kubernetes/pki/${CA_CERT_NAME}.key \\
--controllers=*,bootstrapsigner,tokencleaner \\
--kubeconfig=/var/lib/kubernetes/${KUBE_CONTROLLER_MANAGER_CERT_NAME}.kubeconfig \\
--leader-elect=true \\
--node-cidr-mask-size=24 \\
--requestheader-client-ca-file=/var/lib/kubernetes/pki/${CA_CERT_NAME}.crt \\
--root-ca-file=/var/lib/kubernetes/pki/${CA_CERT_NAME}.crt \\
--service-account-private-key-file=/var/lib/kubernetes/pki/${SERVICE_ACCOUNT}.key \\
--service-cluster-ip-range=${SERVICE_CIDR} \\
--use-service-account-credentials=true \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
############################### KUBE-SCHEDULER #############################################
echo -e "\n##### Creating the systemd service for kube-scheduler #####"
cd /vagrant/shared_files/kubeconfigs
sudo cp ${KUBE_SCHEDULER_CERT_NAME}.kubeconfig /var/lib/kubernetes/
cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
--kubeconfig=/var/lib/kubernetes/${KUBE_SCHEDULER_CERT_NAME}.kubeconfig \\
--leader-elect=true \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
############################### START COMPONENTS #############################################
echo -e "\n##### Starting the kube-apiserver, kube-controller-manager, and kube-scheduler components #####"
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
echo -e "\n##### Checking the status of the components #####"
cd ~
cp /vagrant/shared_files/kubeconfigs/admin.kubeconfig .
kubectl get componentstatuses --kubeconfig admin.kubeconfig
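As an aside, the INTERNAL_IP pipeline used in both scripts just picks the second field of the `inet` line from `ip addr show enp0s8` and strips the /prefix length. A quick offline sketch against a canned line (the address below is a made-up sample, not one of the lab machines):

```shell
#!/bin/sh
# Canned stand-in for one line of `ip addr show enp0s8` output
sample='    inet 192.168.56.11/24 brd 192.168.56.255 scope global enp0s8'
# Same pipeline as the scripts: match the inet line, take field 2, drop the CIDR suffix
ip=$(printf '%s\n' "$sample" | grep "inet " | awk '{print $2}' | cut -d / -f 1)
echo "$ip"   # prints 192.168.56.11
```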
In the case of the workers we can take a different approach, rather than creating certificates by hand.