Bootstrapping the Components
Masters, workers, and etcd nodes run different components, but some are common to all of them. Let's start with the common ones, which must be executed on every master and worker.
These scripts can be called during the Vagrant provisioning of the machines. All of them must be available in shared_files; a sketch of how to invoke them follows.
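As a reference, a minimal sketch of running one of these scripts by hand from the host. It assumes Vagrant's default synced folder, which exposes the project root as /vagrant inside each VM, and the master1 node name used later in this guide:
# Run a bootstrap script on a node from the host:
vagrant ssh master1 -c "bash /vagrant/shared_files/bootstrap_nodes.sh"
# Or call it from the Vagrantfile's shell provisioner instead:
#   node.vm.provision "shell", path: "shared_files/bootstrap_nodes.sh"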
Kubernetes nodes need swap and the firewall disabled and a few kernel modules loaded. We also need to install the container runtime, the CNI plugins, kubectl, and a few other packages.
Create the script below with the name bootstrap_nodes.sh and make it available in the shared_files folder.
echo -e "\n##### Desativando o swap #####"
sudo sed -i '/swap/d' /etc/fstab
sudo swapoff -a
echo -e "\n##### Desativando o firewall #####"
sudo systemctl disable --now ufw >/dev/null 2>&1
echo -e "\n##### Ativando módulos do kernel necessários para o Containerd #####"
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Apply sysctl params without reboot
sudo sysctl --system
echo -e "\n##### Check Modules #####"
lsmod | grep br_netfilter
lsmod | grep overlay
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
sudo mkdir -p /var/lib/kubernetes/
echo -e "\n##### Instalando Container Runtime Interface #####"
echo -e "\n##### 1 - Adicionando o repositório do kubernetes #####"
sudo mkdir -p /etc/apt/keyrings
KUBE_LATEST=$(curl -L -s https://dl.k8s.io/release/stable.txt | awk 'BEGIN { FS="." } { printf "%s.%s", $1, $2 }')
curl -fsSL https://pkgs.k8s.io/core:/stable:/${KUBE_LATEST}/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/${KUBE_LATEST}/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
echo -e "\n##### 2 - Instalando os pacotes #####"
sudo apt update
sudo apt install -y containerd kubernetes-cni kubectl ipvsadm ipset
echo -e "\n##### 3 - Ativando o systemd Cgroups no containerd #####"
sudo cp /etc/containerd/config.toml /etc/containerd/config.toml.default
containerd config default | sed 's/SystemdCgroup = false/SystemdCgroup = true/' | sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd.service
systemctl status containerd.service
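If you want to confirm that containerd came back up and is answering on its socket after the restart, a quick manual check (assuming containerd's default socket path) is:
# Prints client and server versions if the daemon is reachable:
sudo ctr --address /run/containerd/containerd.sock version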
A cluster does not work without etcd: it is the key piece where everything is stored. Let's create a bootstrap script for it that will be executed on each of the masters. For now we are running etcd inside the masters themselves, but we could have dedicated machines for this purpose. In a future update of this repository I will change a few things to make that possible and make it even more "hard".
#!/bin/bash
ETCD_VERSION="v3.5.12"
INTERNAL_IP="$(ip addr show enp0s8 | grep "inet " | awk '{print $2}' | cut -d / -f 1)" # IP DA PRÓPRIA MÁQUINA QUE ESTA INSTALANDO O ETCD
MASTER1_IP=$(dig +short master1)
MASTER2_IP=$(dig +short master2)
MASTER3_IP=$(dig +short master3)
ETCD_NAME=$(hostname -s)
CA_CERTIFICATE_KUBERNETES=ca
CA_CERTIFICATE_ETCD=ca
ETCD_SERVER_CERTIFICATE=etcd-server
echo -e "\n##### Fazendo o download dos binários do ETCD #####"
wget -q --show-progress --https-only --timestamping \
"https://github.com/coreos/etcd/releases/download/${ETCD_VERSION}/etcd-${ETCD_VERSION}-linux-amd64.tar.gz"
tar -xvf etcd-${ETCD_VERSION}-linux-amd64.tar.gz
sudo mv etcd-${ETCD_VERSION}-linux-amd64/etcd* /usr/local/bin/
rm -rf etcd-${ETCD_VERSION}-linux-amd64*
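# Quick sanity check that the etcd binaries are on the PATH:
etcd --version
etcdctl version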
echo -e "\n##### Criando diretórios utilizados pelo ETCD #####"
sudo mkdir -p /etc/etcd /var/lib/etcd /var/lib/kubernetes/pki
cd /vagrant/shared_files/pki
echo -e "\n##### Copiando os certificados usados pelo ETCD #####"
sudo cp ${ETCD_SERVER_CERTIFICATE}.key ${ETCD_SERVER_CERTIFICATE}.crt /etc/etcd/
sudo cp ${CA_CERTIFICATE_KUBERNETES}.crt /var/lib/kubernetes/pki/
sudo chown root:root /etc/etcd/*
sudo chmod 600 /etc/etcd/*
sudo chown root:root /var/lib/kubernetes/pki/*
sudo chmod 600 /var/lib/kubernetes/pki/*
sudo ln -s /var/lib/kubernetes/pki/${CA_CERTIFICATE_KUBERNETES}.crt /etc/etcd/${CA_CERTIFICATE_KUBERNETES}.crt
echo -e "\n##### Criando o service para o ETCD #####"
cat <<EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos
[Service]
ExecStart=/usr/local/bin/etcd \\
--name ${ETCD_NAME} \\
--cert-file=/etc/etcd/${ETCD_SERVER_CERTIFICATE}.crt \\
--key-file=/etc/etcd/${ETCD_SERVER_CERTIFICATE}.key \\
--peer-cert-file=/etc/etcd/${ETCD_SERVER_CERTIFICATE}.crt \\
--peer-key-file=/etc/etcd/${ETCD_SERVER_CERTIFICATE}.key \\
--trusted-ca-file=/etc/etcd/${CA_CERTIFICATE_KUBERNETES}.crt \\
--peer-trusted-ca-file=/etc/etcd/${CA_CERTIFICATE_ETCD}.crt \\
--peer-client-cert-auth \\
--client-cert-auth \\
--initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
--advertise-client-urls https://${INTERNAL_IP}:2379 \\
--initial-cluster-token etcd-cluster-0 \\
--initial-cluster master1=https://${MASTER1_IP}:2380,master2=https://${MASTER2_IP}:2380,master3=https://${MASTER3_IP}:2380 \\
--initial-cluster-state new \\
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
echo -e "\n##### Verificando ETCD #####"
sudo ETCDCTL_API=3 etcdctl member list \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/etcd/ca.crt \
--cert=/etc/etcd/etcd-server.crt \
--key=/etc/etcd/etcd-server.key || true
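Beyond listing the members, the health of every endpoint can be probed as well. The sketch below reuses the same certificates; the --cluster flag makes etcdctl discover and check all members starting from the default local endpoint:
sudo ETCDCTL_API=3 etcdctl endpoint health \
  --cluster \
  --cacert=/etc/etcd/ca.crt \
  --cert=/etc/etcd/etcd-server.crt \
  --key=/etc/etcd/etcd-server.key || true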
Now let's move on to bootstrapping the control-plane components, which must live on the masters. The control-plane components will run as systemd services instead of pods; in the future I may improve this documentation to install them as pods.
Before creating the services we copy all the certificates to the right places and create the services that use those certificates. Finally, we check that everything is running as expected.
#!/bin/bash
echo -e "\n##### Fazendo o download dos binários do control-plane #####"
KUBE_VERSION=$(curl -L -s https://dl.k8s.io/release/stable.txt)
#https://kubernetes.io/releases/download/#binaries
wget -q --show-progress --https-only --timestamping \
"https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/kube-apiserver" \
"https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/kube-controller-manager" \
"https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/kube-scheduler" \
"https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/kubectl"
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
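# Quick sanity check that the control-plane binaries run:
kubectl version --client
kube-apiserver --version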
CA_CERT_NAME=ca
KUBE_APISERVER_CERT_NAME=kube-apiserver
KUBE_APISERVER_CLIENT_CERT_NAME=apiserver-kubelet-client
KUBE_CONTROLLER_MANAGER_CERT_NAME=kube-controller-manager
KUBE_SCHEDULER_CERT_NAME=kube-scheduler
ETCD_CERT_NAME=etcd-server
SERVICE_ACCOUNT=service-account
echo -e "\n##### Colocando os certificados nos lugares corretos #####"
sudo mkdir -p /var/lib/kubernetes/pki
cd /vagrant/shared_files/pki
sudo cp ${CA_CERT_NAME}.* /var/lib/kubernetes/pki/
sudo cp ${KUBE_APISERVER_CERT_NAME}.* /var/lib/kubernetes/pki/
sudo cp ${KUBE_APISERVER_CLIENT_CERT_NAME}.* /var/lib/kubernetes/pki/
sudo cp ${KUBE_CONTROLLER_MANAGER_CERT_NAME}.* /var/lib/kubernetes/pki/
sudo cp ${KUBE_SCHEDULER_CERT_NAME}.* /var/lib/kubernetes/pki/
sudo cp ${ETCD_CERT_NAME}.* /var/lib/kubernetes/pki/
sudo cp ${SERVICE_ACCOUNT}.* /var/lib/kubernetes/pki/
sudo chown root:root /var/lib/kubernetes/pki/*
sudo chmod 600 /var/lib/kubernetes/pki/*
echo -e "\n##### Colocando os kubeconfigs nos lugares corretos #####"
sudo mkdir -p /var/lib/kubernetes/
cd /vagrant/shared_files/kubeconfigs
sudo cp ${CA_CERT_NAME}.kubeconfig /var/lib/kubernetes/
sudo cp ${KUBE_APISERVER_CERT_NAME}.kubeconfig /var/lib/kubernetes/
sudo cp ${KUBE_APISERVER_CLIENT_CERT_NAME}.kubeconfig /var/lib/kubernetes/
sudo cp ${KUBE_CONTROLLER_MANAGER_CERT_NAME}.kubeconfig /var/lib/kubernetes/
sudo cp ${KUBE_SCHEDULER_CERT_NAME}.kubeconfig /var/lib/kubernetes/
sudo cp ${ETCD_CERT_NAME}.kubeconfig /var/lib/kubernetes/
sudo cp ${SERVICE_ACCOUNT}.kubeconfig /var/lib/kubernetes/
sudo chmod 600 /var/lib/kubernetes/*.kubeconfig
INTERNAL_IP=$(ip addr show enp0s8 | grep "inet " | awk '{print $2}' | cut -d / -f 1)
LOADBALANCER=$(dig +short loadbalancer)
MASTER1_IP=$(dig +short master1)
MASTER2_IP=$(dig +short master2)
MASTER3_IP=$(dig +short master3)
POD_CIDR=10.244.0.0/16
SERVICE_CIDR=10.96.0.0/16
############################ KUBE-APISERVER ##########################################
echo -e "\n##### Criando o service para o kube-apiserver #####"
echo -e "\n##### Copiando o encription-config.yaml para o lugar certo #####"
cd /vagrant/shared_files/
sudo cp encryption-config.yaml /var/lib/kubernetes/
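# For reference, encryption-config.yaml (generated in an earlier step) follows
# the apiserver EncryptionConfiguration format; a minimal sketch, with a
# hypothetical key name and the secret elided, looks roughly like this:
#
#   kind: EncryptionConfiguration
#   apiVersion: apiserver.config.k8s.io/v1
#   resources:
#     - resources:
#         - secrets
#       providers:
#         - aescbc:
#             keys:
#               - name: key1
#                 secret: <base64-encoded 32-byte key>
#         - identity: {}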
cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--advertise-address=${INTERNAL_IP} \\
--allow-privileged=true \\
--apiserver-count=3 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/audit.log \\
--authorization-mode=Node,RBAC \\
--bind-address=0.0.0.0 \\
--client-ca-file=/var/lib/kubernetes/pki/${CA_CERT_NAME}.crt \\
--enable-admission-plugins=NodeRestriction,ServiceAccount \\
--enable-bootstrap-token-auth=true \\
--etcd-cafile=/var/lib/kubernetes/pki/${CA_CERT_NAME}.crt \\
--etcd-certfile=/var/lib/kubernetes/pki/${ETCD_CERT_NAME}.crt \\
--etcd-keyfile=/var/lib/kubernetes/pki/${ETCD_CERT_NAME}.key \\
--etcd-servers=https://${MASTER1_IP}:2379,https://${MASTER2_IP}:2379,https://${MASTER3_IP}:2379 \\
--event-ttl=1h \\
--encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
--kubelet-certificate-authority=/var/lib/kubernetes/pki/${CA_CERT_NAME}.crt \\
--kubelet-client-certificate=/var/lib/kubernetes/pki/${KUBE_APISERVER_CLIENT_CERT_NAME}.crt \\
--kubelet-client-key=/var/lib/kubernetes/pki/${KUBE_APISERVER_CLIENT_CERT_NAME}.key \\
--runtime-config=api/all=true \\
--service-account-key-file=/var/lib/kubernetes/pki/${SERVICE_ACCOUNT}.crt \\
--service-account-signing-key-file=/var/lib/kubernetes/pki/${SERVICE_ACCOUNT}.key \\
--service-account-issuer=https://${LOADBALANCER}:6443 \\
--service-cluster-ip-range=${SERVICE_CIDR} \\
--service-node-port-range=30000-32767 \\
--tls-cert-file=/var/lib/kubernetes/pki/${KUBE_APISERVER_CERT_NAME}.crt \\
--tls-private-key-file=/var/lib/kubernetes/pki/${KUBE_APISERVER_CERT_NAME}.key \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
############################ KUBE-CONTROLLER-MANAGER ##########################################
echo -e "\n##### Creating the kube-controller-manager systemd service #####"
cd /vagrant/shared_files/kubeconfigs
sudo cp ${KUBE_CONTROLLER_MANAGER_CERT_NAME}.kubeconfig /var/lib/kubernetes/
cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
--allocate-node-cidrs=true \\
--authentication-kubeconfig=/var/lib/kubernetes/${KUBE_CONTROLLER_MANAGER_CERT_NAME}.kubeconfig \\
--authorization-kubeconfig=/var/lib/kubernetes/${KUBE_CONTROLLER_MANAGER_CERT_NAME}.kubeconfig \\
--bind-address=127.0.0.1 \\
--client-ca-file=/var/lib/kubernetes/pki/${CA_CERT_NAME}.crt \\
--cluster-cidr=${POD_CIDR} \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/var/lib/kubernetes/pki/${CA_CERT_NAME}.crt \\
--cluster-signing-key-file=/var/lib/kubernetes/pki/${CA_CERT_NAME}.key \\
--controllers=*,bootstrapsigner,tokencleaner \\
--kubeconfig=/var/lib/kubernetes/${KUBE_CONTROLLER_MANAGER_CERT_NAME}.kubeconfig \\
--leader-elect=true \\
--node-cidr-mask-size=24 \\
--requestheader-client-ca-file=/var/lib/kubernetes/pki/${CA_CERT_NAME}.crt \\
--root-ca-file=/var/lib/kubernetes/pki/${CA_CERT_NAME}.crt \\
--service-account-private-key-file=/var/lib/kubernetes/pki/${SERVICE_ACCOUNT}.key \\
--service-cluster-ip-range=${SERVICE_CIDR} \\
--use-service-account-credentials=true \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
############################### KUBE-SCHEDULER #############################################
echo -e "\n##### Criando o service para o kube-scheduler #####"
cd /vagrant/shared_files/kubeconfigs
sudo cp ${KUBE_SCHEDULER_CERT_NAME}.kubeconfig /var/lib/kubernetes/
cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
--kubeconfig=/var/lib/kubernetes/${KUBE_SCHEDULER_CERT_NAME}.kubeconfig \\
--leader-elect=true \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
############################### START COMPONENTS #############################################
echo -e "\n##### Iniciando os componentes APISERVER CONTROLLER-MANAGER SCHEDULER #####"
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
echo -e "\n##### Conferindo o status dos componentes #####"
cd ~
cp /vagrant/shared_files/kubeconfigs/admin.kubeconfig .
kubectl get componentstatuses --kubeconfig admin.kubeconfig
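Note that kubectl get componentstatuses has been deprecated since Kubernetes v1.19; as an additional sanity check, the API server's aggregated health endpoint can be queried directly with the same admin kubeconfig:
kubectl cluster-info --kubeconfig admin.kubeconfig
kubectl get --raw='/readyz?verbose' --kubeconfig admin.kubeconfig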
In the case of the workers we can take a different approach, instead of creating certificates by hand for each one.