Baremetal Local
For this installation we will configure only 3 nodes: 1 master and 2 workers.
This will be the base for the multi-master and multi-etcd installations.
Obviously it is not a highly available installation, but it is enough to learn the basic installation requirements.
We will use Vagrant to quickly provision these machines on VirtualBox, so VirtualBox must be installed.
The required packages:
sudo apt-get install vagrant
sudo apt-get install virtualbox
On more recent VirtualBox versions, I had to change its configuration to allow the IP range. Note that tee is used here because a plain sudo echo ... >> would fail: the redirection is performed by the unprivileged shell, not by root.
sudo mkdir -p /etc/vbox
echo "* 10.0.0.0/8 192.168.0.0/16" | sudo tee -a /etc/vbox/networks.conf
echo "* 2001::/64" | sudo tee -a /etc/vbox/networks.conf
The Vagrantfile contains the machine configuration and the calls to the scripts needed for the installation.
The bootstrap.sh file is the script that every node runs at the end of its deploy.
Finally, if you look at the Vagrantfile, a master node runs the bootstrap_master.sh script and a worker runs bootstrap_worker.sh.
In the master script, the cluster is created and the join command is saved to an executable so that each worker can fetch it from the master over SSH and run it.
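A minimal sketch of how this handoff might look inside the two scripts (the real contents live in bootstrap_master.sh and bootstrap_worker.sh in the repository; the sshpass usage and the credentials placeholder below are assumptions for illustration):

```shell
# bootstrap_master.sh (excerpt, sketch): after `kubeadm init`, persist the
# join command as an executable so workers can fetch and run it.
kubeadm token create --print-join-command > /joincluster.sh
chmod +x /joincluster.sh

# bootstrap_worker.sh (excerpt, sketch): copy the join command from the
# master over SSH and run it. The sshpass usage and the root password
# (set earlier by bootstrap.sh) are assumptions, not the repo's exact code.
sshpass -p "<root password>" scp -o StrictHostKeyChecking=no root@master:/joincluster.sh /joincluster.sh
bash /joincluster.sh
```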
Basic Vagrant commands
The machines defined in the Vagrantfile are: master, worker1, and worker2.
Bring up all machines.
vagrant up
Destroy all machines.
vagrant destroy
Bring up a specific machine.
vagrant up master
Destroy a specific machine.
vagrant destroy worker1
Shut down all machines.
vagrant halt
Shut down a specific machine.
vagrant halt worker1
SSH into a machine.
vagrant ssh master
Installation output
❯ vagrant up
Bringing machine 'master' up with 'virtualbox' provider...
Bringing machine 'worker1' up with 'virtualbox' provider...
Bringing machine 'worker2' up with 'virtualbox' provider...
==> master: Importing base box 'ubuntu/jammy64'...
==> master: Matching MAC address for NAT networking...
==> master: Checking if box 'ubuntu/jammy64' version '20220718.0.0' is up to date...
==> master: Setting the name of the VM: master
==> master: Clearing any previously set network interfaces...
==> master: Preparing network interfaces based on configuration...
master: Adapter 1: nat
master: Adapter 2: hostonly
==> master: Forwarding ports...
master: 22 (guest) => 2222 (host) (adapter 1)
==> master: Running 'pre-boot' VM customizations...
==> master: Booting VM...
==> master: Waiting for machine to boot. This may take a few minutes...
master: SSH address: 127.0.0.1:2222
master: SSH username: vagrant
master: SSH auth method: private key
master:
master: Vagrant insecure key detected. Vagrant will automatically replace
master: this with a newly generated keypair for better security.
master:
master: Inserting generated public key within guest...
master: Removing insecure key from the guest if it's present...
master: Key inserted! Disconnecting and reconnecting using new SSH key...
==> master: Machine booted and ready!
==> master: Checking for guest additions in VM...
master: The guest additions on this VM do not match the installed version of
master: VirtualBox! In most cases this is fine, but in rare cases it can
master: prevent things such as shared folders from working properly. If you see
master: shared folder errors, please make sure the guest additions within the
master: virtual machine match the version of VirtualBox you have installed on
master: your host and reload your VM.
master:
master: Guest Additions Version: 6.0.0 r127566
master: VirtualBox Version: 6.1
==> master: Setting hostname...
==> master: Configuring and enabling network interfaces...
==> master: Mounting shared folders...
master: /vagrant => /home/david/projects/pessoais/study-kubernetes/Instalacoes/Baremetal Local
==> master: Running provisioner: shell...
master: Running: /tmp/vagrant-shell20220726-606777-6pw4vn.sh
master: Desativando o swap
master: Desativando o firewall
master: Ativando modulos do kernel necessarios para o containerd
master: Adicionando configuracoes do kernel para o kubernetes
master: Instalando containerd com o systemd de cgroups
master: Adicionando o repo do kubernetes
master: Instalando binarios do Kubernetes (kubeadm, kubelet and kubectl)
master: Ativando a autenticacao por ssh
master: Setando o password do root
master: Atualizando os hosts no arquivo /etc/hosts
==> master: Running provisioner: shell...
master: Running: /tmp/vagrant-shell20220726-606777-flrq4g.sh
master: Fazendo o pull das imagens necessarios para os containers no master
master: Inicializando o cluster
master: Criando a pasta .kube para o user vagrant
master: ##### Criando o .kube/config a partir do admin.conf #####
master: Deploy do cni weavenet
master: WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
master: The connection to the server localhost:8080 was refused - did you specify the right host or port?
master: Criando o comando join para os workers /joincluster.sh
==> worker1: You assigned a static IP ending in ".1" to this machine.
==> worker1: This is very often used by the router and can cause the
==> worker1: network to not work properly. If the network doesn't work
==> worker1: properly, try changing this IP.
==> worker1: Importing base box 'ubuntu/jammy64'...
==> worker1: Matching MAC address for NAT networking...
==> worker1: You assigned a static IP ending in ".1" to this machine.
==> worker1: This is very often used by the router and can cause the
==> worker1: network to not work properly. If the network doesn't work
==> worker1: properly, try changing this IP.
==> worker1: Checking if box 'ubuntu/jammy64' version '20220718.0.0' is up to date...
==> worker1: Setting the name of the VM: worker1
==> worker1: Fixed port collision for 22 => 2222. Now on port 2200.
==> worker1: Clearing any previously set network interfaces...
==> worker1: Preparing network interfaces based on configuration...
worker1: Adapter 1: nat
worker1: Adapter 2: hostonly
==> worker1: Forwarding ports...
worker1: 22 (guest) => 2200 (host) (adapter 1)
==> worker1: Running 'pre-boot' VM customizations...
==> worker1: Booting VM...
==> worker1: Waiting for machine to boot. This may take a few minutes...
worker1: SSH address: 127.0.0.1:2200
worker1: SSH username: vagrant
worker1: SSH auth method: private key
worker1:
worker1: Vagrant insecure key detected. Vagrant will automatically replace
worker1: this with a newly generated keypair for better security.
worker1:
worker1: Inserting generated public key within guest...
worker1: Removing insecure key from the guest if it's present...
worker1: Key inserted! Disconnecting and reconnecting using new SSH key...
==> worker1: Machine booted and ready!
==> worker1: Checking for guest additions in VM...
worker1: The guest additions on this VM do not match the installed version of
worker1: VirtualBox! In most cases this is fine, but in rare cases it can
worker1: prevent things such as shared folders from working properly. If you see
worker1: shared folder errors, please make sure the guest additions within the
worker1: virtual machine match the version of VirtualBox you have installed on
worker1: your host and reload your VM.
worker1:
worker1: Guest Additions Version: 6.0.0 r127566
worker1: VirtualBox Version: 6.1
==> worker1: Setting hostname...
==> worker1: Configuring and enabling network interfaces...
==> worker1: Mounting shared folders...
worker1: /vagrant => /home/david/projects/pessoais/study-kubernetes/Instalacoes/Baremetal Local
==> worker1: Running provisioner: shell...
worker1: Running: /tmp/vagrant-shell20220726-606777-nzd7b9.sh
worker1: Desativando o swap
worker1: Desativando o firewall
worker1: Ativando modulos do kernel necessarios para o containerd
worker1: Adicionando configuracoes do kernel para o kubernetes
worker1: Instalando containerd com o systemd de cgroups
worker1: Adicionando o repo do kubernetes
worker1: Instalando binarios do Kubernetes (kubeadm, kubelet and kubectl)
worker1: Ativando a autenticacao por ssh
worker1: Setando o password do root
worker1: Atualizando os hosts no arquivo /etc/hosts
==> worker1: Running provisioner: shell...
worker1: Running: /tmp/vagrant-shell20220726-606777-7qs118.sh
worker1: Executando o join do cluster
==> worker2: Importing base box 'ubuntu/jammy64'...
==> worker2: Matching MAC address for NAT networking...
==> worker2: Checking if box 'ubuntu/jammy64' version '20220718.0.0' is up to date...
==> worker2: Setting the name of the VM: worker2
==> worker2: Fixed port collision for 22 => 2222. Now on port 2201.
==> worker2: Clearing any previously set network interfaces...
==> worker2: Preparing network interfaces based on configuration...
worker2: Adapter 1: nat
worker2: Adapter 2: hostonly
==> worker2: Forwarding ports...
worker2: 22 (guest) => 2201 (host) (adapter 1)
==> worker2: Running 'pre-boot' VM customizations...
==> worker2: Booting VM...
==> worker2: Waiting for machine to boot. This may take a few minutes...
worker2: SSH address: 127.0.0.1:2201
worker2: SSH username: vagrant
worker2: SSH auth method: private key
worker2: Warning: Remote connection disconnect. Retrying...
worker2: Warning: Connection reset. Retrying...
worker2:
worker2: Vagrant insecure key detected. Vagrant will automatically replace
worker2: this with a newly generated keypair for better security.
worker2:
worker2: Inserting generated public key within guest...
worker2: Removing insecure key from the guest if it's present...
worker2: Key inserted! Disconnecting and reconnecting using new SSH key...
==> worker2: Machine booted and ready!
==> worker2: Checking for guest additions in VM...
worker2: The guest additions on this VM do not match the installed version of
worker2: VirtualBox! In most cases this is fine, but in rare cases it can
worker2: prevent things such as shared folders from working properly. If you see
worker2: shared folder errors, please make sure the guest additions within the
worker2: virtual machine match the version of VirtualBox you have installed on
worker2: your host and reload your VM.
worker2:
worker2: Guest Additions Version: 6.0.0 r127566
worker2: VirtualBox Version: 6.1
==> worker2: Setting hostname...
==> worker2: Configuring and enabling network interfaces...
==> worker2: Mounting shared folders...
worker2: /vagrant => /home/david/projects/pessoais/study-kubernetes/Instalacoes/Baremetal Local
==> worker2: Running provisioner: shell...
worker2: Running: /tmp/vagrant-shell20220726-606777-rfgvit.sh
worker2: Desativando o swap
worker2: Desativando o firewall
worker2: Ativando modulos do kernel necessarios para o containerd
worker2: Adicionando configuracoes do kernel para o kubernetes
worker2: Instalando containerd com o systemd de cgroups
worker2: Adicionando o repo do kubernetes
worker2: Instalando binarios do Kubernetes (kubeadm, kubelet and kubectl)
worker2: Ativando a autenticacao por ssh
worker2: Setando o password do root
worker2: Atualizando os hosts no arquivo /etc/hosts
==> worker2: Running provisioner: shell...
worker2: Running: /tmp/vagrant-shell20220726-606777-ffyab8.sh
worker2: Executando o join do cluster
Now let's log into the master and check that the cluster is up.
❯ vagrant ssh master
vagrant@master:~$ kubectl get nodes
NAME      STATUS   ROLES           AGE     VERSION
master    Ready    control-plane   10m     v1.24.0
worker1   Ready    <none>          8m20s   v1.24.0
worker2   Ready    <none>          6m39s   v1.24.0
vagrant@master:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS        AGE
kube-system   coredns-6d4b75cb6d-2xwkd         1/1     Running   0               13m
kube-system   coredns-6d4b75cb6d-mfbkv         1/1     Running   0               13m
kube-system   etcd-master                      1/1     Running   0               13m
kube-system   kube-apiserver-master            1/1     Running   0               13m
kube-system   kube-controller-manager-master   1/1     Running   0               13m
kube-system   kube-proxy-567z7                 1/1     Running   0               10m
kube-system   kube-proxy-92j9g                 1/1     Running   0               11m
kube-system   kube-proxy-gsvd5                 1/1     Running   0               13m
kube-system   kube-scheduler-master            1/1     Running   0               13m
kube-system   weave-net-25k8d                  2/2     Running   1 (9m17s ago)   10m
kube-system   weave-net-4pvlc                  2/2     Running   1 (13m ago)     13m
kube-system   weave-net-sxsdz                  2/2     Running   1 (11m ago)     11m
vagrant@master:~$
Remote kubectl
Install kubectl on your machine. Note that on a stock Ubuntu system kubectl is not in the default repositories, so this requires the Kubernetes apt repository to be configured first (or install it via snap install kubectl --classic).
sudo apt-get install kubectl
Copy the contents of .kube/config from inside the master machine.
vagrant ssh master
cat /home/vagrant/.kube/config
Paste the entire output:
mkdir -p /home/$USER/.kube
vim /home/$USER/.kube/config # or nano, whatever you prefer
# paste, save and exit
Now just run from your machine:
kubectl get nodes
If you already have a .kube/config configured for another cluster on your machine, create a separate file instead, for example config-vagrant-local, and paste the contents there.
kubectl --kubeconfig <path to config-vagrant-local> get nodes
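The manual copy-and-paste above can also be scripted from the host in one go (a sketch; it assumes the master VM is running and that you execute it from the directory containing the Vagrantfile):

```shell
# Dump the kubeconfig from the master guest straight into a local file.
# `vagrant ssh <machine> -c "<command>"` runs one command on the guest
# and prints its output on the host.
mkdir -p ~/.kube
vagrant ssh master -c "cat /home/vagrant/.kube/config" > ~/.kube/config-vagrant-local
```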
❯ kubectl --kubeconfig /home/$USER/.kube/config-vagrant-local get nodes
NAME      STATUS   ROLES           AGE   VERSION
master    Ready    control-plane   41m   v1.24.0
worker1   Ready    <none>          40m   v1.24.0
worker2   Ready    <none>          38m   v1.24.0
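A common alternative to passing --kubeconfig on every invocation is the KUBECONFIG environment variable, which kubectl supports natively (a sketch; the file name matches the example above):

```shell
# Point kubectl at the extra file for the current shell session only.
export KUBECONFIG=$HOME/.kube/config-vagrant-local
kubectl get nodes

# KUBECONFIG also accepts a colon-separated list: kubectl merges the
# files and exposes the contexts from all of them.
export KUBECONFIG=$HOME/.kube/config:$HOME/.kube/config-vagrant-local
kubectl config get-contexts
```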
There are better ways to manage the kube config; we will look at them during the study.