Baremetal Local
For this installation we will configure just 3 nodes: 1 master and 2 workers.
This will serve as the foundation for later installations with multiple masters and multiple etcd members.
It is clearly not a high-availability installation, but it is enough to learn the basic installation requirements.
We will use Vagrant to quickly provision these machines in VirtualBox, so VirtualBox must be installed.
Install the required prerequisites:
sudo apt-get install vagrant
sudo apt-get install virtualbox
In more recent versions of VirtualBox, host-only networks are restricted to a default IP range, so I had to explicitly allow the ranges used here. Note that `sudo echo ... >>` does not work (the redirection runs as your user, not root), so use tee instead:
sudo mkdir -p /etc/vbox
echo "* 10.0.0.0/8 192.168.0.0/16" | sudo tee -a /etc/vbox/networks.conf
echo "* 2001::/64" | sudo tee -a /etc/vbox/networks.conf
The Vagrantfile contains the machine configuration and the calls to the provisioning scripts.
The bootstrap.sh script runs on every node at the end of that node's deployment.
Finally, as the Vagrantfile shows, a master additionally runs the bootstrap_master.sh script, while a worker runs bootstrap_worker.sh.
The master script creates the cluster and saves the join command in an executable file, which each worker then fetches from the master over SSH and executes.
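The join mechanism described above can be sketched roughly like this. This is a sketch, not the repo's exact scripts; the file path /joincluster.sh matches the provisioning output below, but the SSH details (sshpass, the root password variable) are assumptions based on the "Enabling ssh authentication" and "Setting root password" provisioning steps:

```shell
# bootstrap_master.sh (sketch): after `kubeadm init`, save the join
# command into an executable file that workers can fetch later.
kubeadm token create --print-join-command > /joincluster.sh
chmod +x /joincluster.sh

# bootstrap_worker.sh (sketch): read the join command from the master
# over SSH and execute it locally on the worker.
sshpass -p "$ROOT_PASSWORD" ssh -o StrictHostKeyChecking=no \
    root@master "cat /joincluster.sh" | bash
```

The key design point is that `kubeadm token create --print-join-command` regenerates a valid join command on demand, so workers can join at any time after the master is up.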
Basic Vagrant Commands
The machines defined in the Vagrantfile are: master, worker1, and worker2.
Start all machines.
vagrant up
Destroy all machines.
vagrant destroy
Start a specific machine.
vagrant up master
Destroy a specific machine.
vagrant destroy worker1
Shut down all machines.
vagrant halt
Shut down a specific machine.
vagrant halt worker1
SSH into a machine.
vagrant ssh master
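Two more commands worth knowing while working with this setup:

```shell
# Show the current state (running, poweroff, not created) of every
# machine defined in the Vagrantfile.
vagrant status

# Re-run the provisioning scripts on an already-running machine,
# e.g. after editing bootstrap_master.sh.
vagrant provision master
```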
Installation Output
❯ vagrant up
Bringing machine 'master' up with 'virtualbox' provider...
Bringing machine 'worker1' up with 'virtualbox' provider...
Bringing machine 'worker2' up with 'virtualbox' provider...
==> master: Importing base box 'ubuntu/jammy64'...
==> master: Matching MAC address for NAT networking...
==> master: Checking if box 'ubuntu/jammy64' version '20220718.0.0' is up to date...
==> master: Setting the name of the VM: master
==> master: Clearing any previously set network interfaces...
==> master: Preparing network interfaces based on configuration...
master: Adapter 1: nat
master: Adapter 2: hostonly
==> master: Forwarding ports...
master: 22 (guest) => 2222 (host) (adapter 1)
==> master: Running 'pre-boot' VM customizations...
==> master: Booting VM...
==> master: Waiting for machine to boot. This may take a few minutes...
master: SSH address: 127.0.0.1:2222
master: SSH username: vagrant
master: SSH auth method: private key
master:
master: Vagrant insecure key detected. Vagrant will automatically replace
master: this with a newly generated keypair for better security.
master:
master: Inserting generated public key within guest...
master: Removing insecure key from the guest if it's present...
master: Key inserted! Disconnecting and reconnecting using new SSH key...
==> master: Machine booted and ready!
==> master: Checking for guest additions in VM...
master: The guest additions on this VM do not match the installed version of
master: VirtualBox! In most cases this is fine, but in rare cases it can
master: prevent things such as shared folders from working properly. If you see
master: shared folder errors, please make sure the guest additions within the
master: virtual machine match the version of VirtualBox you have installed on
master: your host and reload your VM.
master:
master: Guest Additions Version: 6.0.0 r127566
master: VirtualBox Version: 6.1
==> master: Setting hostname...
==> master: Configuring and enabling network interfaces...
==> master: Mounting shared folders...
master: /vagrant => /home/david/projects/pessoais/study-kubernetes/Instalacoes/Baremetal Local
==> master: Running provisioner: shell...
master: Running: /tmp/vagrant-shell20220726-606777-6pw4vn.sh
master: Disabling swap
master: Disabling firewall
master: Enabling kernel modules required for containerd
master: Adding kernel configurations for kubernetes
master: Installing containerd with systemd cgroups
master: Adding kubernetes repository
master: Installing Kubernetes binaries (kubeadm, kubelet and kubectl)
master: Enabling ssh authentication
master: Setting root password
master: Updating hosts in /etc/hosts file
==> master: Running provisioner: shell...
master: Running: /tmp/vagrant-shell20220726-606777-flrq4g.sh
master: Pulling required images for containers on master
master: Initializing cluster
master: Creating .kube folder for vagrant user
master: ##### Creating .kube/config from admin.conf #####
master: Deploying weavenet CNI
master: WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
master: The connection to the server localhost:8080 was refused - did you specify the right host or port?
master: Creating join command for workers /joincluster.sh
==> worker1: You assigned a static IP ending in ".1" to this machine.
==> worker1: This is very often used by the router and can cause the
==> worker1: network to not work properly. If the network doesn't work
==> worker1: properly, try changing this IP.
==> worker1: Importing base box 'ubuntu/jammy64'...
==> worker1: Matching MAC address for NAT networking...
==> worker1: You assigned a static IP ending in ".1" to this machine.
==> worker1: This is very often used by the router and can cause the
==> worker1: network to not work properly. If the network doesn't work
==> worker1: properly, try changing this IP.
==> worker1: Checking if box 'ubuntu/jammy64' version '20220718.0.0' is up to date...
==> worker1: Setting the name of the VM: worker1
==> worker1: Fixed port collision for 22 => 2222. Now on port 2200.
==> worker1: Clearing any previously set network interfaces...
==> worker1: Preparing network interfaces based on configuration...
worker1: Adapter 1: nat
worker1: Adapter 2: hostonly
==> worker1: Forwarding ports...
worker1: 22 (guest) => 2200 (host) (adapter 1)
==> worker1: Running 'pre-boot' VM customizations...
==> worker1: Booting VM...
==> worker1: Waiting for machine to boot. This may take a few minutes...
worker1: SSH address: 127.0.0.1:2200
worker1: SSH username: vagrant
worker1: SSH auth method: private key
worker1:
worker1: Vagrant insecure key detected. Vagrant will automatically replace
worker1: this with a newly generated keypair for better security.
worker1:
worker1: Inserting generated public key within guest...
worker1: Removing insecure key from the guest if it's present...
worker1: Key inserted! Disconnecting and reconnecting using new SSH key...
==> worker1: Machine booted and ready!
==> worker1: Checking for guest additions in VM...
worker1: The guest additions on this VM do not match the installed version of
worker1: VirtualBox! In most cases this is fine, but in rare cases it can
worker1: prevent things such as shared folders from working properly. If you see
worker1: shared folder errors, please make sure the guest additions within the
worker1: virtual machine match the version of VirtualBox you have installed on
worker1: your host and reload your VM.
worker1:
worker1: Guest Additions Version: 6.0.0 r127566
worker1: VirtualBox Version: 6.1
==> worker1: Setting hostname...
==> worker1: Configuring and enabling network interfaces...
==> worker1: Mounting shared folders...
worker1: /vagrant => /home/david/projects/pessoais/study-kubernetes/Instalacoes/Baremetal Local
==> worker1: Running provisioner: shell...
worker1: Running: /tmp/vagrant-shell20220726-606777-nzd7b9.sh
worker1: Disabling swap
worker1: Disabling firewall
worker1: Enabling kernel modules required for containerd
worker1: Adding kernel configurations for kubernetes
worker1: Installing containerd with systemd cgroups
worker1: Adding kubernetes repository
worker1: Installing Kubernetes binaries (kubeadm, kubelet and kubectl)
worker1: Enabling ssh authentication
worker1: Setting root password
worker1: Updating hosts in /etc/hosts file
==> worker1: Running provisioner: shell...
worker1: Running: /tmp/vagrant-shell20220726-606777-7qs118.sh
worker1: Executing cluster join
==> worker2: Importing base box 'ubuntu/jammy64'...
==> worker2: Matching MAC address for NAT networking...
==> worker2: Checking if box 'ubuntu/jammy64' version '20220718.0.0' is up to date...
==> worker2: Setting the name of the VM: worker2
==> worker2: Fixed port collision for 22 => 2222. Now on port 2201.
==> worker2: Clearing any previously set network interfaces...
==> worker2: Preparing network interfaces based on configuration...
worker2: Adapter 1: nat
worker2: Adapter 2: hostonly
==> worker2: Forwarding ports...
worker2: 22 (guest) => 2201 (host) (adapter 1)
==> worker2: Running 'pre-boot' VM customizations...
==> worker2: Booting VM...
==> worker2: Waiting for machine to boot. This may take a few minutes...
worker2: SSH address: 127.0.0.1:2201
worker2: SSH username: vagrant
worker2: SSH auth method: private key
worker2: Warning: Remote connection disconnect. Retrying...
worker2: Warning: Connection reset. Retrying...
worker2:
worker2: Vagrant insecure key detected. Vagrant will automatically replace
worker2: this with a newly generated keypair for better security.
worker2:
worker2: Inserting generated public key within guest...
worker2: Removing insecure key from the guest if it's present...
worker2: Key inserted! Disconnecting and reconnecting using new SSH key...
==> worker2: Machine booted and ready!
==> worker2: Checking for guest additions in VM...
worker2: The guest additions on this VM do not match the installed version of
worker2: VirtualBox! In most cases this is fine, but in rare cases it can
worker2: prevent things such as shared folders from working properly. If you see
worker2: shared folder errors, please make sure the guest additions within the
worker2: virtual machine match the version of VirtualBox you have installed on
worker2: your host and reload your VM.
worker2:
worker2: Guest Additions Version: 6.0.0 r127566
worker2: VirtualBox Version: 6.1
==> worker2: Setting hostname...
==> worker2: Configuring and enabling network interfaces...
==> worker2: Mounting shared folders...
worker2: /vagrant => /home/david/projects/pessoais/study-kubernetes/Instalacoes/Baremetal Local
==> worker2: Running provisioner: shell...
worker2: Running: /tmp/vagrant-shell20220726-606777-rfgvit.sh
worker2: Disabling swap
worker2: Disabling firewall
worker2: Enabling kernel modules required for containerd
worker2: Adding kernel configurations for kubernetes
worker2: Installing containerd with systemd cgroups
worker2: Adding kubernetes repository
worker2: Installing Kubernetes binaries (kubeadm, kubelet and kubectl)
worker2: Enabling ssh authentication
worker2: Setting root password
worker2: Updating hosts in /etc/hosts file
==> worker2: Running provisioner: shell...
worker2: Running: /tmp/vagrant-shell20220726-606777-ffyab8.sh
worker2: Executing cluster join
Now let's SSH into the master and check that the cluster is up and running:
❯ vagrant ssh master
vagrant@master:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane 10m v1.24.0
worker1 Ready <none> 8m20s v1.24.0
worker2 Ready <none> 6m39s v1.24.0
vagrant@master:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6d4b75cb6d-2xwkd 1/1 Running 0 13m
kube-system coredns-6d4b75cb6d-mfbkv 1/1 Running 0 13m
kube-system etcd-master 1/1 Running 0 13m
kube-system kube-apiserver-master 1/1 Running 0 13m
kube-system kube-controller-manager-master 1/1 Running 0 13m
kube-system kube-proxy-567z7 1/1 Running 0 10m
kube-system kube-proxy-92j9g 1/1 Running 0 11m
kube-system kube-proxy-gsvd5 1/1 Running 0 13m
kube-system kube-scheduler-master 1/1 Running 0 13m
kube-system weave-net-25k8d 2/2 Running 1 (9m17s ago) 10m
kube-system weave-net-4pvlc 2/2 Running 1 (13m ago) 13m
kube-system weave-net-sxsdz 2/2 Running 1 (11m ago) 11m
vagrant@master:~$
Remote kubectl
Install kubectl on your machine. Note that the kubectl package comes from the Kubernetes apt repository, which must be configured first; on Ubuntu, sudo snap install kubectl --classic also works.
sudo apt-get install kubectl
Copy the contents of .kube/config from inside the master machine:
vagrant ssh master
cat /home/vagrant/.kube/config
Paste the entire output into a config file on your machine.
mkdir -p /home/$USER/.kube
vim /home/$USER/.kube/config # or nano, whichever you prefer
# paste, save and exit
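Alternatively, the copy can be done in one step from the host, without a manual paste. This assumes the default paths used in this setup:

```shell
# Run `cat` inside the master over SSH and redirect its stdout
# to the kubeconfig file on the host.
mkdir -p ~/.kube
vagrant ssh master -c "cat /home/vagrant/.kube/config" > ~/.kube/config
```

Be aware that this overwrites ~/.kube/config; if you already have one, redirect to a separate file as described below.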
Now just run from your machine:
kubectl get nodes
If you already have a .kube/config configured for another cluster on your machine, create another file, for example config-vagrant-local, and paste the contents.
kubectl --kubeconfig <path to config-vagrant-local> get nodes
❯ kubectl --kubeconfig /home/$USER/.kube/config-vagrant-local get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane 41m v1.24.0
worker1 Ready <none> 40m v1.24.0
worker2 Ready <none> 38m v1.24.0
There are better ways to manage kubeconfig, which we will see during the study.
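One such approach, as a quick pointer: kubectl merges every file listed in the KUBECONFIG environment variable, so you can keep one file per cluster and switch between contexts. The context name below is the kubeadm default and may differ in your setup:

```shell
# Make kubectl see both config files at once; their contexts are merged.
export KUBECONFIG=~/.kube/config:~/.kube/config-vagrant-local

# List all available contexts, then switch to the Vagrant cluster's one.
kubectl config get-contexts
kubectl config use-context kubernetes-admin@kubernetes
```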