Baremetal Local Multi Master
The required prerequisites:
sudo apt-get install vagrant
sudo apt-get install virtualbox
In recent versions of VirtualBox, the configuration must be modified to allow host-only networks in the IP ranges used here:
sudo mkdir -p /etc/vbox
echo "* 10.0.0.0/8 192.168.0.0/16" | sudo tee -a /etc/vbox/networks.conf
echo "* 2001::/64" | sudo tee -a /etc/vbox/networks.conf
Note that sudo echo "..." >> file would not work: the output redirection is performed by the unprivileged shell, not by sudo, which is why sudo tee -a is used instead.
The topology proposed here follows the HA topology recommended by the Kubernetes project: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/
For this installation, we will configure 3 master nodes, 2 worker nodes, and 2 load balancers running HAProxy.
The reason for 2 load balancers is that with only one, its failure would cut off all communication with the cluster: there is little point in having high availability for the master nodes if the load balancer remains a single point of failure.
To make the load balancer itself highly available, we will use Keepalived on top of HAProxy.
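A minimal Keepalived configuration for the load balancers might look like the sketch below. The interface name, router id, priorities, and password are assumptions, not taken from the actual scripts; 10.10.10.250 is the address this setup reserves for the Keepalived virtual IP. loadbalancer1 gets the higher priority, so it holds the virtual IP until it fails:

```
# /etc/keepalived/keepalived.conf on loadbalancer1 (sketch; values assumed)
vrrp_instance VI_1 {
    state MASTER           # BACKUP on loadbalancer2
    interface eth1         # interface name is an assumption
    virtual_router_id 51
    priority 101           # lower (e.g. 100) on loadbalancer2
    authentication {
        auth_type PASS
        auth_pass k8spass  # placeholder password
    }
    virtual_ipaddress {
        10.10.10.250       # the reserved virtual IP
    }
}
```

loadbalancer2 would carry the same file with state BACKUP and a lower priority, so it only claims 10.10.10.250 when loadbalancer1 stops answering VRRP advertisements.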
The diagram below illustrates what we are going to build.
The Vagrantfile contains the machine configurations and the necessary script calls for installation.
Explaining the Vagrantfile
All machines will run the bootstrap_ssh.sh script.
This script simply copies the SSH keys from the project files to the users inside the VM and authorizes the key, enabling access with the command vagrant ssh <machine name>.
The machines created are loadbalancer1, loadbalancer2, master1, master2, master3, worker1, worker2.
It's possible to have up to:
- 239 workers (10.10.10.1 to 10.10.10.239)
- 9 masters (10.10.10.241 to 10.10.10.249)
- 4 load balancers (10.10.10.251 to 10.10.10.254)
The address 10.10.10.250 is reserved for the Keepalived virtual IP.
The first part of the Vagrantfile brings up our load balancers (lb), because they are the link between the masters and the workers.

The file responsible for this configuration is bootstrap_lb.sh which will only be executed on load balancer machines.
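For reference, the core of an HAProxy configuration for this topology could look like the following sketch. The section names are assumptions; 6443 is the default kube-apiserver port, and the backend addresses follow the master range described above (10.10.10.241 to 10.10.10.249):

```
# /etc/haproxy/haproxy.cfg (sketch; names assumed)
frontend kubernetes-api
    bind *:6443
    mode tcp
    option tcplog
    default_backend kubernetes-masters

backend kubernetes-masters
    mode tcp
    option tcp-check
    balance roundrobin
    server master1 10.10.10.241:6443 check
    server master2 10.10.10.242:6443 check
    server master3 10.10.10.243:6443 check
```

Because the API server speaks TLS, the proxy runs in TCP (layer 4) mode and simply forwards connections to whichever masters pass the health check.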
Note:
The load balancers are not part of the Kubernetes cluster.
All other nodes that will be part of the Kubernetes cluster run bootstrap_nodes, whether masters or workers. This script contains the installation of containerd and Kubernetes binaries, as well as some necessary requirements that are in the Kubernetes documentation.
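The kernel-module and sysctl prerequisites from the Kubernetes documentation that a script like bootstrap_nodes typically applies can be sketched as follows (file paths follow the upstream docs; treat this as illustrative, not the exact script):

```shell
# Load the kernel modules containerd needs (per the Kubernetes docs)
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# Let iptables see bridged traffic and enable IP forwarding
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system
```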
The execution of the first master (10.10.10.241) will be separate from the others, as only it will initialize the cluster; the others will only join the cluster.
The bootstrap.sh file will be the script that all nodes need to execute at the end of the deployment of each node.
Finally, if you analyze the Vagrantfile: if the node is a master it executes the bootstrap_master.sh script, and if it is a worker it executes the bootstrap_worker.sh script.
In the script for the master, it creates the cluster and saves the join command in an executable file, so that each worker can run it via SSH to the master.
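A hedged sketch of those two steps on the first master follows. The --control-plane-endpoint flag points at the Keepalived virtual IP so the API endpoint survives a load balancer failure; the file name /joincluster.sh is an assumption, not necessarily the path the actual script uses:

```shell
# On master1 only: initialize the cluster behind the virtual IP
kubeadm init --control-plane-endpoint "10.10.10.250:6443" --upload-certs

# Save the worker join command so other nodes can fetch and run it over SSH
# (the path /joincluster.sh is a placeholder)
kubeadm token create --print-join-command > /joincluster.sh
chmod +x /joincluster.sh
```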
Basic Vagrant Commands
The machines defined in the vagrant file are: loadbalancer1, loadbalancer2, master1, master2, master3, worker1, and worker2
Start all machines. Be patient...
vagrant up

Destroy all machines.
vagrant destroy
Start a specific machine.
vagrant up master1
Destroy a specific machine.
vagrant destroy worker1
Shutdown all machines.
vagrant halt
Shutdown a specific machine.
vagrant halt worker1
SSH into a machine
vagrant ssh master1
Proposed Tests
Perform some tests to verify high availability.
Check keepalived on loadbalancer1 and loadbalancer2
vagrant ssh loadbalancer1 # to enter the machine
ip -c -br a # inside the machine to check network interfaces
Notice that Keepalived assigned a virtual IP address (10.10.10.250) to an interface on loadbalancer1. This address is not present on loadbalancer2, because loadbalancer1 currently holds it as the active (MASTER) node.
Stop loadbalancer1 (vagrant halt loadbalancer1) and see if everything continues to work. It should, since that's why we created two load balancers. Also check that the virtual IP has moved to loadbalancer2 (vagrant ssh loadbalancer2 and then ip -c -br a).
Stop loadbalancer2 (vagrant halt loadbalancer2) and watch the cluster go down by running kubectl get nodes inside one of the masters. Then bring the load balancers back up (vagrant up loadbalancer1 and vagrant up loadbalancer2) and check if everything is working again by running the same get nodes command.
Stop one master (vagrant halt master1) and check if everything is still working. In this case, you'll have 2 masters. Enter one of them (vagrant ssh master3) and run kubectl get nodes to see if you get a response. It should work...
Now stop another master (vagrant halt master2) and see what happens if you go from 3 masters to just 1. Run kubectl get nodes on master3.
Should it work? No, because of the consensus protocol: etcd needs a quorum, a majority of its members, to keep serving requests. With only 1 of 3 members left, the survivor cannot tell whether the other two failed or whether it is the one that got partitioned away, so it refuses to act alone. Study consensus again to understand quorum.
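The quorum arithmetic behind this behaviour can be checked with a tiny shell sketch: a cluster of n members keeps accepting writes only while a majority, floor(n/2) + 1, is reachable.

```shell
# quorum(n) = floor(n/2) + 1: the majority an etcd cluster needs to stay writable
quorum() { echo $(( $1 / 2 + 1 )); }

for n in 1 3 5; do
  q=$(quorum "$n")
  echo "members=$n quorum=$q tolerated_failures=$(( n - q ))"
done
```

With 3 masters, quorum is 2: losing one master is fine, losing two is not, which is exactly what this experiment shows. It also shows why even cluster sizes add little: 4 members tolerate the same single failure as 3.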
Bring up one of the masters (vagrant up master2) that was stopped and see if it starts working again... It should...