I built this environment several times while learning how Kubernetes and K3s work. To avoid constantly re-installing on the eventual cluster, I first ran everything in a local environment using Vagrant and automated the install with Ansible.
This made it easy to tear down, fix, and rebuild.
Here's the repo: local-k3s
Once this had matured, I could run the same Ansible playbook against the production machines.
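For the production run, Vagrant is out of the picture, so the playbook needs a static inventory that mirrors the groups Vagrant generates during provisioning. A hypothetical inventory (the hostnames are placeholders, not my real machines) might look like:

[control_plane]
kube-101.example.com

[worker]
kube-102.example.com
kube-103.example.com

Then it's just:

$ ansible-playbook -i production playbooks/playbook.yml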
For documentation purposes, here's the Vagrantfile that builds a three-VM K3s cluster, calling Ansible during provisioning to manage the install and configuration.
IMAGE_NAME = "cloud-image/ubuntu-24.04"
NODES = 3

Vagrant.configure("2") do |config|
  config.ssh.insert_key = false
  config.vm.disk :disk, size: "15GB", primary: true

  config.vm.provider "virtualbox" do |v|
    v.memory = 2048
    v.cpus = 2
  end

  (1..NODES).each do |i|
    config.vm.define "local-kube-10#{i}" do |node|
      node.vm.box = IMAGE_NAME
      node.vm.network "private_network", ip: "192.168.56.#{i + 100}"
      node.vm.hostname = "local-kube-10#{i}"

      # Provision once the final node is up
      if i == NODES
        node.vm.provision "ansible" do |ansible|
          ansible.playbook = "playbooks/playbook.yml"
          ansible.extra_vars = {
            server_host_name: "{{ inventory_hostname }}"
          }
          ansible.groups = {
            "control_plane" => ["local-kube-101"],
            "worker"        => (2..NODES).map { |n| "local-kube-10#{n}" },
          }
          ansible.limit = "all"
        end
      end
    end
  end
end
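With the Vagrantfile in place, bringing the cluster up and pointing kubectl at it looks roughly like this. This is a sketch: the kubeconfig path is the K3s default (/etc/rancher/k3s/k3s.yaml), and the sed rewrite assumes the control plane's private-network IP from the Vagrantfile above.

$ vagrant up
$ vagrant ssh local-kube-101 -c "sudo cat /etc/rancher/k3s/k3s.yaml" > kubeconfig.yaml
$ sed -i 's/127.0.0.1/192.168.56.101/' kubeconfig.yaml
$ export KUBECONFIG=$PWD/kubeconfig.yaml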
Once the cluster is set up and configured, you should be able to start interacting with it.
$ kubectl get nodes
NAME             STATUS   ROLES                  AGE     VERSION
local-kube-101   Ready    control-plane,master   5m52s   v1.33.1+k3s1
local-kube-102   Ready    <none>                 4m29s   v1.33.1+k3s1
local-kube-103   Ready    <none>                 4m29s   v1.33.1+k3s1