
Building a Kubernetes Cluster in my basement

[Photo: two servers of my Kubernetes cluster in a 19″ rack]

“I know Kubernetes.” Often repeated in interviews or talks. But do I? Does anyone really? Kubernetes is such a complex topic, encompassing a multitude of surrounding tools, deployment strategies, and general networking and security skills. I have worked with Amazon EKS, Google GKE and minikube in the past. To really deepen my knowledge, it was high time to build my own bare-metal cluster (including the hardware) from scratch in my basement! Here’s how it went.

Hardware for my bare metal Kubernetes Cluster

For the hardware I decided on a 19″ server rack, because it looks way cooler than a bunch of tower PCs. My general strategy, however, was to use consumer hardware instead of dedicated server hardware. My reasoning was that I’ll use the cluster for learning and teaching purposes and don’t really need a chassis intrusion sensor or redundant power supplies. If it breaks, it breaks. I decided against Raspberry Pis because I didn’t want to deal with ARM peculiarities. And if the whole project fails, I can still use the hardware as a personal computer or media box. This decision made the whole project affordable.

I decided on two servers, one controller and one worker node, which I realize is not enough for any kind of production use case, but is ideal for teaching and learning. I already plan on adding another node later this year.

Motherboard and power supply

I bought an AMD Ryzen 7 (AM5) CPU and the cheapest available motherboard. I was interested in 2.5G Ethernet support and an M.2 slot for PCIe M.2 SSDs; fast networking and fast storage seemed important. In hindsight, it was an unnecessary luxury to get be quiet! CPU coolers and power supplies. It is, after all, a basement, and noise does not really matter. I also should have gotten 2U instead of 4U chassis, for future extensibility of my rack. A 2.5G switch completes the setup.

Software choices

Once the machines were up, Linux was installed pretty quickly, and the same goes for containerd. Building a Kubernetes cluster from scratch means kubeadm, kubelet and kubectl. Then I got sucked into the research rabbit hole of “Flannel, Calico, kube-router… what should I use?” In the end I took a shortcut and used the k0s distribution, which makes some sensible choices regarding networking, storage, the container engine etc. and probably saved me quite some time. I would also have liked MicroK8s, but installing snap is a no-go for me. I quickly added an nginx ingress controller, the Kubernetes Dashboard and some sample workloads.
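For reference, here is a minimal sketch of how a two-node k0s setup can be bootstrapped; the download URL is the official k0s install script, and the token file path is just a placeholder:

    # on both nodes: download the k0s binary via the official install script
    curl -sSLf https://get.k0s.sh | sudo sh

    # on the controller node: install and start the controller service
    sudo k0s install controller
    sudo k0s start

    # still on the controller: create a join token for the worker
    sudo k0s token create --role=worker > worker-token

    # on the worker node: join the cluster using that token
    sudo k0s install worker --token-file ./worker-token
    sudo k0s start

    # verify from the controller
    sudo k0s kubectl get nodes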

[Screenshot: the Harbor user interface]

It took me surprisingly long to install Harbor (I had never installed it before, to be honest) for my private registry. I had to copy the CA certificate to both machines (as well as to all clients that access the cluster), because otherwise I’d get a certificate error on every pull. Everything I found online about reloading CA certificates was about Docker, not containerd. In the end I realized that these are not production servers and I can just reboot them to reload my CA certificates.
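For anyone trying the same thing, this is roughly what the certificate step looks like on a Debian/Ubuntu host. The file name harbor-ca.crt is a placeholder, and the k0sworker service name assumes k0s was installed as a worker service; rebooting, as I did, achieves the same thing.

    # copy the Harbor CA certificate into the system trust store (Debian/Ubuntu layout)
    sudo cp harbor-ca.crt /usr/local/share/ca-certificates/harbor-ca.crt
    sudo update-ca-certificates

    # make containerd pick up the new CA; with the containerd bundled into k0s,
    # restarting the k0s service (or simply rebooting) does the trick
    sudo systemctl restart k0sworker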

Pushing to Harbor and then using kubectl or helm to deploy my workloads on the Kubernetes cluster works great; a minimal example of that workflow is sketched below. The next step is a proper CI/CD pipeline. I wonder if I should install Jenkins X or GitLab, or something else entirely? Feel free to tell me your thoughts in the comments.
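To illustrate the push-and-deploy workflow (the registry hostname, project, image and chart names below are placeholders):

    # tag a locally built image for the private Harbor registry and push it
    docker tag myapp:1.0 harbor.example.local/library/myapp:1.0
    docker push harbor.example.local/library/myapp:1.0

    # deploy it to the cluster directly ...
    kubectl create deployment myapp --image=harbor.example.local/library/myapp:1.0

    # ... or via a Helm chart that takes the image as values
    helm upgrade --install myapp ./charts/myapp \
      --set image.repository=harbor.example.local/library/myapp \
      --set image.tag=1.0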
