Kubernetes

2025
  As usual, I have an aversion to creating “pet” VMs, so, sticking to the Rancher philosophy of “cattle, not pets” (a philosophy I strongly agree with), I decided to deploy my Forgejo runners on RKE2. Of course, deploying Forgejo runners in Kubernetes comes with its own issues, chiefly that, unlike GitLab, Forgejo cannot natively create pods in a cluster. The recommended way to deploy runners in Kubernetes is via DinD (Docker in Docker), which means my workflows will run in containers, inside a DinD container (on RKE2), inside a container (on Harvester, also RKE2). It really is containers all the way down!
  To save you from thinking too hard, dear reader, the answer is: incredibly lazy. I am pretty sure that’s how I got into this career in the first place. Lazy as I am, I have grown tired of remembering to restart my deployments and spell-check my posts when I make changes to this site. So I did what any proper lazy person would do: I spent more time automating my tasks rather than just being careful and diligent.
  As much as I love Proxmox, if it is anything, it is basic. This is a double-edged sword, of course: it is rock solid and stable (foreshadowing), but it also offers few of the features I want, especially as a Kubernetes user.
2024
As discussed in the previous post, I set out to redeploy my cluster and enable backups. This time around I wanted to try out Kasten K10 (from Veeam); I fully admit that at times all I want is a GUI. I just want to take a quick peek and make sure everything is good to go. Velero works well enough, but the eye candy caught my attention.
It has been about four months since I last redeployed my homelab, and I have grown bored of my setup once again. In the previous series, “Deploying Harvester,” I went over the process I went through to deploy Harvester (the right way). My initial impressions of Harvester were not good, but after taking a step back and redeploying with a more appropriate configuration (and buying backup batteries), everything became much more stable. So why the switch?
If you have read the previous two posts in this series, you will know the migration to Harvester has had its speed bumps. Most of the issues came down to using too many experimental features and deploying the cluster in an unsupported fashion. At this point it seems best to not only completely redo the physical nodes but also make tweaks to the VMs. The goals for this redeployment will be the following:
If, like me, you generally prefer NOT to use Helm charts but would rather use Kustomize, you obviously won’t be able to escape the ubiquity of Helm. This is not a condemnation of Helm; it is the superior way of distributing software to consumers when done right (GitLab, you know what you did). However, in my opinion Kustomize is the better long-term solution, so I prefer to template Helm charts out, then split them by Kind and have Kustomize reference each file separately. Below is a very simple bash function to split a templated Helm chart into individual files and place them into a directory named “base”. Note: you will need yq.
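A minimal sketch of what such a splitting helper could look like, assuming yq v4 (Mike Farah's Go implementation); the function name and file-naming scheme here are illustrative, not necessarily the original's:

```shell
#!/usr/bin/env bash
# Sketch: split a multi-document YAML stream (e.g. the output of
# `helm template`) into one file per resource under ./base.
# Assumes yq v4; names are illustrative.
split_chart() {
  mkdir -p base
  # yq's --split-exp (-s) writes each YAML document in the stream to the
  # file named by the expression, appending a .yml extension automatically.
  yq -s '"base/" + (.kind | downcase) + "-" + .metadata.name' -
}

# Usage sketch: render the chart, then split it into ./base:
#   helm template my-release my-chart | split_chart
```

Each generated file can then be listed individually under `resources:` in the Kustomize base.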
This post was not written during the struggles, so some details may be missing as they fade from memory. It seems best to write about my failures here, as the process of moving to Harvester actually resulted in me redeploying my lab twice. My first Harvester implementation was severely flawed.
For some time I have needed not only to study for my CKA but also to find an easier way to deploy new clusters and test new applications. Before this endeavour, the best option was K3d, which worked well for trying out applications, but if I wanted to play with a new CNI I was out of luck. The next option was to deploy a cluster to VirtualBox, which again works, but my desktop only has so many resources available to it, and the setup, even with Vagrant, was far from bulletproof. As a result, it was time to find a solution for the lab. The goal was to cure my cluster-building woes: I wanted to get a cluster up and running with as little friction as possible, but I didn’t want a cluster with a bunch of caveats; this needed to be a full cluster with all the bells and whistles.
Add externalTrafficPolicy: Local to a Kubernetes Service to preserve client source IPs.
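A minimal illustration of the setting, which applies to NodePort and LoadBalancer Services; the name, selector, and ports below are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service        # placeholder name
spec:
  type: LoadBalancer
  # Local routes traffic only to pods on the node that received it,
  # preserving the client source IP instead of SNATing it.
  externalTrafficPolicy: Local
  selector:
    app: my-app           # placeholder selector
  ports:
    - port: 80
      targetPort: 8080
```

The trade-off is that nodes without a matching pod drop the traffic (and fail load-balancer health checks), so traffic can be spread unevenly across pods.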