Summary

As discussed in the previous post, I set out to redeploy my cluster and intended to enable backups this time around. I wanted to try out Kasten K10 (from Veeam); I fully admit that at times all I want is a GUI. I just want to take a quick peek and make sure everything is good to go. That is why I decided to give K10 a shot: Velero works well enough, but the eye candy caught my attention.

Summary

It has been about four months since I last re-deployed my homelab, and I have grown bored of my setup once again. In the previous series, “Deploying Harvester”, I went over the process of deploying Harvester (the right way). My initial impressions of Harvester were not good, but after taking a step back and redeploying with a more appropriate configuration (and buying backup batteries), everything became much more stable. So why the switch?

Summary

Recently I deployed HomeBox to my cluster. HomeBox is a fairly nifty program for home inventory management. One feature I like in particular is the ability to print QR codes for items and locations. This “quickbit” isn’t intended to cover HomeBox itself, but it’s worth mentioning since it seems like a useful tool and spurred this need. One problem is that HomeBox can’t print directly to any printer; you need to download the QR code and print it yourself. As stated in other posts, I simply do not use Windows, which means support for more obscure things like label printing is limited. Thankfully I found brother_ql, which lets you print JPEGs to your printer. Below are some sample commands for printing said JPEGs.
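A minimal sketch of those commands, assuming the pyusb backend and a QL-800 connected over USB; the model, device address, and label size are placeholders for my setup, so swap in your own:

# list connected printers on the pyusb backend to find the device address
brother_ql --backend pyusb discover

# print a downloaded QR code onto 62mm endless label tape
brother_ql --backend pyusb --model QL-800 --printer usb://0x04f9:0x209b print --label 62 qr-code.jpg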

Adding TMPFS To NixOS

- 1 min read

Summary

During the NixOS install process, the installer will not typically (at least in my experience) add a tmpfs entry for /tmp, so I need to add it post-install. Simply add the following to your Nix config:

fileSystems."/tmp" = { fsType = "tmpfs"; };

For some reason the NixOS search site does not display any information about the specialFSTypes you can specify in fileSystems.<name>.fsType, so I had to find that myself. Thankfully, as usual, all modules have a “Declared in” section where you can look through the code, and that is where I found specialFSTypes.

Encrypting Unencrypted ZFS Datasets

- 2 mins read

Summary

I recently discovered that ZFS datasets are sent unencrypted by default and need to be sent with a flag to preserve their encrypted status. Well, too late: I had already migrated my data and deleted the old datasets by the time I discovered this. After doing some research I found that encrypting an already-created dataset in place is not possible, so to fix this you need to send the unencrypted dataset to an encrypted dataset.
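A minimal sketch of that migration, assuming a pool named tank with the unencrypted data in tank/data; the dataset names and passphrase key format are placeholders:

# snapshot the unencrypted dataset so it can be sent
zfs snapshot tank/data@migrate

# create an encrypted parent dataset (prompts for a passphrase)
zfs create -o encryption=on -o keyformat=passphrase -o keylocation=prompt tank/encrypted

# receive the plain stream under the encrypted parent so the copy inherits encryption;
# if sending with -R or -p, add -x encryption to zfs receive so the stream's
# encryption=off property does not override the parent's
zfs send tank/data@migrate | zfs receive tank/encrypted/data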

Summary

If you have read the previous two posts in this series, you will know the migration to Harvester has had its speed bumps. Most of the issues came down to using too many experimental features and deploying the cluster in an unsupported fashion. At this point it seems best not just to completely redo the physical nodes but also to make tweaks to the VMs. The goals for this redeployment are the following:

Split Helm Charts

- 1 min read

Summary

If, like me, you generally prefer NOT to use Helm charts but would rather use Kustomize, you obviously won’t be able to escape the ubiquity of Helm. This is not a condemnation of Helm; it is the superior way of distributing software to consumers when done right (GitLab, you know what you did). However, in my opinion Kustomize is the better long-term solution, so I prefer to template Helm charts out, split them by Kind, and have Kustomize reference each file separately. Below is a very simple bash function to split a templated Helm chart into individual files and place them into a directory named “base”. Note: you will need yq.
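A minimal sketch of that function, assuming mikefarah’s yq v4 (which can split multi-document YAML with --split-exp) and that every rendered manifest has a kind and metadata.name; the function name and file-naming scheme are my own choices:

# split_chart <release> <chart>: template the chart and write one file
# per manifest into ./base, named kind-name.yml
split_chart() {
  local release="$1" chart="$2"
  mkdir -p base
  helm template "$release" "$chart" |
    (cd base && yq e --split-exp '.kind + "-" + .metadata.name' '.' -)
}

Each resulting file can then be listed under resources in a kustomization.yaml inside the base directory.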

BPG Proxmox Terraform Provider

- 1 min read

While I have moved away from using Proxmox in my home lab, I noticed a large number of Terraform users have been using the Telmate/proxmox provider. In my experience this provider is almost abandoned and seriously lags behind bpg/proxmox, which has fewer bugs and more features.

Summary

This post was not written during the struggles, so some details may be missing as they fade from memory. It seems best to write about my failures here, as the process of moving to Harvester actually resulted in me redeploying my lab twice. My first Harvester implementation was severely flawed.

Hardware setup

Due to the hardware I had available, I thought it would be best to assign the three SFF machines I had as Harvester master nodes. In hindsight this provided no real benefit: Harvester handles the promotion and demotion of nodes to master or worker by default. While it is good practice to keep workloads off of master nodes in traditional Kubernetes clusters, Harvester somewhat breaks this model. Harvester does not support/condone deploying workloads to the Harvester cluster itself (same as Rancher MCM); the intended model is to deploy downstream clusters for those workloads, which most likely provides the security benefit an operator would be looking for. This is not to say that someone with the hardware available should not dedicate certain nodes as masters, only that in my lab, with limited hardware, I did not stand to gain much.

Summary

For some time I have needed not only to study for my CKA but also to find an easier way to deploy new clusters and test new applications. Before this endeavour the best option was K3d, which worked well for trying out applications, but if I wanted to play with a new CNI I was out of luck. The next option was to deploy a cluster to VirtualBox, which again works, but my desktop only has so many resources available to it, and the setup, even with Vagrant, was far from bulletproof. As a result it was time to find a solution for the lab. The goal was to solve my cluster-building woes: I wanted it to be easy to get a cluster up and running with as little friction as possible, but I didn’t want a cluster with a bunch of caveats; this needed to be a full cluster with all the bells and whistles.