As much as I love Proxmox, if it is anything, it is basic. This is a double-edged sword, of course: it is rock solid and stable (foreshadowing), but it is also exceptionally basic, with few of the features I want, especially as a Kubernetes user.
I had been planning to move from Proxmox to Harvester (again) for a little while, but the move was accelerated by a bug1 I happened across when upgrading from v8 to v9. The Ceph cluster became unstable post-upgrade, with the manager services segfaulting in a continuous loop. It turned out to be an issue with the Python interpreter2, and before the fix became available my Ceph cluster toppled over and became totally unresponsive; I certainly played a role in the final failure (whoops). At that point I decided to simply tear down the Proxmox cluster as a whole and begin setting up Harvester.
I recently needed to flash some Sonoff S31 power switches and realized I had forgotten to write down what I did and how to do it. Thankfully the process (thanks to ESPHome) is exceptionally easy, so this is a quick write-up explaining the disassembly and flashing process.
Disassembly
First, pull the grey power button plate off the S31 switch; this can be done with a small flathead screwdriver.
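For the flashing side, an ESPHome configuration along these lines works for the S31. This is a minimal sketch assuming the commonly documented S31 pinout (relay on GPIO12, button on GPIO0, CSE7766 power sensor on the UART at 4800 baud); the entity names are my own placeholders:

```yaml
esphome:
  name: s31

esp8266:
  board: esp01_1m

# The S31's UART pins are wired to the CSE7766 power-monitoring chip,
# so serial logging must be disabled
logger:
  baud_rate: 0

uart:
  rx_pin: RX
  baud_rate: 4800

sensor:
  - platform: cse7766
    current:
      name: "S31 Current"
    voltage:
      name: "S31 Voltage"
    power:
      name: "S31 Power"

switch:
  - platform: gpio
    name: "S31 Relay"
    pin: GPIO12

binary_sensor:
  - platform: gpio
    name: "S31 Button"
    pin:
      number: GPIO0
      mode: INPUT_PULLUP
      inverted: true
```

With the board exposed and a USB serial adapter wired up, `esphome run s31.yaml` compiles and flashes it.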
In the previous post (Installing NixOS on BTRFS: Part 1) I realized, only after installing NixOS, that I had forgotten to encrypt my installation, so I have decided to reinstall. This is a small follow-up to that post.
Setting up the storage
Like before, I opted for BTRFS RAID-0 with two 1TB NVMe SSDs, but I made one small change: I will not be using a swap partition this time around. After some consideration, I don't see the value in adding swap to my HTPC system.
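The layout can be sketched roughly as below. The partition names are assumptions (check `lsblk` on your own machine first), and these commands are destructive:

```shell
# Assumed partition names for the two NVMe drives -- verify with lsblk first!
DEV1=/dev/nvme0n1p2
DEV2=/dev/nvme1n1p2

# Encrypt both drives with LUKS, then open the mappers
cryptsetup luksFormat "$DEV1"
cryptsetup luksFormat "$DEV2"
cryptsetup open "$DEV1" crypt0
cryptsetup open "$DEV2" crypt1

# BTRFS RAID-0: stripe data across both mappers; metadata is mirrored,
# an arguably safer choice than striping it as well
mkfs.btrfs -L nixos -d raid0 -m raid1 /dev/mapper/crypt0 /dev/mapper/crypt1
```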
It has been many months since I decided to update my HTPC running NixOS. Sadly, failing to update the machine as new NixOS versions became available left its OS too far behind, and it is no longer compatible with the current version of my NixOS config. I was left with many errors when attempting to upgrade, and some may not have been resolvable, since some packages required newer compilers that were not available on the system. After a couple of hours of debugging I realized this was not worth the effort and decided to simply scrap the system and reinstall. This new install will be just about the simplest install possible, so it is a good starting point.
If, like me, you have a very small Synapse deployment but have joined (and since left) some very large rooms, your server may still be reaching out to those rooms' servers. I discovered this when I noticed my IDS/IPS was catching outbound connections to certain GeoIP-restricted locations (Iran, Russia, China, Saudi Arabia, etc.). Thankfully most of the destination ports were 8448 (the default Synapse federation port), so it was fairly obvious which service was responsible. I first took a look at the database and saw about 10k rows worth of destinations Synapse was reaching out to; I had joined some large rooms, so this was not shocking. This was generating many false positives, and since I left these rooms a long time ago there is no need for my server to keep communicating with them. Below is how I cleared the rooms from my server.
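One way to do this is Synapse's admin Delete Room API. This is a hedged sketch: the homeserver URL, the token variable, and the room ID are all placeholders for your own deployment, and the room must be one you have already left:

```shell
# Assumptions: the admin API is reachable at $SYNAPSE and $TOKEN holds an
# admin access token; the room ID below is a placeholder
SYNAPSE="https://matrix.example.com"
ROOM='!SomeLargeRoom:example.org'

# Block the room and purge everything Synapse still stores for it
curl -X DELETE \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"block": true, "purge": true}' \
  "$SYNAPSE/_synapse/admin/v1/rooms/$ROOM"
```

Repeating this for each stale room is what finally quiets the outbound federation attempts.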
In my previous post, “Backup Server Woes”, I finally set up an S3 backup box, with some trials and tribulations of course. Sadly, through this process I learned a couple of things about the backup strategy I was using, and they weren't good. Previously I was using K10. K10 has a nice little web UI I can proxy through Rancher MCM, which makes management nice, as the UI is never exposed other than through the Rancher UI.
So what's wrong with K10? Technically there's not really anything wrong with K10 itself, but the combination of K10 + Ceph seems to be the problem. K10 creates a PVC to offload data to your destination, which isn't abnormal, but it's the way it does this that confuses and annoys me.
It occurred to me that I am long past due for deploying an actual backup box for the lab. For years now I have relied on either a third party (S3), external HDDs, or my desktop (what else will I do with 20TB of ZFS?) to temporarily hold data before moving it back to whatever cluster or application I am going to redeploy. This means I have never actually had a truly reliable device dedicated to backups. While my site here may not show it (yet), in the past year I have redeployed my lab four times so far: bare-metal RKE2, Harvester, Harvester (again), and then Proxmox, and in the years prior the story was roughly the same. At this point I think I have the janky way of redeploying a cluster down to a science, but it's time for that to change.
As discussed in the previous post, I set out to redeploy my cluster and intended to set up backups along the way. This time around I wanted to try out Kasten K10 (from Veeam); I fully admit that at times all I want is a GUI. I just want to take a quick peek and make sure everything is good to go. That is why I decided to give K10 a shot: Velero works well enough, but the eye candy caught my attention.
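For reference, the install itself is just a Helm chart. This is a sketch based on Kasten's published chart; the release name and namespace are the conventional ones, so adjust to taste:

```shell
# Add Kasten's chart repository and install K10 into its usual namespace
helm repo add kasten https://charts.kasten.io/
helm repo update
kubectl create namespace kasten-io
helm install k10 kasten/k10 --namespace kasten-io
```

Once the pods settle, the dashboard service is what gets proxied through Rancher.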
It has been about four months since I last redeployed my homelab, and I have grown bored of my setup once again. In the previous series, “Deploying Harvester”, I went over the process I followed to deploy Harvester (the right way). My initial impressions of Harvester were not good, but after taking a step back and redeploying with a more appropriate configuration (and buying backup batteries), everything became much more stable. So why the switch?
Recently I deployed Homebox to my cluster. Homebox is a fairly nifty program for home inventory management. One feature I like in particular is the ability to print QR codes for items and locations. This “quickbit” isn't intended to cover Homebox itself, but it is worth mentioning, as it seems like a useful tool and it spurred this need. One problem is that Homebox can't print directly to any printer; you need to download the QR code and print it yourself. As I have stated in other posts, I simply do not use Windows, which means support for more obscure things like label printing is limited. Thankfully I found brother_ql, which allows you to print JPEGs to your printer. Below are some sample commands for printing said JPEGs.
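A sketch of the kind of invocation brother_ql accepts. The printer model, USB vendor/product IDs, label size, and file name here are assumptions; substitute your own:

```shell
# Assumptions: a QL-700 on USB (04f9:2042) with a 62mm endless roll --
# substitute your own model, printer URI, and label size
brother_ql --backend pyusb --model QL-700 \
  --printer usb://0x04f9:0x2042 \
  print --label 62 qr-code.jpg

# The model and printer can also be set once via environment variables,
# which shortens subsequent print commands
export BROTHER_QL_MODEL=QL-700
export BROTHER_QL_PRINTER=usb://0x04f9:0x2042
brother_ql print --label 62 qr-code.jpg
```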