If, like me, you run a very small Synapse deployment but have joined (and since left) some very large rooms, your server may still be reaching out to the servers from those old rooms. I discovered this when my IDS/IPS started catching outbound connections to GeoIP-restricted locations (Iran, Russia, China, Saudi Arabia, etc.). Thankfully most of the destination ports were 8448 (the default Matrix federation port), so it was fairly obvious which service was responsible. When I first took a look at the database I saw roughly 10k destinations Synapse was reaching out to; I had joined some large rooms, so this was not shocking. This traffic generates a lot of false positives, and since I left these rooms long ago there is no need for my server to keep communicating with them. Below is how I cleared the rooms from my server.
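The exact steps live in the full post, but as a rough sketch, the Synapse Admin API can list the rooms the server still knows about and purge the ones you have already left. This assumes the admin API is reachable (matrix.example.com here is a placeholder), that `$ADMIN_TOKEN` holds a server admin's access token, and that the room ID below is a placeholder you would replace with a real one from the listing:

```bash
# List rooms the server still knows about, largest first
curl -s -H "Authorization: Bearer $ADMIN_TOKEN" \
  'https://matrix.example.com/_synapse/admin/v1/rooms?order_by=joined_members' \
  | jq '.rooms[] | {room_id, name, joined_local_members}'

# Purge a room the server has already left (v2 Delete Room API);
# "purge": true removes all of its events from the database
curl -s -X DELETE \
  -H "Authorization: Bearer $ADMIN_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"purge": true}' \
  'https://matrix.example.com/_synapse/admin/v2/rooms/!placeholder:example.org'
```

Once the stale rooms are gone, Synapse should stop retrying federation against their (often long-dead) servers.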
In my previous post, “Backup Server Woes”, I finally set up an S3 backup box, with some trials and tribulations of course. Sadly, through that process I learned a couple of things about the backup strategy I had been using, and they weren't good. Previously I was using K10, which has a nice little web UI I can proxy through Rancher MCM; that makes management pleasant, since the UI is never exposed other than through the Rancher UI.
So what's wrong with K10? Technically there's not really anything wrong with K10 itself; the combination of K10 + Ceph seems to be the problem. K10 creates a PVC to offload data to your destination, which isn't abnormal, but it's the way it does this that confuses and annoys me.
It occurred to me that I am long past due for deploying an actual backup box for the lab. For years now I have relied on either a third party (S3), external HDDs, or my desktop (what else will I do with 20TB of ZFS?) to temporarily hold data before moving it back to whatever cluster or application I am redeploying. This means I have never actually had a true, reliable device dedicated to backups. While my site here may not show it (yet), in the past year I have redeployed my lab four times so far: from bare-metal RKE2, to Harvester, to Harvester (again), to Proxmox, and in the years prior the story was roughly the same. At this point I have the janky way of redeploying a cluster down to a science, but it's time for that to change.
As discussed in the previous post, I set out to redeploy my cluster and intended to enable backups this time around. I wanted to try out Kasten K10 (from Veeam); I fully admit that at times all I want is a GUI. I just want to take a quick peek and make sure everything is good to go. That is why I decided to give K10 a shot: Velero works well enough, but the eye candy caught my attention.
It has been about four months since I last redeployed my homelab, and I have grown bored of my setup once again. In the previous series, “Deploying Harvester”, I went over the process of deploying Harvester (the right way). My initial impressions of Harvester were not good, but after taking a step back and redeploying with a more appropriate configuration (and buying backup batteries), everything became much more stable. So why the switch?
Recently I deployed HomeBox to my cluster. HomeBox is a fairly nifty program for home inventory management. One feature I like in particular is the ability to print QR codes for items and locations. This “quickbit” isn't intended to go over HomeBox itself, but it's worth mentioning as it seems like a useful tool and it spurred this need. One problem is that HomeBox can't print directly to any printer; you need to download the QR code and print it yourself. As stated in other posts, I simply do not use Windows, which means support for more obscure things like label printing is limited. Thankfully I found brother_ql, which allows you to print JPEGs to your printer. Below are some sample commands for printing said JPEGs.
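As a hedged example of the kind of commands involved: the printer model, USB ID, label size, and image filename below are all assumptions for a QL-series printer and would need adjusting for your hardware (lsusb will show the right USB ID):

```bash
# Install the CLI tool
pip install brother_ql

# Print a QR code image exported from HomeBox to a Brother label printer.
# Assumptions: a QL-800 over USB (0x04f9:0x209b) loaded with 62mm endless labels.
brother_ql --backend pyusb --model QL-800 --printer usb://0x04f9:0x209b \
  print --label 62 homebox-qr.png
```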
During the install process, NixOS will not typically (at least in my experience) add a tmpfs entry for /tmp, so I need to add it post-install. Simply add the following to your Nix config:
fileSystems."/tmp" = { fsType = "tmpfs"; };
For some reason the NixOS search site does not display any information about the special filesystem types (specialFSTypes) that you can specify in fileSystems.<name>.fsType, so I had to find that myself. Thankfully, as usual, every module option has a “Declared in” section where you can look through the code, and that's where I found specialFSTypes.
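For completeness, a quick way to check that the entry actually took effect after rebuilding (a reboot may be needed before /tmp is remounted, since it is in use):

```bash
# Apply the new fileSystems."/tmp" entry
sudo nixos-rebuild switch

# Confirm the mount; the FSTYPE column should read "tmpfs"
findmnt /tmp
```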
I recently discovered that ZFS datasets are sent unencrypted by default and that they need to be sent raw (the -w flag) to preserve their encrypted status. Too late: I had already migrated my data and deleted the old datasets by the time I discovered this. After doing some research I found that encrypting an already-created dataset in place is not possible, so to fix this you need to send the unencrypted dataset into an encrypted dataset.
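A minimal sketch of that fix, with placeholder names throughout (tank/plain is the existing unencrypted dataset, tank/secure is a new encrypted parent, backuppool is a remote-ish destination):

```bash
# Create an encrypted parent dataset (prompts for a passphrase);
# datasets received underneath it inherit its encryption
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase \
  -o keylocation=prompt tank/secure

# Snapshot the unencrypted dataset and send it into the encrypted parent;
# the data is rewritten on receive, so the copy ends up encrypted
zfs snapshot tank/plain@migrate
zfs send tank/plain@migrate | zfs recv tank/secure/plain

# For future replication of encrypted datasets, send raw (-w) so the stream
# stays encrypted instead of being decrypted on the sending side
zfs send -w tank/secure/plain@migrate | zfs recv backuppool/plain
```

Once the encrypted copy is verified, the old unencrypted dataset can be destroyed.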
If you have read the previous two posts in this series, you will know the migration to Harvester has had its speed bumps. Most of the issues came down to using too many experimental features and deploying the cluster in an unsupported fashion. At this point it seems best to completely redo not just the physical nodes but also to tweak the VMs. The goals for this redeployment are the following:
If, like me, you generally prefer NOT to use Helm charts but would rather use Kustomize, you obviously won't be able to escape the ubiquity of Helm. This is not a condemnation of Helm; it is the superior way of distributing software to consumers when done right (GitLab, you know what you did). However, in my opinion Kustomize is the better long-term solution, so I prefer to template Helm charts out, split them by Kind, and have Kustomize reference each file separately. Below is a very simple bash function to split a templated Helm chart into individual files and place them into a directory named “base”. Note: you will need yq.
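The full function is in the post itself; as a sketch of the idea, assuming mikefarah's yq v4 (for --split-exp and downcase), something along these lines works, with the function name and example chart being placeholders:

```bash
# Template a Helm chart and split the output into base/<kind>-<name>.yml
# Usage: helm_to_base <release-name> <chart> [extra helm args...]
helm_to_base() {
  local release="$1" chart="$2"
  shift 2
  mkdir -p base
  # yq -s writes one file per YAML document, named by the expression (.yml is appended)
  helm template "$release" "$chart" "$@" \
    | yq -s '"base/" + (.kind | downcase) + "-" + .metadata.name'
}

# Example: helm_to_base ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx
```

From there each generated file can be listed under resources: in a kustomization.yaml inside base.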