Featured Posts
K10, when creating a PVC to copy data from, will only use the parent PVC's settings, and as a result must create a PVC with the same access mode. This means if you back up an RWX or RWO PVC, K10 will create a clone with the same access mode, and there's the rub. Normally this would be fine, but the way Ceph CSI handles this is a little different from other CSIs (I think). When a new PVC is created from the snapshot of another PVC and the access mode is set to RWX or RWO, Ceph CSI will clone the volume entirely, which causes Ceph to attempt to duplicate the whole volume. So if you have a 3 TB volume you would like to back up, it must be cloned in full before it becomes available to a pod. As one can imagine, this causes massive delays in backups and requires all the storage to be duplicated before it can be backed up.
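To illustrate the mechanism, here is a minimal sketch of the kind of snapshot-sourced PVC involved; the names and StorageClass are hypothetical, but the shape matches the standard Kubernetes `dataSource` API. Because `accessModes` matches the parent PVC (RWX here), Ceph CSI falls back to a full clone before the PVC can bind:

```yaml
# Hypothetical restore PVC; names and StorageClass are assumptions.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: k10-restore-example        # hypothetical name
spec:
  storageClassName: ceph-block     # hypothetical Ceph CSI StorageClass
  accessModes:
    - ReadWriteMany                # inherited from the parent PVC
  resources:
    requests:
      storage: 3Ti
  dataSource:                      # source the data from a snapshot
    name: k10-snapshot-example     # hypothetical VolumeSnapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
```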
Recent Posts
Only relevant devices are displayed; “leaf” switches and APs are excluded.
As usual I have an aversion to creating “pet” VMs, so, sticking to the Rancher philosophy of “cattle, not pets” (a philosophy I strongly agree with), I decided to deploy my Forgejo runners on RKE2. Of course, deploying Forgejo runners in Kubernetes comes with its own issues: mainly that, unlike GitLab, Forgejo cannot create pods in a cluster natively. The recommended way to deploy runners in Kubernetes is via DinD (Docker in Docker), which means my workflows will run in containers, inside a DinD container (on RKE2), inside a container (on Harvester, also RKE2). It really is containers all the way down!
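The DinD pattern above can be sketched as a Deployment with a privileged `docker:dind` sidecar that the runner talks to over TLS; this is a rough outline under assumptions, not my exact manifest, and the runner image tag is an assumption:

```yaml
# Sketch of a runner + DinD sidecar; image tags and paths are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: forgejo-runner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: forgejo-runner
  template:
    metadata:
      labels:
        app: forgejo-runner
    spec:
      containers:
        - name: runner
          image: code.forgejo.org/forgejo/runner:latest  # assumed image
          env:
            - name: DOCKER_HOST                 # point the runner at the sidecar
              value: tcp://localhost:2376
            - name: DOCKER_TLS_VERIFY
              value: "1"
            - name: DOCKER_CERT_PATH
              value: /certs/client
          volumeMounts:
            - name: docker-certs
              mountPath: /certs
        - name: dind
          image: docker:dind                    # Docker-in-Docker sidecar
          securityContext:
            privileged: true                    # DinD requires privilege
          env:
            - name: DOCKER_TLS_CERTDIR          # dind generates TLS certs here
              value: /certs
          volumeMounts:
            - name: docker-certs
              mountPath: /certs
      volumes:
        - name: docker-certs                    # shared cert volume
          emptyDir: {}
```

The `emptyDir` volume lets the sidecar hand its generated client certificates to the runner, which is the usual way the two containers are wired together.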
To save you from thinking too hard, dear reader, the answer is: incredibly lazy. I am pretty sure that’s how I got into this career in the first place. As the lazy man I am, I have grown tired of remembering to restart my deployments and spell-check my posts when I make changes to this site. So I did what any proper lazy person would do: I spent more time automating my tasks rather than just being careful and diligent.
As much as I love Proxmox, if it is anything, it is basic. This is a double-edged sword, of course: it is rock solid and stable (foreshadowing), but it is also exceptionally basic, with few of the features I want, especially as a Kubernetes user.
I recently needed to flash some Sonoff S31 power switches, and realized I forgot to write down what I did and how to do it. Thankfully the process (thanks to ESPHome) is exceptionally easy. So this is a quick write-up explaining the disassembly and flashing process.
In the previous post, “Installing NixOS on BTRFS: Part 1”, I realized only after installing NixOS that I had forgotten to encrypt my installation, so I have decided to reinstall. This is a small follow-up to that post.
It has been many months since I decided to update my HTPC running NixOS. Sadly, failing to update the machine as new NixOS versions became available left its OS too far behind to be compatible with the current version of my NixOS config. I was left with many errors when attempting to upgrade, and some may not have been resolvable, as some packages required newer compilers that were not available on the system. After a couple of hours of debugging I realized this was not worth the effort and decided to simply scrap the system and reinstall. This new install will be just about the simplest install possible, so it is a good starting point.
If, like me, you have a very small Synapse deployment but have joined (and since left) some very large rooms, your server may still be reaching out to those rooms’ servers. I discovered this when I noticed my IDS/IPS was catching outbound connections to certain GeoIP-restricted locations (Iran, Russia, China, Saudi Arabia, etc.). Thankfully, most of the destination ports were 8448 (the default Synapse federation port), so it was fairly obvious which service was responsible. At first I took a look at the database and saw about 10,000 destinations Synapse was reaching out to; I had joined some large rooms, so this was not shocking. This was generating many false positives, and since I left these rooms a long time ago there is no need for my server to keep communicating with them. Below is how I cleared the rooms from my server.
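For a sense of scale, the remote servers Synapse tracks live in its `destinations` table; this is a rough sketch assuming a PostgreSQL-backed Synapse, and the `retry_last_ts` column is how I recall the schema, so verify against your own database:

```sql
-- Count how many remote homeservers Synapse is tracking.
SELECT count(*) FROM destinations;

-- Peek at the most recently retried destinations
-- (retry_last_ts is a millisecond timestamp; column name assumed).
SELECT destination, retry_last_ts
FROM destinations
ORDER BY retry_last_ts DESC
LIMIT 20;
```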