Encrypting Unencrypted ZFS Datasets

- 2 mins read

Summary

I recently discovered that ZFS datasets are sent unencrypted by default and need to be sent with a flag to preserve their encrypted status. Unfortunately, by the time I discovered this I had already migrated my data and deleted the old datasets. After some research I found that encrypting an existing dataset in place is not possible, so the fix is to send the unencrypted dataset to a new, encrypted dataset.
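
A minimal sketch of the send/receive fix, assuming example pool and dataset names (tank/data, tank/encrypted) and an interactive passphrase:

  # Create an encrypted parent dataset; a non-raw receive into it
  # inherits encryption from the parent.
  zfs create -o encryption=on -o keyformat=passphrase tank/encrypted

  # Snapshot the plain dataset and send it into the encrypted tree.
  zfs snapshot tank/data@migrate
  zfs send tank/data@migrate | zfs receive tank/encrypted/data

  # For future sends of an already-encrypted dataset, a raw stream
  # (the flag mentioned above) keeps it encrypted end to end.
  zfs send -w tank/encrypted/data@migrate | zfs receive otherpool/data

Once the copy is verified, the old unencrypted dataset can be destroyed and the new one renamed or remounted in its place.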

Summary

If you have read the previous two posts in this series, you will know the migration to Harvester has had its speed bumps. Most of the issues came down to using too many experimental features and deploying the cluster in an unsupported fashion. At this point it seems best to completely redo not just the physical nodes but also to make tweaks to the VMs. The goals for this redeployment are the following:

Split Helm Charts

- 1 min read

Summary

If, like me, you generally prefer NOT to use Helm charts and would rather use Kustomize, you obviously won't be able to escape the ubiquity of Helm. This is not a condemnation of Helm; when done right, it is the superior way of distributing software to consumers (GitLab, you know what you did). However, in my opinion Kustomize is the better long-term solution, so I prefer to template Helm charts out, split them by Kind, and have Kustomize reference each file separately. Below is a very simple bash function to split a templated Helm chart into individual files and place them in a directory named “base”. Note: you will need yq.
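
A minimal sketch of such a function, assuming mikefarah's yq v4 (for its -s/--split-exp flag); the release and chart arguments are placeholders:

  # Render a chart and write one file per manifest into ./base,
  # named after each document's Kind and metadata.name.
  split_chart() {
    mkdir -p base
    helm template "$1" "$2" \
      | yq -s '"base/" + .kind + "_" + .metadata.name' -
  }

  # Usage: split_chart my-release ./my-chart

Empty documents in the rendered output (templates that render to nothing) may need to be filtered out first; the resulting files can then be listed under resources in a kustomization.yaml.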

Summary

This post was not written during the struggles themselves, so some details may be missing as they fade from memory. It seems best to write about my failures here, as the process of moving to Harvester actually resulted in me redeploying my lab twice. My first Harvester implementation was severely flawed.

Hardware setup

Due to the hardware I had available, I thought it would be best to assign the three SFF machines as Harvester master nodes. In hindsight this provided no real benefit: Harvester handles the promotion and demotion of nodes to master by default. While it is good practice to keep workloads off of master nodes in traditional Kubernetes clusters, Harvester somewhat breaks this model. Harvester does not support or condone deploying workloads to the Harvester cluster itself (same as Rancher MCM); the intended model is to deploy downstream clusters for those workloads, which most likely provides the security benefit an operator would be looking for. This is not to say that someone with the hardware available should not dedicate certain nodes as masters, but in my lab, with limited hardware, I did not stand to gain much.

Summary

For some time I have needed not only to study for my CKA but also to find an easier way to deploy new clusters and test new applications. Before this endeavour the best option was K3d, which worked well for trying out applications, but if I wanted to play with a new CNI I was out of luck. The next option was to deploy a cluster to VirtualBox, which again works, but my desktop only has so many resources, and the setup, even with Vagrant, was far from bulletproof. As a result it was time to find a solution for the lab's cluster-building woes. I wanted it to be easy to get a cluster up and running with as little friction as possible, but I didn't want a cluster with a bunch of caveats; this needed to be a full cluster with all the bells and whistles.

Configure FreshRSS With OIDC Auth

- 3 mins read

Summary

FreshRSS has relatively good documentation, but I did find a couple of things confusing when attempting to add OIDC authentication via Keycloak. As a result, the below steps should guide others in setting up their own instances.

Deployment

Versions

  • FreshRSS: 1.23.1
    • Apache variant; the Alpine image does not work with OIDC auth at this time
  • Keycloak: 21.1.2

Setup Keycloak OIDC

In the realm you intend to use, create a new OIDC client.
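
As a sketch, the client can also be created with Keycloak's admin CLI, and FreshRSS then pointed at it through its OIDC environment variables. The realm, hostnames, and secret below are placeholders:

  # Create the confidential client (kcadm.sh prompts for the admin password).
  kcadm.sh config credentials --server https://keycloak.example.com \
    --realm master --user admin
  kcadm.sh create clients -r homelab \
    -s clientId=freshrss \
    -s protocol=openid-connect \
    -s publicClient=false \
    -s 'redirectUris=["https://freshrss.example.com/*"]'

  # FreshRSS reads its OIDC settings from the environment, e.g. with Docker
  # (variable names per FreshRSS's OIDC documentation).
  docker run -d --name freshrss \
    -e OIDC_ENABLED=1 \
    -e OIDC_PROVIDER_METADATA_URL=https://keycloak.example.com/realms/homelab/.well-known/openid-configuration \
    -e OIDC_CLIENT_ID=freshrss \
    -e OIDC_CLIENT_SECRET=changeme \
    -e OIDC_X_FORWARDED_HEADERS="X-Forwarded-Host X-Forwarded-Port X-Forwarded-Proto" \
    freshrss/freshrss:1.23.1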

Summary

I have come to the realization that online documentation around configuring Nextcloud to use SAML is lacking. I am not an expert by ANY means but I know enough to get things working with some trial and error. The following post is more or less a TL;DR of what to set to enable SAML auth in Nextcloud via Keycloak.

Deployment

Versions

  • Nextcloud: 14.5.0
    • SSO & SAML authentication: 6.0.1
  • Keycloak: 21.1.2

Setup Nextcloud SAML

Below are the settings needed for each section of the SAML settings page. Do note that most settings are hidden, so you will need to expand each section.
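
For reference, a minimal sketch of scripting the same settings with the user_saml app's occ commands; the option keys are assumed to mirror the UI fields, and the Keycloak URLs are examples:

  # Enable the app and create a SAML provider configuration.
  php occ app:enable user_saml
  php occ saml:config:create

  # Point provider 1 at Keycloak (option keys assumed; verify the
  # actual keys with `php occ saml:config:get`).
  php occ saml:config:set 1 \
    --general-uid_mapping="username" \
    --idp-entityId="https://keycloak.example.com/realms/homelab" \
    --idp-singleSignOnService.url="https://keycloak.example.com/realms/homelab/protocol/saml" \
    --idp-x509cert="$(cat keycloak-cert.pem)"

The settings page itself remains the source of truth for the full list of fields.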

IP Whitelisting in Traefik

- 2 mins read

TL;DR

Add externalTrafficPolicy: Local to a Kubernetes Service to preserve source IPs.

The Issue

By default, Kubernetes will not pass the source IP to a LoadBalancer service. Usually this is not an issue, but it became one as I re-combined all my rke2 clusters. Before, it was simple to place internal-only applications on one of the two internal-only clusters; as I move back to a single cluster, Traefik needs to be made aware of source IPs so whitelisting can be used.
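
As a sketch, the policy can be applied with a patch against the Traefik service; the namespace and service name are assumptions for a typical install, so adjust them for your deployment:

  # Route external traffic only to local endpoints so the client
  # source IP is preserved for Traefik's whitelisting.
  kubectl -n kube-system patch svc traefik \
    -p '{"spec":{"externalTrafficPolicy":"Local"}}'

Note that with Local, traffic is only delivered to nodes actually running a Traefik pod, so the external load balancer needs health checks to avoid sending traffic to nodes without one.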

Summary

There appears to be a lack of options for home cameras that meet my needs/expectations. As a result, I set out to build my own cameras to fill this gap. The intent is to build basic home security cameras that meet the following requirements:

  • Ethernet
  • PoE/PoE+
  • A “moderate” or better camera quality (think 1080p or higher resolution)
  • No microphone
  • No PTZ functions
  • No “cloud” functions
  • Frigate compatible

Justifications

The majority of these requirements are driven by privacy and security concerns.