Summary

This post was not written during the struggles themselves, so some details may be missing as they fade from memory. It seems best to write about my failures here, as the process of moving to Harvester actually resulted in me redeploying my lab twice. My first Harvester implementation was severely flawed.

Hardware setup

Due to the hardware I had available, I thought it would be best to assign the three SFF machines I had as Harvester master nodes. In hindsight this provided no real benefit: Harvester handles the promotion and demotion of nodes to master or worker roles by default. While it is good practice to keep workloads off of master nodes in traditional Kubernetes clusters, Harvester somewhat breaks this model. Harvester does not support or condone deploying workloads to the Harvester cluster itself (same as Rancher MCM); the intended model is to deploy downstream clusters for those workloads, which most likely provides the security benefit an operator would be looking for. This is not to say that someone with the hardware available should not dedicate certain nodes as masters, but in my lab, with the limited hardware I had, I did not stand to gain much.

Struggles

Some of the choices described below are explicitly discouraged by the Harvester team, and some were simply not well thought out on my part.

Rancher vCluster

The Rancher vCluster addon was not (at the time of this writing) recommended and was technically an alpha feature. Of course that wouldn’t stop me from making poor decisions. One of the primary issues I encountered with the vCluster addon was its memory constraints. By default the memory limits were far too low; simply deploying a second or third guest cluster could cause the vCluster to be reaped by the OOM killer. So mid Terraform deploy, Rancher would crash, then generally crash loop until the guest cluster became available and things stabilized. While this was not a show stopper, it would cause Terraform to fail, which then needed to be rerun in order for Terraform to save the current state.

The second issue I encountered with the vCluster was that, without some rework, Rancher MCM itself could not be scaled up for redundancy inside the vCluster. Scaling Rancher up would cause the built-in Helm chart to deploy whole new instances of Rancher MCM, leaving me with three separate MCMs all running in the vCluster and clamoring for control. It was easy enough to scale back down to one, but this obviously led to downtime and a fair bit of frustration. Without a redundant MCM, any node going down could leave me with an inaccessible cluster until Rancher recovered.
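For what it is worth, the Rancher chart itself exposes a replicas value, which is the supported way to run redundant MCM pods; scaling the Deployment by hand just fights the Helm release. Below is a minimal sketch of what that looks like with the Terraform helm provider against a standalone Rancher install. With the vCluster addon the chart is managed for you, so the same value would have to be passed through the addon's configuration instead (the rework mentioned above); the hostname here is a placeholder.

```hcl
# Minimal sketch (assumptions: standalone Rancher managed directly by Helm,
# Terraform helm provider ~> 2.x, placeholder hostname). Redundancy is set
# through the chart's own value rather than by scaling the Deployment afterwards.
resource "helm_release" "rancher" {
  name             = "rancher"
  repository       = "https://releases.rancher.com/server-charts/stable"
  chart            = "rancher"
  namespace        = "cattle-system"
  create_namespace = true

  set {
    name  = "hostname"
    value = "rancher.lab.example.com" # placeholder hostname
  }

  set {
    name  = "replicas" # chart-managed Rancher MCM replica count
    value = "3"
  }
}
```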

kube-vip Blackouts

Related to (but separate from) the vCluster is an issue with how Rancher and Harvester are tied to one another. When Harvester is imported into Rancher, you are able to use projects for scoping load balancer pools; without Rancher, Harvester can only scope load balancer pools to namespaces. And when Rancher goes down, there is no way for a guest cluster to ensure its load balancer stays available. The Harvester cloud provider does not appear to fully revoke the IPs from guest clusters, but while the load balancer is being checked the connection will drop. For something as simple as a webpage this was almost never noticed; however, I do host some applications that are sensitive to service disruptions, and when these checks occurred the service would become unavailable for a few seconds. This is worth taking note of: even if/when only Rancher MCM goes down, you should still see this issue. This means the guest clusters' uptime is directly tied to Rancher's uptime; a loss of Rancher will result in the loss of guest cluster load balancers. This is an issue I intend to take up with the Harvester team if it is not already on their radar.

Update: After redeploying and hours of testing, I was unable to reproduce this. It would seem (thankfully) that Rancher access does not impact the guest cluster vIP.
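For context on the scoping point above, the pool definitions live on Harvester's IPPool objects. Below is a rough sketch of a namespace-scoped pool applied through Terraform's kubernetes provider; the field layout follows my reading of the loadbalancer.harvesterhci.io/v1beta1 CRD, and the kubeconfig path, subnet, and VM network are placeholders, so double-check it against a live cluster before relying on it.

```hcl
# Rough sketch of a namespace-scoped Harvester IPPool (placeholders throughout).
provider "kubernetes" {
  config_path = "~/.kube/harvester.yaml" # placeholder path to Harvester's kubeconfig
}

resource "kubernetes_manifest" "guest_lb_pool" {
  manifest = {
    apiVersion = "loadbalancer.harvesterhci.io/v1beta1"
    kind       = "IPPool"
    metadata = { name = "guest-cluster-pool" }
    spec = {
      ranges = [{
        subnet     = "192.168.10.0/24" # placeholder subnet
        rangeStart = "192.168.10.100"
        rangeEnd   = "192.168.10.120"
      }]
      selector = {
        network = "default/services" # placeholder VM network
        scope = [{
          namespace    = "guest-clusters" # namespace scoping works without Rancher
          project      = "*"              # project scoping requires the Rancher import
          guestCluster = "*"
        }]
      }
    }
  }
}
```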

Actual Blackouts

It turns out the new apartment I moved to does not have the most stable power; there appears to be a certain set of conditions that, when met, can reliably trip the circuit breaker for the server rack. Sadly I discovered this after migrating to Harvester and already implementing Rocky 9.3 cloud VMs. Why are Rocky 9.3 cloud VMs a problem? XFS, the single least fault-tolerant filesystem I have ever had the displeasure of using. I have avoided it for years, ever since my time as a consultant for ISPs. XFS does NOT tolerate power faults at all! XFS is the default filesystem for most RHEL-based distros, and as a result I was stuck with half a dozen XFS-based VMs. To be fair, you can almost always recover XFS filesystems after a power failure; in fact, to my recollection I have never had one that was not recoverable. You know what is better than having to recover a failed filesystem though? Not needing to recover it in the first place! My years-long aversion to XFS has been reignited. The issue was partially remedied before the rebuild by buying backup batteries, but my hatred of XFS persists and will not be left as is.

Missing VM Statistics

My first deployment of Harvester used the then-new 1.3.0 release. This wasn’t a huge issue, but that version was missing proper VM statistics for CPU usage (among other things). Not the end of the world, but annoying.

Relevant issues

Below is a collection of relevant issues, either discussed above or too small to merit their own section. A few are worthy of small notes though:

  1. At this time the Harvester CSI driver does not support RWX volumes
    1. As a home lab, not too many applications actually need this; however, NeuVector does require RWX, so for the time being I am unable to deploy it.
  2. At this time the Harvester CSI driver does not support snapshotting
    1. This is sadly a fairly big issue for me, as it makes backups much more difficult. I have been wanting to try out Kasten K10 for a while now, and it requires snapshotting, so K10 is out for now. I could fall back to Velero backups via restic, but my experience with that backup model and databases was not good. Velero restic-based backups of databases were fairly prone to failure; this is not restic's or Velero's problem, databases just need their own special gloves.
  3. There are areas where the Harvester Terraform provider seriously lags behind the options available in the Harvester UI. These all either should or definitely do have workarounds; Harvester under the hood is still just RKE2, and anything the Terraform provider is missing can be covered by the Kubernetes provider's manifest resource (see the sketch after this list).
    1. Namespaces cannot be created via the Harvester provider
    2. Cloud-init configs cannot be saved as “Cloud Config Templates”
    3. IPPools cannot be created via the Harvester provider
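As a concrete example of that workaround, here is a minimal sketch of creating a namespace through the Kubernetes provider's manifest resource instead of the Harvester provider. It assumes a kubernetes provider already configured with Harvester's kubeconfig (as in the IPPool sketch in the kube-vip section), and the namespace name is a placeholder.

```hcl
# Minimal sketch: cover a Harvester provider gap by applying the object directly.
# Assumes a kubernetes provider configured with Harvester's kubeconfig,
# as in the IPPool sketch earlier in this post.
resource "kubernetes_manifest" "workloads_ns" {
  manifest = {
    apiVersion = "v1"
    kind       = "Namespace"
    metadata = {
      name = "lab-workloads" # placeholder namespace name
    }
  }
}
```

The IPPool gap can be handled the same way, as sketched in the kube-vip section; anything Harvester stores as a Kubernetes object is fair game for this pattern.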

GitHub Issues

Key takeaways

To be blunt, this was not a fun experience with Harvester; my initial opinion was low, and Harvester still needs a few things I believe are essential. Almost all of my primary concerns are with Harvester's cloud provider though, RWX and snapshots being my personal hot-button items, and those are slated for the next release. All other concerns are either minor or honestly unrelated to Harvester itself (XFS being… what it is). In the previous post I did state Harvester was the least mature option, so this does feel like a natural consequence, and I was hopeful a proper deployment would resolve my issues and concerns. So I planned a new migration with all my lessons learned.