Backup Server Woes


Summary

It occurred to me that I am long past due for deploying an actual backup box for the lab. For years now I have relied on a third party (S3), external HDDs, or my desktop (what else will I do with 20TB of ZFS?) to temporarily hold data before moving it back to whatever cluster or application I am about to redeploy. That means I have never had a truly reliable device dedicated to backups. While my site here may not show it (yet), in the past year alone I have redeployed my lab four times: bare metal RKE2, Harvester, Harvester (again), and now Proxmox, and the story in the years prior was roughly the same. At this point I have the janky way of redeploying a cluster down to a science, but it's time for that to change.

Hardware Selections

As will become apparent, this was a little haphazard. I had a few goals:

  • I have a short depth server rack so I needed to find a server that would fit
  • I would like the server to be 3U or less
  • I want the server to take as little power as possible
  • Must have S3

I don't particularly enjoy building computers anymore, and I am finally at the point in my life where I am honestly "willing to pay a little extra for plug and play". So I looked into the Synology RS1221+ and the TrueNAS Mini R.

Synology RS1221+

At the outset the RS1221+ looked like a pretty sweet deal: according to the specs it can drop the power draw to as low as 22 watts fully loaded (with generic 1TB drives), it is a short-depth chassis, and it sells for about $1365. All in, the price seemed fair until I realized it comes with 4GB of RAM, only 1 Gig interfaces, and a proprietary OS. That is a pretty steep price for not much, and a locked-down system. I moved on; the idea of loading some unknown OS that is locked down and treats me with kid gloves does not appeal to me.

TrueNAS Mini R

The Synology was a little lackluster; the price was about on target, but the RAM and OS were huge turn-offs. TrueNAS is all in on the ZFS bandwagon and so am I, so after checking out the TrueNAS Mini R it seemed like a great box! It starts at $1850, which is a little steep but fair. The device may not be as power efficient, but it has dual 10 Gig interfaces, an 8-core Atom processor, 32 GB of RAM, IPMI, and 12 total bays. Given it is TrueNAS I know it is at least mostly open source, and there are Helm charts for TrueNAS SCALE, so I know I can get S3 installed. Great! One small problem: the chassis is full depth, so that sucks. To be honest I came close to just buying it and tossing it on top of the rack, but my compulsion for organization would not have allowed that.

Custom

Eventually I realized I would just have to build what I want. The good news is I already owned most of what I need for this build, so I just needed to find a chassis and re-buy some of the parts I purchased years ago. I settled on the SilverStone RM21-308 for my chassis, and picked up an old RocketRAID HBA and a Broadcom-powered 10Gtek dual 10 Gig NIC.

Assembling the Rig

After everything was shipped to my home I pieced it all together, and... the NIC won't bring up the SFP+ module, and the HBA drivers are not in-tree or in any package I can find. After hours of tracking down the HBA drivers I realized their installer just downloads a kernel module from the internet that is not compatible with the ZFS-enabled kernel from epel-release, and for the life of me I cannot get the NIC to see the SFP+ module.
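
For anyone hitting something similar, the kind of checks I leaned on look roughly like this; the interface name enp1s0f0 is just a placeholder for whatever your NIC shows up as:

# Confirm the kernel sees the card and which driver (if any) bound to it
lspci -nnk | grep -iA3 ethernet

# Check link state and whether the SFP+ module is detected at all
ip link show
sudo ethtool enp1s0f0       # link status and supported port types
sudo ethtool -m enp1s0f0    # dump the SFP+ module EEPROM; errors out if the module is not seen

# Watch for driver complaints about unsupported or unrecognized modules
sudo dmesg | grep -iE 'sfp|link|eth'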

After a while I was fairly fed up and decided I would just send these back and grab a new HBA and NIC. I ended up grabbing an LSI 9211-8I and replaced the 10Gtek NIC with a different model, the X520-10G-1s. After both arrived I swapped them in and the NIC came up no problem, but the HBA was not visible.

Installing the HBA drivers

The LSI 9211-8i is a fairly old card, so I was not too surprised that the driver was not available out of the box. Thankfully it is easily found in ELRepo for Rocky and was easily installed:

sudo dnf install elrepo-release -y
sudo dnf install kmod-mpt3sas

After running modprobe mpt3sas I was able to bring the HBA up and see all my disks. As a final sanity check I like to reboot whenever I make any kernel change, and post-reboot all is well: I finally have my server hardware working, NIC and HBA alike.
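
If you want to run the same sanity check yourself, something along these lines is enough; nothing here is specific to my hardware:

# Verify the module is loaded and bound to the HBA
lsmod | grep mpt3sas
lspci -nnk | grep -iA3 sas

# Confirm the attached disks show up, including their stable by-id paths
lsblk -o NAME,SIZE,MODEL,SERIAL
ls -l /dev/disk/by-id/ | grep -v part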

Setting up the OS

At this point the OS is installed and the HBA drivers work. However, one of my primary complaints with the Synology device I looked at before was the lack of ZFS (I mean, 4GB of RAM is obviously not enough for pretty much anything, and certainly not for ZFS). So now the goal is to set up ZFS and Minio for S3.

Installing ZFS

Installing ZFS on Rocky is thankfully fairly straightforward and in my experience reliable:

sudo dnf install https://zfsonlinux.org/epel/zfs-release-2-3$(rpm --eval "%{dist}").noarch.rpm
sudo dnf install -y epel-release
sudo dnf install -y kernel-devel
sudo dnf install -y zfs
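
Before touching any boot configuration I like to confirm the module actually built against the running kernel; a check as small as this will do it:

# Load the freshly built module and make sure the userspace and kmod versions agree
sudo modprobe zfs
zfs version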

I want the kernel module to always be loaded at boot so I enabled that like so:

echo zfs | sudo tee /etc/modules-load.d/zfs.conf

and rebooted. Now that everything is up and the zfs module is loaded, I created an eight-disk RAIDZ pool and two datasets for Minio in the pool:

sudo zpool create pool raidz /dev/disk/by-id/<id>
sudo zfs create -o mountpoint=/srv/minio pool/minio
sudo zfs create -o mountpoint=/srv/minio/data pool/minio/data
Note:
In the next section I will be using Podman to deploy Minio, and I will be running Minio as a standard user. Because of this design decision it is not possible to hand Podman a ZFS zvol directly, as doing so would require the user to have the rights to mount filesystems. As a result, already-mounted datasets are the only available option here.
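
To sanity check the layout before pointing Minio at it, a couple of read-only commands are enough (the pool and dataset names are the ones created above):

# Pool health and vdev layout
sudo zpool status pool

# Datasets, mountpoints, and remaining space
zfs list -r -o name,mountpoint,avail,used pool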

Setting up Minio

Now that ZFS is up and running I need to create a Minio user and install Podman:

sudo dnf install podman -y

Setting up the user

I created the user, setting its home directory to the mountpoint of the "pool/minio" dataset I created before:

sudo adduser minio -U \
  -c "Minio podman user" \
  -d /srv/minio \
  -u 1001

The user will need to have some subuids and subgids assigned to it to be able to create pods [1]:

sudo usermod --add-subuids 100000-165535 --add-subgids 100000-165535 minio

You will also run into issues trying to run Podman as the minio user if you log in via su - minio; we can fix that with the following [2]:

sudo loginctl enable-linger 1001

and finally, the last step before moving on to the Minio service itself:

sudo chown -R 1001:1001 /srv/minio
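
None of the commands above give much feedback when they succeed, so a quick verification pass is worth doing; these are all standard tools and nothing here is specific to this build:

# Subordinate UID/GID ranges assigned to the minio user
grep minio /etc/subuid /etc/subgid

# Lingering must be enabled or the user's services stop at logout
loginctl show-user minio --property=Linger

# Ownership of the dataset mountpoints
ls -ld /srv/minio /srv/minio/data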

Setting up the Minio service

Before continuing, consider logging in to the minio user via a different method (i.e. not su - minio), one I found out about while running into issues creating a systemd service as the minio user. In short, machinectl lets you log in as another user rather than just run commands as that user; you get an actual login session (to my understanding).

sudo dnf install systemd-container -y
sudo machinectl shell --uid minio

After logging in to the minio user account (using the above method) I proceeded to write a Podman pod config. If this is news to you, check out the docs for podman-kube-play. If you are already a Kubernetes user, a lot of this will look familiar:

---
apiVersion: v1
kind: Pod
metadata:
  name: s3
  annotations:
    io.podman.annotations.userns: keep-id
spec:
  securityContext:
    fsGroup: 1001
  restartPolicy: OnFailure
  containers:
    - name: minio
      image: docker.io/bitnami/minio:2024.12.18-debian-12-r0
      imagePullPolicy: "IfNotPresent"
      securityContext:
        capabilities:
          drop:
            - ALL
        runAsNonRoot: true
        runAsUser: 1001
        allowPrivilegeEscalation: false
        seccompProfile:
          type: "RuntimeDefault"
      env:
        - name: BITNAMI_DEBUG
          value: "false"
        - name: MINIO_SCHEME
          value: "https"
        - name: MINIO_BROWSER_REDIRECT_URL
          value: "https://minio.lab.lan:9001"
        - name: MINIO_FORCE_NEW_KEYS
          value: "no"
        - name: MINIO_ROOT_USER
          value: "admin"
        - name: MINIO_ROOT_PASSWORD
          value: "asuperdupersecurepassword"
        - name: MINIO_BROWSER
          value: "on"
        - name: MINIO_CONSOLE_PORT_NUMBER
          value: "9001"
      ports:
        - name: minio-api
          containerPort: 9000
          hostPort: 9000
          protocol: TCP
        - name: minio-console
          containerPort: 9001
          hostPort: 9001
          protocol: TCP
      volumeMounts:
        - name: data
          mountPath: /bitnami/minio/data:z
        - name: ssl
          mountPath: /certs:z
  volumes:
    - name: data
      hostPath:
        path: /srv/minio/data
        type: Directory
    - name: ssl
      hostPath:
        path: /srv/minio/ssl
        type: Directory
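
One thing the pod spec above quietly assumes is that /srv/minio/ssl already exists and has certificates in it; with type: Directory the play will fail if it does not. I am not covering proper certificates here, but a throwaway self-signed pair can be generated roughly like this. Note that private.key and public.crt are the filenames upstream MinIO looks for, and whether the Bitnami image picks them up from the mounted /certs path without further configuration is an assumption on my part:

# Create the directory backing the "ssl" hostPath volume
mkdir -p /srv/minio/ssl

# Self-signed cert/key pair for the hostname used in the config above (assumption:
# these filenames under the /certs mount are all the image needs to serve HTTPS)
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout /srv/minio/ssl/private.key \
  -out /srv/minio/ssl/public.crt \
  -subj "/CN=minio.lab.lan" \
  -addext "subjectAltName=DNS:minio.lab.lan"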

While trying to get Podman and SELinux to play nice together I learned a few tidbits. For example, when running kube play with SELinux enabled you can skip adding "--userns=keep-id" to the "podman kube play" command by adding the annotation "io.podman.annotations.userns: keep-id" under "metadata.annotations". You can also pass files into the pod without tripping SELinux by adding ":z" to the end of a "volumeMounts" "mountPath", which relabels the content. Both of these findings are in the config above. With all that in mind the pod can be started now:

podman kube play ./s3.yaml

Everything should start, and assuming it all does we can easily create the systemd service like so:

escaped=$(systemd-escape /srv/minio/s3.yaml)
systemctl --user enable --now podman-kube@$escaped.service

or you can use the example below:

systemctl --user enable --now podman-kube@-srv-minio-s3.yaml.service

If you check the output of systemd-escape /srv/minio/s3.yaml, it ends up being "-srv-minio-s3.yaml", which is simply the path with its slashes converted to dashes. Now open the firewall ports for Minio:

sudo firewall-cmd --permanent --zone=public --add-port=9000/tcp
sudo firewall-cmd --permanent --zone=public --add-port=9001/tcp
sudo firewall-cmd --reload

and it's all done. I gave the server a quick reboot to make sure the user's systemd service autostarts, and it did; I now have a rootless Podman pod running Minio. My only goal now is to begin migrating backups and my other S3 needs to this internal host.
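
As a quick smoke test (not part of the setup itself, just how I would verify it), the MinIO client can be pointed at the new endpoint. The alias and bucket names are arbitrary, mc is assumed to be installed on a client machine, and --insecure is only there because of the self-signed certificate:

# On the server: confirm the user service and pod came back after the reboot
systemctl --user status podman-kube@-srv-minio-s3.yaml.service
podman pod ps

# From a client with mc installed: register the endpoint and create a test bucket
mc --insecure alias set lab https://minio.lab.lan:9000 admin asuperdupersecurepassword
mc --insecure mb lab/smoke-test
mc --insecure ls lab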

Performance

I have not conducted an in-depth performance test, but I have tried using Velero to send a few terabytes of data to the server. Overall performance is pretty good; I was able to muster about 3Gbps to 3.5Gbps from my cluster to the backup host. It's not incredible, but for nothing but leftover HDDs I think it is pretty decent.
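
If you ever want to rule the network in or out of a number like that, a raw iperf3 run between a cluster node and the backup host is a reasonable first step; the port and hostname here are just examples:

# On the backup host: install iperf3, temporarily open its default port, start the server
sudo dnf install -y iperf3
sudo firewall-cmd --add-port=5201/tcp
iperf3 -s

# On a cluster node: 4 parallel streams for 30 seconds
iperf3 -c minio.lab.lan -P 4 -t 30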


Sources

  1. Podman Rootless Docs

  2. Podman warns cgroupv2 manager is set to systemd without a user session

  3. Rootfull, rootless containers on Btrfs and ZFS

  4. storage.conf - Syntax of Container Storage configuration file

  5. podman-generate-systemd