Automating Hugo Updates
How Lazy Can I Be?
To save you from thinking too hard, dear reader, the answer is: incredibly lazy. I am pretty sure that’s how I got into this career in the first place. Lazy as I am, I have grown tired of remembering to restart my deployments and spell check my posts when I make changes to this site. So I did what any proper lazy person would do: I spent more time automating my tasks rather than just being careful and diligent.
This is, of course, mostly facetious. In reality I wanted a simple test case to help me learn a new CI system (I am used to GitLab CI, and just a little Jenkins), and to give me a reason to finally deploy Forgejo Runners into my cluster.
Linting Workflow
First I wanted a linting workflow, and it should run on any push. I am still very new to GitHub-style CI, but it does seem simple enough. As you can see, I use Vale for linting, and I opted for the Alpine-based Node image.
name: Lint
on: [push]
jobs:
  Vale-Lint:
    runs-on: docker
    container:
      image: node:24.9.0-alpine3.22
    steps:
      - name: Check out repository code
        uses: actions/checkout@v4
      - run: apk add vale
      - run: vale sync --config ./.vale.ini
      - run: vale ./content --glob='*.{md,txt}'
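The vale sync and vale steps assume a .vale.ini at the repository root. A minimal sketch of such a config (the proselint package here is only an example, not necessarily what this site actually uses):

# Where `vale sync` downloads packages to, and the minimum level that gets reported
StylesPath = .vale/styles
MinAlertLevel = suggestion
Packages = proselint

[*.{md,txt}]
BasedOnStyles = Vale, proselint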
This CI style is very simple and feels closer to Jenkins than to GitLab. Either way, my “Hello World” workflow works, so I moved on to creating a new workflow to restart Hugo.
Redeploy Workflow
The Hugo deployment (this site) is very simple: it’s a basic Kubernetes deployment with an ‘initContainer’ that is responsible for pulling the site’s contents from git, and this is why the deployment needs to be restarted. The contents of the site are only downloaded at the start of the pod’s lifecycle, which obviously falls outside the scope of something like ArgoCD.
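For context, the deployment looks roughly like this; the names, images, and repository URL below are placeholders rather than the real manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hugo
  namespace: hugo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hugo
  template:
    metadata:
      labels:
        app: hugo
    spec:
      initContainers:
        # Clones the site source once at pod start; refreshing it requires a restart
        - name: fetch-site
          image: alpine/git
          args: ["clone", "--depth=1", "https://git.example.lan/me/site.git", "/site"]
          volumeMounts:
            - name: site
              mountPath: /site
      containers:
        - name: hugo
          image: hugomods/hugo
          workingDir: /site
          args: ["server", "--bind", "0.0.0.0"]
          ports:
            - containerPort: 1313
          volumeMounts:
            - name: site
              mountPath: /site
      volumes:
        # Shared scratch volume; its contents live only as long as the pod does
        - name: site
          emptyDir: {}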
Setup Forgejo Service Account
Before anything can be done, the Forgejo Runner needs a service account on the cluster hosting the Hugo deployment. That is easily done by deploying some manifests and then creating a kubeconfig file:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: forgejo-role
rules:
  - resourceNames:
      - hugo
    apiGroups:
      - apps
    resources:
      - deployments
    verbs:
      - get
      - patch
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: forgejo-service-account
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: forgejo-role-binding
  namespace: hugo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: forgejo-role
subjects:
  - namespace: hugo
    kind: ServiceAccount
    name: forgejo-service-account
---
apiVersion: v1
kind: Secret
metadata:
  name: forgejo-token
  namespace: hugo
  annotations:
    kubernetes.io/service-account.name: forgejo-service-account
type: kubernetes.io/service-account-token
I use Kustomize, so I simply added this to my kustomization.yaml and deployed it.
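The kustomization entry is nothing fancy; a rough sketch (the rbac.yaml filename is an assumption, and setting namespace here is one way to avoid repeating it in the Role and ServiceAccount above):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: hugo
resources:
  - rbac.yaml

After that I prepared the kubeconfig file: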
server=https://api.kube.lan:6443
namespace=hugo
ca=$(kubectl get -n $namespace secret/forgejo-token -o jsonpath='{.data.ca\.crt}')
token=$(kubectl get -n $namespace secret/forgejo-token -o jsonpath='{.data.token}' | base64 -d)
cat > ./kubeconfig.yaml << EOF
apiVersion: v1
kind: Config
clusters:
  - name: cluster
    cluster:
      certificate-authority-data: ${ca}
      server: ${server}
contexts:
  - name: context
    context:
      cluster: cluster
      namespace: $namespace
      user: user
current-context: context
users:
  - name: user
    user:
      token: ${token}
EOF
From this point I made sure to test the kubeconfig to ensure it had the needed permissions to restart a deployment (“get” and “patch” appear to be all that is needed):

KUBECONFIG=$(realpath ./kubeconfig.yaml) kubectl -n hugo rollout restart deployment hugo

This confirmed my permissions were enough.
Redeploy Workflow
To prepare for the redeploy workflow I made sure to base64 encode the kubeconfig file I created, then added that as a secret (KUBECONFIG_CONTENTS) in my Hugo repository.
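The encoding itself is a one-liner; with GNU coreutils, -w0 just keeps the output on a single line so it pastes cleanly into the secret:

base64 -w0 ./kubeconfig.yaml

I wanted to make sure the redeploy workflow would only trigger on a push to ‘main’ and only after it has been linted: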
name: Redeploy
on:
  push:
    branches:
      - main
  workflow_run:
    workflows:
      - Lint
    types:
      - completed
jobs:
  Restart-Deployment:
    runs-on: docker
    container:
      image: node:24.9.0-alpine3.22
    steps:
      - run: apk add kubectl
      - run: echo "${{ secrets.KUBECONFIG_CONTENTS }}" | base64 -d > ./kubeconfig.yaml
      - run: KUBECONFIG=$(realpath ./kubeconfig.yaml) kubectl -n hugo rollout restart deployment hugo
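A quick manual check that a restart actually went through is to watch the rollout (with a normal admin kubeconfig, since the workflow’s restricted account only has get and patch):

kubectl -n hugo rollout status deployment hugo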
Updating ArgoCD
As a result of writing this post I also learned that ArgoCD can prevent rollout restarts from taking place. This was news to me, and it is relevant here because ArgoCD is responsible for ensuring the Hugo deployment itself is running. It took me some time to realize my permissions were fine and that ArgoCD was the real cause of my issues. Thankfully I am not the first person to come across this, and all I needed to do was go into my application and add an ignoreDifferences key, like so:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  namespace: argocd
spec:
  ignoreDifferences:
    - group: apps
      jsonPointers:
        - /spec/template/metadata/annotations
      kind: Deployment
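For context, kubectl rollout restart triggers the restart by setting a timestamp annotation on the pod template, which is presumably the change ArgoCD was reverting; roughly:

spec:
  template:
    metadata:
      annotations:
        # the value is simply the time the restart was requested
        kubectl.kubernetes.io/restartedAt: "2025-01-01T00:00:00Z"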
After setting the above key to ignore changes to annotations, the workflow was able to restart the Hugo deployment without issue. Note that I left out the full path given in the source; that’s because the rollout command clears the whole annotations key, so targeting kubectl.kubernetes.io/restartedAt specifically was too precise. I suspect that if I added an annotation manually, I could then target the kubectl.kubernetes.io/restartedAt annotation specifically, as it would no longer be an empty map. It is also worth noting the ‘jsonPointer’ given in the source has a ‘~1’ in /spec/template/metadata/annotations/kubectl.kubernetes.io~1restartedAt; that is not a typo, ‘~1’ is simply how a ‘/’ inside a key is escaped in a JSON Pointer. Just like that, I can ensure my site is redeployed when it is updated and linted. The final task was to protect the main branch and require merges into main so I cannot push directly to it.