2023-11-10

- 4 mins read

Summary

The primary purpose of this page is to detail the infrastructure in use. It serves as a reference for when a post explaining some issue leaves out details that may be relevant. Every post should describe any infrastructure it depends on, but at times something is missed; this page should fill in those gaps.


Hardware

Network Hardware

Only relevant devices are listed; “leaf” switches and APs are excluded.

Device           Role
DEC850           Firewall
USW Pro 24 PoE   Switch
USW Aggregation  Switch

DEC850

The DEC850 is an OPNsense appliance. OPNsense can fill many roles; in this environment it acts as the DHCP server, the DNS server, and the lab's root CA. The built-in DHCP server also makes VM hostnames available via Unbound DNS.
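As a quick way to confirm that DHCP-registered hostnames are resolvable, something like the sketch below can query the appliance's Unbound resolver directly. This is not part of the lab tooling: it uses the third-party dnspython library, and the resolver address, domain, and hostnames are placeholders rather than values from this page.

# A sketch, not part of the lab tooling: query the OPNsense/Unbound resolver
# for a few DHCP-registered names. The IP, domain, and hostnames are made up.
import dns.resolver  # third-party: pip install dnspython

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["192.168.1.1"]  # assumed DEC850/Unbound address

for host in ["pve-02.lab.local", "test-vm.lab.local"]:  # hypothetical names
    try:
        answer = resolver.resolve(host, "A")
        print(host, "->", ", ".join(rr.address for rr in answer))
    except dns.resolver.NXDOMAIN:
        print(host, "-> not registered")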

Server Hardware

Host    CPU       RAM     HDDs                 SSDs                  NICs
pve-01  i7-8809G  32 GB   N/A                  2 x 1 TB              1 x 1 GbE
pve-02  i5-10400  128 GB  3 x 10 TB            1 x 250 GB, 1 x 1 TB  1 x 1 GbE, 1 x 10 GbE
pve-03  i5-10400  128 GB  1 x 8 TB, 2 x 10 TB  1 x 250 GB, 1 x 1 TB  1 x 1 GbE, 1 x 10 GbE
pve-04  i5-10400  128 GB  1 x 8 TB, 2 x 10 TB  1 x 250 GB, 1 x 1 TB  1 x 1 GbE, 1 x 10 GbE
pve-05  i5-10400  128 GB  3 x 8 TB             1 x 250 GB, 1 x 1 TB  1 x 1 GbE, 1 x 10 GbE

PVE-01

PVE-01 is actually an Intel NUC8i7HVK.

This NUC can be found on Intel’s site: here. It truly is a beast of a system.

Storage
  • 1 TB NVMe: OS
  • 1 TB NVMe: ZFS pool

PVE-0(2-5)

The remaining four server nodes are identical and run on off-the-shelf commodity hardware.

NICs

These nodes run a 10GTek 10 Gig NIC, which can be found here. At the time the NICs were installed, an SFF card was needed due to size constraints. These NICs replaced the previous dual SFP+ NIC here (this was not an upgrade).

The NIC in use works “as expected” after installing the Linux quantic driver, and some tweaks were needed to get it working on Debian/Proxmox. Unless this driver gets upstreamed, it's best to avoid this device.

Storage
  • 250 GB NVMe: OS
  • 1 TB NVMe: ZFS pool
  • 3 x 8 TB HDD: Ceph pool

[Image: servers]


Networking

Layer 1

[Diagram: Layer 1 topology]

All 10 Gig links are MMF with SFP+ LC optics.

Layer 2

[Diagram: Layer 2 topology]

VLAN Roles

Tag       NAT    DHCP   Purpose
Untagged  True   True   Infrastructure devices: networking gear, hypervisor UIs, etc.
4         True   True   The default network for all VMs
5         False  True   IoT devices only
6         False  False  An L2-only network dedicated to VM migrations
7         False  False  The Ceph backend communications network

Layer 3

VLAN      Routed
Untagged  True
4         True
5         True
6         False
7         False

VM Templates

Every VM (except for a select few) comes from a Packer build process.

flowchart TD;
    run[Run `packer build`] --> S{OS Selection}
    S --> CB(CentOS Build)
    S --> RB(Rocky Build)
    S --> ab(Alma Build)
    run --> VS{OS Version Selection}
    VS --> 9(9)
    VS --> 8(8)
    CB --> DV[Deploy VM]
    RB --> DV[Deploy VM]
    ab --> DV[Deploy VM]
    9 --> DV[Deploy VM]
    8 --> DV[Deploy VM]
    DV --> RK[Run Kickstart]
    RK --> CS[Confirm SSH]
    CS --> PO[Power Off]
    PO --> CT[Convert to Template]
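As a rough illustration of how that selection could be driven, the sketch below loops over the OS/version matrix and shells out to packer build. It is not the actual build wrapper used here; the template path and variable names are hypothetical.

# Hypothetical wrapper around `packer build` covering the OS/version matrix.
# The template path and -var names below are made up for illustration.
import subprocess

for os_name in ["centos", "rocky", "alma"]:
    for os_version in ["8", "9"]:
        subprocess.run(
            [
                "packer", "build",
                "-var", f"os_name={os_name}",        # hypothetical variable
                "-var", f"os_version={os_version}",  # hypothetical variable
                "templates/el.pkr.hcl",              # hypothetical template path
            ],
            check=True,
        )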

The above process results in:

  • CentOS Stream 8
  • CentOS Stream 9
  • Alma 8
  • Alma 9
  • Rocky 8
  • Rocky 9

Each VM is deployed with root SSH access, QEMU guest agent, and cloud-init.

  • The QEMU guest agent is necessary for Terraform to be able to pick up an IP address, which is useful when building Ansible inventory files via Terraform (see the sketch after this list).
  • cloud-init is useful for creating a default Ansible user that can be used to configure and lock down the VM.
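
As a sketch of that inventory step (not the tooling actually used here), the snippet below reads Terraform outputs as JSON and writes a minimal INI inventory. The output name vm_ips and the group name lab_vms are assumptions.

# Rough sketch: turn Terraform output into an Ansible inventory.
# Assumes a Terraform output named "vm_ips" mapping hostname -> IP
# (the output name and the group name are hypothetical).
import json
import subprocess

raw = subprocess.run(
    ["terraform", "output", "-json"],
    capture_output=True, check=True, text=True,
).stdout
outputs = json.loads(raw)

vm_ips = outputs["vm_ips"]["value"]  # e.g. {"vm-01": "10.0.4.21", ...}

with open("inventory.ini", "w") as inv:
    inv.write("[lab_vms]\n")
    for name, ip in vm_ips.items():
        inv.write(f"{name} ansible_host={ip}\n")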

The default VM template is not considered secure (given it has a default password). As a result, it is up to an Ansible “generic” role to disable the root user and disable root SSH login post-deployment.
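
For illustration only, the sketch below captures roughly what that hardening amounts to, expressed as commands run from Python rather than the actual Ansible role (which is not shown here). It assumes a stock RHEL-family guest with sshd managed by systemd.

# Rough equivalent of the "generic" hardening, not the real Ansible role.
# Assumes a RHEL-family guest; run as root on the freshly deployed VM.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Lock the root account's password.
run(["passwd", "-l", "root"])

# Disallow root logins over SSH, then reload sshd to apply the change.
run(["sed", "-i", "-E", "s/^#?PermitRootLogin.*/PermitRootLogin no/",
     "/etc/ssh/sshd_config"])
run(["systemctl", "reload", "sshd"])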