Proxmox

Overview

While most of my workload these days has shifted over to OKD (OpenShift), I still maintain two Proxmox nodes on Dell Micro PCs. I largely used these to host core infrastructure services back when I was running Harvester, which was unfortunately unreliable in my environment.

Hardware

           .://:`              `://:.
         `hMMMMMMd/          /dMMMMMMh`
          `sMMMMMMMd:      :mMMMMMMMs`
  `-/+oo+/:`.yMMMMMMMh-  -hMMMMMMMy.`:/+oo+/-`
  `:oooooooo/`-hMMMMMMMyyMMMMMMMh-`/oooooooo:`
    `/oooooooo:`:mMMMMMMMMMMMMm:`:oooooooo/`
      ./ooooooo+- +NMMMMMMMMN+ -+ooooooo/.
        .+ooooooo+-`oNMMMMNo`-+ooooooo+.
          -+ooooooo/.`sMMs`./ooooooo+-
            :oooooooo/`..`/oooooooo:
            :oooooooo/`..`/oooooooo:
          -+ooooooo/.`sMMs`./ooooooo+-
        .+ooooooo+-`oNMMMMNo`-+ooooooo+.
      ./ooooooo+- +NMMMMMMMMN+ -+ooooooo/.
    `/oooooooo:`:mMMMMMMMMMMMMm:`:oooooooo/`
  `:oooooooo/`-hMMMMMMMyyMMMMMMMh-`/oooooooo:`
  `-/+oo+/:`.yMMMMMMMh-  -hMMMMMMMy.`:/+oo+/-`
          `sMMMMMMMm:      :dMMMMMMMs
         `hMMMMMMd/          /dMMMMMMh
           `://:`              `://:`

 nanderson@pve02
 OS: Proxmox VE 11 bullseye
 Kernel: x86_64 Linux 5.15.108-1-pve
 Uptime: 41d 5h 6m
 Packages: 926
 Shell: bash 5.1.4
 Disk: 13G / 103G (14%)
 CPU: Intel Core i7-8700T @ 12x 4GHz [59.0°C]
 GPU: UHD Graphics 630
 RAM: 3039MiB / 15811MiB

Output provided by screenfetch

Both of my Proxmox nodes are Dell OptiPlex 3060 Micro PCs. These two nodes were among the first pieces of my homelab, and they are still running great! They are not completely symmetrical: the node above, pve02, has 16 GB of RAM, while pve03 has 32 GB. Both are currently running NVMe SSDs, since the Ceph Mons needed more than the 2.5" hard drives could provide. However, the next section will go over the most exciting piece of this hardware.

10G MicroPCs

At one stage of my homelab’s evolution, I had a need for speed that could only be solved by interlinking all of my devices with 10 Gigabit connectivity. Knowing that the NVMe slots on the MicroPCs act as a PCIe 3.0 x4 connector, I figured someone had to have made a riser, and it turns out they are readily available for cheap on Amazon. (Not affiliated, not an ad, just what I got.)

Next, I just needed a network adapter that played well with the form factor and bandwidth constraints. For that, I found that the Mellanox ConnectX-3 EN adapters were cheap and perfectly compatible!
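
After seating a card like this, a quick sanity check along the lines below confirms the NIC enumerated and trained at the expected link width (the PCIe address shown is just a placeholder, not from my actual nodes):

  # Confirm the ConnectX-3 shows up on the PCIe bus
  lspci | grep -i mellanox

  # Check the negotiated link (expecting 8 GT/s, Width x4 for PCIe 3.0 x4)
  lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'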

The last, and most janky, part of this setup was providing power to the risers. While a standard PCIe slot can provide up to 75 watts, an M.2 slot provides a maximum of 15 watts, and, I imagine out of an abundance of caution, the power pins from the M.2 slot are not carried through by the riser.

My solution was the tried and true method of taking an old power supply and using a small jumper cable to ground the sense pin so the PSU powers on without a motherboard. This did produce some timing issues, with the adapter coming online out of sync with the rest of the system, but these were universally solved with a soft reboot of the PVE nodes.

While this was an excellent solution for a while, it was a little more jank than I wanted to maintain, and I have since moved the more expensive workloads off of these nodes. They now have their cases back on and are populated with NVMe SSDs, as originally intended.

Software

As is probably pretty clearly indicated above, I am using Proxmox VE 7.4 on Debian 11 “Bullseye”. But the more interesting detail is what I’m running within my cluster and how it’s configured.

Configuration

The two nodes are running as a clustered setup, but given the lack of a third node, they are not configured for HA, nor are they running the integrated Ceph cluster. I am primarily using local-lvm storage these days, since everything running within the cluster is fully redundant and replaceable.
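
For reference, the clustering itself is just the standard Proxmox workflow, roughly like the sketch below (the cluster name and IP are placeholders, not my real values):

  # On the first node: create the cluster
  pvecm create homelab

  # On the second node: join it, pointing at the first node's IP
  pvecm add 10.0.0.11

  # Verify membership and quorum from either node
  pvecm status

Without a third vote, a two-node cluster like this can lose quorum whenever one node is down, which is part of why a setup like this is better off without HA.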

Workload

Proxmox is primarily running pairs of various core services. These represent some of the oldest items in the cluster, but have proven to be an essential stepping stone into more complex and capable infrastructure.

  • lb01 / lb02 - HAProxy nodes, responsible for handling all ingress to the lab.
  • mirror01 / mirror02 - Caching HTTP proxies to public mirrors.
  • net01 / net02 - DNS and DHCP services for the network.
  • jump01 / jump02 - Not used much, but they are the landing point when remoting in.
  • ceph-mon05 / ceph-mon06 - Warm spares for my Ceph cluster.
  • salt01 - SaltStack primary node.
  • homeassistant - HomeAssistant HAss.IO virtual appliance.

Load Balancers

These are Ubuntu LXCs simply running HAProxy and Keepalived. Every incoming connection to my homelab network hits these nodes, which handle health checks and distribute requests to the healthy backends.
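
As a rough sketch of the pattern (the backend names, addresses, and VIP below are invented for illustration, not my actual configuration), HAProxy handles the balancing and health checks while Keepalived floats a virtual IP between lb01 and lb02:

  # /etc/haproxy/haproxy.cfg (fragment)
  frontend https_in
      bind *:443
      mode tcp
      default_backend ingress_nodes

  backend ingress_nodes
      mode tcp
      balance roundrobin
      server node1 10.0.0.21:443 check   # 'check' enables active health checks
      server node2 10.0.0.22:443 check

  # /etc/keepalived/keepalived.conf (fragment)
  vrrp_instance VI_1 {
      # MASTER here, BACKUP with a lower priority on lb02
      state MASTER
      interface eth0
      virtual_router_id 51
      priority 100
      virtual_ipaddress {
          10.0.0.5/24
      }
  }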

Mirror

These are Ubuntu LXCs running nginx caching proxies in front of standard HTTP/HTTPS software mirrors, such as Ubuntu, Docker, Debian, and Fedora. Since a large amount of traffic to the internet is just binary packages, it makes a lot of sense to cache them!
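
A minimal sketch of what one of these proxies looks like, assuming an Ubuntu upstream and with the hostname, cache sizes, and paths made up for illustration:

  # /etc/nginx/conf.d/ubuntu-mirror.conf (fragment)
  proxy_cache_path /var/cache/nginx/ubuntu levels=1:2
                   keys_zone=ubuntu:100m max_size=200g inactive=14d;

  server {
      listen 80;
      server_name ubuntu.mirror.lab;

      location / {
          proxy_pass http://archive.ubuntu.com/ubuntu/;
          proxy_cache ubuntu;
          # Cache successful responses for a day; serve stale if the upstream misbehaves
          proxy_cache_valid 200 301 302 1d;
          proxy_cache_use_stale error timeout updating;
      }
  }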

These used to be thick mirrors which would regularly rsync content from Tier-2 mirrors, but it became a large amount of maintenance (and storage) overhead to shuffle things around as various mirrors would come and go. I may attempt to revive this project with a smarter implementation to reduce overhead. I like the idea of being able to build most of my homelab without internet access.

Network Services

These are Ubuntu LXCs running Unbound, isc-dhcp-server, and WireGuard. They are the only nodes attached to all of my networks, and they provide DNS and DHCP both to my personal/leisure/IoT devices and to the lab.
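
The overall shape of that configuration is roughly the following; the subnets, addresses, and zone are placeholders rather than my real layout:

  # /etc/unbound/unbound.conf.d/lab.conf (fragment)
  server:
      interface: 0.0.0.0
      access-control: 10.0.0.0/8 allow
      local-zone: "lab." static
      local-data: "lb01.lab. IN A 10.0.0.21"

  forward-zone:
      name: "."
      forward-addr: 1.1.1.1

  # /etc/dhcp/dhcpd.conf (fragment)
  subnet 10.0.10.0 netmask 255.255.255.0 {
      range 10.0.10.100 10.0.10.200;
      option routers 10.0.10.1;
      option domain-name-servers 10.0.0.2, 10.0.0.3;
  }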

Jump

These are the systems I land on when remoting into the network. I’ve seeded them with various tools and configuration so I can handle essential remote management tasks.
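
On the client side, a jump host like this usually just gets wired into ~/.ssh/config so internal hosts are reachable in one hop; the hostnames below are examples, not my real ones:

  # ~/.ssh/config (fragment)
  Host jump
      HostName jump01.example.net
      User nanderson

  # Reach internal lab hosts transparently through the jump box
  Host *.lab
      ProxyJump jump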

Ceph Mons

These are Ubuntu KVM virtual machines which are kept around offline, ready to be managed by cephadm should I need to take other compute capacity offline for maintenance. As a fun note, these needed to be VMs because cephadm’s containers cannot be run safely inside LXCs.
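
When one of these spares needs to come online, the workflow is roughly the cephadm host-add dance below (the hostname, address, and placement list are illustrative):

  # From an existing cephadm-managed node, register the spare
  # (assumes the cluster's cephadm SSH key is already authorized on it)
  ceph orch host add ceph-mon05 10.0.0.65

  # Then expand the mon placement to include it
  ceph orch apply mon --placement="ceph-mon01,ceph-mon02,ceph-mon03,ceph-mon05"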

Salt

I should probably write a whole section on this, but Salt manages the configuration for all the above services. It is also responsible for coordinating updates to systems.
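
To give a flavor of what that looks like, a minimal state for one of the services above might be something like this; the paths and the HAProxy example are hypothetical, not copied from my actual states:

  # /srv/salt/haproxy/init.sls
  haproxy:
    pkg.installed: []
    service.running:
      - enable: True
      - watch:
        - file: /etc/haproxy/haproxy.cfg

  /etc/haproxy/haproxy.cfg:
    file.managed:
      - source: salt://haproxy/files/haproxy.cfg
      - user: root
      - group: root
      - mode: '0644'

Applying it to the pair is then a one-liner from salt01, along the lines of: salt 'lb0*' state.apply haproxy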

HomeAssistant

This is a HAss.IO virtual appliance running HomeAssistant. It will eventually have its own project page! Proxmox was the best home for it since the MicroPCs have USB 3 ports, unlike my servers.
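
Since USB is the selling point here, for completeness this is how a USB device typically gets passed through to a VM on Proxmox; the VM ID and the vendor:product ID below are made up:

  # Find the device's vendor:product ID on the host
  lsusb

  # Attach it to the HomeAssistant VM (VM ID 110 is a placeholder)
  qm set 110 -usb0 host=10c4:ea60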