HP MicroServer Gen8 with PS1810-8G switch and a Dell OptiPlex 3050 SFF

It has been a few months since I last looked at my homelab infrastructure in all its glory and marveled at how much fun I am having. So let me give you an overview of what happened and what the status quo is.

Microservers

Since May 2023 the hardware hasn’t changed much. I am still rocking my HP MicroServer Gen8 machines in their basic configuration:

  • Intel Xeon E3-1260L
  • 16 GB of unbuffered ECC RAM
  • 256 GB SSD boot drive
  • Booting from SD card (GRUB only)
  • 4x 512 GB SSDs from AliExpress in RAID 5

I invested in three Mellanox ConnectX-2 cards (also from AliExpress) to connect all three servers in a mesh network with 10 Gbit/s between them. This way Proxmox VM transfers and storage access would be quick and snappy. Configuration was easy and the cards worked out of the box. One of my friends recommended enabling jumbo frames, which I did, and as a result I get nearly 9.8 Gbit/s with iperf3 from server to server.
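
If you want to replicate this, the whole tuning boils down to something like the following; the interface name and addresses are placeholders, not my actual config:

```
# Bump the MTU on the ConnectX-2 port for jumbo frames
# (interface name is a placeholder; check `ip link` for yours)
ip link set enp3s0 mtu 9000

# Then measure the link between two mesh nodes:
iperf3 -s                     # on the receiving server
iperf3 -c 10.0.10.2 -t 30     # on the sending server, pointing at its mesh IP
```

Remember to persist the MTU in whatever network configuration your distribution uses, otherwise it is gone after the next reboot.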

Getting rid of Proxmox

After a few months I realized that I run most of my workloads and apps as either Docker containers or Kubernetes pods, and that Proxmox was overhead that wasn’t worth it, since the Microservers only have 16 GB RAM anyway. The overhead of maintaining multiple VMs, OSes and images wasn’t worth it either. So I went back to bare-metal Kubernetes.
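
The nice part about bare-metal K3s is how little ceremony a single-node install needs; this is just the documented quick-install, shown here for completeness:

```
# Install K3s as a single-node server (sets it up as a systemd service)
curl -sfL https://get.k3s.io | sh -

# Check that the node came up
sudo k3s kubectl get nodes
```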

Getting comfortable with the *arr stack for maintaining Linux ISO images also meant that 1.5 TB of RAID 5 SSD storage wasn’t cutting it anymore, so I invested in four refurbished 12 TB Seagate enterprise datacenter HDDs. The drives arrived very well packaged and worked right out of the box. The S.M.A.R.T. values were all within reason, so I wasn’t worried too much and would go down that route again. I have some other refurbished HDDs that have been spinning for nearly 10 years now.
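
For the curious, the array setup is nothing fancy; a minimal mdadm sketch, with device names being placeholders (check `lsblk` before running anything destructive):

```
# Create a 4-disk RAID 5 array from the refurbished drives
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]
mkfs.ext4 /dev/md0

# Eyeball the S.M.A.R.T. values of a refurbished drive before trusting it
smartctl -a /dev/sdb | grep -E 'Reallocated|Power_On_Hours|Pending'
```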

I also recognized that the Gen8 Microservers with their onboard HBA and SATA interfaces (remember: only 2x SATA 6 Gb/s and 2x SATA 3 Gb/s) will never max out even the shittiest AliExpress SSDs, so the speed advantage was marginal at best.

Not wanting to get too deep into clustering again, I decided that instead of treating all three servers the same, they would get different roles: one application server running K3s with 1.5 TB of RAID 5 storage, one storage server with 36 TB of RAID 5 storage, and one fuck-around-find-out server. I started to invest some time into repeatable installs and setups and managed to get an iPXE + Debian preseeding setup working; everything after that I automated with Ansible.
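
The iPXE part is surprisingly small. A minimal sketch of the boot script, with all URLs being placeholders for a local boot server:

```
#!ipxe
# Boot the Debian netboot installer and point it at a preseed file;
# hostnames and paths are placeholders.
kernel http://boot.lan/debian/linux auto=true priority=critical url=http://boot.lan/preseed.cfg
initrd http://boot.lan/debian/initrd.gz
boot
```

With `auto=true priority=critical` the installer only asks what the preseed file doesn’t answer, which ideally is nothing.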

Since all servers are connected via 10 Gbit/s fiber and the onboard HBA only supports 2x SATA 6 Gb/s and 2x SATA 3 Gb/s, the network is not the bottleneck: remote storage basically behaves like local storage in terms of latency and speed. I had already gained some experience using the Hetzner Storage Box SMB share as a K3s volume with the official SMB CSI driver, so I decided to use the official NFS CSI driver to connect my K3s cluster to the storage server for bigger datasets. For what? I don’t know. Because it’s cool, it’s a homelab, it doesn’t have to make sense.
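
Wiring that up with the NFS CSI driver mostly means a StorageClass pointing at the storage server. A minimal sketch, assuming csi-driver-nfs is already installed, with the server address and export path as placeholders:

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-bulk
provisioner: nfs.csi.k8s.io
parameters:
  server: 10.0.10.3       # storage server address (placeholder)
  share: /export/bulk     # NFS export on the storage server (placeholder)
reclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: bulk-data
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: nfs-bulk
  resources:
    requests:
      storage: 1Ti
```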

Routers and Networking

I downgraded the LACP-bonded Ethernet connection back to a single Ethernet connection on a dumb switch. I had too many problems with the TP-Link smart switch either forgetting its LACP groups or not properly forwarding DHCP packets. I don’t know if it was me or the switch, but I decided I was done experimenting with it.

For my router I am still using the Banana Pi R3 with a stable OpenWrt 23.05. I have some problems with the Wi-Fi being unstable close to the router, plus other smaller bugs, which are sometimes annoying, but all in all I have been a lot happier than with any ISP-provided router, even if their Wi-Fi may be a little more stable.

Moa Powa!

Last but not least I added a small form factor Dell OptiPlex with 8 GB RAM and a 120 GB SSD to run a dedicated Home Assistant instance. It doesn’t really do anything yet, but I plan to play around with ESPHome and Zigbee thermostats. I am really missing out-of-band management here, but WebKVM options still seem to be super expensive, although TinyKVM or PiKVM look promising.
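
To give an idea where this is heading: an ESPHome node is little more than a YAML file. A minimal sketch, with the device name, board and secrets made up:

```
esphome:
  name: thermostat-node     # placeholder name
esp32:
  board: esp32dev           # placeholder board

wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password

# Native API so Home Assistant can discover and talk to the node
api:
logger:
```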

Wasting Investing more money?

I have been looking into getting 10 Gbit fiber to my home again, and I have been eyeing a few Dell R740xd servers on eBay for a while, but realistically I have neither the space nor the use case for that much computing power. The Microservers work for now and will keep working for the foreseeable future: silent, power efficient and good-looking enough that I can have them in my home office without being constantly disturbed. But who knows what deals will show up at my doorstep.

There have also been a few offers for LSI 9211-8i HBAs that I was considering, but since the Microservers only have one PCIe slot, it would not make much sense to replace the ConnectX-2 cards with an HBA: the spinning rust can’t max out the SATA 6 Gb/s interface anyway (I get a maximum of around 140 MB/s out of the software RAID 5).
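
For context, a crude way to reproduce such a number is a direct-I/O sequential write test; something along these lines, with the mount point as a placeholder:

```
# Rough sequential write test against the RAID 5 mount;
# oflag=direct bypasses the page cache for a more honest number.
dd if=/dev/zero of=/mnt/bulk/testfile bs=1M count=4096 oflag=direct status=progress
```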

Conclusion

For now I am focusing on further automating my setup with Ansible and starting to integrate my home with Home Assistant. Sonoff, Zigbee, ESPHome, all the buzzwords.

Hardware-wise I am good for now, and I remain an absolute fan of the Gen8 Microservers, the best microservers since the N36L.