Home Server Rebuild 2022

Background and Motivation

For the last 20+ years, I've always had multiple PCs running where I live. Typically, I have one as my actual daily driver, and one or more behind-the-scenes systems doing infrastructure and/or other household service duties. While extra PCs doing various things have come and gone over the years, two main infrastructure-type systems have always remained (even if their form has morphed over time): a firewall/router, and a fileserver.

For the router/firewall role, I've preferred to use very low power, fanless systems running some kind of open source operating system (or firewall "appliance"-type software). If memory serves (and it doesn't always!), my first foray into this was using a Via Eden-based system. At least I think it was Eden, I know it was Via, and it was fanless. I think it also ran off a compact flash card! Way back then, I used OpenBSD for the router. While OpenBSD is a great project, it was different enough from Linux (my bread and butter) that I didn't feel confident I was configuring it optimally (or even correctly). The most secure operating system in the world isn't secure at all if configured incorrectly.

Eventually I found my way to pfSense, which essentially turns the hardware into an "appliance": a slick web-based user interface, a huge community, and good documentation, particularly for the simple/common use-case (such as mine).

Firewall/Router Rebuild

I've been happily using the fanless, ultra-low power systems sold by PC Engines for years. Until the upgrade, I was using an apu2c4, sporting the AMD GX-412TC CPU, 4GB of RAM, and three Intel i211AT NICs. A very wimpy system in terms of compute power, but perfectly adequate for a home firewall/router, typically drawing less than 10 watts of power. But as I am afflicted by GAS (gear acquisition syndrome), reading this ServeTheHome article convinced me I needed to upgrade my router. Of course I didn't really need to upgrade, but having more capable hardware came with a few perks:

The ever-awesome site ServeTheHome has several articles about these cheap, lower-power, fanless systems:

The gist is as follows:

After reading the STH articles, I immediately placed an order with the Topton AliExpress Store for an N5105-based system with four network ports. A few days after placing the order --- which had not yet shipped --- I read through this long STH Forum thread: Topton Jasper Lake Quad i225V Mini PC Report.
The thread took me several days to read in its entirety (and is still growing as I write this), but the first thing I realized was that Topton is not the best AliExpress vendor. In particular, several contributors to the thread documented problems with units purchased from Topton:

It turns out, Topton is simply a reseller. It appears the actual manufacturer is "Chenwang", who also have their own AliExpress store, CWWK. By the time I finished reading the thread, seven days had passed, and my Topton order still had not shipped. So I cancelled that order, and placed an order through the CWWK store. The order shipped same-day!

Though the thread is long, it's worth a read if you own or are thinking about buying one of these devices. Some high-level points:

Fileserver Rebuild

I've been a Linux enthusiast for well over two decades. Since high school, I've used Linux both for real work and as a high-tech "toy", with its enormous collection of open-source software. Eventually, my Linux experience resulted in my being coerced, err, offered the opportunity, to do the Linux system administration work at my company. While I had previously changed Linux distributions every few years, I switched all my home systems to CentOS, as that is what we were using at work. I was working long hours and had also become a father, so time available to "play" with Linux at home was in increasingly short supply. I needed to streamline my mental bandwidth utilization, and using the same system at home and at work seemed like an easy win.

That model worked pretty well for many years. I never particularly liked CentOS, but I knew it well enough to get stuff done quickly, without too much fuss. However, the nature of my job caused the shine of Linux-as-a-playground to dim considerably. That's a fancy way of saying that I didn't particularly enjoy my for-pay sysadmin work, and that feeling increasingly spilled over into my sentiment toward Linux at home. I got to the point where I didn't want to spend any more time configuring and troubleshooting Linux systems than absolutely necessary.

The problem for my home server was that a custom CentOS configuration, tweaked and configured to do all kinds of duties, does require some degree of maintenance. I had been using the same CentOS 7.x installation for many years, running all kinds of services on it:

The Backup Server

For many years, I had another server running in parallel to the main server: a backup server. It was another CentOS system, and I had written custom scripts (based on rsync) to copy important files from the main server to it. While this worked, it was yet another custom solution that required somewhat regular review and maintenance (or "care and feeding", as we like to say). It was yet another technical feature that I wanted to have in my home, but didn't want to have to actively maintain.
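
As an illustration, here's a stripped-down sketch of the kind of rsync script I'm describing. The hostname and paths are hypothetical, not my actual layout:

    #!/bin/bash
    # Pull important directories from the main server to local backup storage.
    SRC_HOST="mainserver"
    DIRS="/home /etc /srv/media"
    DEST="/backup"

    for dir in $DIRS; do
        # -a preserves permissions/ownership/times; --delete mirrors deletions
        rsync -a --delete "${SRC_HOST}:${dir}/" "${DEST}${dir}/" \
            || echo "rsync of ${dir} failed" >&2
    done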

To compound the problem, I had previously used CrashPlan as an offsite backup service for the most critical files. I was fairly happy with CrashPlan, but in 2017-2018, CrashPlan retired their "CrashPlan for Home" service (the one I was using) to refocus their business on enterprise and small business customers. The nice thing about CrashPlan for Home was that, for the most part, it was "plug and play": install the service on your device, click through the configuration GUI, and it just works. With that gone, I was unable to find a comparably turnkey alternative, and I moved to Backblaze.

While Backblaze is great, they are essentially just a pure cloud storage provider. They don't have a nice canned backup daemon that you can run. So I was forced to develop some custom scripts to perform my backups. This became yet another piece of custom home infrastructure requiring care and feeding.
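
For flavor, here's a minimal sketch of what such a script can look like. I'm using rclone here purely as an example (not necessarily what I ran at the time), and the remote and bucket names are made up; "b2remote" would have to be configured beforehand via rclone config:

    #!/bin/bash
    # Sync the critical datasets to a Backblaze B2 bucket.
    rclone sync /srv/important b2remote:my-backup-bucket/important \
        --transfers 8 --log-file /var/log/b2-backup.log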

Somehow (likely via STH), I learned about TrueNAS (formerly FreeNAS). I had always known that FreeNAS, as well as other "fileserver appliance" distributions existed, but never really considered them an option for me. I was always a DIY-type, preferring to "roll my own" fileserver functionality, using Linux as a base. But, life circumstances were changing: thanks largely to my job, I no longer saw Linux system maintenance as a joy, but as a chore; I wanted things to just work and require little to no care and feeding. What I wanted was an appliance, something that did its job without requiring the user to be well versed in the behind-the-scenes details.

So I installed TrueNAS on my backup server, and I was immediately thrilled. The installation was painless, and the GUI was not only beautiful but very intuitive. In very short order I had automated jobs set up to back up my main fileserver. I also discovered that TrueNAS includes built-in support for cloud storage providers! So I was able to quickly set up my offsite backup to Backblaze, with only a few clicks through the slick GUI.

A side note about the backup server hardware. As of this writing (October 2022), the specs are as follows:

Originally, the motherboard was a Tyan S5533 (S5533GM2NR-LE). Functionally, the motherboard was fine, but one thing that always annoyed me was that its IPMI console redirection implementation was the old Java-based one. Not only did that version work only with an old version of Java (1.7), I was also never able to make it work under Java on Linux. I kept a Windows virtual machine on my Linux PC just for running Java 1.7 so that I could use the console redirection feature of this motherboard. Newer out-of-band management systems use HTML5 for console redirection, which, at least in my experience, just works under any modern browser, on any platform.

As this was a backup system that otherwise worked fine, I felt it was a waste to spend any money just for the convenience of a feature I rarely needed. But one day, out of curiosity, I looked at the specs of one of Supermicro's competing motherboards, the X10SLL: though it's of a similar "vintage" to the Tyan board, it has the newer HTML5-based console redirection tool. I then checked eBay, just to see what such a board would cost: I got one for $55 shipped! An indulgence, yes, but a huge quality-of-life improvement for the few times a year I actually use it. I was able to reuse all the other hardware (CPU and memory). The other takeaway from this experience: if you're willing to buy used, previous-generation hardware, you can acquire true server-grade equipment for next to nothing!

Proxmox

I now had two major pieces of my home infrastructure running effectively as appliances: a pfSense router, and a TrueNAS backup fileserver. However, those systems were essentially "single purpose", where the appliance metaphor is apt. But my main server had many roles, so what single open-source software distribution covered all the requirements and services I wanted to run? None that I know of! One solution, then, would be to buy or build a dedicated server for each service: just as there's a kitchen gadget for just about any kitchen task, so it is with software; more often than not, there is a dedicated system for performing some function. But having a bunch of dedicated physical systems introduces all kinds of new problems: space, power, cost, heat, noise... This is the classic use-case for virtualization and/or containers. So step one was to turn my server into a virtualization/container host, upon which I could build an arbitrary number of independent guest systems.

Pretty much any modern Linux system can be used as a virtualization or container host. But I wanted such a system to itself be an appliance. That's where Proxmox comes in: it's essentially just Debian Linux, but with a slick GUI that makes all the hypervisor functions easy to use (the same way that pfSense/OPNsense and FreeNAS are essentially just slick GUIs on top of FreeBSD). Proxmox has had a lot of good press from ServeTheHome (which I consider a big endorsement). I also installed it on a spare PC just to take it for a "test drive".

My Proxmox test drive was indeed a success. A considerable amount of time passed (months, maybe even a year) before I actually did the rebuild. There were a few reasons for this:

Anyway, I finally bit the bullet and did the install. The following sections talk about specific technical details I learned, struggled with, or at least found interesting during the rebuild process.

LXC container loadavg same as host

I kept noticing that, within an LXC container, the load average (e.g., as shown by top) was the same as the host's. This in itself isn't a real problem, but one of my containers is running Pi-hole, which has some built-in self-diagnostics. One of those diagnostics checks the recent load average against the number of CPU cores. So while Pi-hole itself uses very little CPU, all the services running on my Proxmox system together push the system-wide load much higher than Pi-hole expects for itself.

The solution, at the Proxmox host level, is to invoke lxcfs with the "-l" (lowercase el) option, per Bug 1870 - test and potentially enable lxcfs loadavg feature:

Edit the following file: /lib/systemd/system/lxcfs.service, adding the '-l' flag to the ExecStart line. Then restart the lxcfs service and the containers.
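
Concretely, the change looks something like this (the exact stock ExecStart line may vary between Proxmox/lxcfs versions, so treat this as a sketch):

    # /lib/systemd/system/lxcfs.service (relevant line only)
    ExecStart=/usr/bin/lxcfs -l /var/lib/lxcfs

    # apply the change and restart the service:
    systemctl daemon-reload
    systemctl restart lxcfs
    # ...then restart each container, e.g.:
    pct reboot <vmid>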

Prometheus

I previously used Munin as my system monitoring tool. But a friend pointed me to Prometheus, node_exporter, and Grafana. There's not a whole lot to say here: in a matter of minutes, I had node_exporter running on all my LXC containers (plus the Proxmox host itself), and a dedicated Prometheus+Grafana LXC container.
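
There's a reason it only took minutes: on the Prometheus side, scraping node_exporter is just a short stanza in prometheus.yml. A sketch, with placeholder hostnames (node_exporter listens on port 9100 by default):

    scrape_configs:
      - job_name: 'node'
        static_configs:
          - targets:
              - 'pve-host:9100'
              - 'pihole:9100'
              - 'fileserver:9100'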

No Linux md (software RAID) support in Proxmox!

When planning out the migration from CentOS to Proxmox, I assumed Proxmox supported Linux's native (and most excellent) software RAID. I'm pretty sure every major Linux distribution natively supports mdraid... but not Proxmox! Of course you can use mdraid in Proxmox, but you'll have to do most of the administration manually via the command line. The mdraid tool, mdadm, is not even installed by default.

When I first booted into my fresh Proxmox install and did not see any devices in /proc/mdstat, I had a moment of panic. After a quick web search revealed that Proxmox does not support mdraid out of the box, I manually installed mdadm and quickly had my mdraid devices online and available.
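
For anyone who hits the same surprise, recovery was roughly this (assuming the arrays themselves are intact; device details will vary):

    apt install mdadm
    mdadm --assemble --scan   # find and start any existing arrays
    cat /proc/mdstat          # verify they're online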

That threw my migration plan for a loop. While I ultimately did want to migrate my disk arrays from Linux md to ZFS, I expected to be able to do it as a separate project, later down the line. But without native Linux md support, Proxmox left me with a server that was essentially crippled, from the perspective of what I wanted to do with it. So the first task was to migrate all my filesystems to ZFS. That was easy, both conceptually and in practice, but I was very methodical and over-cautious: copying the most important data, forcing backups to run, verifying backups, etc. Ultimately, it took several days to do the md-to-ZFS migration (though the overwhelming majority of that time was spent copying, backing up, and restoring data).
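
For illustration, the mechanical core of the migration looked roughly like the following. This is a sketch only: the pool layout, disk names, and paths are hypothetical, and in practice every copy was verified (and backed up) before anything was destroyed:

    # create the new pool on the replacement disks:
    zpool create -o ashift=12 tank mirror /dev/sdc /dev/sdd
    zfs create tank/data

    # copy everything over from the old md array:
    rsync -aHAX /mnt/md0/ /tank/data/

    # once verified, retire the old array:
    mdadm --stop /dev/md0
    mdadm --zero-superblock /dev/sda1 /dev/sdb1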

KeeWeb

Many years ago, I started using KeePass Password Safe to store longish, randomly-generated unique passwords for every site or system that requires a password. I share this database with my wife.

The problem we continually ran into is that, by default, KeePass does not have any multi-user synchronization or locking support. I would use KeePass via the command line (kpcli) on Linux, and my wife would access the same database via the KeePass GUI on Windows. If either of us forgot to close and re-open the database every time we modified it, we ran the risk of overwriting (and therefore losing) the other person's changes.

Enter KeeWeb, a web-based KeePass app that allows a single KeePass database to be shared across multiple individuals.

Other

  1. Read-only LXC bind mounts (a config sketch follows this list) https://forum.proxmox.com/threads/proxmox4-lxc-bind-mounts-read-only.25995/

  2. Linux md RAID to ZFS ("How to add a drive to a ZFS mirror"): https://www.sotechdesign.com.au/how-to-add-a-drive-to-a-zfs-mirror/

  3. dpkg-reconfigure tzdata (LXC not getting host time) https://forum.proxmox.com/threads/lxc-not-getting-host-time.38255/

  4. Jellyfin server move https://www.reddit.com/r/jellyfin/comments/khriew/how_to_move_a_jellyfin_server_from_windows_to/

  5. LXC Fileserver Considerations

    • Unprivileged vs privileged https://pve.proxmox.com/wiki/Unprivileged_LXC_containers
    • Bind mount points https://pve.proxmox.com/wiki/Linux_Container#_bind_mount_points
    • Mount NFS within LXC https://theorangeone.net/posts/mount-nfs-inside-lxc/
    • NFS server on LXC container https://gist.github.com/soulmachine/6310916333df55d91d59ddaec1e90c4f
    • Is it possible to run a NFS server within a LXC? https://forum.proxmox.com/threads/is-it-possible-to-run-a-nfs-server-within-a-lxc.24403/
  6. Necessary tools on Ubuntu/Debian https://itsfoss.com/add-apt-repository-command-not-found/

  7. UniFi controller move

  8. New pfSense architecture

  9. cpufrequtils / sysfsutils https://forum.proxmox.com/threads/c-states-not-working.72630/

  10. Double-NAT, i.e. AT&T pass-through

  11. Samba config for compatibility with Apple devices (a config sketch follows this list)
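
Two of the items above merit a quick illustration. For the read-only bind mounts (item 1), Proxmox's container config supports an ro flag directly on mount points; a sketch, with a hypothetical container ID and paths:

    # /etc/pve/lxc/101.conf
    # expose a host directory inside the container, read-only:
    mp0: /tank/media,mp=/mnt/media,ro=1

For the Samba/Apple item (item 11), the usual starting point is Samba's vfs_fruit module; a minimal sketch of the relevant smb.conf settings (the share name and path are placeholders):

    [global]
        # enable Apple SMB extensions via the fruit VFS module
        vfs objects = catia fruit streams_xattr
        fruit:metadata = stream
        fruit:model = MacSamba

    [media]
        path = /tank/media
        read only = no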