Once you have more than a few Docker containers running in your homelab, you’ll notice some applications have implemented their own authentication, requiring you to keep those credentials organized (hopefully in a password manager like 1Password). However, some applications don’t support authentication at all. What do you do? How do you make it less annoying to access your stuff?
Fortunately, a small cottage industry has developed around Single Sign-On (SSO) for homelab applications. Some of these are offered for free, with paid versions available for commercial use – a perfect use case for homelabs!
I started out with Authelia. Wherever possible, I disabled authentication in my apps and used Authelia instead. When I visited an app, my reverse proxy would check whether I was logged into Authelia, and if not, redirect me there before I could access the application. I set it up with this guide from the LinuxServer team, and it worked well for quite a while.
However, I found that every update would include breaking changes, requiring me to comb through the config file (a massive YAML file) to make sure my settings still worked. Per the guide, I was also using a SQLite database to store my users (just myself and my wife), even though Authelia's documentation warns that you should set up a PostgreSQL database for production use.
Thanks to the r/selfhosted and r/homelab subreddits, I found Authentik. Authentik includes a GUI for configuration, which was a huge improvement over my experience with Authelia. It does require a PostgreSQL database and a Redis instance, but neither took much effort to set up. Authentik's interface can be very, very confusing if you're not intimately familiar with SSO, so I found this guide to be instrumental in getting things working (I'm using SWAG instead of Traefik, but most of the guide is still applicable). My goal was to have most applications forward to Authentik without having to create each one as a separate entry in the admin interface.
Updating Authentik is harder than updating Authelia. For each update, you have to check whether the dependencies – namely, PostgreSQL and Redis – have changed. Redis is easy to update (just bump the version number of the image and rebuild your container), but PostgreSQL is a real hassle. You have to export your database, stop the container, delete your data, bump the version of the image you're using, rebuild your container, then import your database dump. Recently, an Authentik update set the session length to ~24 hours, which annoyed me a whole lot. Unable to figure out why that changed or how to configure it, I became motivated to try something else. I even considered going back to Authelia.
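For reference, that PostgreSQL dance looks roughly like the sketch below. The container name, database user, and paths are placeholders for illustration; adjust them (and the version numbers) to match your own setup.

#!/bin/bash
# Rough sketch of a PostgreSQL major-version upgrade for a containerized database.
# "authentik-postgres", the "authentik" user, and the paths below are placeholders.

# 1. Dump everything while the old version is still running
docker exec authentik-postgres pg_dumpall -U authentik > /mnt/user/appdata/authentik/postgres-dump.sql

# 2. Stop the container and move the old data directory out of the way
docker stop authentik-postgres
mv /mnt/user/appdata/authentik/postgres /mnt/user/appdata/authentik/postgres-old

# 3. Bump the image tag in your compose file (e.g. postgres:15 -> postgres:16),
#    then recreate the container so it initializes a fresh data directory
docker compose -f "/mnt/user/appdata/__docker-compose.yml" up -d authentik-postgres

# 4. Import the dump into the new version
docker exec -i authentik-postgres psql -U authentik -d postgres < /mnt/user/appdata/authentik/postgres-dump.sql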
Then, I found a replacement I think I'll stick with for a while – Tinyauth. Tinyauth is all I ever wanted in an authentication backend: it's lightweight and incredibly easy to configure. Although it supports the creation of local accounts, I've disabled that and outsourced the whole thing to Google. Thanks to this pull request, it's available in SWAG, and it wasn't difficult to add to my reverse proxy configurations. Now, when I access an application via SSO, my reverse proxy passes my request to Tinyauth. If I'm not logged into my Google account, I'm passed to Google, where I can log in with my passkey. Super simple. I highly recommend Tinyauth if you've found other SSO methods to be too complex.
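In SWAG, wiring an app up to an auth provider amounts to enabling a couple of include lines in that app's proxy conf. Here's a trimmed-down sketch; the tinyauth include file names are my assumption based on SWAG's pattern for its other providers, so check the sample confs that ship with SWAG for the exact names.

# Sketch of a SWAG proxy conf, e.g. /config/nginx/proxy-confs/example-app.subdomain.conf
# (trimmed for clarity; the tinyauth include names are assumptions)
server {
    listen 443 ssl;
    server_name example-app.*;

    # send unauthenticated visitors to Tinyauth
    include /config/nginx/tinyauth-server.conf;

    location / {
        # require a valid Tinyauth session for this app
        include /config/nginx/tinyauth-location.conf;

        include /config/nginx/proxy.conf;
        set $upstream_app example-app;
        set $upstream_port 8080;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    }
}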
In my last blog post, I detailed how I built myself a beefy file and application server. In this post, I wanted to share how I organize and run Docker containers on Unraid.
Orchestrating Services with Docker Compose
I’m a huge fan of docker compose. Although the docker run command is fine when you’re just getting started, it becomes unwieldy when managing more than a couple of Docker containers.
Although Unraid includes Docker, you’ll need to install docker compose separately via this plugin. You can ignore the plugin’s settings page – there’s nothing to configure via Unraid’s interface after installing. The plugin includes the CLI tool, which is all you need.
With docker compose, you describe exactly how you want your containers to run in formatted text files – YAML files. If you need to change something, just edit the file, run docker compose with the path to your YAML file, and your changes will be applied.
Because Docker containers are ephemeral, you’ll need a place to store the stuff you want to keep as you start/stop/delete/recreate your containers. Unraid really wants you to keep that at /mnt/user/appdata, and I suggest it’s easiest not to fight that (though if you can, keep this on an SSD). Here’s the hierarchy I’ve set up in there:
__docker-compose.yml: This is the file where I keep all of the information about my Docker containers. I add a couple of underscores to the front of this, so it sorts at the top.
example-app: I have a folder set up for each container, named after the container. The name must be DNS-compliant, so no underscores. Each of these folders contains:
  config.env: This is the file with most of the environment variables.
  data: This contains all files that I want to persist when a container is rebuilt.
  secrets.env: This contains the environment variables that might be sensitive – passwords, for example. Long-term, I plan to move these to Docker Secrets, but the first step was to separate them into their own files.
__docker-compose.yml might contain something like this:
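Here's a minimal sketch for a hypothetical example-app (the image, port, and container-side paths are placeholders):

services:
  example-app:
    image: lscr.io/linuxserver/example-app:latest   # placeholder image
    container_name: example-app
    env_file:
      - /mnt/user/appdata/example-app/config.env
      - /mnt/user/appdata/example-app/secrets.env
    volumes:
      # everything worth keeping lives in the app's data folder under appdata
      - /mnt/user/appdata/example-app/data:/config
    ports:
      - 8080:8080
    restart: unless-stopped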
config.env and secrets.env, if populated, might contain some environment variables:
FOO=BAR
ROOT_PASSWORD=BAZ
From there, it’s only a few commands to have docker compose spin up or tear down services based on these files:
#!/bin/bash
# This updates your Docker images
/usr/bin/docker compose -f "/mnt/user/appdata/__docker-compose.yml" pull
# This rebuilds your Docker containers to match the yaml file. Be careful, as it deletes any containers not mentioned in your yaml file!
/usr/bin/docker compose -f "/mnt/user/appdata/__docker-compose.yml" up -d --remove-orphans
# This deletes any Docker images that are old/unused.
/usr/bin/docker image prune -a -f
exit
To keep things up to date, I run a variation of this script every night via the User Scripts plugin (which uses cron).
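In the User Scripts plugin, choosing a custom schedule just means supplying a cron expression; for example, this runs the script nightly at 4:00 AM:

0 4 * * *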
I’ve found this method of organization scales very well, while providing the most flexibility. When I want to add a new app, I start with the YAML, then create the directory and config files. Heck, I’ve even managed to automate that through OliveTin.
Further Expansion
To securely expose services outside of my home network, I use a reverse proxy / load balancer called SWAG – there's a great setup guide available. I recommend Cloudflare for both its at-cost domain registration and its free DNS hosting with wildcard record support. Thanks to that guide, as soon as I spin up a new service, it's immediately available at https://example-app.mydomain.com with a free wildcard Let's Encrypt certificate that renews itself automatically. Also, rather than rely on built-in authentication, I've put most services behind Authentik, which provides SSO and MFA (including passkey support!).
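For reference, a SWAG service entry that requests a wildcard certificate via Cloudflare DNS validation might look roughly like this (a sketch, not my exact config; the domain is a placeholder, and the Cloudflare API token goes in SWAG's dns-conf folder rather than in the compose file):

services:
  swag:
    image: lscr.io/linuxserver/swag:latest
    container_name: swag
    cap_add:
      - NET_ADMIN
    environment:
      - URL=mydomain.com          # placeholder domain
      - SUBDOMAINS=wildcard       # request a wildcard certificate
      - VALIDATION=dns            # DNS validation is required for wildcards
      - DNSPLUGIN=cloudflare      # token goes in /config/dns-conf/cloudflare.ini
    volumes:
      - /mnt/user/appdata/swag/data:/config
    ports:
      - 443:443
      - 80:80
    restart: unless-stopped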
Once you accumulate enough services, you can really tie everything together with a dashboard. I've gone through several over the years, but my current favorite is called Homepage. You can add an arbitrary number of bookmarks, arrange them however you'd like, and even display live data in some of them (they're called "widgets"). Homepage is clean, lightweight, and actively getting better all the time.
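Homepage is configured with YAML as well. A services.yaml entry with a widget might look something like the sketch below; the app, URLs, and API key are made up, and each widget type has its own fields documented by the project.

- Media:
    - Jellyfin:
        href: https://jellyfin.mydomain.com   # what the bookmark opens
        description: Movies and TV
        widget:
          type: jellyfin                      # pulls live stats from the app
          url: http://jellyfin:8096
          key: your-jellyfin-api-key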
Even before I called it a “homelab,” I found uses for home servers – mostly to replace multiple external USB hard drives. Although I grew up with Mac desktops, I became a laptop user in high school. Imagine being able to access my data from anywhere using my PowerBook G4! Or even better, being able to have tasks running without tying up my main computer. I bought a used Power Mac G3 tower (I later upgraded it to a G4 tower), removed the optical and Zip drives, then added ATA/IDE expansion cards and additional hard drive bays. That worked well for a few years. As a repair technician, I even used this method to build NetBoot and DeployStudio servers at work.
At some point, I decided to complicate things a bit more. I picked up a liquid-cooled Power Mac G5, a Sonnet Tempo E4P eSATA card, and a couple of eSATA drive enclosures. Port multipliers were such cool technology. I later moved that to a secondhand Mac Pro.
Power Mac G5 tower with eSATA enclosure (June 2009)
Mac Pro tower with two eSATA enclosures (March 2010)
Consolidation
After that, I consolidated everything onto a brand new 2010 Mac Pro – the idea was that it’d be my primary Mac, my gaming PC (booting to Windows via Boot Camp), and my file server via the eSATA enclosures.
Transferring data from 2007 iMac to 2010 Mac Pro tower with two eSATA enclosures (August 2010)
After a couple of years of that, I realized consolidation had too many drawbacks – for example, if I played Borderlands on Windows for several days at a time, I couldn't easily browse the web, check my email, or access my storage without rebooting into macOS. I needed to split things up.
Un-Consolidation
First, after a lot of research, I purchased a Synology DS1815+. Although I had dabbled with RAID on macOS, this was much more stable – SHR and SHR2 meant that if a drive failed, I could remove it and replace it with no data loss. In addition, I could access my storage via SMB, as well as Synology’s included apps. The OS, DSM (DiskStation Manager), is Linux-based – built on top of BusyBox. After a couple of years, I bought a DS1817+, and kept the DS1815+ for backups.
My basement homelab (October 2020)
I also built a gaming PC from discarded parts. Through that experience, I learned that most games don’t demand a lot of CPU or RAM – a fast SSD and a decent GPU are generally enough. I connected it to my TV via HDMI, then used Steam’s Big Picture mode and a Steam Controller to play games from my couch. Finally, I was able to downsize my Mac to a MacBook Pro, then a Mac mini.
After becoming familiar with running Docker containers on my Synology NAS, I hit yet another ceiling – the Intel Atom processor just couldn’t keep up with the number of containers I had accumulated. In fact, Synology’s UI for Docker refused to load at some point due to the number of containers, so I had to manage Docker completely through the command line.
Application Server vs. File Server
By 2021, I had obtained a Dell PowerEdge R720 for learning VMware ESXi and vSphere. At the time, there was a strong homelab culture at Saint Joe's, so we traded ideas and helped each other learn new skills. Matt Ferro (Mateo) helped me configure ESXi, as well as iDRAC for Lights Out Management. While I kept my data on the Synology DS1817+, I moved Docker to an Ubuntu VM on the Dell, which increased performance considerably. I used NFS and autofs to keep things working seamlessly. I bought some plastic shelving at Home Depot that was wide enough to accommodate the R720, but I was uncomfortable with how much it swayed (though it never collapsed, thankfully). I repurposed Mac minis for AutoPkg, App Store caching, and uptime monitoring.
My basement homelab (September 2021)
After a couple of years, I realized I had outgrown both the R720 and the DS1817+. Three separate systems (ESXi, Ubuntu, and Synology's DSM) made patching difficult – I had to take things down and bring them back up in a certain order, so it couldn't be fully automated. In 2014, the Synology NAS's 8 bays seemed limitless, but a decade later I was almost out of disk space. I calculated that replacing half of the drives wouldn't be worth the cost for the amount of disk space I'd gain. I really just needed more bays, so I could buy cheaper drives. It'd be smarter to put that money towards a new build instead.
The Redesign
I started off with the approach that I'd buy a rack and mount everything in it. When I looked at cases, I found some that could hold 30+ drives! The idea of being able to buy so many cheap drives was enticing. However, those cases are huge and heavy, and it could be hard to access disks when I needed to swap one out.
I also had to decide whether I was going to use Unraid or TrueNAS. I had dabbled with TrueNAS back when it was called FreeNAS, but had a couple of bad experiences on the forums with a (now deactivated) moderator, so I didn't have fond memories of the project. On top of that, I used the software during a period when it suddenly received a major redesign, and I was frustrated for a bit as I tried to figure out where everything had moved. On the other hand, I'd heard nothing but good things about Unraid, and I wanted an OS that made it easy to expand my disk array or replace failing drives. TrueNAS's ZFS support sounded great, but I couldn't tell if the OS would be flexible enough for my Docker requirements. It really helped that the LinuxServer.io crew frequently recommends Unraid in their Docker image README files.
I posted to the Unraid Discord server about buying another old server, and received strong feedback that I should consider building things myself instead. Mateo suggested I build a “proof of concept” Unraid server, just to see how it works. I had a spare PC tower lying around, so I installed Unraid on a USB stick and experimented with the OS. It was very easy to get up and running, and seemed to do what I needed without much modification. This could definitely work.
Building the New Server
I remember reading a few years ago that John Carmack has an interesting approach to developing games – it takes years to build a game, but he wants the game to require cutting-edge technology when it’s released. To do that, he has to plan for hardware that doesn’t exist yet.
For computing projects like this, I've found that if I spec to my current needs, I'll outgrow the build faster. On the other hand, if I spec more than I need, I'll find new use cases that push my setup farther than I had originally planned. My goal with this server was to build something as future-proof as possible.
While gathering ideas, I searched PCPartPicker for Unraid builds. I found a couple of excellent ones that really helped shape my project. One was also local to the Philly burbs and mentioned the nearby Micro Center. The timing was excellent, as they were having a sale on motherboard / CPU / RAM bundles for gaming PCs. I hadn’t anticipated that I’d use an Intel i9 processor here, but I was replacing dual Xeon processors, so I had hoped the difference in age would make up for any performance gaps. Later, I found a benchmark website that confirmed my hunch. Not only is the i9 more powerful, but it also supports Intel’s Quick Sync, so video transcoding tasks could be offloaded onto the built-in GPU.
Another mentioned the Fractal Design Meshify 2 XL case, which is surprisingly flexible. Things were starting to come together. This case holds sixteen 2.5″ or 3.5″ drives, with room for two 2.5″ drives mounted to the back. While that’s not the 36 bays I was originally hoping for, it’s still more than I’d actually need. I ended up using both 2.5″ bays on the back, and eight of the sixteen 2.5″/3.5″ bays. B&H is located in New York City, so shipping the case and extra drive carriers was fast and convenient.
Since the motherboard had slots for M.2 SSDs, I added a few as a cache pool in Unraid, speeding up access to recently added files (the “mover” task seamlessly offloads them to the disk array overnight). I had put together something similar on my Synology NAS, but it required manual work – Unraid’s automated approach is significantly better.
Lastly, I bought new shelves. These shelves are incredibly sturdy and have a very clean look – I highly recommend them. I even added a $20 monitor from Facebook Marketplace!
My basement homelab (January 2024)
Please take a look at my completed build, which includes part links, prices, and pictures. Overall, I’m very happy with this setup, and hope it’ll last for years to come!