My Homelab Hardware Journey

The Beginning

Even before I called it a “homelab,” I found uses for home servers – mostly to replace multiple external USB hard drives. Although I grew up with Mac desktops, I became a laptop user in high school. Imagine being able to access my data from anywhere using my PowerBook G4! Or even better, being able to have tasks running without tying up my main computer. I bought a used Power Mac G3 tower (I later upgraded it to a G4 tower), removed the optical and Zip drives, then added ATA/IDE expansion cards and additional hard drive bays. That worked well for a few years. As a repair technician, I even used this method to build NetBoot and DeployStudio servers at work.

At some point, I decided to complicate things a bit more. I picked up a liquid-cooled Power Mac G5, a Sonnet Tempo E4P eSATA card, and a couple of eSATA drive enclosures. Port multipliers were such cool technology. I later moved that to a secondhand Mac Pro.

Mac Pro tower with two eSATA enclosures (March 2010)

Consolidation

After that, I consolidated everything onto a brand new 2010 Mac Pro – the idea was that it’d be my primary Mac, my gaming PC (booting to Windows via Boot Camp), and my file server via the eSATA enclosures.

Transferring data from 2007 iMac to 2010 Mac Pro tower with two eSATA enclosures (August 2010)

After a couple of years of that, I realized consolidation had too many drawbacks – for example, if I played Borderlands on Windows for several days at a time, I couldn’t easily browse the web, check my email, or access my storage without rebooting into macOS. I needed to split things up.

Un-Consolidation

First, after a lot of research, I purchased a Synology DS1815+. Although I had dabbled with RAID on macOS, this was much more stable – SHR and SHR2 meant that if a drive failed, I could remove it and replace it with no data loss. In addition, I could access my storage via SMB, as well as Synology’s included apps. The OS, DSM (DiskStation Manager), is Linux-based – built on top of BusyBox. After a couple of years, I bought a DS1817+, and kept the DS1815+ for backups.

My basement homelab (October 2020)

I also built a gaming PC from discarded parts. Through that experience, I learned that most games don’t demand a lot of CPU or RAM – a fast SSD and a decent GPU are generally enough. I connected it to my TV via HDMI, then used Steam’s Big Picture mode and a Steam Controller to play games from my couch. Finally, I was able to downsize my Mac to a MacBook Pro, then a Mac mini.

After becoming familiar with running Docker containers on my Synology NAS, I hit yet another ceiling – the Intel Atom processor just couldn’t keep up with the number of containers I had accumulated. In fact, Synology’s UI for Docker refused to load at some point due to the number of containers, so I had to manage Docker completely through the command line.
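What that looked like in practice was mostly the standard Docker CLI over SSH – for example (the container name here is just an illustration):

    docker ps --format '{{.Names}}\t{{.Status}}'   # list running containers
    docker stats --no-stream                       # spot-check CPU and memory usage
    docker restart plex                            # bounce a single container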

Application Server vs. File Server

By 2021, I obtained a Dell PowerEdge R720 for learning VMware ESXi and vSphere. At the time, there was a strong homelab culture at Saint Joe’s, so we traded ideas and helped each other learn new skills. Matt Ferro (Mateo) helped me configure ESXi, as well as iDRAC for Lights Out Management. While I kept my data on the Synology DS1817+, I moved Docker to an Ubuntu VM on the Dell, which increased performance considerably. I used NFS and autofs to keep things working seamlessly. I bought some plastic shelving at Home Depot that was wide enough to accommodate the R720, but was uncomfortable with how much it swayed (though it never collapsed, thankfully). I repurposed Mac minis for AutoPkg / App Store caching / uptime monitoring.
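In case “NFS and autofs” sounds hand-wavy, here’s a minimal sketch of what that glue looked like on the Ubuntu VM – the hostname, export path, and mount point are placeholders rather than my exact setup:

    # Install autofs and the NFS client
    sudo apt-get install -y autofs nfs-common

    # Mount NFS exports from the NAS on demand under /mnt/nas
    echo '/mnt/nas /etc/auto.nas --timeout=600' | sudo tee /etc/auto.master.d/nas.autofs
    echo 'media -fstype=nfs4,rw synology.local:/volume1/media' | sudo tee /etc/auto.nas

    sudo systemctl restart autofs
    ls /mnt/nas/media   # the share mounts automatically on first access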

My basement homelab (September 2021)

After a couple of years, I realized I had outgrown both the R720 and the DS1817+. Three separate systems (ESXi, Ubuntu, and Synology’s DSM) made patching difficult – I had to take things down and bring them back up in a certain order, so it couldn’t be fully automated. In 2014, the Synology NAS’s 8 bays seemed limitless, but a decade later I was almost out of disk space. I calculated that replacing half of the drives wouldn’t be worth the cost for the amount of disk space I’d gain. I really just needed more bays, so I could buy cheaper drives. It’d be smarter to put that money towards a new build instead.

The Redesign

I started off with the approach that I’d buy a rack and mount everything in there. When I looked at cases, I found some that could hold 30+ drives! The idea of being able to buy so many cheap drives was enticing. However, those cases are huge and heavy, and it could be hard to access disks if I needed to swap one out.

I also had to decide whether to use Unraid or TrueNAS. I had dabbled with TrueNAS back when it was called FreeNAS, but had a couple of bad experiences on the forums with a (now deactivated) moderator, so I didn’t have fond memories of the project. On top of that, I used the software during a period when it suddenly received a major redesign, and I was frustrated for a bit as I tried to figure out where everything had moved. On the other hand, I’d heard nothing but good things about Unraid, and I wanted an OS that made it easy to expand my disk array or replace failing drives. TrueNAS’s ZFS support sounded great, but I couldn’t tell if the OS would be flexible enough for my Docker requirements. It really helped that the LinuxServer.io crew frequently recommends Unraid in their Docker image README files.

I posted to the Unraid Discord server about buying another old server, and received strong feedback that I should consider building things myself instead. Mateo suggested I build a “proof of concept” Unraid server, just to see how it works. I had a spare PC tower lying around, so I installed Unraid on a USB stick and experimented with the OS. It was very easy to get up and running, and seemed to do what I needed without much modification. This could definitely work.

Building the New Server

I remember reading a few years ago that John Carmack has an interesting approach to developing games – it takes years to build a game, but he wants the game to require cutting-edge technology when it’s released. To do that, he has to plan for hardware that doesn’t exist yet.

For computing projects like this, I’ve found that if I spec to my current needs, I outgrow the build faster. On the other hand, if I spec more than I need, I find new use cases that push my setup further than I had originally planned. My goal was to make this server as future-proof as possible.

While gathering ideas, I searched PCPartPicker for Unraid builds and found a couple of excellent ones that really helped shape my project. One builder was also local to the Philly burbs and mentioned the nearby Micro Center. The timing was excellent, as Micro Center was having a sale on motherboard / CPU / RAM bundles for gaming PCs. I hadn’t anticipated using an Intel i7 processor here, but I was replacing dual Xeon processors, so I hoped the difference in age would make up for any performance gaps. Later, I found a benchmark website that confirmed my hunch. Not only is the i7 more powerful, but it also supports Intel’s Quick Sync, so video transcoding tasks could be offloaded onto the built-in GPU.
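To illustrate what Quick Sync buys you: with ffmpeg’s QSV support, a transcode can run on the iGPU instead of tying up CPU cores. A one-liner sketch (the filenames are made up, and media servers like Plex will do this for you automatically):

    ffmpeg -hwaccel qsv -c:v h264_qsv -i input.mkv -c:v h264_qsv output.mp4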

Another build mentioned the Fractal Design Meshify 2 XL case, which is surprisingly flexible. Things were starting to come together. This case holds sixteen 2.5″ or 3.5″ drives, with room for two more 2.5″ drives mounted to the back. While that’s not the 36 bays I was originally hoping for, it’s still more than I’d actually need. I ended up using both 2.5″ bays on the back, and eight of the sixteen 2.5″/3.5″ bays. I ordered the case and extra drive carriers from B&H, which is located in New York City, so shipping was fast and convenient.

Since the motherboard had slots for M.2 SSDs, I added a few as a cache pool in Unraid, speeding up access to recently added files (the “mover” task seamlessly offloads them to the disk array overnight). I had put together something similar on my Synology NAS, but it required manual work – Unraid’s automated approach is significantly better.

Lastly, I bought new shelves. These shelves are incredibly sturdy and have a very clean look – I highly recommend them. I even added a $20 monitor from Facebook Marketplace!

My basement homelab (January 2024)

Please take a look at my completed build, which includes part links, prices, and pictures. Overall, I’m very happy with this setup, and hope it’ll last for years to come!

MunkiReport in Azure

Following up on my last post – up until a couple of months ago, our production MunkiReport server was running Windows Server 2012 R2. Yep, MunkiReport was running in IIS, and MySQL was installed in the same VM. The server was about 8 years old, and while it had served us well, it was time to migrate to something more modern.

As we’re pushing to move more stuff into Azure, and containers are the future of these types of deployments, I spent a bunch of time figuring out how to get MunkiReport running as a Docker container in Azure. Even better: I automated it, so you can do it, too!

Please check out my GitHub repo for the script.
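To give a feel for what the script automates, here’s a hand-wavy sketch of the core idea – running the MunkiReport image in Azure Container Instances. The names and sizes below are made up, and the real script also handles the database and other pieces:

    # Create a resource group, then run the MunkiReport image as a container instance
    az group create --name munkireport-rg --location eastus
    az container create \
      --resource-group munkireport-rg \
      --name munkireport \
      --image munkireport/munkireport-php:release-latest \
      --ports 80 \
      --dns-name-label munkireport-demo \
      --cpu 1 --memory 1.5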

We’ve had this running in production for a couple of months now, and it’s averaged out to about $4.60/day for ~700 clients.

Due to some upcoming life changes, I’m not sure how much further development the script will receive from me. I intend to add some documentation, but there are definitely improvements that could be made (such as migrating to an ARM/Bicep template, or making some portions of the script optional). Please check out the script and let me know what you think!

MacDevOpsYVR 2022 Workshop

It’s been really quiet here, but that’s because I’ve been busy!

For starters, I participated in a workshop in June for the consistently excellent MacDevOpsYVR conference. We discussed various ways of deploying MunkiReport. I strongly encouraged everyone to take a look at Docker!

Many, many thanks to Mat X for inviting me to share my experiences, and for his skillful editing of the video recording.

My diagrams are included in the video, but I’m posting them here for posterity. 😎

More to come on this topic!

Securing MunkiReport with Let’s Encrypt on Synology DSM

Update, 2020-06-11: I’m now using Synology’s built-in NGINX-based reverse proxy instead. The instructions below may not work.


Continuing my series on using Docker with a Synology NAS, I now have MunkiReport v3 working – and you can, too!

Some background: MunkiReport is a companion project to Munki (which we set up with Squirrel last week). MunkiReport v3 was released recently, and has a huge list of improvements, thanks to a dedicated group of contributors – especially @bochoven and @mosen, who have overhauled large portions of the project. MunkiReport v3 has some new requirements that weren’t present with v2 – this is the perfect use case for Docker! Docker will handle all of this for us.

Briefly, here’s what we’re going to do: we’re going to set up MySQL, Adminer, and MunkiReport using Docker Compose. Then, we’re going to use DSM 6.x’s certificate and reverse proxy support to secure MunkiReport. Let’s go!

  1. Enable SSH on your Synology server. Open the Terminal and connect to your server (I’m using root, but your admin account should also do fine). Leave that window open for later.
  2. Install Docker through Package Center, if you don’t already have it.
  3. Add a certificate to DSM. I like Let’s Encrypt – DSM can walk you through the certificate creation process, and will renew the certificate automatically. You’ll need a domain name for this; you might be able to use Synology’s QuickConnect service (I ended up creating a CNAME on a subdomain I already own that points to my QuickConnect address, then used the CNAME for the certificate).
  4. Create a shared folder for your Docker data. I named mine ‘docker’. Create two directories inside of it: ‘MunkiReport’ and ‘MySQL’.
  5. Create a file called ‘docker-compose.yml’ in your ‘docker’ shared folder. Populate it with this data, to start:
version: '3.2'
networks:
  default:
    driver: bridge
services:
  Adminer:
    container_name: Adminer
    image: adminer
    # https://hub.docker.com/_/adminer/
    ports:
      - "3307:8080"
    networks:
      - default
    restart: on-failure
  MunkiReport:
    container_name: MunkiReport
    image: munkireport/munkireport-php:release-latest
    # https://hub.docker.com/r/munkireport/munkireport-php/
    volumes:
      - /volume1/docker/MunkiReport/config.php:/var/munkireport/config.php:ro
    ports:
      - "4443:80"
    networks:
      - default
    restart: on-failure
    depends_on:
      - MySQL
  MySQL:
    container_name: MySQL
    image: mysql:5.7
    # https://hub.docker.com/_/mysql/
    volumes:
      - /volume1/docker/MySQL:/var/lib/mysql:rw
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=secretpassword
    networks:
      - default
    restart: on-failure
  6. Change the MYSQL_ROOT_PASSWORD value (‘secretpassword’) near the bottom of the file to something random. You can also change the port numbers if you’d like, but I’m going to finish this tutorial with the assumption that you haven’t touched those (it can get confusing very quickly).
  7. Switch over to your Terminal window and run these two commands. The first will download the Docker images for Adminer, MunkiReport, and MySQL. The second command will create Docker containers, which contain your custom settings. If you change any of the settings in your docker-compose.yml file, re-running these commands will destroy the Docker containers and recreate them with your new specifications. Pretty cool. You can monitor all of this with the Docker application in DSM.
    /usr/local/bin/docker-compose -f /volume1/docker/docker-compose.yml pull
    /usr/local/bin/docker-compose -f /volume1/docker/docker-compose.yml up -d
  8. Now, let’s create the MySQL database for MunkiReport. Go to your Synology server’s IP address, but add :3307 to the end. You’ll reach a login page. Here are the relevant details:
    1. Server is your NAS’s IP address, but with :3306 at the end.
    2. Username is root.
    3. Password is whatever you set in Step 6.
    4. Database can be left blank.
  9. After you log in, click ‘Create database’. Name the database whatever you’d like – I went with ‘mreport’. For ‘Collation’, pick ‘utf8_general_ci’. Close the Adminer tab.
  10. Open a new tab and go to your server’s IP address, with :4443 at the end. You should be greeted with an empty MunkiReport installation. Nice!
  11. In your ‘docker’ shared folder, you created a ‘MunkiReport’ directory in Step 4. Inside it, create a file named ‘config.php’. This is how we’ll configure MunkiReport – by overriding values specified in config_default.php (click to see MunkiReport’s default values). I’ll skip this part of the tutorial, as it’s documented much better on MunkiReport’s wiki. At a minimum, I’d strongly suggest setting up authentication, MySQL connection settings, and the modules you’d like to enable.
  12. Before you can expose your MunkiReport container to the outside world, you’ll want to secure it. You’ll do this with a reverse proxy – basically, another web server put in front of your MunkiReport container (which itself contains a web server). The reverse proxy will add encryption, but otherwise leave your MunkiReport installation alone. DSM 6.0 includes a reverse proxy, so let’s use that.
  13. Check out the bottom of this Synology knowledge base article. Unfortunately, the documentation leaves a lot to be desired, so I’ll suggest some settings:
    1. Description: MunkiReport
    2. Source Protocol: HTTPS
    3. Source Hostname: *
    4. Source Port: 4444
    5. (leave both the HSTS and HTTP/2 boxes unchecked)
    6. Destination Protocol: HTTP
    7. Destination Hostname: 127.0.0.1
    8. Destination Port: 4443
  14. Click OK to save.
  15. In your router, forward port 4444 (TCP) to your Synology server. If you haven’t given your Synology server a static IP address, now would be a good time.
  16. Visit your secure MunkiReport installation in a web browser:
    https://yourdomain.com:4444
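
If you’d rather check from the command line first, a quick header request will confirm the reverse proxy and certificate are working (substitute your own domain):

    curl -I https://yourdomain.com:4444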

From there, you can create a MunkiReport installation package (I like using the AutoPkg recipe for this). Push it to your clients, then watch as they check in with sweet, sweet data.
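If AutoPkg is new to you, the flow is roughly as follows – the recipe name below is a placeholder, and autopkg search will show you what’s actually available:

    autopkg search munkireport        # find an installer recipe
    autopkg run -v <RecipeName>.pkg   # build the installation package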

Securing Squirrel with Let’s Encrypt on Synology DSM

Update, 2020-06-11: I’m now using Synology’s built-in NGINX-based reverse proxy instead. The instructions below may not work.


Yep, this is another Docker blog post…but this time, we’re covering Munki!

It’s pretty common knowledge that a Munki server is just a web server. This allows tons of flexibility with hosting your Munki repository – basically anything that can run Apache, NGINX, IIS, etc. can act as your Munki server (even macOS Server, but I wouldn’t recommend it). Today, I’m going to continue the series of blog posts about Docker, but we’re going to discuss something called Squirrel.

Squirrel, written by Victor Vrantchan (@groob), is described as a simple HTTPS server for Munki. While you can set up your own Apache server (or Docker container), Squirrel comes prebuilt for hosting a Munki server.

As with Ubooquity, I’m going to use Synology DSM’s certificates.  That way, we can leverage Let’s Encrypt without needing to do any additional setup.

  1. First, set up Let’s Encrypt in DSM’s Control Panel. Synology has excellent documentation on that.
  2. Before we go any further, I’d recommend creating a directory for Squirrel to save files (such as certificates). Separately, you’ll also want to create a Munki repository (see the Demonstration Setup, but skip the Apache config stuff). If you already have a repo, that’s great too.
  3. Next, add the Docker image for micromdm/squirrel. Follow Synology’s instructions.
  4. Create a Docker container, following those same instructions.
    1. You’ll want to specify two volumes, both of which you created in Step 2: where to store Squirrel’s data, and your Munki repository. I have a shared folder named ‘docker’, and I like creating directories for each service within that: for example, /volume1/docker/Squirrel. I made a ‘certs’ directory within that, as well.
    2. You’ll also want to pick a port. If you’re comfortable exposing port 443 to the world, go for it. Otherwise, use 443 internally to the Docker container, and pick another port for the outside world. Be sure to forward this port on your router!
    3. The environment variables you’ll want to override are (there’s a docker run sketch after this list if you prefer the command line):
      SQUIRREL_MUNKI_REPO_PATH (the path to your Munki repo, which you specified in Step 4a)
      SQUIRREL_BASIC_AUTH (a randomly generated password for your Munki repo)
      SQUIRREL_TLS_CERT (/path/to/cert.pem)
      SQUIRREL_TLS_KEY (/path/to/privkey.pem)
  5. But wait, where do we get the cert? I wrote a really simple script to copy the Let’s Encrypt certs to Squirrel’s config location: get it here. Be sure to edit line 6! I run this once a night, with DSM’s Task Scheduler.
  6. After you start up your Squirrel container, check the Docker logs by selecting your container, clicking the Details button, then the Log tab. You’ll see the Basic Authentication string that you’ll need to provide to your Munki clients. You can find out more information on the Munki wiki.
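
As promised, if you prefer the command line to DSM’s Docker UI, here’s roughly what the equivalent docker run would look like. The paths, port, and password generation are assumptions based on my layout – adjust them for yours:

    docker run -d --name Squirrel \
      -p 443:443 \
      -v /volume1/docker/Squirrel/certs:/certs:ro \
      -v /volume1/munki_repo:/repo \
      -e SQUIRREL_MUNKI_REPO_PATH=/repo \
      -e SQUIRREL_BASIC_AUTH="$(openssl rand -hex 16)" \
      -e SQUIRREL_TLS_CERT=/certs/cert.pem \
      -e SQUIRREL_TLS_KEY=/certs/privkey.pem \
      micromdm/squirrel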

After that, you’re done! Your clients have a secure Munki repo, and you don’t have to bother with Apache config files, a reverse proxy for securing your web server, or any of that.

Securing Ubooquity with Let’s Encrypt on Synology DSM

Update, 2020-06-11: I’m now using Synology’s built-in NGINX-based reverse proxy instead. The instructions below may not work.


Whew, that’s a very specific title. I don’t know if this will be useful to anyone else, but it took a fair amount of work to figure it out, so I figured I’d document it. There will be more Mac stuff soon, I promise!

If you haven’t heard, Let’s Encrypt is an excellent service, with the aim of securing the internet by offering free HTTPS certificates to anyone who requests one. In fact, I’m using one on this website right now. 🙂

With DSM 6.0, Synology added the ability to request a free certificate from Let’s Encrypt to secure your NAS. DSM handles renewing your certificate, which must happen every 90 days (one of the limitations of the free certificate, but nothing that can’t be automated).

Unrelated for the moment, but I’ve been using Ubooquity (through Docker!) for the past few months, and it’s been pretty neat. You can point Ubooquity to a directory of ePub and PDF files, and it’ll allow you to access the files remotely using reader apps like Marvin, KyBook, or Chunky. I have a habit of buying tech books and comics through Humble Bundle sales, but transferring the files to my iPad through iTunes/iBooks is clunky and requires a fair amount of disk space upfront.

Although Ubooquity supports user authentication, you’ll want that to happen over HTTPS, to keep your passwords secure. Luckily, Ubooquity supports HTTPS, but requires the certificate (and other associated files) to be in a format called a “keystore”. What?!

Here’s how to leverage DSM’s Let’s Encrypt support to secure Ubooquity, automatically.

  1. First, you’ll want to set up Let’s Encrypt in DSM’s Control Panel. See Synology’s documentation.
  2. Next, you’ll want to get Ubooquity up and running (I recommend the Docker image mentioned above). Synology’s documentation covers that, too. If your eBook library is a mess, Calibre will make quick work of it.
  3. For this to work, you’ll also need the Java 8 JDK installed. This will give you access to the ‘keytool’ command you’ll need to create your keystore. Once again, see Synology’s documentation.
  4. Now, you’ll put all of this together. In a nutshell: you’re going to use the Let’s Encrypt certs that DSM has helpfully obtained for you, convert those to a keystore, put the keystore in Ubooquity’s config directory, and tell Ubooquity to use it to secure its interface. Here’s a script to get you started – note that you’ll need to edit lines 11, 12, and 15 for your environment. Thanks to Saltypoison on the Ubooquity forums for most of the code that became this script! (If you’re curious what the conversion actually involves, there’s a sketch after this list.)
  5. Once you’ve successfully run the script, I recommend using DSM’s Task Scheduler to have it run once a day. This way, Ubooquity’s certificate will always be up to date with DSM’s certificate. That’s right, I’m going to link you to Synology’s documentation.
  6. Finally, you’ll need to tell Ubooquity where to find your keystore. Login to the Ubooquity admin interface, then click the Security tab. You’ll see two boxes – one for the path to your keystore, and one for the keystore password. Enter both. Click ‘Save and Restart’ at the top-right corner.
  7. Now, try accessing your Ubooquity instance using https and your FQDN! If it doesn’t work, make sure you’re forwarding the appropriate ports from your router to your Synology server – you’ll need to do this for both the eBook interface, and the admin interface (which are accessible via two different ports).
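
As promised, here’s a sketch of the conversion at the heart of that script: bundle the PEM files into a PKCS12 file with openssl, then import that into a Java keystore with keytool. The certificate path below is the usual DSM location, but verify it on your system; the password is obviously a placeholder:

    CERT_DIR="/usr/syno/etc/certificate/system/default"   # DSM's Let's Encrypt certs (assumption – check yours)
    KEYSTORE="/volume1/docker/Ubooquity/ubooquity.jks"
    PASS="changeit"                                       # placeholder – pick your own

    # Bundle the cert and private key into a PKCS12 file...
    openssl pkcs12 -export -in "$CERT_DIR/fullchain.pem" -inkey "$CERT_DIR/privkey.pem" \
      -out /tmp/ubooquity.p12 -name ubooquity -password "pass:$PASS"

    # ...then import it into a Java keystore that Ubooquity can read.
    keytool -importkeystore -srckeystore /tmp/ubooquity.p12 -srcstoretype PKCS12 \
      -srcstorepass "$PASS" -destkeystore "$KEYSTORE" -deststorepass "$PASS"
    rm /tmp/ubooquity.p12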

I’ll probably post more Synology/Docker stuff in the future, as I’ve been spending a lot of time with both. They’re really awesome!
