Some personal news

It’s been a while since I’ve posted anything non-technical here, but I have some news! I’m excited to announce that next week, I’ll be joining DoorDash’s IT team! I’ll be working as a Client Platform Engineer, helping to manage all of the company’s devices. I seriously can’t wait!

I was at Saint Joseph’s University for almost nine years, and I couldn’t be more proud of the work I’ve done there. My CIO, Fran DiSanti, sent this to everyone in the Office of Information Technology (and gave permission for me to repost it publicly):

Hello Colleagues,

Many of you may already know that Mike Solin will be leaving SJU this Friday, October 21 to pursue a new job opportunity as a Client Platform Engineer with DoorDash. Mike is very excited to be joining a new team of client engineers that has been taking shape at DoorDash for the past year. I’m confident that Mike will do great things for DoorDash just as he has for SJU over the past 9 years.

Mike started as a Technology Support Specialist in OIT and over time was promoted to his current role as Senior Client Platform Engineer. Throughout his tenure in OIT, Mike has contributed much to our organization and to the University community. He completely reimagined and reengineered the way that we manage our macOS and iOS environments. When he started at SJU, Mike envisioned a zero-touch, modern approach to device management, and he successfully realized this vision by delivering an out-of-the-box deployment experience for Mac users. His approach was secure, highly automated, and allowed users to select and install pre-packaged apps from a software catalog. In addition to his Mac expertise, Mike became very proficient with our Windows environments and Active Directory.

Mike has been instrumental in the design, development and deployment of a number of strategic technologies which have had a significant impact on the way in which we manage our endpoint devices, including:

  • An automated data-backup solution (Code42)
  • Endpoint detection and response software (Malwarebytes)
  • Adobe Creative Cloud implementations
  • Mobile Device Management software (Workspace One)
  • Computer encryption
  • Microsoft Azure environment
  • Automated delivery of iPads to users

Clearly, Mike has made many important contributions through the years, and along the way, he has continually developed his knowledge and skills. I am truly grateful for all that Mike has done for our division and the community. Please join me in thanking Mike and wishing him well in the next chapter of his career.

Fran

Prior to joining SJU, I had moved from Philadelphia to State College, PA, then Richmond, VA. Being a Mac admin is very specialized, and at the time, remote work wasn’t as common as it is now. I missed my family terribly, and regularly used all of my vacation time to drive back for visits. I was incredibly lucky that the opportunity at Saint Joe’s opened up – it brought me home.

Moreover, it gave me the chance to work with a great team. One of the best things about working at SJU was that nothing was off-limits – I was encouraged to learn anything that interested me, and to use those skills to make things better for the university. When the position is posted, I absolutely recommend applying.

Going forward, I’ll still be local to the Philly area! I’m still involved with Greater Philadelphia Mac Admins, and plan on continuing to post to this blog, present at conferences, and participate in the MacAdmins Slack. šŸ˜„

Controlling Munki via Workspace ONE and Active Directory

I got something working recently, and I thought it was interesting enough that it’d be worth sharing.

Our MDM server is a SaaS instance of Workspace ONE UEM, and we have the AirWatch Cloud Connector installed in an on-prem VM to provide integration with Active Directory. Although WS1 bundles its own (modified) version of Munki, we don’t use it – we have a separate on-prem VM for our vanilla Munki server.

Unfortunately, this post is partially about printers (sorry). The challenge with setting up LPD printers on macOS is that the drivers need to be installed before the printer is added (otherwise, the printer is added with a generic driver and must be removed and reinstalled). This is an excellent use case for Munki, as the requires and update_for pkginfo keys are perfect for expressing dependencies.
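
For example, a printer’s pkginfo can declare its driver as a dependency (the item names here are hypothetical):

<key>name</key>
<string>MyPrinter</string>
<key>requires</key>
<array>
    <string>MyPrinterDriver</string>
</array>

Munki resolves requires before installing an item, so the driver always lands before the printer is added.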

For several years, I used Graham Gilbert’s printer-pkginfo script to deploy printers with Munki. That, combined with my NoMAD group condition script, allowed me to deploy printers to only certain people’s devices – their user accounts in AD needed to be a member of a particular group.

With macOS 12.3 dropping Python 2 from the OS, I needed another solution. I landed on wyncomco’s fork of Nick McSpadden’s PrinterGenerator script. It works well, but with our move from NoMAD to Jamf Connect, how would we be able to leverage our AD groups to deploy these printers?

Thanks to the AirWatch Cloud Connector, I was able to add the AD security group to WS1 (in Accounts > User Groups > List View). The group in WS1 syncs periodically with AD, so users added to the group in AD will appear in the WS1 group after a few hours.

In my case, though, I needed a Smart Group (sometimes called an “Assignment Group”) to actually make use of the user group. In Groups & Settings > Groups > Assignment Groups, add a new Smart Group where the first criterion is the Organization Group that contains your devices. Scroll down to User Group, and select the group you’re syncing from AD. Name your Smart Group and click Save.

The last piece was how I’d get the printer to these users. Around the same time, VMware added the ability to run scripts through Workspace ONE. I remembered Nick McSpadden’s post about Local-Only Manifests in Munki, which was perfect for this. I’d set up a separate manifest for WS1 to write to, and Munki would install the printer driver and the printer automatically.

First, in your Munki configuration profile, add this:

<key>LocalOnlyManifest</key>
<string>LocalOnlyManifest.plist</string>

This tells Munki to check this additional manifest for potential items to install. There’s no need to create the file – if it doesn’t exist, Munki proceeds as normal, without printing any warnings/errors.

Lastly, add this script to WS1 (in Resources > Scripts), and assign it to your Smart Group. Set the language to Bash, and the execution context to System.

#!/bin/bash

defaults="/usr/bin/defaults"
grep="/usr/bin/grep"
manifest="/Library/Managed Installs/manifests/LocalOnlyManifest"

# Check whether MyPrinter is already listed in the local-only manifest
printer_installed=$(${defaults} read "${manifest}" managed_installs | ${grep} "MyPrinter")

# If it isn't, add it to the managed_installs array
if [ -z "${printer_installed}" ]; then
    ${defaults} write "${manifest}" managed_installs -array-add "MyPrinter"
fi

exit 0

In my case, I have it run immediately upon device enrollment, as well as when the network interface changes. The script checks whether the Munki item MyPrinter is in the LocalOnlyManifest and, if not, adds it. The next time Munki runs a background check, it will install the driver and printer automatically.
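
For reference, defaults writes the manifest as a binary plist; converted to XML (with plutil -convert xml1), the result of the script above is equivalent to:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>managed_installs</key>
    <array>
        <string>MyPrinter</string>
    </array>
</dict>
</plist>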

The end result is that when a user requires our printer, any AD admin can add the user to a particular group. Some time later, the user will receive the printer without needing to do anything. If the user already has our printer, but receives a new computer, the printer will be added as soon as the computer is set up – no additional admin work necessary.

I hope someone finds this useful for more than just printers!

MunkiReport in Azure

Following up on my last post – up until a couple of months ago, our production MunkiReport server was running Windows Server 2012 R2. Yep, MunkiReport was running in IIS, and MySQL was installed in the same VM. The server was about 8 years old, and while it had served us well, it was time to migrate to something more modern.

As we’re pushing to move more stuff into Azure, and containers are the future of these types of deployments, I spent a bunch of time figuring out how to get MunkiReport running as a Docker container in Azure. Even better: I automated it, so you can do it, too!

Please check out my GitHub repo for the script.
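
To give a rough sense of what the script automates – this is a simplified sketch, not the actual script; the resource names are hypothetical, and you should check the repo for the image and database configuration it really uses – standing up a container in Azure Container Instances boils down to something like:

# Create a resource group, then launch the MunkiReport container in it
az group create --name munkireport-rg --location eastus
az container create \
    --resource-group munkireport-rg \
    --name munkireport \
    --image ghcr.io/munkireport/munkireport-php:latest \
    --dns-name-label munkireport-demo \
    --ports 80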

We’ve had this running in production for a couple of months now, and it’s averaged out to about $4.60/day for ~700 clients.

Due to some upcoming life changes, I’m not sure how much further development the script will receive from me. I intend to add some documentation, but there are definitely improvements that could be made (such as migrating to an ARM/Bicep template, or making some portions of the script optional). Please check out the script and let me know what you think!

MacDevOpsYVR 2022 Workshop

It’s been really quiet here, but that’s because I’ve been busy!

For starters, I participated in a workshop in June for the consistently excellent MacDevOpsYVR conference. We discussed various ways of deploying MunkiReport. I strongly encouraged everyone to take a look at Docker!

Many, many thanks to Mat X for inviting me to share my experiences, and for his skillful editing of the video recording.

My diagrams are included in the video, but I’m posting them here for posterity. 😎

More to come on this topic!

Modern Bootstrapping Presentation

I had the honor of presenting at the University of Utah’s May 2021 MacAdmins Meeting this week.

The slides and video are already up – check them out here!

Modern Bootstrapping: Part 2 (Building the Packages)

This is the second post in my multi-part series on modern bootstrapping with Workspace ONE UEM. If you haven’t read the first one, you can find it here.

Modern Bootstrapping: Part 1 (Intro)

For a while now, I’ve been meaning to post about how I’m bootstrapping our Macs using Workspace ONE UEM and several open source tools. This will be a multi-part series, and will culminate with a presentation at the University of Utah’s MacAdmins meeting for May 2021. I feel that it’d be best to start with some historical context and how bootstrapping has evolved since I joined the industry.

Smart Home, Part Two

It’s been just over a year since my last post about smart home stuff, and I wanted to write about some of the things we’ve changed since then. Here we go!

Setting up Synology’s reverse proxy server

Update: I’ve since moved on to using LinuxServer.io’s SWAG. You can run SWAG on a Synology NAS (if it supports Docker), but I’m running it in Ubuntu on other hardware. I’ve learned a lot since posting this, but I’m leaving it up in case it’s still helpful to anyone else.


In several previous posts, I detailed how to secure various services with a Let’s Encrypt certificate, installed in DSM. I approached each one individually, figuring out how to link the certificate in a way that each application accepted.

On my post about securing Ubooquity, jcd suggested I use Synology’s built-in reverse proxy server instead (and linked to this tutorial). Honestly, this was the best advice, and I’ve since switched away from the various methods I detailed before. Please check out Graham Leggat’s tutorial – this post isn’t meant to be a retelling, but hopefully adds some useful notes that I learned along the way.

Essentially, here’s how a reverse proxy works: you have a service running inside of your firewall over HTTP (not secured). Here are some of your options for opening that service outside of your network:

  • Open it as-is, unsecured. This is generally not a good idea.
  • Secure the service and open it outside of your network. You’ll need to read documentation, and possibly convert the certificate and put it in a place the application expects. Furthermore, as you open up more services outside of your network, you’ll need to open separate ports for each – it’s a lot to manage when you just want to access your service outside of your firewall.
  • Use a reverse proxy.

A reverse proxy is a separate server that sits between your service and the internet and encrypts all traffic seamlessly. When you connect from outside your firewall, you communicate securely with the reverse proxy, which then passes your traffic along to your unencrypted application.

There are many benefits to this approach: this works with nearly every application, requires very little configuration (past the initial setup), allows you to set up memorable URLs without using weird ports, etc.

Some prerequisites:

  • First, I’m going to assume you have an application/service that you want to open outside of your network, securely. Set it up on an unused TCP port. I recommend checking this list to avoid commonly used ports.
  • You’ll need a domain name, and be able to add custom DNS entries.
  • You’re also going to need a wildcard certificate. I paid for one, but Let’s Encrypt offers them for free, too (you’ll probably need to use this script).
  • If you don’t pay your ISP for a static IPv4 address, you’ll need to set up Synology’s DDNS service (this is what provides addresses like example.synology.me). It’s free, but requires a Synology.com account.

Now that you’ve got all of that squared away, let’s proceed.

  1. First, we’ll need to forward port 443 to your Synology server. See here for instructions on how to do that for most types of routers.
  2. Add your wildcard certificate to DSM.
  3. At your domain registrar, edit the DNS settings for your domain name. Add an entry with the following:
    1. Type: CNAME
    2. Name: application.yourdomain.com
    3. Value: (your Synology DDNS address – example.synology.me)
  4. Unless your domain name is brand new, it shouldn’t take long for your new subdomain to resolve to your Synology server’s IP address.
  5. In DSM, click Control Panel, then Application Portal, then the Reverse Proxy tab. Click the Create button. Fill in these details:
    1. Description: (name of your application)
    2. Source
      1. Protocol: HTTPS
      2. Hostname: application.yourdomain.com
      3. Port: 443
      4. (don’t check any of the boxes)
    3. Destination
      1. Protocol: HTTP
      2. Hostname: (the local IP address of the server running your application, such as 192.168.1.3 or 127.0.0.1)
      3. Port: (the port you’re currently using to access your application, such as 80 or 8080)
  6. Click Save, then try to access https://application.yourdomain.com in a web browser. If you did everything right (and I didn’t miss any steps!), you should be able to load your application and see that the connection is secure. If you click the lock, you should see your wildcard certificate.
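
If you’d like to double-check from the command line, something like this (using the example hostname from above) confirms both the DNS record and the certificate:

# Confirm the CNAME resolves
dig +short application.yourdomain.com

# Confirm the proxy answers over HTTPS and presents your wildcard certificate
curl -vI https://application.yourdomain.com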

Going forward, you can do this for multiple applications – and each one can use port 443, so you don’t need to open additional ports outside of your firewall or remember anything more than your unique subdomain for each application.

Using Munki to enable sudo for Touch ID

Ever since I got my MacBook Pro with a Touch Bar, I’ve avoided typing in my password as much as possible. macOS 10.14 and 10.15 added more places in the OS that accept Touch ID, which has been a welcome change. As part of my job, I tend to use the sudo command quite a bit, and this post from Rich Trouton has been a godsend. Just edit the appropriate file, restart your Terminal session, and you’re all set.
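
For reference, the change is a single line added near the top of /etc/pam.d/sudo:

auth       sufficient     pam_tid.so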

However, with many macOS patches and security updates, /etc/pam.d/sudo is reset back to defaults. I don’t know why this happens, but it’s quite annoying. After manually applying the change to this file again, I finally decided to script it.

Now, there are a handful of files that can really ruin your day if they become damaged or invalid. This is one of those files. Please proceed with caution, keep good backups, and be prepared to reinstall your OS if things get really messed up. That said, this worked for me on macOS 10.15.5, and will hopefully continue to work for years to come.

Since I use Munki, I decided to build a nopkg file that checks for the appropriate line in /etc/pam.d/sudo, and inserts it if it’s not present. To download the code, please see my GitHub repository.
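
The nopkg in the repo is the authoritative version, but the core check-and-fix logic amounts to something like this – a simplified sketch that assumes the stock file layout, with the comment header on line 1 (run as root):

#!/bin/bash

pam_file="/etc/pam.d/sudo"

# Only touch the file if the Touch ID module isn't already enabled
if ! /usr/bin/grep -q "pam_tid.so" "${pam_file}"; then
    # Insert the Touch ID rule as the first auth entry, just below the comment header
    /usr/bin/sed -i '' '2i\
auth       sufficient     pam_tid.so
' "${pam_file}"
fi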
