Category: Tutorials

Finding Balance While Working Remotely

Alright, back to the technical stuff. Well, sort of.

Something that’s been new to me is working remotely for a company where many of my coworkers are in different time zones. Although I was fully remote at SJU for the last few years of my time there, everyone I worked with started and ended their day at around the same time. That doesn’t happen when you’re working for a global company! To have a work / life balance these days, I need to be mindful of my own schedule. Here’s how I’ve used technology to help me do that.

Focus

macOS Ventura, iOS 16, and iPadOS 16 arrived at exactly the right time for me. I had just started at DoorDash, and was already familiar with Do Not Disturb mode and using the Health app to set a sleep schedule. I’m really glad Apple gave this feature so much attention with the Fall 2022 releases.

To get started, Apple has excellent documentation for iOS / iPadOS and macOS. You have a lot of flexibility to create different Focus modes, but I’ve settled on four: Sleep, Do Not Disturb, Personal, and Off. I work from 10 AM until 6 PM Monday through Friday, so I’ve built my Focus modes around those times.

Sleep: Sleep is a good place to start, since it has to be set up in the Health app on your iPhone. Pick what time you want to go to sleep, and what time you want to wake up. On the weekends, I give myself a slightly later bedtime, and a later wake time. You can pick an alarm if you want to, but I rely on our bedroom Sonos speaker for that instead, so I can wake up to music. 😄

I’d recommend setting “wind down” to 0 minutes. It just activates Sleep focus early, which is somewhat unnecessary.

In Settings > Focus > Sleep, you can customize a number of things. For me, Sleep is my most restrictive focus – I have a custom Lock Screen (I’m using “Astronomy”, which looks great at night), and the brightness is significantly dimmed. Only a handful of apps are allowed to send push notifications – mostly ones like 1Password, in case I need an MFA code. I made a page of apps made up solely of the ones I’d need if I woke up at night or was getting ready for bed, along with a few shortcuts for actions like running the “good night” scene in Home or quickly adding a to-do item in OmniFocus. I filter out my work email, too, and all badges are disabled.

Do Not Disturb: I want this to activate at 10 PM on weeknights, and 11 PM on weekends, well ahead of my actual bedtime. The end time doesn’t matter, since Sleep focus will take over. This is my own “wind down” time, where all notifications are silenced (again, except for apps like 1Password). I have a custom Lock Screen here too, so I can tell at a glance that I’ve activated Do Not Disturb. I picked an excellent wallpaper from Wallaroo and set it to greyscale, taking a colorful beach scene and turning it into a snowy evening. I also filter out my work email here, so I only see my personal email.

Personal: For obvious reasons, this is my favorite. I have a custom Lock Screen with a picture of my wife. It activates at 6 PM each weekday, and also in the mornings – my wake-up time is 9 AM, so it covers 9 AM until 10 AM, which keeps work email from hitting me as soon as I get out of bed.

Off: This is what’s in place during my work hours. “Off” simply means no focus is activated – the default behavior for an iPhone. Since I manage Macs, I have an Apple-themed Lock Screen and Home Screen. All email accounts are shown in a unified inbox, and no notifications are silenced. I experimented with creating a “Work” focus, but for my purposes, a separate focus just for work hours was overkill.

[Image: Off Lock Screen]

Outside of those schedules, I’ll frequently toggle Do Not Disturb during the work day if I’m joining a Zoom call and don’t want to be distracted by notifications. When I’m on vacation, I manually toggle Personal on, so I don’t see any work emails. I used to fully remove my work account from my phone while on vacation, but this is significantly easier!

One of the best additions to macOS Ventura is that you can add a menu bar icon for Focus mode, allowing you to quickly switch to a different mode. All of your iCloud-connected devices will instantly adopt the same mode.

Slack

Slack has an excellent guide to configuring notifications. I set my work hours in there, so I don’t receive any notifications in my off-hours. Coworkers can still push DM notifications through if it’s an emergency, but otherwise, it’s all silenced at the end of the day.

One additional consideration: since I have both my work Slack and the Mac Admins Slack on my phone, I found that I was still seeing badge notifications for DMs on my work Slack, even in my off-hours. This became hard to ignore, so my solution was to disable badges for Slack on iOS altogether. For similar reasons, I don’t have my work Slack on my home computer, as I found myself checking work notifications in my off-hours just to clear the badge.

Google

You can set your work hours in Google Calendar, too. My main recommendation here is to pad the time – in my case, I set my work hours from 10:30 AM to 5:30 PM. That gives me 30 minutes at the start of the day to catch up, as well as 30 minutes at the end of my day to wind things down.

Note that I’m not signed into my work email on my personal computer, and I’m not signed into my personal email on my work computer. However, I am signed into all of my calendars on both computers and my phone – this prevents me from double-booking events and makes it easy to block time on my work calendar as necessary.

Smart Home

I’m extremely lucky to have my own home office – that was one of the reasons we bought our house in the first place. Even though that’s where I work from during the day, it’s also where I keep my personal computer and video game systems. I typically spend a lot of non-work time in my home office.

We picked up some Nanoleaf Shapes LED panels on sale a year or two ago, and I’ve grown really attached to them. I made an ugly fish with big teeth! They provide a lot of great light, but since they’re so customizable, I’ve set them to change on a schedule:

9:30 AM (30 minutes before I start work): Be Productive

6:00 PM: Jungle

10:00 PM (or 11:00 PM on the weekends): Starlight

This helps provide visual signals when my day has changed. The moment the panels go from light blue to green, I know my work day is over. Since Nanoleaf supports HomeKit, I also have the panels turn off as part of the “good night” scene when I go to bed.

Conclusion

If you’re working remotely, I hope this helps give you ideas on how you can use technology to have a better work / life balance. It’s certainly helped me!

Modern Bootstrapping: Part 2 (Building the Packages)

This is the second post in my multi-part series on modern bootstrapping with Workspace ONE UEM. If you haven’t read the first one, you can find it here.

Modern Bootstrapping: Part 1 (Intro)

For a while now, I’ve been meaning to post about how I’m bootstrapping our Macs using Workspace ONE UEM and several open source tools. This will be a multi-part series, and will culminate with a presentation at the University of Utah’s MacAdmins meeting for May 2021. I feel that it’d be best to start with some historical context and how bootstrapping has evolved since I joined the industry.

Smart Home, Part Two

It’s been just over a year since my last post about smart home stuff, and I wanted to write about some of the things we’ve changed since then. Here we go!

Setting up Synology’s reverse proxy server

Update: I’ve since moved on to using LinuxServer.io’s SWAG. You can run SWAG on a Synology NAS (if it supports Docker), but I’m running it in Ubuntu on other hardware. I’ve learned a lot since posting this, but I’m leaving it up in case it’s still helpful to anyone else.


In several previous posts, I detailed how to secure various services with a Let’s Encrypt certificate, installed in DSM. I approached each one individually, figuring out how to link the certificate in a way that each application accepted.

On my post about securing Ubooquity, jcd suggested I use Synology’s built-in reverse proxy server instead (and linked to this tutorial). Honestly, this was the best advice, and I’ve since switched away from the various methods I detailed before. Please check out Graham Leggat’s tutorial – this post isn’t meant to be a retelling, but hopefully adds some useful notes that I learned along the way.

Essentially, here’s how a reverse proxy works: you have a service running inside of your firewall over HTTP (not secured). Here are some of your options for opening that service outside of your network:

  • Open it as-is, unsecured. This is generally not a good idea.
  • Secure the service and open it outside of your network. You’ll need to read documentation, and possibly convert the certificate and put it in a place the application expects. Furthermore, as you open up more services outside of your network, you’ll need to open separate ports for each – it’s a lot to manage when you just want to access your service outside of your firewall.
  • Use a reverse proxy.

A reverse proxy is a separate server, sitting in between your service and the internet, which will encrypt all traffic, seamlessly. When you connect from outside of your firewall, you’ll communicate securely to your reverse proxy, which will then pass along your traffic to your unencrypted applications.

There are many benefits to this approach: this works with nearly every application, requires very little configuration (past the initial setup), allows you to set up memorable URLs without using weird ports, etc.

Some prerequisites:

  • First, I’m going to assume you have an application/service that you want to open outside of your network, securely. Set it up on an unused TCP port. I recommend checking this list to avoid commonly used ports.
  • You’ll need a domain name, and be able to add custom DNS entries.
  • You’re also going to need a wildcard certificate. I paid for one, but Let’s Encrypt offers them for free, too (you’ll probably need to use this script).
  • If you don’t pay your ISP for a static IPv4 address, you’ll need to set up Synology’s QuickConnect service. It’s free, but requires a Synology.com account.

Now that you’ve got all of that squared away, let’s proceed.

  1. First, we’ll need to forward port 443 to your Synology server. See here for instructions on how to do that for most types of routers.
  2. Add your wildcard certificate to DSM.
  3. At your domain registrar, edit the DNS settings for your domain name. Add an entry with the following:
    1. Type: CNAME
    2. Name: application.yourdomain.com
    3. Value: (your QuickConnect address – example.synology.me)
  4. Unless your domain name is brand new, it shouldn’t take long for your new subdomain to resolve to your Synology server’s IP address.
  5. In DSM, click Control Panel, then Application Portal, then the Reverse Proxy tab. Click the Create button. Fill in these details:
    1. Description: (name of your application)
    2. Source
      1. Protocol: HTTPS
      2. Hostname: application.yourdomain.com
      3. Port: 443
      4. (don’t check any of the boxes)
    3. Destination
      1. Protocol: HTTP
      2. Hostname: (the local IP address of the server running your application, such as 192.168.1.3 or 127.0.0.1)
      3. Port: (the port you’re currently using to access your application, such as 80 or 8080)
  6. Click Save, then try to access https://application.yourdomain.com in a web browser. If you did everything right (and I didn’t miss any steps!), you should be able to load your application and see that the connection is secure. If you click the lock, you should see your wildcard certificate.
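If the page doesn’t load, two quick checks from any terminal can tell you whether the problem is DNS or the proxy itself (substituting your own subdomain, of course):

    # Confirm the CNAME is resolving
    dig +short application.yourdomain.com

    # Confirm port 443 reaches your Synology server and the wildcard certificate is being served
    curl -vI https://application.yourdomain.com

The dig output should end with your public IP address, and curl’s verbose output should show your wildcard certificate during the TLS handshake.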

Going forward, you can do this for multiple applications – and each one can use port 443, so you don’t need to open additional ports outside of your firewall or remember anything more than your unique subdomain for each application.

Our Smart Home Setup

Last spring, my fiancée and I bought a house. We lived in an apartment for two years, and experimented with smart home stuff, but wanted to do a bit more with our house. We’ve had enough people ask about our setup that I figured I’d write a blog post.

Securing MunkiReport with Let’s Encrypt on Synology DSM

Update, 2020-06-11: I’m now using Synology’s built-in NGINX-based reverse proxy instead. The instructions below may not work.


Continuing my series on using Docker with a Synology NAS, I now have MunkiReport v3 working – and you can, too!

Some background: MunkiReport is a companion project to Munki (which we set up with Squirrel last week). MunkiReport v3 was released recently, and has a huge list of improvements, thanks to a dedicated group of contributors – especially @bochoven and @mosen, who have overhauled large portions of the project. MunkiReport v3 has some new requirements that weren’t present with v2 – this is the perfect use case for Docker! Docker will handle all of this for us.

Briefly, here’s what we’re going to do: we’re going to set up MySQL, Adminer, and MunkiReport using Docker Compose. Then, we’re going to use DSM 6.x’s certificate and reverse proxy support to secure MunkiReport. Let’s go!

  1. Enable SSH on your Synology server. Open the Terminal and connect to your server (I’m using root, but your admin account should also do fine). Leave that open for later.
  2. Install Docker through Package Center, if you don’t already have it.
  3. Add a certificate to DSM. I like Let’s Encrypt – DSM can walk you through the certificate creation process, and will renew it automatically. You’ll need a domain name for this. You might be able to use Synology’s QuickConnect service for this. (I ended up setting up a CNAME for my QuickConnect address with a subdomain that I already own, then used the CNAME for the certificate)
  4. Create a shared folder for your Docker data. I named mine ‘docker’. Create two directories inside of it: ‘MunkiReport’ and ‘MySQL’.
  5. Create a file called ‘docker-compose.yml’ in your ‘docker’ shared folder. Populate it with this data, to start:
version: '3.2'
networks:
  default:
    driver: bridge
services:
  Adminer:
    container_name: Adminer
    image: adminer
    # https://hub.docker.com/_/adminer/
    ports:
      - "3307:8080"
    networks:
      - default
    restart: on-failure
  MunkiReport:
    container_name: MunkiReport
    image: munkireport/munkireport-php:release-latest
    # https://hub.docker.com/r/munkireport/munkireport-php/
    volumes:
      - /volume1/docker/MunkiReport/config.php:/var/munkireport/config.php:ro
    ports:
      - "4443:80"
    networks:
      - default
    restart: on-failure
    depends_on:
      - MySQL
  MySQL:
    container_name: MySQL
    image: mysql:5.7
    # https://hub.docker.com/_/mysql/
    volumes:
      - /volume1/docker/MySQL:/var/lib/mysql:rw
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=secretpassword
    networks:
      - default
    restart: on-failure
  6. Change the MYSQL_ROOT_PASSWORD value (your MySQL root password) to something random. You can also change the port numbers if you’d like, but I’m going to finish this tutorial with the assumption that you haven’t touched those (it can get confusing very quickly).
  7. Switch over to your Terminal window and run these two commands. The first will download the Docker images for Adminer, MunkiReport, and MySQL. The second command will create Docker containers, which contain your custom settings. If you change any of the settings in your docker-compose.yml file, re-running these commands will destroy the Docker containers and recreate them with your new specifications. Pretty cool. You can monitor all of this with the Docker application in DSM.
    /usr/local/bin/docker-compose  -f /volume1/docker/docker-compose.yml pull
    /usr/local/bin/docker-compose -f /volume1/docker/docker-compose.yml up -d
  8. Now, let’s create the MySQL database for MunkiReport. Go to your Synology server’s IP address, but add :3307 to the end. You’ll reach a login page. Here are the relevant details:
    1. Server is your NAS’s IP address, but with :3306 at the end.
    2. Username is root.
    3. Password is whatever you set in Step 6.
    4. Database can be left blank.
  9. After you log in, click ‘Create database’. Name the database whatever you’d like – I went with ‘mreport’. For ‘Collation’, pick ‘utf8_general_ci’. Close the Adminer tab.
  10. Open a new tab, with your server’s IP address followed by :4443 at the end. You should be greeted with an empty MunkiReport installation. Nice!
  11. In your ‘docker’ shared folder, you had created a ‘MunkiReport’ folder in Step 4. Inside of that, create a file named ‘config.php’. This is how we’ll configure MunkiReport – by overriding values specified in config_default.php (click to see MunkiReport’s default values). I’ll skip this part of the tutorial, as it’s documented much better on MunkiReport’s wiki. At a minimum, I’d strongly suggest setting up authentication, MySQL connection settings, and the modules you’d like to enable.
  12. Before you can expose your MunkiReport container to the outside world, you’ll want to secure it. You’ll do this with a reverse proxy – basically, another web server put in front of your MunkiReport container (which itself contains a web server). The reverse proxy will add encryption, but otherwise leave your MunkiReport installation alone. DSM 6.0 includes a reverse proxy, so let’s use that.
  13. Check out the bottom of this Synology knowledge base article. Unfortunately, the documentation leaves a lot to be desired, so I’ll suggest some settings:
    1. Description: MunkiReport
    2. Source Protocol: HTTPS
    3. Source Hostname: *
    4. Source Port: 4444
    5. (leave both the HSTS and HTTP/2 boxes unchecked)
    6. Destination Protocol: HTTP
    7. Destination Hostname: 127.0.0.1
    8. Destination Port: 4443
  14. Click OK to save.
  15. In your router, forward port 4444 (TCP) to your Synology server. If you haven’t given your Synology server a static IP address, that’d be a good idea.
  16. Visit your secure MunkiReport installation in a web browser:
    https://yourdomain.com:4444
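If that page doesn’t come up, a few quick checks from the SSH session you opened in Step 1 will usually narrow things down (the container name and ports match the docker-compose.yml above):

    # Confirm all three containers are running
    docker ps

    # Look for PHP or database errors from the MunkiReport container
    docker logs --tail 50 MunkiReport

    # Check that the reverse proxy is answering and serving your certificate
    curl -vkI https://yourdomain.com:4444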

From there, you can create a MunkiReport installation package (I like using the AutoPkg recipe for this). Push it to your clients, then watch as they check in with sweet, sweet data.

Securing Squirrel with Let’s Encrypt on Synology DSM

Update, 2020-06-11: I’m now using Synology’s built-in NGINX-based reverse proxy instead. The instructions below may not work.


Yep, this is another Docker blog post…but this time, we’re covering Munki!

It’s pretty common knowledge that a Munki server is just a web server. This allows tons of flexibility with hosting your Munki repository – basically anything that can run Apache, NGINX, IIS, etc. can act as your Munki server (even macOS Server, but I wouldn’t recommend it). Today, I’m going to continue the series of blog posts about Docker, but we’re going to discuss something called Squirrel.

Squirrel, written by Victor Vrantchan (@groob), is described as a simple HTTPS server for Munki. While you can set up your own Apache server (or Docker container), Squirrel comes prebuilt for hosting a Munki server.

As with Ubooquity, I’m going to use Synology DSM’s certificates.  That way, we can leverage Let’s Encrypt without needing to do any additional setup.

  1. First, set up Let’s Encrypt in DSM’s Control Panel. Synology has excellent documentation on that.
  2. Before we go any further, I’d recommend creating a directory for Squirrel to save files (such as certificates). Separately, you’ll also want to create a Munki repository (see the Demonstration Setup, but skip the Apache config stuff). If you already have a repo, that’s great too.
  3. Next, add the Docker image for micromdm/squirrel. Follow Synology’s instructions.
  4. Create a Docker container, following those same instructions.
    1. You’ll want to specify two volumes, both of which you created in Step 2: where to store Squirrel’s data, and your Munki repository. I have a shared folder named ‘docker’, and I like creating directories for each service within that: for example, /volume1/docker/Squirrel. I made a ‘certs’ directory within that, as well.
    2. You’ll also want to pick a port. If you’re comfortable exposing port 443 to the world, go for it. Otherwise, use 443 internally to the Docker container, and pick another port for the outside world. Be sure to forward this port on your router!
    3. The environment variables you’ll want to override are listed below (a rough sketch of a matching docker run command appears after this list):
      SQUIRREL_MUNKI_REPO_PATH (this is the path to your Munki repo, which you specified in Step 4a)
      SQUIRREL_BASIC_AUTH (this is a randomly generated password for your Munki repo)
      SQUIRREL_TLS_CERT (/path/to/cert.pem)
      SQUIRREL_TLS_KEY (/path/to/privkey.pem)
  5. But wait, where do we get the cert? I wrote a really simple script to copy the Let’s Encrypt certs to Squirrel’s config location: get it here. Be sure to edit line 6! I run this once a night, with DSM’s Task Scheduler.
  6. After you start up your Squirrel container, check the Docker logs by selecting your container, clicking the Details button, then the Log tab. You’ll see the Basic Authentication string that you’ll need to provide to your Munki clients. You can find out more information on the Munki wiki.
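To make Steps 4 and 5 a bit more concrete, here’s a rough sketch of the same container defined on the command line instead of through DSM’s Docker UI. The container name, host paths, and external port (8443) are placeholders for illustration – swap in your own:

    docker run -d --name squirrel \
      --restart on-failure \
      -p 8443:443 \
      -v /volume1/docker/Squirrel/certs:/certs:ro \
      -v /volume1/munki_repo:/repo:ro \
      -e SQUIRREL_MUNKI_REPO_PATH=/repo \
      -e SQUIRREL_BASIC_AUTH=use-a-long-random-password \
      -e SQUIRREL_TLS_CERT=/certs/cert.pem \
      -e SQUIRREL_TLS_KEY=/certs/privkey.pem \
      micromdm/squirrel

And for Step 5, the nightly certificate copy boils down to something like this – I’m not reproducing the linked script, and the DSM certificate path is an assumption (the _archive folder name is unique to each NAS), so check where your Let’s Encrypt files actually live and adjust:

    #!/bin/bash
    # Copy DSM's Let's Encrypt cert and key to the directory mounted into the Squirrel container.
    # NOTE: the _archive folder name is a placeholder – it's different on every NAS.
    CERT_SRC="/usr/syno/etc/certificate/_archive/XXXXXX"
    CERT_DST="/volume1/docker/Squirrel/certs"

    cp "${CERT_SRC}/cert.pem" "${CERT_DST}/cert.pem"
    cp "${CERT_SRC}/privkey.pem" "${CERT_DST}/privkey.pem"

    # Restart the container so Squirrel picks up the renewed certificate
    docker restart squirrel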

After that, you’re done! Your clients have a secure Munki repo, and you don’t have to bother with Apache config files, a reverse proxy for securing your web server, or any of that.

Securing Ubooquity with Let’s Encrypt on Synology DSM

Update, 2020-06-11: I’m now using Synology’s built-in NGINX-based reverse proxy instead. The instructions below may not work.


Whew, that’s a very specific title. I don’t know if this will be useful to anyone else, but it took a fair amount of work to figure it out, so I figured I’d document it. There will be more Mac stuff soon, I promise!

If you haven’t heard, Let’s Encrypt is an excellent service, with the aim of securing the internet by offering free HTTPS certificates to anyone who requests one. In fact, I’m using one on this website right now. 🙂

With DSM 6.0, Synology added the ability to request a free certificate from Let’s Encrypt to secure your NAS. DSM handles renewing your certificate, which must happen every 90 days (one of the limitations of the free certificate, but nothing that can’t be automated).

Unrelated for the moment, but I’ve been using Ubooquity (through Docker!) for the past few months, and it’s been pretty neat. You can point Ubooquity to a directory of ePub and PDF files, and it’ll allow you to access the files remotely using reader apps like Marvin, KyBook, or Chunky. I have a habit of buying tech books and comics through Humble Bundle sales, but transferring the files to my iPad through iTunes/iBooks is clunky and requires a fair amount of disk space upfront.

Although Ubooquity supports user authentication, you’ll want that to happen over HTTPS, to keep your passwords secure. Luckily, Ubooquity supports HTTPS, but requires the certificate (and other associated files) to be in a format called a “keystore”. What?!

Here’s how to leverage DSM’s Let’s Encrypt support to secure Ubooquity, automatically.

  1. First, you’ll want to set up Let’s Encrypt in DSM’s Control Panel. See Synology’s documentation.
  2. Next, you’ll want to get Ubooquity up and running (I recommend the Docker image mentioned above). Synology’s documentation covers that, too. If your eBook library is a mess, Calibre will make quick work of that.
  3. For this to work, you’ll also need the Java 8 JDK installed. This will give you access to the ‘keytool’ command you’ll need to create your keystore. Once again, see Synology’s documentation.
  4. Now, you’ll put all of this together. In a nutshell: you’re going to use the Let’s Encrypt certs that DSM has helpfully obtained for you, convert those to a keystore, put the keystore in Ubooquity’s config directory, and tell Ubooquity to use it to secure its interface. Here’s a script to get you started (a rough sketch of the conversion it performs appears after this list) – note that you’ll need to edit lines 11, 12, and 15 for your environment. Thanks to Saltypoison on the Ubooquity forums for most of the code that became this script!
  5. Once you’ve successfully run the script, I recommend using DSM’s Task Scheduler to have it run once a day. This way, Ubooquity’s certificate will always be up to date with DSM’s certificate. That’s right, I’m going to link you to Synology’s documentation.
  6. Finally, you’ll need to tell Ubooquity where to find your keystore. Login to the Ubooquity admin interface, then click the Security tab. You’ll see two boxes – one for the path to your keystore, and one for the keystore password. Enter both. Click ‘Save and Restart’ at the top-right corner.
  7. Now, try accessing your Ubooquity instance using https and your FQDN! If it doesn’t work, make sure you’re forwarding the appropriate ports from your router to your Synology server – you’ll need to do this for both the eBook interface, and the admin interface (which are accessible via two different ports).
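For the curious, the conversion in Step 4 is essentially an openssl/keytool two-step. This is only a sketch under assumptions – the DSM certificate path, keystore location, and password are placeholders for your own values:

    #!/bin/bash
    # Placeholders – edit these for your environment.
    CERT_DIR="/usr/syno/etc/certificate/_archive/XXXXXX"     # where DSM keeps the Let's Encrypt files (folder name varies per NAS)
    KEYSTORE="/volume1/docker/ubooquity/config/ubooquity.ks" # somewhere inside Ubooquity's config directory
    STOREPASS="changeme"

    # 1. Bundle the certificate chain and private key into a PKCS12 file
    openssl pkcs12 -export \
      -in "${CERT_DIR}/fullchain.pem" \
      -inkey "${CERT_DIR}/privkey.pem" \
      -out /tmp/ubooquity.p12 \
      -name ubooquity \
      -passout "pass:${STOREPASS}"

    # 2. Replace the old keystore with a fresh one built from the PKCS12 file
    rm -f "${KEYSTORE}"
    keytool -importkeystore \
      -srckeystore /tmp/ubooquity.p12 -srcstoretype PKCS12 -srcstorepass "${STOREPASS}" \
      -destkeystore "${KEYSTORE}" -deststorepass "${STOREPASS}"

    rm -f /tmp/ubooquity.p12

The keystore path and password here are the same two values you’ll enter in Ubooquity’s Security tab in Step 6.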

I’ll probably post more Synology/Docker stuff in the future, as I’ve been spending a lot of time with both. They’re really awesome!

Resolving a freezing problem on lab Macs

This post has been brewing for a while, and a MacEnterprise thread from today finally got me to write about this problem, and how we resolved it.

Our university has many computer labs – some in public, open spaces, and some in classrooms. Although we don’t use roaming profiles (a technology that Apple finally removed in macOS 10.12), we do bind to Active Directory and create mobile accounts upon logging in with a valid AD account.  To prevent the buildup of cruft, we remove student and faculty accounts periodically. In the public labs, we do it overnight, using a script based off of this one from Marnin Goldberg:

#!/bin/bash
# This script works well for removing local accounts that are older than 1 day.
# Obviously the 1 day timeframe can be modified (-mtime +1).
# Runs using Launch Daemon – /Library/LaunchDaemons/edu.org.deleteaccounts.plist
# version .7

DATE=$(date "+%Y-%m-%d %H:%M:%S")

# Don't delete local accounts
keep1="/Users/admin"
keep2="/Users/admin2"
keep3="/Users/Shared"
currentuser=$(ls -l /dev/console | cut -d " " -f 4)
keep4="/Users/$currentuser"

USERLIST=$(/usr/bin/find /Users -type d -maxdepth 1 -mindepth 1 -mtime +1)

for a in $USERLIST ; do
    [[ "$a" == "$keep1" ]] && continue # skip admin
    [[ "$a" == "$keep2" ]] && continue # skip admin2
    [[ "$a" == "$keep3" ]] && continue # skip shared
    [[ "$a" == "$keep4" ]] && continue # skip current user
    # Log results
    echo "${DATE} Deleting account and home directory for ${a}" >> "/Library/Logs/deleted user accounts.log"
    # Delete the account
    /usr/bin/dscl . -delete $a
    # Delete the home directory
    # dscl . list /Users UniqueID | awk '$2 > 500 { print $1 }' | grep -v Shared | grep -v admin | grep -v admin1 | grep -v .localized
    /bin/rm -rf $a
done

exit 0
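Before putting this into production, it’s worth previewing which home directories would be swept up – the script’s find command can be run on its own and the output reviewed:

    /usr/bin/find /Users -type d -maxdepth 1 -mindepth 1 -mtime +1

The LaunchDaemon below is what runs the script each day at 7:30: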
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Disabled</key>
    <false/>
    <key>Label</key>
    <string>edu.org.deleteaccounts</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Library/Scripts/delete-accounts.sh</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>StartCalendarInterval</key>
    <dict>
        <key>Hour</key>
        <integer>7</integer>
        <key>Minute</key>
        <integer>30</integer>
    </dict>
    <key>StartInterval</key>
    <integer>86400</integer>
</dict>
</plist>
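Putting it in place on a client is just a matter of copying both files and loading the daemon – something like this, with the paths taken from the comments in the script above (adjust the source filenames for however you package it):

    sudo cp delete-accounts.sh /Library/Scripts/delete-accounts.sh
    sudo chmod 755 /Library/Scripts/delete-accounts.sh
    sudo cp edu.org.deleteaccounts.plist /Library/LaunchDaemons/edu.org.deleteaccounts.plist
    sudo chown root:wheel /Library/LaunchDaemons/edu.org.deleteaccounts.plist
    sudo chmod 644 /Library/LaunchDaemons/edu.org.deleteaccounts.plist
    sudo launchctl load -w /Library/LaunchDaemons/edu.org.deleteaccounts.plist

If you deploy this as a package instead, the files just need to land in those locations – launchd will pick up anything in /Library/LaunchDaemons at the next boot.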

The most important parts of that script are:

# Delete the account
/usr/bin/dscl . -delete $a

This deletes the cached Active Directory account from the system.

# Delete the home directory
/bin/rm -rf $a

This deletes the home folder, freeing up space for more accounts.

We noticed something strange, though. After a couple of weeks of usage, the iMacs in our public labs would freeze at random points: at boot, at login, when using applications, when logging out, even when shutting down. Here’s a list of things we noted while trying to resolve the issue:

  • We use Munki to deploy software, so one by one, we removed potential culprits from the manifests.  Eventually, we whittled down the manifest items to three things we could not remove from this particular lab: Microsoft Office, the Xerox printer driver, and Active Directory binding.
  • We investigated if this was an issue with our network, power, or Active Directory setup.  For a few weeks, all iMacs were plugged into UPSs.
  • We replaced all of the iMacs with brand new models – some with SSDs, and some not.
  • As this issue persisted over ~3 years or so, we tested against multiple macOS versions – including 10.9, 10.10, and 10.11 (and the minor versions in between).
  • We enabled OD debug logging, but couldn’t make much sense of the logs.  They were very, very verbose.
  • Ultimately, the best fix was to reimage the Mac.  This would hold off the freezing for at least another week or two.
  • The freezing seemed linked to computer usage.  If an entire lab was reimaged at the same time, the first Macs to freeze were located near the printers. During the summer, when usage was decreased, we rarely had reports of freezing issues in the public labs.

We were in the process of reaching out to our Apple Systems Engineer when we found a long-running thread on Jamf Nation, detailing the exact problems we were facing. It was a relief to see others were trying similar tactics, too. Then, towards the bottom of the thread, Frank Kong noted that with every user login, some files were being left behind – and the script we were using did not clear those out. In System Preferences > Sharing > File Sharing, you could see a long list of shares, all named things like “Mike Solin’s Public Folder”. Bingo, there’s our culprit.

Alan Petty, in the same thread, added this code to his profile deletion script:

/usr/bin/find /private/var/db/dslocal/nodes/Default/sharepoints -name "*" -type f -delete
/usr/bin/find /private/var/db/dslocal/nodes/Default/groups -name "com.apple.sharepoint*" -type f -delete

We found this code can be run while a user is logged in, so we don’t need to exclude the current user from this part of the script. It will, however, delete all file shares present on the computer (whether they are for public folders or not). This isn’t an issue in our labs, but it’s still worth mentioning.

We’ve had this fix in production for just over a month, and I can safely say the freezing problems haven’t returned.

Long-term, it might be best to look into deleting profiles using a configuration profile – Marnin posted his here.  For now, we’re sticking with the script, as it gives us more control over where and when it runs.
