Setting up Synology’s reverse proxy server

In several previous posts, I detailed how to secure various services with a Let’s Encrypt certificate, installed in DSM. I approached each one individually, figuring out how to link the certificate in a way that each application accepted.

On my post about securing Ubooquity, jcd suggested I use Synology’s built-in reverse proxy server instead (and linked to this tutorial). Honestly, this was the best advice, and I’ve since switched away from the various methods I detailed before. Please check out Graham Leggat’s tutorial – this post isn’t meant to be a retelling, but hopefully adds some useful notes that I learned along the way.

Essentially, here’s how a reverse proxy works: you have a service running inside of your firewall over HTTP (not secured). Here are some of your options for opening that service outside of your network:

  • Open it as-is, unsecured. This is generally not a good idea.
  • Secure the service and open it outside of your network. You’ll need to read documentation, and possibly convert the certificate and put it in a place the application expects. Furthermore, as you open up more services outside of your network, you’ll need to open separate ports for each – it’s a lot to manage when you just want to access your service outside of your firewall.
  • Use a reverse proxy.

A reverse proxy is a separate server, sitting in between your service and the internet, which will encrypt all traffic, seamlessly. When you connect from outside of your firewall, you’ll communicate securely to your reverse proxy, which will then pass along your traffic to your unencrypted applications.
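
Synology's implementation is NGINX-based; conceptually, each reverse proxy rule you create generates something like the following server block. This is an illustrative sketch of the idea, not the actual config DSM writes:

    server {
        listen 443 ssl;
        server_name application.yourdomain.com;

        # DSM manages the certificate paths for you
        ssl_certificate     /path/to/fullchain.pem;
        ssl_certificate_key /path/to/privkey.pem;

        location / {
            # TLS terminates here; traffic continues to the app unencrypted
            proxy_pass http://192.168.1.3:8080;
            proxy_set_header Host $host;
        }
    }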

There are many benefits to this approach: it works with nearly every application, requires very little configuration past the initial setup, and lets you set up memorable URLs without weird port numbers.

Some prerequisites:

  • First, I’m going to assume you have an application/service that you want to open outside of your network, securely. Set it up on an unused TCP port. I recommend checking this list to avoid commonly used ports.
  • You’ll need a domain name, and be able to add custom DNS entries.
  • You’re also going to need a wildcard certificate. I paid for one, but Let’s Encrypt offers them for free, too (you’ll probably need to use this script).
  • If you don’t pay your ISP for a static IPv4 address, you’ll need to set up Synology’s DDNS service, which gives you a hostname like example.synology.me. It’s free, but requires a Synology account.

Now that you’ve got all of that squared away, let’s proceed.

  1. First, we’ll need to forward port 443 to your Synology server. See here for instructions on how to do that for most types of routers.
  2. Add your wildcard certificate to DSM.
  3. At your domain registrar, edit the DNS settings for your domain name. Add an entry with the following:
    1. Type: CNAME
    2. Name: application.yourdomain.com
    3. Value: (your Synology DDNS address – example.synology.me)
  4. Unless your domain name is brand new, it shouldn’t take long for your new subdomain to resolve to your Synology server’s IP address.
  5. In DSM, click Control Panel, then Application Portal, then the Reverse Proxy tab. Click the Create button. Fill in these details:
    1. Description: (name of your application)
    2. Source
      1. Protocol: HTTPS
      2. Hostname: application.yourdomain.com
      3. Port: 443
      4. (don’t check any of the boxes)
    3. Destination
      1. Protocol: HTTP
      2. Hostname: (the local IP address of the server running your application, such as 192.168.1.3 or 127.0.0.1)
      3. Port: (the port you’re currently using to access your application, such as 80 or 8080)
  6. Click Save, then try to access https://application.yourdomain.com in a web browser. If you did everything right (and I didn’t miss any steps!), you should be able to load your application and see that the connection is secure. If you click the lock, you should see your wildcard certificate. (You can also verify from the command line – see the sketch below.)
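
If you’d rather check things from the command line, these two commands (with your real hostname substituted) will confirm the DNS entry and the certificate:

    # Should print your DDNS address, then its IP
    dig +short application.yourdomain.com

    # Should return headers over a valid HTTPS connection
    curl -I https://application.yourdomain.com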

Going forward, you can do this for multiple applications – and each one can use port 443, so you don’t need to open additional ports outside of your firewall or remember anything more than your unique subdomain for each application.

Using Munki to enable sudo for Touch ID

Ever since I got my MacBook Pro with a Touch Bar, I’ve avoided typing in my password as much as possible. macOS 10.14 and 10.15 added more places in the OS that accept Touch ID, which has been a welcome change. As part of my job, I tend to use the sudo command quite a bit, and this post from Rich Trouton has been a godsend. Just add one line to /etc/pam.d/sudo, restart your Terminal session, and you’re all set.

However, with many macOS patches and security updates, /etc/pam.d/sudo is reset back to defaults. I don’t know why this happens, but it’s quite annoying. After manually applying the change to this file again, I finally decided to script it.

Now, there are a handful of files that can really ruin your day if they become damaged or invalid. This is one of those files. Please proceed with caution, keep good backups, and be prepared to reinstall your OS if things get really messed up. That said, this worked for me on macOS 10.15.5, and will hopefully continue to work for years to come.

Since I use Munki, I decided to build a nopkg file that checks for the appropriate line in /etc/pam.d/sudo, and inserts it if it’s not present. To download the code, please see my GitHub repository.
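
The core of it looks something like this – a minimal sketch rather than the exact nopkg contents, so please see the repository for the real code:

    #!/bin/bash
    # Insert the Touch ID PAM module as the first rule, if it's missing
    PAM_FILE="/etc/pam.d/sudo"

    if ! /usr/bin/grep -q "pam_tid.so" "$PAM_FILE"; then
        # Keep line 1 (a comment), add the Touch ID rule, then the rest
        { /usr/bin/head -n 1 "$PAM_FILE"
          echo "auth       sufficient     pam_tid.so"
          /usr/bin/tail -n +2 "$PAM_FILE"
        } > /tmp/pam_sudo.new && /bin/cp /tmp/pam_sudo.new "$PAM_FILE"
    fi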

Our Smart Home Setup

Last spring, my fiancée and I bought a house. We lived in an apartment for two years, and experimented with smart home stuff, but wanted to do a bit more with our house. We’ve had enough people ask about our setup that I figured I’d write a blog post.

If you’re new to this stuff, you’re probably overwhelmed by the competing (and often overlapping) technologies that are out there. Between Apple Home / HomeKit, Google Home / Nest, Samsung SmartThings, Amazon Alexa, IFTTT, etc., it gets confusing very quickly (though it sounds like that will get better eventually). In our case, we made the decision to go with Apple Home / HomeKit – I’m not crazy about Google and Amazon’s invasive privacy practices, even if their voice assistants are better for it. Since we have iPhones, iPads, and Macs, Apple’s standard made the most sense for us.

First, we bought a bunch of Belkin WeMo Mini smart plugs. We’ve attached these to various appliances: lamps, humidifiers, noise machines, etc. They’re great for anything that resumes whatever it was doing when you cut power and restore it. They didn’t support HomeKit at first, but a firmware update added that later.

For rooms that had built-in lighting, we bought a WeMo Dimmer Switch and a WeMo Light Switch. The dimmer switch received HomeKit compatibility via a firmware update, but the light switch required a new model – so be sure to get the second generation switch. Note that these require a neutral wire, so if your house’s electrical wiring is very old, you should check before buying – we weren’t able to use these everywhere, unfortunately.

For the front door, we bought a Yale Assure Lock, along with an indoor handle. We chose this model specifically because of the keypad (you can generate an unlimited number of codes and expire them whenever you want), the ability for it to auto-unlock as you approach the door, and the ability to auto-lock. You can receive notifications whenever the door is locked and unlocked, too. Some models do not have physical keys, but we wanted one in case the batteries died or the lock malfunctioned.

For the side door, my brother Paul gave us his August Smart Lock Pro, which we used with the standard deadbolt we installed last spring. This integrates with the same app we use for the front door, as well as HomeKit.

We also replaced the thermostat with a Honeywell Lyric T5+. Replacing the thermostat was surprisingly easy, and gave us immediate benefits: we can control the thermostat from anywhere, set up a geofence so it sets back the heating/cooling when we’re not home, and schedule times for different temperatures. Its HomeKit support hasn’t been the best – we’ve found that it will drop off of HomeKit, but not the Wi-Fi, so we know it’s not completely offline. We’ve considered replacing it with an Ecobee thermostat, which includes extra sensors for around the house – maybe in the future.

Outside, we bought a few Arlo Pro 2 cameras. Initially, these didn’t support HomeKit, but Arlo added support via an app update. Each Arlo camera appears in HomeKit as both a camera and a motion sensor. They run completely on batteries, so you’ll need to take them down every couple of months to charge them. This gives you a lot of freedom to mount them where they’re needed, though – you don’t need to plan around outdoor outlets. A big selling point was the free 7 days of recording, though we upgraded to a paid plan for more storage and notifications with video previews. Initially, we used the magnetic outdoor mounts, but switched to the Wasserstein Arlo Mounts instead – that way, when we take down the cameras for charging, we don’t need to do so much adjusting when we put them back up.

Although we use Spotify for music, we found that having various devices that can play music – TVs, sound bars, individual speakers, laptops – made for an inconsistent experience. A few months ago, prices dropped on Sonos One speakers, and we bought one out of curiosity. Long story short, that grew to a full house of Sonos speakers. It really helped that Sonos and IKEA have a partnership, so we picked up several SYMFONISK bookshelf speakers, as well as a couple of SYMFONISK table lamps. We even mounted a Sonos One in the bathroom with an ALLICAVER wall bracket. It’s very hard to describe the feeling of whole-house audio, but it’s pretty neat to walk from room to room with your music playing everywhere. Every speaker supports AirPlay 2 and HomeKit, though we’ve found the Sonos and Spotify apps to be more reliable for queueing up music.

After buying a couple of SYMFONISK table lamps, we realized that we’d need smart bulbs. Up until this point, we’d managed to use the WeMo switches with standard LED bulbs, but cutting power to the SYMFONISK table lamps would mean no music. At first, we tried IKEA’s TRÅDFRI, but found their bulb selection to be very limited – we didn’t want to buy into a system that would only be useful for two lamps. This brought us to the Philips Hue bulbs.

I could write an entire post about Philips Hue alone, but in short: we’ve been very happy with it. There’s a wide selection of options, which is great, but also intimidating. They make smart bulbs in every size, so they’re flexible enough that you can expand in the future. Each size comes in two types: bulbs that adjust between white and yellow tones, and bulbs that also support full color (red, blue, green, etc.). In our case, the white/yellow bulbs have been fine, and we haven’t been able to justify the cost of the color ones. All bulbs support dimming without the need for additional hardware.

It helps to figure out how many bulbs you need, then buy a kit (which includes the hub) and the right amount of bulbs. You can also buy wireless buttons and dimmer switches, then pair them to any bulbs. The entire system is very flexible. Of course, everything integrates with HomeKit.

The end result is that we can tell Siri to unlock the front door, we can have all of our lights turn on when we arrive home, we can have motion sensors in the Arlo cameras turn on Philips Hue lights, and more.

Paul gave us a bunch of additional accessories that we’ve slowly integrated over the past couple of weeks, so if there’s interest, I’ll write a follow-up post. Let me know if you have any questions!

A note about testing macOS 10.14 and DEP with VMs

Just a quick note if you’re following this method to test Apple’s Device Enrollment Program (DEP) with VMs: as of macOS 10.14.3, the Mac model your VM claims to be must meet the minimum system requirements for macOS 10.14.

With macOS 10.14.0 through 10.14.2, you were able to use serial numbers from Macs that could not run 10.14.x themselves. Since you’re booting VMs, that didn’t really matter. However, as of 10.14.3, the VM will stall while booting, then eventually reboot and stall again.

It’s unfortunate, as older hardware is easier to find – I had a stack of 2011 Mac minis that I kept specifically for VMs.

Update, 2019-09-03: Erik Gomez corrected me: if you create a VM with vfuse, specify the 2011 Mac mini’s serial number, but use Macmini6,2 instead of Macmini5,1 as hw_model, it’ll boot and let you proceed through DEP. I haven’t tested any other model, but this works great! Thanks, Erik.
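
I’m recalling vfuse’s JSON template keys from memory here, so double-check against the vfuse README – but the relevant part of a template would look roughly like this (the serial number shown is a placeholder):

    {
        "source_dmg": "/path/to/macos_10.14.dmg",
        "serial_number": "C07XXXXXXXXX",
        "hw_model": "Macmini6,2"
    }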

Binding Macs to AD using Munki’s Configuration Profile support

Update, 2020-06-11: I’ve changed the code back to a script. Please see the GitHub repo for an explanation and the updated code.


Although the trend is to move away from binding Macs to Active Directory (most commonly using NoMAD), we’re still binding for various reasons:

  • Being able to authenticate with domain credentials for doing things that require admin privileges.
  • Being able to connect remotely using domain credentials.
  • Computer labs and other multi-user Macs.
  • We’re still using AD mobile accounts, for now.

Originally, we would bind Macs to AD as part of our DeployStudio imaging workflow. Unfortunately, we faced a couple of drawbacks with this approach:

  • By installing/configuring something only when imaging, you’re not enforcing the setting – it’s a one-shot thing. In the future, the setting could change, or the application might require an update, but you’re not going to reimage your entire fleet every time that happens.
  • When we moved from DeployStudio to Imagr, we needed to trim our workflows to be as slim as possible.

With the help of Graham Gilbert’s tutorial, we were able to move AD binding to Munki. This also gave us an unexpected benefit: in the past, we frequently found that the binding on Macs would randomly break. This was a major issue in the classrooms, where students and faculty would not be able to log in to computers and start class. Moving this to Munki with a custom installcheck_script made it “self-healing” – every 1-2 hours, Munki will rebind the Mac, if necessary (or prompt the user to do this through Managed Software Center).
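
Here’s a minimal sketch of the idea, with hypothetical domain and node names – the real installcheck_script is in the GitHub repository linked below. In Munki’s convention, exiting 0 means “run the install” (i.e., rebind), and exiting 1 means “nothing to do”:

    #!/bin/bash
    # Hypothetical values - adjust for your environment
    DOMAIN="ad.example.com"
    NODE="/Active Directory/EXAMPLE/All Domains"

    # Not bound at all? Rebind.
    if ! /usr/sbin/dsconfigad -show | /usr/bin/grep -q "$DOMAIN"; then
        exit 0
    fi

    # Bound, but broken? A quick read against the AD node will fail.
    if ! /usr/bin/dscl "$NODE" -list /Users > /dev/null 2>&1; then
        exit 0
    fi

    # Binding looks healthy - skip the rebind.
    exit 1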

For the past year, there’s been a big push to move to configuration profiles for applying settings. Luckily, you can use the “directory” payload to bind to AD! However, it’s just running dsconfigad in the background anyway, so it’s entirely possible for your Mac’s binding to be broken while the AD profile still shows as successfully installed. The MDM protocol currently has no method of determining if the AD profile should be reinstalled, so Munki is a much more logical choice for deploying this. Armin Briegel’s tutorial was instrumental in assisting with this transition.

Code and usage instructions are available in my GitHub repository.

Securing MunkiReport with Let’s Encrypt on Synology DSM

Update, 2020-06-11: I’m now using Synology’s built-in NGINX-based reverse proxy instead. The instructions below may not work.


Continuing my series on using Docker with a Synology NAS, I now have MunkiReport v3 working – and you can, too!

Some background: MunkiReport is a companion project to Munki (which we set up with Squirrel last week). MunkiReport v3 was released recently, and has a huge list of improvements, thanks to a dedicated group of contributors – especially @bochoven and @mosen, who have overhauled large portions of the project. MunkiReport v3 has some new requirements that weren’t present with v2 – this is the perfect use case for Docker! Docker will handle all of this for us.

Briefly, here’s what we’re going to do: we’re going to set up MySQL, Adminer, and MunkiReport using Docker Compose. Then, we’re going to use DSM 6.x’s certificate and reverse proxy support to secure MunkiReport. Let’s go!

  1. Enable SSH to your Synology server. Open the Terminal and connect to your server (I’m using root, but your admin account should also do fine). Leave that open for later.
  2. Install Docker through Package Center, if you don’t already have it.
  3. Add a certificate to DSM. I like Let’s Encrypt – DSM can walk you through the certificate creation process, and will renew it automatically. You’ll need a domain name for this. You might be able to use Synology’s DDNS service for this. (I ended up setting up a CNAME on a subdomain that I already own, pointing at my synology.me DDNS address, then used the CNAME for the certificate.)
  4. Create a shared folder for your Docker data. I named mine ‘docker’. Create two directories inside of it: ‘MunkiReport’ and ‘MySQL’.
  5. Create a file called ‘docker-compose.yml’ in your ‘docker’ shared folder. Populate it with this data, to start:
version: '3.2'
networks:
  default:
    driver: bridge
services:
  Adminer:
    container_name: Adminer
    image: adminer
    # https://hub.docker.com/_/adminer/
    ports:
      - "3307:8080"
    networks:
      - default
    restart: on-failure
  MunkiReport:
    container_name: MunkiReport
    image: munkireport/munkireport-php:release-latest
    # https://hub.docker.com/r/munkireport/munkireport-php/
    volumes:
      - /volume1/docker/MunkiReport/config.php:/var/munkireport/config.php:ro
    ports:
      - "4443:80"
    networks:
      - default
    restart: on-failure
    depends_on:
      - MySQL
  MySQL:
    container_name: MySQL
    image: mysql:5.7
    # https://hub.docker.com/_/mysql/
    volumes:
      - /volume1/docker/MySQL:/var/lib/mysql:rw
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=secretpassword
    networks:
      - default
    restart: on-failure


  6. Change the MYSQL_ROOT_PASSWORD value (under the MySQL service) to something random. You can also change the port numbers if you’d like, but I’m going to finish this tutorial with the assumption that you haven’t touched those (it can get confusing very quickly).
  7. Switch over to your Terminal window and run these two commands. The first will download the Docker images for Adminer, MunkiReport, and MySQL. The second command will create Docker containers, which contain your custom settings. If you change any of the settings in your docker-compose.yml file, re-running these commands will destroy the Docker containers and recreate them with your new specifications. Pretty cool. You can monitor all of this with the Docker application in DSM.
    /usr/local/bin/docker-compose  -f /volume1/docker/docker-compose.yml pull
    /usr/local/bin/docker-compose -f /volume1/docker/docker-compose.yml up -d
  8. Now, let’s create the MySQL database for MunkiReport. Go to your Synology server’s IP address, but add :3307 to the end. You’ll reach a login page. Here are the relevant details:
    1. Server is your NAS’s IP address, but with :3306 at the end.
    2. Username is root.
    3. Password is whatever you set in Step 6.
    4. Database can be left blank.
  9. After you log in, click ‘Create database’. Name the database whatever you’d like – I went with ‘mreport’. For ‘Collation’, pick ‘utf8_general_ci’. Close the Adminer tab.
  10. Open a new tab, with your server’s IP address followed by :4443 at the end. You should be greeted with an empty MunkiReport installation. Nice!
  11. In your ‘docker’ shared folder, you had created a ‘MunkiReport’ folder in Step 4. Inside of that, create a file named ‘config.php’. This is how we’ll configure MunkiReport – by overriding values specified in config_default.php (click to see MunkiReport’s default values). I’ll skip this part of the tutorial, as it’s documented much better on MunkiReport’s wiki. At a minimum, I’d strongly suggest setting up authentication, MySQL connection settings, and the modules you’d like to enable. (A bare-bones sketch follows this list.)
  12. Before you can expose your MunkiReport container to the outside world, you’ll want to secure it. You’ll do this with a reverse proxy – basically, another web server put in front of your MunkiReport container (which itself contains a web server). The reverse proxy will add encryption, but otherwise leave your MunkiReport installation alone. DSM 6.0 includes a reverse proxy, so let’s use that.
  13. Check out the bottom of this Synology knowledge base article. Unfortunately, the documentation leaves a lot to be desired, so I’ll suggest some settings:
    1. Description: MunkiReport
    2. Source Protocol: HTTPS
    3. Source Hostname: *
    4. Source Port: 4444
    5. (leave both the HSTS and HTTP/2 boxes unchecked)
    6. Destination Protocol: HTTP
    7. Destination Hostname: 127.0.0.1
    8. Destination Port: 4443
  14. Click OK to save.
  15. In your router, forward port 4444 (TCP) to your Synology server. If you haven’t given your Synology server a static IP address, that’d be a good idea.
  16. Visit your secure MunkiReport installation in a web browser:
    https://yourdomain.com:4444
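
Since config.php just overrides config_default.php, a bare-bones version might look something like this. Treat it as an illustrative sketch – verify key and module names against the wiki – and note that containers on the same compose network can reach MySQL by its service name:

    <?php
    // Illustrative sketch - verify key names against config_default.php
    $conf['connection'] = array(
        'driver'    => 'mysql',
        'host'      => 'MySQL',          // the MySQL service name from docker-compose.yml
        'port'      => '3306',
        'database'  => 'mreport',        // the database you created in Adminer
        'username'  => 'root',
        'password'  => 'secretpassword', // whatever you set in docker-compose.yml
        'charset'   => 'utf8',
        'collation' => 'utf8_general_ci',
    );

    // Enable only the modules you want to collect
    $conf['modules'] = array('machine', 'munkireport', 'managedinstalls');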

From there, you can create a MunkiReport installation package (I like using the AutoPkg recipe for this). Push it to your clients, then watch as they check in with sweet, sweet data.

Securing Squirrel with Let’s Encrypt on Synology DSM

Update, 2020-06-11: I’m now using Synology’s built-in NGINX-based reverse proxy instead. The instructions below may not work.


Yep, this is another Docker blog post…but this time, we’re covering Munki!

It’s pretty common knowledge that a Munki server is just a web server. This allows tons of flexibility with hosting your Munki repository – basically anything that can run Apache, NGINX, IIS, etc. can act as your Munki server (even macOS Server, but I wouldn’t recommend it). Today, I’m going to continue the series of blog posts about Docker, but we’re going to discuss something called Squirrel.

Squirrel, written by Victor Vrantchan (@groob), is described as a simple HTTPS server for Munki. While you can set up your own Apache server (or Docker container), Squirrel comes prebuilt for hosting a Munki server.

As with Ubooquity, I’m going to use Synology DSM’s certificates. That way, we can leverage Let’s Encrypt without needing to do any additional setup.

  1. First, set up Let’s Encrypt in DSM’s Control Panel. Synology has excellent documentation on that.
  2. Before we go any further, I’d recommend creating a directory for Squirrel to save files (such as certificates). Separately, you’ll also want to create a Munki repository (see the Demonstration Setup, but skip the Apache config stuff). If you already have a repo, that’s great too.
  3. Next, add the Docker image for micromdm/squirrel. Follow Synology’s instructions.
  4. Create a Docker container, following those same instructions.
    1. You’ll want to specify two volumes, both of which you created in Step 2: where to store Squirrel’s data, and your Munki repository. I have a shared folder named ‘docker’, and I like creating directories for each service within that: for example, /volume1/docker/Squirrel. I made a ‘certs’ directory within that, as well.
    2. You’ll also want to pick a port. If you’re comfortable exposing port 443 to the world, go for it. Otherwise, use 443 internally to the Docker container, and pick another port for the outside world. Be sure to forward this port on your router!
    3. The environment variables you’ll want to override are:
      • SQUIRREL_MUNKI_REPO_PATH (the path to your Munki repo, which you specified in Step 4a)
      • SQUIRREL_BASIC_AUTH (a randomly generated password for your Munki repo)
      • SQUIRREL_TLS_CERT (/path/to/cert.pem)
      • SQUIRREL_TLS_KEY (/path/to/privkey.pem)
  5. But wait, where do we get the cert? I wrote a really simple script to copy the Let’s Encrypt certs to Squirrel’s config location: get it here. Be sure to edit line 6! I run this once a night, with DSM’s Task Scheduler. (A sketch of what the script does appears after this list.)
  6. After you start up your Squirrel container, check the Docker logs by selecting your container, clicking the Details button, then the Log tab. You’ll see the Basic Authentication string that you’ll need to provide to your Munki clients. You can find out more information on the Munki wiki.
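
For what it’s worth, the guts of that script are just a couple of copies – here’s a minimal sketch, assuming DSM 6’s usual certificate location (the _archive subfolder has a random name, so look up yours and edit the path):

    #!/bin/sh
    # DSM keeps the active certs in a randomly named folder - edit this!
    CERT_DIR="/usr/syno/etc/certificate/_archive/XXXXXX"
    DEST="/volume1/docker/Squirrel/certs"

    # fullchain.pem includes the intermediate cert, which clients usually need
    cp "${CERT_DIR}/fullchain.pem" "${DEST}/cert.pem"
    cp "${CERT_DIR}/privkey.pem" "${DEST}/privkey.pem"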

After that, you’re done! Your clients have a secure Munki repo, and you don’t have to bother with Apache config files, a reverse proxy for securing your web server, or any of that.

Securing Ubooquity with Let’s Encrypt on Synology DSM

Update, 2020-06-11: I’m now using Synology’s built-in NGINX-based reverse proxy instead. The instructions below may not work.


Whew, that’s a very specific title. I don’t know if this will be useful to anyone else, but it took a fair amount of work to figure it out, so I figured I’d document it. There will be more Mac stuff soon, I promise!

If you haven’t heard, Let’s Encrypt is an excellent service, with the aim of securing the internet by offering free HTTPS certificates to anyone who requests one. In fact, I’m using one on this website right now. 🙂

With DSM 6.0, Synology added the ability to request a free certificate from Let’s Encrypt to secure your NAS. DSM handles renewing your certificate, which must happen every 90 days (one of the limitations of the free certificate, but nothing that can’t be automated).

Unrelated for the moment, but I’ve been using Ubooquity (through Docker!) for the past few months, and it’s been pretty neat. You can point Ubooquity to a directory of ePub and PDF files, and it’ll allow you to access the files remotely using reader apps like Marvin, KyBook, or Chunky. I have a habit of buying tech books and comics through Humble Bundle sales, but transferring the files to my iPad through iTunes/iBooks is clunky and requires a fair amount of disk space upfront.

Although Ubooquity supports user authentication, you’ll want that to happen over HTTPS, to keep your passwords secure. Luckily, Ubooquity supports HTTPS, but requires the certificate (and other associated files) to be in a format called a “keystore”. What?!

Here’s how to leverage DSM’s Let’s Encrypt support to secure Ubooquity, automatically.

  1. First, you’ll want to set up Let’s Encrypt in DSM’s Control Panel. See Synology’s documentation.
  2. Next, you’ll want to get Ubooquity up and running (I recommend the Docker image mentioned above). Synology’s documentation covers that, too. If your eBook library is a mess, Calibre will make quick work of it.
  3. For this to work, you’ll also need the Java 8 JDK installed. This will give you access to the ‘keytool’ command you’ll need to create your keystore. Once again, see Synology’s documentation.
  4. Now, you’ll put all of this together. In a nutshell: you’re going to use the Let’s Encrypt certs that DSM has helpfully obtained for you, convert those to a keystore, put the keystore in Ubooquity’s config directory, and tell Ubooquity to use it to secure its interface. Here’s a script to get you started – note that you’ll need to edit lines 11, 12, and 15 for your environment. (The gist of the conversion is sketched after this list.) Thanks to Saltypoison on the Ubooquity forums for most of the code that became this script!
  5. Once you’ve successfully run the script, I recommend using DSM’s Task Scheduler to have it run once a day. This way, Ubooquity’s certificate will always be up to date with DSM’s certificate. That’s right, I’m going to link you to Synology’s documentation.
  6. Finally, you’ll need to tell Ubooquity where to find your keystore. Log in to the Ubooquity admin interface, then click the Security tab. You’ll see two boxes – one for the path to your keystore, and one for the keystore password. Enter both. Click ‘Save and Restart’ at the top-right corner.
  7. Now, try accessing your Ubooquity instance using https and your FQDN! If it doesn’t work, make sure you’re forwarding the appropriate ports from your router to your Synology server – you’ll need to do this for both the eBook interface, and the admin interface (which are accessible via two different ports).
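
If you’re curious what the script is doing under the hood, the conversion boils down to two commands – a sketch with placeholder paths and passwords:

    # Bundle the Let's Encrypt cert and private key into a PKCS12 file
    openssl pkcs12 -export -in fullchain.pem -inkey privkey.pem \
        -out ubooquity.p12 -name ubooquity -password pass:changeit

    # Convert the PKCS12 file into the Java keystore format Ubooquity expects
    keytool -importkeystore -srckeystore ubooquity.p12 -srcstoretype PKCS12 \
        -srcstorepass changeit -destkeystore ubooquity.ks -deststorepass changeit

The keystore password you set here is the one you’ll enter in Ubooquity’s Security tab.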

I’ll probably post more Synology/Docker stuff in the future, as I’ve been spending a lot of time with both. They’re really awesome!

Deploying JMP Pro 12.x and 13.x licenses

As I’m reorganizing my GitHub repositories, I’ve realized that I forgot to post about my work with Shea Craig and the JMP Team at SAS. Because of them, I was able to deploy JMP Pro 12.x and 13.x licenses to our Mac labs.

You can find code and instructions at my jmp_pro_12_license_pkg and jmp_pro_13_license_pkg GitHub repositories.

NoMAD Group Condition for Munki

One of the most powerful features of Munki is conditional items – and the ability for an admin to provide custom conditions for deploying or removing software. For example, we’ve been using the scripts that Hannes Juutilainen has published to determine which macOS version is supported by a particular Mac’s hardware. We can then offer the most appropriate OS upgrade to each Mac.

We recently deployed the excellent NoMAD to single-user Macs, with the intention of resolving keychain issues (and eventually moving away from Active Directory binding altogether). If you’re using AD with your Macs, it’s absolutely worth checking out.

When a user logs into NoMAD, some data about the user’s AD account is retrieved for later usage – such as their group memberships. In our environment, users are divided into different groups based on their department. What if we could use that for deploying printers, similar to Group Policy on Windows?
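
To make that concrete: once a condition script has run, a manifest can key off of it like any other conditional item. Here’s a hypothetical example – the condition key and group name are placeholders, and the actual usage is documented in the repo:

    <key>conditional_items</key>
    <array>
        <dict>
            <key>condition</key>
            <string>nomad_groups CONTAINS "Library-Staff"</string>
            <key>managed_installs</key>
            <array>
                <string>LibraryPrinterDriver</string>
            </array>
        </dict>
    </array>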

You might see where I’m going with this – check out the script on GitHub for requirements and usage instructions.
