Master & Cmd-R

Understanding Office 365 ProPlus Servicing

How do updates work in this new paradigm?

In my recent experience with deploying Office 365 ProPlus, the methodology for deploying updates is still somewhat mystifying for most administrators – diagrams like this one don’t really help us understand exactly how we want to (or should) apply updates:

I mean, in theory it explains it, but in my experience it just gets more confusing trying to understand which updates should be applied, when they should be applied, and how they should be applied.

Let’s break it down:

  1. Individual updates are no longer available for Office 365 ProPlus – this means you cannot use Windows Update, WSUS, or SCCM to apply updates the way you used to in the past. (source)
  2. Every month a new build is released – this means that you now update from one build to the next, not applying updates based off the build you installed 6 months ago.
  3. Update Channels – here is where I find things get the muddiest… partially, I believe, because Microsoft decided to use a similar yet different naming scheme from the Windows 10 update / servicing channels.
  4. Each build is in mainstream support for 1 year – this is as long as you can defer your updates / builds before needing to upgrade to remain supportable and current.

Channels, how do they work?

Let’s talk about what these channels are and what they mean to you as you try to figure out how you’re going to manage Office 365 ProPlus going forward. First off – bookmark this site, and keep an eye on it to know which Channel, Version, Build, and Release Date are current:

This is a screenshot of the most recent update (January 2017) – but check the site for the most recent version.

Here’s how the channels break down:

  1. Current Channel (CC) – this is the channel you’ll be on by default if you log into the portal and click the helpful button that wants you to install Office 365 ProPlus. The defaults for this channel are to receive a new build from Microsoft on a monthly basis, automatically. You can still control where these updates come from if you want to (more on that later), but this is the channel for early adopters and small companies that like being on the cutting edge and are willing to put up with frequent changes.
  2. First Release for Deferred Channel (FRfDC) – think about this as being your pilot / testing channel. If you are not just sticking with the Current Channel for your business (and most aren’t), the First Release for Deferred channel will be your power users, IT teams, and whomever you’ve identified as being a good tester in your organization.
  3. Deferred Channel (DC) – this is where most businesses are going to put their users, and this is indeed a good idea. The Deferred Channel has a nice steady pace of updates (every four months), and these updates will have gone through all the testing of Current Channel users, then First Release for Deferred users, before they finally make their way down to the Deferred Channel users. This means that you have about 8 months of folks testing new updates along those various channels before you push them out to your users, allowing for a much smoother update process, with much less chance of changes breaking things in your org.

Basically, the update flow looks like this – using today’s Deferred Release (Version 1605) as a reference:

  • June 6th, 2016: Version 1605 was released to the Current Channel (CC)
    • The Current Channel continues to get new builds on a monthly basis
  • June 14th, 2016: FRfDC gets the first Version 1605 build
    • The FRfDC then gets monthly builds of version 1605 until October 11th, when Version 1609 is released to both the CC and the FRfDC.

Throughout these four months, the Current Channel has received Versions 1606, 1607, 1608, and 1609 with various iterations of builds throughout. Every quarter, all these updates get rolled into a single release and pushed out to both channels, and then CC starts to iterate again for another quarter.

  • January 10th, 2017: Version 1605 is now released to the Deferred Channel (DC)
    • CC is already on Version 1611, and FRfDC has started using Version 1609

The big takeaway here is that if you stick with the DC for your broader user base, you’ll be deploying updates that were first released around 8 months ago – giving lots of time for these updates to be tested, bugs reported and squashed, and feedback given to Microsoft on features and changes. This channel gives you the safest, slowest update path possible, while still ensuring that your Office installations are being kept up to date.

Don’t forget that security updates are still being applied monthly, so it’s not like you’re 8 months behind on security – just on features and changes.

All good? Let’s move on to the how of things…

How do I actually manage this?

Glad you asked! One of the biggest changes that admins often miss is that Office updates no longer roll out through Windows Update. This means Windows Update, WSUS, and SCCM cannot be used to update and manage Office the way they used to.


There are three ways that admins can apply updates for Office 365 ProPlus:

  • Automatically from the Internet
    • This is the default setting for Office 365 ProPlus
    • Monthly builds / updates are installed automatically
    • No additional user or administrative input is required
    • Can be used for updates even if the Office Deployment Tool is used to install Office
    • Least amount of administrative effort, least amount of control

As I mentioned above, if you’re already agile enough to be on the Current Channel, you’ll probably want to just leave these settings to default, and let users apply updates automatically from Microsoft servers as new builds are pushed out. If this is you, congratulations! You’re helping to test updates and make sure they’re all good before they get released to the masses in the DC 😉

  • Automatically from an on-premises location
    • More admin effort, more control
    • Use the ODT to download the monthly build to a network share
    • Computers are configured through the ODT or GPO to install updates automatically from that share
    • Group Policy and the ODT specify a network location for updates

This option is where you go if you want to keep people updating automatically, but you want a little more control over the version they’re getting – the TechNet links below lay out the process of how you can automate this if desired, and it basically bridges the gap between convenience and control in your environment. This option also allows you to maintain a steady cadence of updates, as you only need to configure your installs to update from a specific location, and then download whichever version you want into that updates folder.
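As a rough sketch, the Office Deployment Tool configuration for this scenario might look like the following – note that the share path is hypothetical, and the channel attribute has been renamed across ODT versions (earlier builds used Branch instead of Channel), so check the version of the tool you’re running:

```xml
<Configuration>
  <!-- Hypothetical share - point this at wherever you download the monthly builds -->
  <Updates Enabled="TRUE" UpdatePath="\\fileserver\OfficeUpdates" Channel="Deferred" />
</Configuration>
```

With this in place, clients check the share instead of Microsoft’s CDN, and you control the pace by choosing which build you drop into that folder.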

  • By installing an updated version of Office 365 ProPlus
    • Most admin control, greatest amount of effort required
    • Use the ODT to download and install the latest / required version
    • This option reinstalls ProPlus, but only new or changed files are downloaded to the user’s computer
    • Using this option disables automatic updates

This final option gives you the greatest amount of fine-grained control – Office updates are disabled entirely, and users will only get the versions that you deploy to them. Use this methodology if rigid change control is required, or if you want to make sure that everyone (except your pilot/test users, of course) is holding to the same version, which helps keep your environment standardized.
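The ODT-based options boil down to two commands run against your configuration XML – the file name here is just an example:

```
setup.exe /download updateconfig.xml
setup.exe /configure updateconfig.xml
```

The /download switch pulls the build defined in the XML to your source location, and /configure applies it to the local machine (only new or changed files actually come down).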

More information (and full details) available here:

It’s important to note that updates do not require local admin rights as they run under the system context, so if you’re trying to prevent users from running updates, just removing local admin privileges won’t stop these updates from applying. This also means that it’s a lot easier to manage these updates going forward, as you won’t have to go around typing in an admin password in order for users to get their updates.

Given the nature of these channels (multiple release stages), it’s important that you implement a solid testing methodology in your environment. Designate a number of flexible and competent users, and put them on the FRfDC so that you know what updates are coming in your environment before they get pushed out to mission critical systems. This will allow you to defer updates if you need more testing / development time, or give you more time to prep your users for feature changes that will impact their day to day life. Once you’re comfortable that the updates are not going to cause problems in your environment, move them into the Deferred Channel and let them be released to the rest of your users.

Here’s some additional reading resources for extra credit:

Fixed: PXE Boot Process loops 4 times

I ran into this issue on a recent Windows 10 deployment for a client – when the machine attempted to PXE boot from the WDS / MDT server, it would go through four iterations of the PXE boot cycle before finally getting the correct boot image from the server. Worse, if you have WDS configured to require F12 to continue, you have to press F12 each time, or it will time out and fail.

I tried a number of fixes to see if I could resolve the issue, including:

  • Setting default boot images – didn’t work
  • Removing the F12 requirement – didn’t work
  • Removed option 67 from the DHCP scope – didn’t work
  • Removed option 66 just for good measure – didn’t work
  • Tried changing option 67 to the boot file, pxeboot.n12 – didn’t work

NB: In case you’re wondering what these boot options actually are, here are some of the settings you might be seeing:

After much searching, and not much luck, I stumbled across the following forum post, and gave it a shot:

On the network settings of the WDS server, Disable NetBIOS over TCP/IP:

Problem solved!
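If you’d rather script the change than click through the adapter GUI, here’s a hedged sketch using WMI on the WDS server – this disables NetBIOS over TCP/IP on every IP-enabled adapter, so adjust the filter if you only want to touch a specific NIC:

```powershell
# Disable NetBIOS over TCP/IP on all IP-enabled adapters.
# SetTcpipNetbios values: 0 = via DHCP, 1 = Enabled, 2 = Disabled
Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "IPEnabled = 'true'" |
    ForEach-Object { $_.SetTcpipNetbios(2) | Out-Null }
```

Run it from an elevated PowerShell session, since changing adapter settings requires admin rights.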

Azure Point 2 Site VPN: DNS config is wrong

Just ran into this issue when I created a P2S VPN on my Azure Virtual Network – I downloaded the client and connected ok, but I realized I could only connect to my servers via IP, not by FQDN.

Checking my local IP settings, I realized that the DNS Server on my VPN connection was set to a public DNS server and not my Domain Controller / DNS server in Azure.

This wasn’t completely unexpected, because when I created the vnet I used Google DNS, and then I went back to the settings and changed it later once I had my DC set up.

It turns out that when you download the P2S VPN client from the Azure portal, it’s not really a client in the traditional sense (like the Cisco AnyConnect client) – it’s actually a number of config files that get installed to %appdata%\Microsoft\Network\Connections\Cm\connection-name\.

You can try editing the phonebook file as I’ve seen suggested around the web, but I don’t really like that solution – in order for this to work, you need to dial through the phonebook (pbk) file, and not just through the built-in Windows VPN connection.

The answer, thankfully, is simple – just remove that VPN client and re-download the P2S VPN client from the Azure portal. Install it on your PC as before, and you’re good to go:

All better now!

Add-AzureAccount fails – Your browser is currently set to block cookies

I recently ran into an issue while running Server 2016 attempting to connect to my Azure account through PowerShell – after installing the Azure PowerShell Modules and running Add-AzureAccount, an authentication window opens, allowing you to connect to your Azure account. However, instead of seeing the logon window, I would only get the following error:

“Your browser is currently set to block cookies. You need to allow cookies to use this service.”

Figuring that Edge was blocking cookies due to the default security configuration in Server 2016, I attempted to open Edge so that I could unblock those sites and be able to log in to my Azure account and continue my server configuration. Seems like that’s a dead end as well!

I hadn’t run into this before, but apparently it’s a known issue – I decided to just create another admin account rather than going down the route of editing my registry settings, as I didn’t really want to start poking holes in my brand new server. It might be completely safe, but I figured I’d just leave it as is – I didn’t really see much use for Edge on my default admin account anyway.

However, after creating a new admin account, logging in, and launching Edge, I found that cookies were indeed already enabled, and I was still having the exact same error connecting to my Azure account in PowerShell. It turns out that the culprit is Internet Explorer, and not Edge at all! If you open Internet Explorer (Start – Run – iexplore.exe) and attempt to log in, you’ll receive a very similar error:

The answer to this strange little conundrum is just to go in and add the following two sites to your trusted sites in Internet Explorer:

Once this was done, I was able to connect to my Azure account using both my Microsoft account and my Office 365 account. Knowing this, I went back to my built-in administrator account and added both those sites to my trusted sites in IE, and all was well with the world again.

Long story short… just add the Microsoft authentication sites above to your Trusted Sites in IE 11 (even on your built-in admin account), and you’ll be able to connect to your Azure account properly.

Hope this helps save you some time searching for an answer to this weird problem – good luck!

Use PowerShell to Update Room Calendar Working Hours

I recently had a request to update a bunch of Meeting Room calendars whose Working Hours were set to the wrong time zone, which was causing issues when users tried to view or book appointments in those rooms. Now, I know I could do this by logging into each room manually, but where’s the fun in that? 😉

To update all of the rooms at once, I first needed to figure out how to get the mailboxes I needed, and then get their mailbox calendar configuration. You can do this by using Get-Mailbox with some filters to find the mailboxes with calendars that you want to change – in this case, I knew that they were all Room Mailboxes, and they all began with “HKG-“. You can structure your queries to filter by whatever you want, really – just do a Get-Mailbox username | FL to find out the names of the attributes that you can use in your query. In this case, the attributes I needed were called DisplayName and RecipientTypeDetails – once I had the mailboxes, the next step was to pipe them out to Get-MailboxCalendarConfiguration, so I could see what they were set to.

This is what the script looks like:

Get-Mailbox -ResultSize Unlimited | Where {$_.DisplayName -match "HKG-" -and $_.RecipientTypeDetails -match "RoomMailbox"} | Get-MailboxCalendarConfiguration | FT -AutoSize

It should go without saying, but make sure you’re connected to Exchange Online before you run this command!

And this was the result:

You can see from the screenshot above that all but one of the rooms was on Central Standard Time, and only one of them was in the correct time zone; to fix it, I used the first part of my script (the Get-Mailbox portion), and then piped the results out to Set-MailboxCalendarConfiguration, along with the attributes I wanted to change. For this scenario, it was WorkingHoursTimeZone, WorkingHoursStartTime, and WorkingHoursEndTime, like so:

Get-Mailbox -ResultSize Unlimited | Where {$_.DisplayName -match "HKG-" -and $_.RecipientTypeDetails -match "RoomMailbox"} | Set-MailboxCalendarConfiguration -WorkingHoursTimeZone "China Standard Time" -WorkingHoursStartTime 09:00:00 -WorkingHoursEndTime 18:00:00

Much better now!

If you only need to do this for a single user, use the following command in PowerShell:

Set-MailboxCalendarConfiguration adm-jdahl -WorkingHoursTimeZone "Pacific Standard Time" -WorkingHoursStartTime 09:00:00 -WorkingHoursEndTime 18:00:00

And then to view the results:

Get-MailboxCalendarConfiguration adm-jdahl | ft -AutoSize

Hope this helps someone learn a new way to do something cool in PowerShell!

Unable to change Deleted Item Retention

I recently needed to update the Deleted Item Retention period in Office 365 from the default 14 days to the maximum allowed (30 days) for all mailboxes in my environment. Since I was migrating mailboxes to Office 365 at the time, I wrote a script that I could add to my process which would update this setting while it was applying quotas to the mailboxes.

Things were working well, apart from a number of Room mailboxes that had been migrated from Exchange on Premise – every time the script ran, I’d get the following warning on all these mailboxes:

The strange thing is that this was only happening for Rooms that were migrated from Exchange on Premise – any new rooms that were created didn’t have this issue. I decided to compare the mailbox attributes of an affected room against one that wasn’t affected to see what the difference was, and found the culprit:

UseDatabaseRetentionDefaults: True

Turning that setting off allowed me to go back and change the RetainDeletedItemsFor setting to 30 days, like I wanted to:

Set-Mailbox mailboxname -UseDatabaseRetentionDefaults $false

Set-Mailbox mailboxname -RetainDeletedItemsFor 30

In order to fix this for all other rooms affected by this issue, use the following command:

Get-Mailbox -ResultSize Unlimited | where {$_.ResourceType -eq "Room" -and $_.UseDatabaseRetentionDefaults -eq $true} | Set-Mailbox -UseDatabaseRetentionDefaults $false

After that, it was a simple matter of re-running my script – the deleted item retention piece looks like this:

$t = New-TimeSpan -Days 14

$retMailboxes = Get-Mailbox -ResultSize Unlimited | Where {($_.Name -notmatch "DiscoverySearch" -and $_.RetainDeletedItemsFor -eq $t)}

foreach ($r in $retMailboxes){

Set-Mailbox -Identity $r -RetainDeletedItemsFor 30

Write-Host "Deleted Item Retention for $($r.Name) successfully updated to 30 days" -ForegroundColor Green

}

Hope this helps someone else scratching their head trying to figure out why they’re unable to change the Deleted Item Retention Period on mailboxes!

How and when Clutter is enabled

This question has been bugging me since Clutter was launched, and I was happy to find this thread on the IT Pro Network that answered it. Clutter is one of those features that I take for granted now, but it’s definitely a question that comes up during migrations when users are starting to see it, and some aren’t (yet).

“Let me clarify the issue here:

Clutter is a learning system. It requires to have a certain lower limit of messages in the mailbox to confidently learn about a user’s behavior before Clutter is auto enabled for a mailbox.

For newly created mailboxes and mailboxes that are migrated from On-Prem to the cloud, we need the following requirements to be satisfied:

1) At least 1000 messages delivered to the mailbox after creation (or migration to the cloud).

2) User needs to have logged into the mailbox once after creation (or migration to the cloud).

After the above two criteria are satisfied, Clutter is auto enabled for that mailbox within 24 hours.”

From <>

Transfer Outlook 2010 Autocomplete Cache to a New Profile

One issue that can happen when creating a new Outlook profile in order to configure Office 365 access is that the Autocomplete cache disappears – the reason for this is that the autocomplete cache is tied to the old profile, and doesn’t get carried over automatically… the good news is that there is a way to import it from the old profile, so all is not lost!


Start by navigating to c:\Users\username\AppData\Local\Microsoft\Outlook\RoamCache – look for a .dat file whose name starts with Stream_Autocomplete, followed by a long string of hexadecimal characters. If you have multiple profiles, you can sort by date and choose the most recent one – the empty autocomplete file will typically be 1 – 2KB, so it’s usually pretty easy to see which one it is.

Next, find the Autocomplete file that you want to import – it’ll usually be quite a bit bigger, and it will have a different hash of numbers in the file name (and usually a different modified date as well):

In order for this to work, Outlook needs to be closed – once this is done, make a backup of the Autocomplete file that you plan to import – just in case. I typically make a copy of this file, and then work with the copy, so I can always go back to my original if I need to.

Next, rename the empty autocomplete .dat file – just change the extension to .bak or .old, and that should be sufficient. At the same time, grab the name of the autocomplete file that you want to replace (in this case, the name you’re copying is Stream_Autocomplete_0_A7D60F3ACC828B4EB204A03004F8BD58), and then rename the copy of the file you just made to match that name.

Once you’re done, you should now have two autocomplete files of the same size – the autocomplete file from the original profile, and the one you’ve just renamed / imported:

Now, go ahead and open Outlook and verify that autocomplete is working properly again.
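For reference, the file shuffle above can be sketched in PowerShell – the hash values here are hypothetical, so substitute the file names you actually see in your RoamCache folder:

```powershell
$cache = "$env:LOCALAPPDATA\Microsoft\Outlook\RoamCache"
$empty = Join-Path $cache "Stream_Autocomplete_0_NEWPROFILEHASH.dat"  # new profile's cache, ~1-2KB
$full  = Join-Path $cache "Stream_Autocomplete_0_OLDPROFILEHASH.dat"  # old profile's cache, much larger

# Keep a safety copy of the good cache, in case the import gets cleared
Copy-Item $full "$full.bak"

# Park the empty cache, then give the good cache the new profile's file name
Rename-Item -Path $empty -NewName ((Split-Path $empty -Leaf) + ".old")
Copy-Item $full $empty
```

Remember that Outlook must be closed before you run this, or the files will be locked.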


There have been a few times that I’ve done this, and found my autocomplete file cleared again when I re-open Outlook – if this is the case, just do the process again. This is why we made a copy of the good autocomplete file, as we can still go back and redo the process – otherwise, your working copy of the autocomplete cache would be all gone, and that would be the end of it!

Sit back and relax, and get used to being hailed as a hero… This trick is a particular brand of magic that makes you seem like both a magician, and a miracle worker!

.TrimEnd removes too many characters

I’m working on a migration project where I need to create temporary accounts for each user that I’m going to be migrating (long story, don’t ask!). I wanted a way to create the temporary account based on the real user name, have them easily identifiable as belonging to that user, and then make sure to not use the primary domain for their email address, just to make sure there was no confusion.

Based on these requirements, I started working on a script to provision these user accounts – I wanted to take a user’s name and UPN from a CSV file, and then produce the temporary migration account from there.

For example, my csv file looked like this:

Name,samAccountName,UPN,LicenseType,UsageLocation
YVR E1 Test,yvrE1test,,E1,CA

Just so you can follow along, I’ve imported the CSV file into my Shell so we can work with it:

Now that I have my variable defined, I needed to get just the beginning of the UPN, so I could create a new user. I know what you’re thinking – why not just use the samAccountName, since it matches? Well, I wanted to make sure I wouldn’t end up with discrepancies if I ran this against a larger batch of users, and had some where those values didn’t match – I figured the safest bet would be take the UPN value that I’d be using later (for the real user account), and build off of that.

So, I started out by using the .TrimEnd method to remove the domain name from the end of the UPN, like so:

$migUser = $($u.upn).TrimEnd("")

And after that, add a prefix, and the onmicrosoft domain to create a new UPN:

$migUPN = "mc-$migUser" + ""

And finally, I wanted the Display Name to make it obvious that this was my temporary migration account:

$migDisplay = "$($u.Name) (MC)"

What happened next was really weird – .TrimEnd was taking away more characters than I had expected, like so:


So the end result was that my user would be created, but the results were inconsistent – very frustrating!

Doing some digging around on the internet I discovered that TrimEnd treats the characters that you specify as a character array, and not a string like I was expecting it to. Since all of the letters for “test” are found in “”, it was trimming away every character that it found at the end of the string that matched ANY of those characters. As soon as it hits a character that doesn’t match the array of characters you’ve provided, it stops trimming, which is why it doesn’t take away the remaining “rE” from my username.

To solve this problem, and to make sure that you are removing a specific string of text from the end of a word, use the -replace function instead, like so:

# Define migration user account format

$migUser = $($u.upn) -replace '',''

$migUPN = "mc-$migUser" + ""

$migDisplay = "$($u.Name) (MC)"

As you can see, this time my results were exactly as expected:


So, lesson learned – if you need to remove a specific string of characters from the end of a string in PowerShell, use -replace and not .TrimEnd!
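If you want to see the difference for yourself, here’s a quick demonstration – note that contoso.com is a stand-in domain, as the real UPN suffix in my environment was different, but the behavior is identical:

```powershell
$upn = "yvrE1test@contoso.com"

# TrimEnd() treats its argument as a SET of characters, not a substring,
# so it also eats the trailing "st" of "test" (those letters appear in the domain):
$upn.TrimEnd("@contoso.com")        # -> "yvrE1te"

# -replace matches a regex, so anchoring with $ strips exactly the suffix:
$upn -replace "@contoso\.com$", ""  # -> "yvrE1test"
```

The trimming stops at the first character from the end that isn’t in the set – here the “e” in “test” – which is why the results looked so inconsistent across different usernames.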

RetentionHoldEnabled Inconsistency

I ran into an issue today where we discovered that a subset of our users with mailboxes that had been migrated to Office 365 had Retention Hold enabled on their mailboxes – what was strange about this was that we hadn’t set this at all during our migration, and it seemed to be randomly applied to about 30% of the mailboxes.

You can check this setting with the following command:

Get-Mailbox -ResultSize unlimited | Where-Object {$_.RetentionHoldEnabled -eq $true} | Format-Table Name,RetentionPolicy,RetentionHoldEnabled -AutoSize

With this result:


Looking through the list of users, there was a mixture of E1 and E3 licenses, but no K1 licenses or shared mailboxes. This made sense, as K1s and shared mailboxes didn’t have archiving enabled.

It’s important to note that Retention Hold is not the same as Litigation Hold – Litigation Hold puts a change freeze on a mailbox so that a user can’t delete or change items in their mailbox. It generally happens behind the scenes, and most users don’t notice that their mailbox has litigation hold applied, as deleted items disappear as normal, and the deletions/changes end up in a separate folder that the user cannot see.

Retention Hold, on the other hand, prevents the Managed Folder Assistant from running on that mailbox and processing retention tags. This means that users with Retention Hold enabled will not have their email archived or deleted based on the policies that have been set up in their Retention Policy.

After looking around a bit, I began to notice a pattern – each of the users who had their RetentionHold set to Enabled were users that we had been importing PST files into their online archives using the Office 365 Import Service. We had already noticed a bug (and opened a ticket), because we were not able to manually delete the jobs, and they weren’t automatically being deleted after 30 days the way they’re supposed to be.

It seems like Retention Hold is being enabled when a PST import job starts, and the flag is not being cleared automatically because the jobs are not being deleted properly.

You can fix this on a single mailbox by running the following command:

Set-Mailbox mailboxname -RetentionHoldEnabled $false

Alternately, if you want to run this for all affected mailboxes, here’s the command to use:

Get-Mailbox -ResultSize unlimited | Where-Object {$_.RetentionHoldEnabled -eq $true} | Set-Mailbox -RetentionHoldEnabled $false

Hopefully Microsoft will get this bug resolved soon so that we have better control over the PST Import service – in the meantime, you can use this script to get retention policies functioning properly again.

Hope this helps!