Master & Cmd-R

Troubleshooting Hybrid Azure AD Join

Hybrid Azure AD Join and Conditional Access

One of the cool features of Azure AD Conditional Access Policies is being able to require that machines be domain joined – essentially locking access down to corporate devices only, and preventing non-managed or non-trusted devices from reaching your business data. You can see from the screenshot below that there is a fair amount of flexibility involved: for instance, you could select multiple options like I’ve done below, and your users will be prompted for MFA, but only if their device is not domain joined. If the device is domain joined, the user doesn’t get prompted for MFA when accessing the cloud application you’ve specified.


Even better, if you add the option to require the device to be marked as compliant, your user will be prompted for MFA only until they register their device in Azure AD / Intune – at that point their device is considered trusted, and they’ll no longer be prompted for MFA. Cool, right?

Anyway, we’re here to talk about the third requirement – Hybrid Azure AD join. This is a great option for enforcing corporate compliance, as it requires a device to be joined both to your Active Directory on prem, as well as Azure AD. Note that simply taking a BYOD device and joining it to Azure AD does not fit this requirement – it has to be joined in both places in order for it to be considered hybrid AD joined. If you’re shooting for a more self-service option, this is not it – typically only admins can join a workstation to AD, so your end users will not be able to set themselves up and become a trusted device on their own. However, if you’re trying to lock down your environment and prevent personal devices from connecting to your corporate data, this is the option for you!

Setting up hybrid Azure AD join is actually pretty straightforward – I won’t get into the details here, but the setup documentation is worth a read if you haven’t seen it yet. However, what happens when you have some devices that are not joining Azure AD automatically? This happened to me recently while working on a deployment project, and here’s what it took to fix it – at least in my case…

What happens when devices don’t join?

Troubleshooting this one was difficult at first, as we couldn’t find a pattern in which machines were failing, and we weren’t finding any error messages that were very helpful in tracking down the root cause. Windows 10 is also challenging, because the hybrid AD join happens automatically – at least with Windows 7 devices, there’s an executable that gets deployed, which gives you a bit more flexibility in forcing the join and troubleshooting why it’s not happening. I discovered later that Windows 10 also has this ability, just done a bit differently – more on that in a bit.

At any rate, after doing a bit of digging, I was able to find the error messages showing why my machines weren’t joining. If you’re looking on the client machine, you’ll find these events in Event Viewer under Applications and Services Logs > Microsoft > Windows > User Device Registration > Admin.



If your device isn’t joining, you’re more than likely going to find Event ID 304 and Event ID 305, which are remarkably unhelpful:



I mean, seriously – I ALREADY KNOW that they’re failing at the join phase!
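If you’d rather query for those events with PowerShell instead of clicking through Event Viewer, something like this works (a minimal sketch, using the log path above):

Get-WinEvent -LogName "Microsoft-Windows-User Device Registration/Admin" |
    Where-Object { $_.Id -eq 304 -or $_.Id -eq 305 } |
    Format-List TimeCreated, Id, Message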

I spent a fair amount of time troubleshooting everything I could find – checking the Windows version (and getting all updates applied), installing the latest version of AAD Connect, checking for updates to ADFS, troubleshooting my claims rules, recreating them, etc.

The suggestions in this post were helpful, but something was still missing. Particularly useful, though, was this little tidbit of information: you can run the dsregcmd utility in Windows 10 with a number of different switches to report back on device join information (dsregcmd /status), and you can even use this same utility to force an immediate Azure AD join attempt and spit the results out to a text file to help with your troubleshooting. Note that dsregcmd needs to run as System, so you’ll need PsExec to get your commands running in the correct context.

psexec -i -s cmd.exe

dsregcmd /debug > c:\users\username\desktop\workstationJoin.txt
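For reference, dsregcmd /status run from that same System context reports the current device state – on a healthy hybrid joined machine you’re looking for both of these fields to read YES (output trimmed; exact labels can vary between Windows 10 builds):

dsregcmd /status

# Output excerpt – the two fields that matter for hybrid join:
#   AzureAdJoined : YES
#   DomainJoined : YES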

You can crack that text file open and start looking through it to see if you can find your answer. Sadly, though, all the digging I was doing wasn’t getting me anywhere, so I opened up a Premier support ticket to see if Microsoft could shed some light on my problem here. In all honesty, this is one of the few times when I’ve opened a Microsoft support ticket and got the answer to my problem quickly – so kudos to them this time around!

Anyway, you’re here to find out what the answer was, and here it is: I had two ADFS claims rules that supplied the immutable ID claim, and they were conflicting with each other.


Here’s what happened… when ADFS was originally deployed (not by me), the federation trust was created with the –SupportsMultipleDomains switch. This is the recommended approach to federation, as it allows you to easily add federated domains down the line – however, it automatically creates an additional issuerid claims rule, and that rule was causing my problems.

This is the rule that was created:

c:[Type == "http://schemas.xmlsoap.org/claims/UPN"]

=> issue(Type = "http://schemas.microsoft.com/ws/2008/06/identity/claims/issuerid", Value = regexreplace(c.Value, ".+@(?<domain>.+)", "http://${domain}/adfs/services/trust/"));

And then this is the rule that gets created when you are supporting multiple domains for device registration:


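For reference, the multi-domain version of that rule from Microsoft’s hybrid device registration guidance looks roughly like this (note the extra accounttype condition):

c1:[Type == "http://schemas.xmlsoap.org/claims/UPN"] && c2:[Type == "http://schemas.microsoft.com/ws/2012/01/accounttype", Value == "user"]

=> issue(Type = "http://schemas.microsoft.com/ws/2008/06/identity/claims/issuerid", Value = regexreplace(c1.Value, ".+@(?<domain>.+)", "http://${domain}/adfs/services/trust/"));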
As you can see, the rule is a bit different, and this second rule contains the accounttype = "user" claim as well.

Basically, device registration won’t work with both of those rules in place – the second one is the only one you need. It also won’t work with just the first rule in place (which is how I had set it up originally). When I configured hybrid Azure AD join, I set it up without multiple domain support, because I didn’t realize the federation had been built that way in the beginning. Since the first rule is missing the account type claim alongside the UPN (c1 && c2 above), it won’t allow device registration to work properly in a multi-domain environment. As you’d expect, I went back and added the claims rules for multiple domain support as part of my troubleshooting, but that still doesn’t resolve the issue while the original claims rule is in place. Thankfully, the solution was easy – delete the original claims rule, keep only the second one (the claims rule that supports device registration), and your devices will start to register.

TL;DR… give me the short version!

If you’re following these instructions to set up hybrid Azure AD join, you’ll more than likely use the script to set up the claims rules – highly recommended, it works well. Just make sure to check beforehand whether your federation was set up to support multiple domains, so that you can configure your claims rules appropriately.

You can find out if your federation supports multiple domains by running Get-MsolFederationProperty -DomainName mydomain.com – if the Federation Service Identifier is different between the ADFS Server and the Office 365 Service (screenshot below), then your federation is set up to support multiple domains. If they’re both the same, then you’re configured to support only a single domain.
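A minimal sketch of that check, assuming you’ve already run Connect-MsolService:

# Compare the Federation Service Identifier reported by each source
Get-MsolFederationProperty -DomainName mydomain.com | Format-List Source, FederationServiceIdentifier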


If your federation supports multiple domains, make sure to provision the correct rules using the script Microsoft provides, and delete your original claims rule – otherwise things won’t work properly afterwards.

After this was done, my workstations started joining Azure AD correctly on next reboot, and my pages and pages of error messages started going away. Good times were had by all!

Troubleshooting ADFS/CBA: Error 342

I ran into this error today while configuring Certificate Based Authentication (CBA), and it was a weird enough of an issue that I thought it would be useful to post it, and share the fix.

After configuring my CRL so that it was published publicly (this is required for both your Root CA, as well as your Enterprise CA), and installing my certificates on both my ADFS servers and WAP servers (again, both the Root CA certificate and the Enterprise CA certificate are required), CBA was still failing when trying to log in to the Office 365 Portal.

Well, we’re no stranger to error logs and troubleshooting, right? Off we go to the ADFS logs to see what’s going on.

The Error: Event ID 342

This error basically states that it couldn’t build the trust chain for the certificate, usually because it can’t properly access your CRL all the way up the line.


I knew this wasn’t the case, because I had already tested that using one of my issued certificates – the command to do this is:

certutil -f -urlfetch -verify certname.cer

(replace certname.cer with the name of your cert)

This command will go through and check all of the URLs listed on the cert and verify connectivity to them – it’s great for checking your CRL/CDP/AIA distribution points and making sure that they’re all accessible internally and externally.

Next, I checked all my certificates in the local computer certificate store to verify that I didn’t have any old certificates, duplicates with wrong information, etc. – everything was as it was supposed to be. I eventually found an answer indirectly on this forum post – it didn’t list my issue exactly, or provide the fix I used, but it DID provide me with the tools I needed to figure it out.

The Fix: clear out old certificates

It turns out that the issue was being caused by old certificates sitting in the NTAuth store on my ADFS servers – it’s bizarre, because I had deleted all my old certificates and replaced them with new ones containing updated CRL distribution points, etc. However, that did not clear them out of this certificate store, as these certificates are being pulled directly from Active Directory.

Here’s how you check for these little deviants, and how to get ’em all fixed up:

Start by running the following command:

certutil -viewstore -user -enterprise NTAuth

(like so)


This will pop up a view of your NTAuth certificate store: scroll through the list of certificates until you find the one relating to your Enterprise CA:


Now, you can see that the certificate is definitely still valid (not expired) – however, I know that I updated my CRL & AIA locations and the new certificate that I’ve installed on all my servers is valid from today’s date, not August 2017.

Next, open the certificate properties by clicking on the link below the date, and note the thumbprint of the certificate:


Next, open the registry, and match that certificate thumbprint against the certificates found in HKLM\Software\Microsoft\EnterpriseCertificates\NTAuth\Certificates.


Then I simply deleted the registry key that matched that thumbprint (always make a backup of your reg key before you delete it!). This time, when I checked my NTAuth store by running the command above, that Enterprise CA certificate was completely gone.
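If you prefer doing that from the command line, here’s a hedged sketch of the backup-then-delete step – <thumbprint> is a placeholder for the thumbprint you noted above:

reg export "HKLM\Software\Microsoft\EnterpriseCertificates\NTAuth\Certificates\<thumbprint>" C:\temp\ntauth-backup.reg
reg delete "HKLM\Software\Microsoft\EnterpriseCertificates\NTAuth\Certificates\<thumbprint>" /f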

Finally, to update the NTAuth store and pull in a new certificate, I ran the following command:

certutil -pulse


Now when I check my NTAuth store, I can see that it’s pulled in the correct certificate:


You can, of course, verify this by opening the certificate and making sure that the thumbprint matches your current certificate, and that the correct CRL & AIA distribution points are listed. Once this was done, my trust chains were able to build correctly, and certificate based authentication immediately started working. 😀

There you have it… if you’re struggling to get CBA configured, and you know you’ve updated all your certs with the correct CDP, give this a shot and see if it solves your problem!

PowerShell: Connect to Lync Online

The issue: unable to discover PowerShell endpoint URI

I don’t run into this error very often, but it’s happened enough times in the last few weeks that I really wanted to come up with a permanent, elegant solution. This error can happen when Lync/Skype is configured in a hybrid deployment and autodiscover is pointing back on-prem – when trying to connect to Lync Online using the New-CsOnlineSession cmdlet, you receive the following error:


The Fix: Override Admin Domain

The solution is simple – all you need to do is add the -OverrideAdminDomain switch to your connection script. You can add the admin domain permanently to your script, and be done with it. For me, however, I often end up connecting to multiple environments depending on the projects I’m working on, or supporting different clients, etc. I wanted a more elegant solution, so I came up with a way of automating that process so that I can connect to any environment just by putting in my logon credentials. The script will check and find the onmicrosoft domain, and then use that to connect to a new CSOnline session with that domain specified as the admin domain.

This is what the script looks like:

# Requires the MSOnline module and the Skype for Business Online Connector module
$credential = Get-Credential
Connect-MsolService -Credential $credential

# Find the root (onmicrosoft.com) tenant domain
Write-Host "Connected to MS Online Services, checking admin domain..." -ForegroundColor Yellow
$msolDomain = Get-MsolDomain | where {$_.Name -match "onmicrosoft.com" -and $_.Name -notmatch "mail.onmicrosoft.com"}

Write-Host "Admin domain found, connecting to $($msolDomain.Name)" -ForegroundColor Green

# Use this domain as the admin domain for the new SFB Online session
$session = New-CsOnlineSession -Credential $credential -OverrideAdminDomain $msolDomain.Name
Import-PSSession $session

And there you go… connected properly, every time!


Feel free to download the script, and add it to your toolkit – hope it helps!

Understanding Office 365 ProPlus Servicing

How do updates work in this new paradigm?

In my recent experience with deploying Office 365 ProPlus, the methodology for deploying updates is still somewhat mystifying to most administrators – diagrams like this one don’t really help us understand exactly how we want to (or should) apply updates:


I mean, in theory it explains it, but in my experience it just gets more confusing trying to understand which updates should be applied, when they should be applied, and how they should be applied.

Let’s break it down:

  1. Individual updates are no longer available for Office 365 ProPlus – this means you cannot use Windows Update, WSUS, or SCCM to apply updates the way you used to in the past. (source)
  2. Every month a new build is released – this means that you now update from one build to the next, not by applying updates on top of the build you installed 6 months ago.
  3. Update Channels – here is where things get the muddiest… partially, I believe, because Microsoft decided to use a similar yet different naming scheme from the Windows 10 update / servicing channels.
  4. Each build is in mainstream support for 1 year – this is as long as you can defer your updates / builds before needing to upgrade to remain supportable and current.

Channels, how do they work?

Let’s talk about what these channels are and what they mean to you as you try to figure out how you’re going to manage Office 365 ProPlus going forward. First off – bookmark this site, and keep an eye on it to know which Channel, Version, Build, and Release Date are current: https://technet.microsoft.com/en-us/library/mt592918.aspx


This is a screenshot of the most recent update (January 2017) – but check the site for the most recent version.

Here’s how the channels break down:

  1. Current Channel (CC) – this is the channel you’ll be on by default if you log into the portal and click the helpful button that wants you to install Office Pro Plus. The defaults for this channel are to receive a new build from Microsoft on a monthly basis, automatically. You can still control where these updates come from if you want to (more on that later), but this is the channel for early adopters, small companies that like being on the cutting edge, and are willing to put up with frequent changes.
  2. First Release for Deferred Channel (FRfDC) – think about this as being your pilot / testing channel. If you are not just sticking with the Current Channel for your business (and most aren’t), the First Release for Deferred channel will be your power users, IT teams, and whomever you’ve identified as being a good tester in your organization.
  3. Deferred Channel (DC) – this is where most businesses are going to put their users, and this is indeed a good idea. The Deferred Channel has a nice steady pace of updates (every four months), and these updates will have gone through all the testing of Current Channel users, then First Release for Deferred users, before they finally make their way down to Deferred Channel users. This means that you have about 8 months of folks testing new updates along those various channels before you push them out to your users, allowing for a much smoother update process, with a much smaller chance of changes breaking things in your org.

Basically, the update flow looks like this – using today’s Deferred Release (Version 1605) as a reference:

  • June 6th, 2016: Version 1605 was released to the Current Channel (CC)
    • The current Channel continues to get new builds on a monthly basis
  • June 14th, 2016: FRfDC gets the first Version 1605 build
    • The FRfDC then gets monthly builds of version 1605 until October 11th, when Version 1609 is released to both the CC and the FRfDC.

Throughout these four months, the Current Channel has received Versions 1606, 1607, 1608, and 1609 with various iterations of builds throughout. Every quarter, all these updates get rolled into a single release and pushed out to both channels, and then CC starts to iterate again for another quarter.

  • January 10th, 2017: Version 1605 is now released to the Deferred Channel (DC)
    • CC is already on Version 1611, and FRfDC has started using Version 1609

The big takeaway here is that if you stick with the DC for your broader user base, you’ll be deploying updates that were first released around 8 months ago – giving lots of time for these updates to be tested, bugs reported and squashed, and feedback given to Microsoft on features and changes. This channel gives you the safest, slowest update path possible, while still ensuring that your Office installations are being kept up to date.

Don’t forget that security updates are still being applied monthly, so it’s not like you’re 8 months behind on security – just on features and changes.

All good? Let’s move on to the how of things…

How do I actually manage this?

Glad you asked! One of the biggest changes that admins often miss is that Office Updates no longer roll out with Windows Updates. This means Windows Update, WSUS, and SCCM cannot be used to update and manage Office the way they used to.

Instead, there are three ways that admins can apply updates for Office 365 ProPlus:

  • Automatically from the Internet
    • This is the default setting for Office 365 ProPlus
    • Monthly builds / updates are installed automatically
    • No additional user or administrative input is required
    • Can be used for updates even if the Office Deployment Tool is used to install Office
    • Least amount of administrative effort, least amount of control

As I mentioned above, if you’re already agile enough to be on the Current Channel, you’ll probably want to just leave these settings to default, and let users apply updates automatically from Microsoft servers as new builds are pushed out. If this is you, congratulations! You’re helping to test updates and make sure they’re all good before they get released to the masses in the DC 😉

  • Automatically from an on-premises location
    • More admin effort, more control
    • Use the ODT to download the monthly build to a network share
    • Computers are configured through the ODT or GPO to install updates automatically from that share
    • Group Policy and the ODT specify a network location for updates

This option is where you go if you want to keep people updating automatically, but you want a little more control over the version they’re getting – the TechNet links below lay out the process of how you can automate this if desired, and this approach basically bridges the gap between convenience and control in your environment. This option will also allow you to maintain a steady cadence of updates, as you only need to configure your installs to update from a specific location, and then download whichever version you want into that updates folder.
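As a rough sketch of what that configuration boils down to, the Update Path policy ends up writing a registry value like the one below (the share path is hypothetical – in practice, set this through the Office 2016 ADMX templates or the ODT rather than by hand):

# Point Click-to-Run installs at an internal share for updates
New-Item -Path "HKLM:\SOFTWARE\Policies\Microsoft\office\16.0\common\officeupdate" -Force | Out-Null
Set-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\office\16.0\common\officeupdate" -Name "updatepath" -Value "\\server\share\OfficeUpdates"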

  • By installing an updated version of Office 365 ProPlus
    • Most admin control, greatest amount of effort required
    • Use the ODT to download and install the latest / required version
    • This option reinstalls ProPlus, but only new or changed files are downloaded to the user’s computer
    • Using this option disables automatic updates

This final option gives you the greatest amount of fine-grained control – Office updates are disabled entirely, and users will only get the versions that you deploy to them. Use this methodology if rigid change control is required, or if you want to make sure that everyone (except your pilot/test users, of course) is on the same version – it helps keep your environment standardized.
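A hedged sketch of that workflow with the Office Deployment Tool – file names and paths here are hypothetical, and the channel/source details live in your configuration XML:

# From the folder containing the ODT and your configuration XML:
.\setup.exe /download .\configuration.xml     # download the target build to the location in the XML
.\setup.exe /configure .\configuration.xml    # reinstall/upgrade ProPlus; only new or changed files come down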

More information (and full details) available here: https://technet.microsoft.com/en-us/library/dn761707.aspx

It’s important to note that updates do not require local admin rights, as they run under the System context – so if you’re trying to prevent users from running updates, just removing local admin privileges won’t stop these updates from applying. This also means that it’s a lot easier to manage these updates going forward, as you won’t have to go around typing in an admin password for users to get their updates.

Given the nature of these channels (multiple release stages), it’s important that you implement a solid testing methodology in your environment. Designate a number of flexible and competent users, and put them on the FRfDC so that you know what updates are coming in your environment before they get pushed out to mission critical systems. This will allow you to defer updates if you need more testing / development time, or give you more time to prep your users for feature changes that will impact their day to day life. Once you’re comfortable that the updates are not going to cause problems in your environment, move them into the Deferred Channel and let them be released to the rest of your users.


Accessing mail options on a resource mailbox

I ran into an issue recently where I needed to access the mail options on a resource mailbox, which of course has no license, and can’t be logged into directly. After a bit of looking around, I found I was able to access the mail options directly using the following URL: https://outlook.office.com/owa/mailbox@domain.com/?path=/options/mail

Just replace mailbox@domain.com with the actual mailbox you need to manage, and you’re good to go! You can bookmark that link for easy access to the options, and save yourself several extra clicks.

To access these options manually on a mailbox, open OWA, click on your account name in the top right corner, and then select Open another mailbox…


Type in the name of the mailbox you want to open, and click on it in the search results, then click Open:


Once the mailbox opens, click on the cog, and select mail options. Or just take the address above, substitute in the name of the mailbox you need to access, and get at it in one click! 😉

Make sure, of course, that you have Full Access permissions assigned to your account, otherwise you won’t be able to open the mailbox. Just a quick little tip, but I figure any little trick that makes mailbox management easier is worth sharing!
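If you need to grant that access first, a quick sketch in Exchange Online PowerShell (the mailbox and admin addresses are placeholders):

Add-MailboxPermission -Identity mailbox@domain.com -User admin@domain.com -AccessRights FullAccess -AutoMapping $false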

Fixed: PXE Boot Process loops 4 times

I ran into this issue on a recent Windows 10 deployment for a client – when the machine attempted to PXE boot from the WDS / MDT server, it would go through four iterations of the PXE boot cycle before finally getting the correct boot image from the server. Worse, if you have WDS configured to require F12 to continue, you have to press F12 each time, or it will time out and fail.


I tried a number of fixes to see if I could resolve the issue, including:

  • Setting default boot images – didn’t work
  • Removing the F12 requirement – didn’t work
  • Removing option 67 from the DHCP scope – didn’t work
  • Removing option 66 just for good measure – didn’t work
  • Changing option 67 to the boot wim, wdsnbp.com, and pxeboot.n12 – didn’t work

NB: In case you’re wondering what these boot options actually are, here are some of the settings you might be seeing: https://support.microsoft.com/en-us/kb/259670

After much searching, and not much luck, I stumbled across the following forum post, and gave it a shot:

On the network settings of the WDS server, Disable NetBIOS over TCP/IP:


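If you’d rather script the change than click through the adapter settings, here’s a hedged sketch – it sets NetbiosOptions to 2 (disabled) on every interface on the WDS server, so run it elevated, and only if disabling NetBIOS across the board is acceptable in your environment:

# Disable NetBIOS over TCP/IP on all interfaces via the NetBT parameters
Get-ChildItem "HKLM:\SYSTEM\CurrentControlSet\Services\NetBT\Parameters\Interfaces" |
    ForEach-Object { Set-ItemProperty -Path $_.PSPath -Name NetbiosOptions -Value 2 }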
Problem solved!


Azure Point 2 Site VPN: DNS config is wrong

Just ran into this issue when I created a P2S VPN on my Azure Virtual Network – I downloaded the client and connected ok, but I realized I could only connect to my servers via IP, not by FQDN.

Checking my local IP settings, I realized that the DNS Server on my VPN connection was set to a public DNS server and not my Domain Controller / DNS server in Azure.


This wasn’t completely unexpected, because when I created the vnet I used Google DNS, and then I went back to the settings and changed it later once I had my DC set up.


It turns out that when you download the P2S VPN client from the Azure portal, it’s not really a client in the traditional sense (like the Cisco AnyConnect client) – it’s actually a number of config files that get installed to %appdata%\Microsoft\Network\Connections\Cm\connection-name\.

You can try editing the phonebook file, as I’ve seen suggested around the web, but I don’t really like that solution – for it to work, you need to dial through the phonebook (.pbk) file, not just through the built-in Windows VPN connection.


The answer, thankfully, is simple – just remove that VPN client and re-download the P2S VPN client from the Azure portal. Install it on your PC as before, and you’re good to go:


All better now!


Add-AzureAccount fails – Your browser is currently set to block cookies

I recently ran into an issue while running Server 2016 attempting to connect to my Azure account through PowerShell – after installing the Azure PowerShell Modules and running Add-AzureAccount, an authentication window opens, allowing you to connect to your Azure account. However, instead of seeing the logon window, I would only get the following error:


“Your browser is currently set to block cookies. You need to allow cookies to use this service.”

Figuring that Edge was blocking cookies due to the default security configuration in Server 2016, I attempted to open Edge so that I could unblock those sites and be able to log in to my Azure account and continue my server configuration. Seems like that’s a dead end as well!


I hadn’t run into this before, but apparently it’s a known issue – I decided to just create another admin account rather than going down the route of editing my registry settings, as I didn’t really want to start poking holes in my brand new server. It might be completely safe, but I figured I’d just leave it as is – I didn’t really see much use for Edge on my default admin account anyway.

However, after creating a new admin account, logging in, and launching Edge, I found that cookies were indeed already enabled, and I was still having the exact same error connecting to my Azure account in PowerShell. It turns out that the culprit is Internet Explorer, and not Edge at all! If you open Internet Explorer (Start – Run – iexplore.exe) and attempt to log in to https://portal.azure.com or https://login.microsoftonline.com you’ll receive a very similar error:


The answer to this strange little conundrum is just to go in and add the two sites above (https://portal.azure.com and https://login.microsoftonline.com) to your trusted sites in Internet Explorer:


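If you’d rather script it than click through Internet Options, here’s a hedged sketch that adds login.microsoftonline.com to the Trusted Sites zone for the current user (zone value 2 = Trusted Sites; repeat the pattern for the other site):

$zoneMap = "HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains"
New-Item -Path "$zoneMap\microsoftonline.com\login" -Force | Out-Null
Set-ItemProperty -Path "$zoneMap\microsoftonline.com\login" -Name "https" -Value 2 -Type DWord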
Once this was done, I was able to connect to my Azure account using both my Microsoft Account (@outlook.com) and my Office 365 account (@microsoftonline.com). Knowing this, I went back to my built-in administrator account and added both those sites to my trusted sites in IE, and all was well with the world again.

Long story short… just add the Microsoft authentication sites above to your Trusted Sites in IE 11 (even on your built-in admin account), and you’ll be able to connect to your Azure account properly.

Hope this helps save you some time searching for an answer to this weird problem – good luck!

Use PowerShell to Update Room Calendar Working Hours

I recently had a request to update a bunch of Meeting Room calendars whose Working Hours were set to the wrong time zone, which was causing issues when users tried to view or book appointments in those rooms. Now, I know I could do this by logging into each room manually, but where’s the fun in that? 😉

To update all of the rooms at once, I first needed to figure out how to get the mailboxes I needed, and then get their mailbox calendar configuration. You can do this by using Get-Mailbox with some filters to find the mailboxes with calendars that you want to change – in this case, I knew that they were all Room Mailboxes, and they all began with “HKG-”. You can structure your queries to filter by whatever you want, really – just do a Get-Mailbox username | FL to find out the names of the attributes that you can use in your query. In this case, the attributes I needed were called DisplayName and RecipientTypeDetails. Once I had the mailboxes, the next step was to pipe them out to Get-MailboxCalendarConfiguration, so I could see what they were set to.

This is what the script looks like:

Get-Mailbox -ResultSize Unlimited | Where {$_.DisplayName -match "HKG-" -and $_.RecipientTypeDetails -match "RoomMailbox"} | Get-MailboxCalendarConfiguration | FT -AutoSize

It should go without saying, but make sure you’re connected to Exchange Online before you run this command!

And this was the result:


You can see from the screenshot above that all but one of the rooms were on Central Standard Time, and only one of them was in the correct time zone. To fix it, I took the first part of my script (the Get-Mailbox portion) and piped the results out to Set-MailboxCalendarConfiguration, along with the attributes I wanted to change. For this scenario, those were WorkingHoursTimeZone, WorkingHoursStartTime, and WorkingHoursEndTime, like so:

Get-Mailbox -ResultSize Unlimited | Where {$_.DisplayName -match "HKG-" -and $_.RecipientTypeDetails -match "RoomMailbox"} | Set-MailboxCalendarConfiguration -WorkingHoursTimeZone "China Standard Time" -WorkingHoursStartTime 09:00:00 -WorkingHoursEndTime 18:00:00

Much better now!


If you only need to do this for a single user, use the following command in PowerShell:

Set-MailboxCalendarConfiguration adm-jdahl -WorkingHoursTimeZone "Pacific Standard Time" -WorkingHoursStartTime 09:00:00 -WorkingHoursEndTime 18:00:00

And then to view the results:

Get-MailboxCalendarConfiguration adm-jdahl | ft -AutoSize

Hope this helps someone learn a new way to do something cool in PowerShell!

Unable to change Deleted Item Retention

I recently needed to update the Deleted Item Retention period in Office 365 from the default 14 days to the maximum allowed (30 days) for all mailboxes in my environment. Since I was migrating mailboxes to Office 365 at the time, I wrote a script that I could add to my process which would update this setting while it was applying quotas to the mailboxes.

Things were working well, apart from a number of Room mailboxes that had been migrated from Exchange on-premises – every time the script ran, I’d get the following warning on all these mailboxes:


The strange thing is that this was only happening for rooms that were migrated from Exchange on-premises – any new rooms that were created didn’t have this issue. I decided to compare the mailbox attributes of an affected room against one that wasn’t affected to see what the difference was, and found this culprit:

UseDatabaseRetentionDefaults: True

Turning that setting off allowed me to go back and change the RetainDeletedItemsFor setting to 30 days, like I wanted to:

Set-Mailbox mailboxname -UseDatabaseRetentionDefaults $false


Set-Mailbox mailboxname -RetainDeletedItemsFor 30


In order to fix this for all other rooms affected by this issue, use the following command:

Get-Mailbox -ResultSize Unlimited | where {$_.ResourceType -eq "Room" -and $_.UseDatabaseRetentionDefaults -eq $true} | Set-Mailbox -UseDatabaseRetentionDefaults $false

After that, it was a simple matter of re-running my script – the deleted item retention piece looks like this:

$t = New-TimeSpan -Days 14

# Find every mailbox still on the default 14-day retention (skipping the DiscoverySearch mailbox)
$retMailboxes = Get-Mailbox -ResultSize Unlimited | Where {($_.Name -notmatch "DiscoverySearch" -and $_.RetainDeletedItemsFor -eq $t)}

foreach ($r in $retMailboxes){
    Set-Mailbox -Identity $r.Identity -RetainDeletedItemsFor 30
    Write-Host "Deleted Item Retention for $($r.Name) successfully updated to 30 days" -ForegroundColor Green
}

Hope this helps someone else scratching their head trying to figure out why they’re unable to change the Deleted Item Retention Period on mailboxes!