Master & Cmd-R

Troubleshooting Hybrid Azure AD Join

Hybrid Azure AD Join and Conditional Access

One of the cool features of Azure AD Conditional Access policies is being able to require that machines be domain joined – essentially locking down access to corporate devices only, and preventing non-managed or non-trusted devices from accessing your business data. You can see from the screenshot below that there is a fair amount of flexibility involved: for instance, you could select multiple options like I’ve done below, and your users will be prompted for MFA, but only if their device is not domain joined. If the device is domain joined, the user doesn’t get prompted for MFA when accessing the cloud application you’ve specified.

Even better, if you add the option to require devices to be marked as compliant, your users will be prompted for MFA only until they register their devices in Azure AD / Intune – at which point their devices are considered trusted, and they’ll no longer be prompted for MFA. Cool, right?

Anyway, we’re here to talk about the third requirement – Hybrid Azure AD join. This is a great option for enforcing corporate compliance, as it requires a device to be joined both to your Active Directory on prem, as well as Azure AD. Note that simply taking a BYOD device and joining it to Azure AD does not fit this requirement – it has to be joined in both places in order for it to be considered hybrid AD joined. If you’re shooting for a more self-service option, this is not it – typically only admins can join a workstation to AD, so your end users will not be able to set themselves up and become a trusted device on their own. However, if you’re trying to lock down your environment and prevent personal devices from connecting to your corporate data, this is the option for you!

Setting up Hybrid Azure AD join is actually pretty straightforward – I won’t get into the details here; the official setup documentation covers it well, so go give it a read if you haven’t seen it yet. However, what happens when you have some devices that are not joining Azure AD automatically? This happened to me recently while working on a deployment project, and here’s what it took to fix it – at least in my case…

What happens when devices don’t join?

Troubleshooting this one was difficult at first, as we couldn’t find a pattern for which machines were failing, and we weren’t finding any error messages that were helpful in tracking down the root cause. Windows 10 is also challenging because the hybrid AD join happens automatically – at least with Windows 7 devices, there’s an executable that gets deployed, which allows you a bit more flexibility in forcing the join and troubleshooting why it’s not happening. I discovered later that Windows 10 also has this ability, just done a bit differently – more on that in a bit.

At any rate, after doing a bit of digging, I was able to find the error messages showing why my machines weren’t joining. If you’re looking on the client machine, you’ll find these events in Event Viewer – expand the Applications and Services Logs, click on Microsoft, Windows, User Device Registration, and then Admin.

If your device isn’t joining, you’re more than likely going to find Event ID 304 and Event ID 305, which are remarkably unhelpful:

I mean, seriously – I ALREADY KNOW that they’re failing at the join phase!

I spent a fair amount of time troubleshooting everything I could find – Windows version (get all the updates done), latest version of AAD Connect, checked for updates to ADFS, troubleshooting my claims rules, recreating them, etc.

The suggestions in this post were helpful, but something was still missing. Particularly useful, though, was this little tidbit of information: you can run the dsregcmd utility in Windows 10 with a number of different switches to report back on device join information (dsregcmd /status), and you can even use this same utility to force an immediate Azure AD join attempt and spit out the results to a text file to help with your troubleshooting. Note that dsregcmd needs to run as System, so you’ll need PsExec to get your commands running in the correct context.

psexec -i -s cmd.exe

dsregcmd /debug > c:\users\username\desktop\workstationJoin.txt

You can crack that text file open and start looking through it to see if you can find your answer. Sadly, though, all the digging I was doing wasn’t getting me anywhere, so I opened up a Premier support ticket to see if Microsoft could shed some light on my problem here. In all honesty, this is one of the few times when I’ve opened a Microsoft support ticket and got the answer to my problem quickly – so kudos to them this time around!

Anyway, you’re here to find out what the answer was, and here it is: I had two ADFS claims rules that supplied the immutable ID claim, and they were conflicting with each other.

Here’s what happened… when ADFS was originally deployed (not by me), the federation was set up with support for multiple domains.

This rule was created automatically because the -SupportsMultipleDomains switch was used. This is the recommended approach to federation, as it allows you to easily add federated domains down the line – however, it creates an additional rule that was causing me problems.

This is the rule that was created:

c:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn"]

=> issue(Type = "http://schemas.microsoft.com/ws/2008/06/identity/claims/issuerid", Value = regexreplace(c.Value, ".+@(?<domain>.+)", "http://${domain}/adfs/services/trust/"));

And then this is the rule that gets created when you are supporting multiple domains for device registration:
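For reference, the multi-domain rule generated by Microsoft’s device registration guidance looks something like this – same issuerid output, but gated on both the account type and UPN claims:

```
c1:[Type == "http://schemas.microsoft.com/ws/2012/01/accounttype", Value == "user"] && c2:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn"]
=> issue(Type = "http://schemas.microsoft.com/ws/2008/06/identity/claims/issuerid", Value = regexreplace(c2.Value, ".+@(?<domain>.+)", "http://${domain}/adfs/services/trust/"));
```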

As you can see, the rule is a bit different, and this second rule contains the accounttype = “user” claim as well.

Basically, device registration won’t work with both of those rules in place – the second one is the only one you need. It also won’t work with just the first rule in place (which is how I had set it up originally). When I configured my hybrid Azure AD join, I set it up without multiple domain support, because I didn’t realize that federation had been set up that way in the beginning. Since the first rule is missing the account type claim as well as the UPN (c1 && c2 above), it won’t allow device registration to work properly in a multi-domain environment. As you’d expect, I went back and added the claims rules for multiple domain support as part of my troubleshooting, but that still didn’t resolve the issue while the first claims rule was in place. Thankfully, the solution was easy – delete the original claims rule, keep only the second one (the claims rule that supports device registration), and your devices will start to register.

TL;DR… give me the short version!

If you’re following these instructions to set up hybrid Azure AD, you’ll more than likely use the script to set up the claims rules – highly recommended, it works well. Just make sure to check beforehand if your federation was set up to support multiple domains so that you can configure your claims rules appropriately.

You can find out if your federation supports multiple domains by running Get-MsolFederationProperty -DomainName <your domain> – if the Federation Service Identifier is different between the ADFS Server and the Office 365 Service (screenshot below), then your federation is set up to support multiple domains. If they’re both the same, then you’re configured to support only a single domain.

If your federation supports multiple domains, make sure to provision the correct rules using the script Microsoft provides, and delete your original claims rule – otherwise things won’t work properly afterwards.

After this was done, my workstations started joining Azure AD correctly on next reboot, and my pages and pages of error messages started going away. Good times were had by all!

Use PowerShell to find Mail Contacts

It seems like one of the tasks I do the most on projects is discovery and documentation of existing settings in a client’s environment. While there’s a number of reports in the Office 365 portal, I find that nothing beats PowerShell for getting just what I want when I want it – and this time was no exception!

Here’s a quick script you can use to find all the mail contacts in an environment and output their name and primary SMTP address to a CSV file – good for a report, or as step one in a migration, when you can then turn around and use this CSV file to create contacts in the new environment.

$mailContacts = @()
$contacts = Get-MailContact -ResultSize Unlimited

foreach ($c in $contacts){
   $mc = New-Object System.Object
   $mc | Add-Member -type NoteProperty -Name Name -Value $c.Name
   $mc | Add-Member -type NoteProperty -Name Email -Value $c.PrimarySmtpAddress
   $mailContacts += $mc
}

$mailContacts | Export-Csv Mail-contacts.csv -NoTypeInformation

Let me break down what we’re doing here – this command creates an empty array for us to hold our data:

$mailContacts = @()

And then this command does a quick Get to pull all our mail contacts into a variable.

$contacts = Get-MailContact -ResultSize Unlimited

From there, I use a foreach statement to iterate through each contact, and add it to the array we’ve created:

foreach ($c in $contacts){
   $mc = New-Object System.Object
   $mc | Add-Member -type NoteProperty -Name Name -Value $c.Name
   $mc | Add-Member -type NoteProperty -Name Email -Value $c.PrimarySmtpAddress
   $mailContacts += $mc
}
And finally, you can take your variable and display it on the screen, or output it to a csv file using the final command:

$mailContacts | Export-Csv Mail-contacts.csv -NoTypeInformation

If you output it to your screen, this is what it’ll look like:

And here’s the csv output:

I think it might be helpful to break down this object building a bit, because it was something I saw the first few times without really understanding what it was doing. You start out with a variable of your choice (in this case, $mc) and use it to create a new PowerShell object. The next two lines take the same variable and add properties to it by piping the variable into the Add-Member cmdlet. You can put as many properties as you like into your object – just keep adding new lines, giving each property a unique name and populating it with a value that you’ve gathered earlier. You can even use this method for reporting in your scripts, as you can populate these values however you see fit.

At the end of it all, you add the object to your array with $mailContacts += $mc. This adds a single row to the table, containing the Name and Email address of each mail contact in your $contacts variable.
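As an aside, if you’re on a newer version of PowerShell, the [pscustomobject] type accelerator builds the same objects with a lot less ceremony – here’s a sketch of the equivalent loop (it assumes $contacts has already been populated via Get-MailContact):

```powershell
# Equivalent report using [pscustomobject] instead of New-Object/Add-Member
$mailContacts = foreach ($c in $contacts) {
    [pscustomobject]@{
        Name  = $c.Name
        Email = $c.PrimarySmtpAddress
    }
}

$mailContacts | Export-Csv Mail-contacts.csv -NoTypeInformation
```

Because the foreach statement emits each object into $mailContacts directly, you also skip the array-append on every iteration.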

There you go – quick and easy, and more than likely the building block for the many different scripts you’ll create over time. Good luck, have fun!

Troubleshooting ADFS/CBA: Error 342

I ran into this error today while configuring Certificate Based Authentication (CBA), and it was a weird enough of an issue that I thought it would be useful to post it, and share the fix.

After configuring my CRL so that it was published publicly (this is required for both your Root CA, as well as your Enterprise CA), and installing my certificates on both my ADFS servers and WAP servers (again, both the Root CA certificate and the Enterprise CA certificate are required), CBA was still failing when trying to log in to the Office 365 Portal.

Well, we’re no stranger to error logs and troubleshooting, right? Off we go to the ADFS logs to see what’s going on.

The Error: Event ID 342

This error basically states that it couldn’t build the trust chain for the certificate, usually because it can’t properly access your CRL all the way up the line.

I knew this wasn’t the case, because I had already tested that using one of my issued certificates – the command to do this is:

certutil -f -urlfetch -verify certname.cer

(replace certname.cer with the name of your cert)

This command will go through and check all of the URLs listed on the cert and verify connectivity to them – it’s great for checking your CRL/CDP/AIA distribution points and making sure that they’re all accessible internally and externally.

Next, I checked all my certificates on the local computer certificate store to verify that I didn’t have any old certificates, duplicates with wrong information, etc. – everything was as it was supposed to be. I eventually found an answer indirectly on this forum post – it didn’t list my issue exactly, or provide the fix I used, but it DID provide me with the tools I needed to figure it out.

The Fix: clear out old certificates

It turns out that the issue was being caused by old certificates sitting in the NTAuth store on my ADFS servers – it’s bizarre, because I had deleted all my old certificates and replaced them with new ones containing updated CRL distribution points, etc. However, that did not clear them out of this certificate store, as these certificates are being pulled directly from Active Directory.

Here’s how you check for these little deviants, and how to get ’em all fixed up:

Start by running the following command:

certutil -viewstore -user -enterprise NTAuth

(like so)

This will pop up a view of your NTAuth certificate store: scroll through the list of certificates until you find the one relating to your Enterprise CA:

Now, you can see that the certificate is definitely still valid (not expired) – however, I know that I updated my CRL & AIA locations and the new certificate that I’ve installed on all my servers is valid from today’s date, not August 2017.

Next, open the certificate properties by clicking on the link below the date, and note the thumbprint of the certificate:

Next, open the registry, and match that certificate thumbprint against the certificates found in HKLM\Software\Microsoft\EnterpriseCertificates\NTAuth\Certificates.

Then I simply deleted the registry key that matched that thumbprint (always make a backup of your reg key before you delete it!). This time when I checked my NTAuth store by running the command above, that Enterprise CA certificate was completely gone.

Finally, to update the NTAuth store and pull in a new certificate, I ran the following command:

certutil -pulse

Now when I check my NTAuth store, I can see that it’s pulled in the correct certificate:

You can, of course, verify this by opening the certificate and making sure that the thumbprint matches your current certificate, and that the correct CRL & AIA distribution points are listed. Once this was done, my trust chains were able to build correctly, and certificate based authentication immediately started working. 😀

There you have it… if you’re struggling to get CBA configured, and you know you’ve updated all your certs with the correct CDP, give this a shot and see if it solves your problem!

PowerShell: Connect to Lync Online

The issue: unable to discover PowerShell endpoint URI

I don’t run into this error very often, but it’s happened enough times in the last few weeks that I really wanted to come up with a permanent, elegant solution. This error can happen when Lync/Skype is configured in a hybrid deployment and autodiscover is pointing back on prem – when trying to connect to Lync Online using the New-CsOnlineSession cmdlet, you receive the following error:

The Fix: Override Admin Domain

The solution is simple – all you need to do is add the -OverrideAdminDomain switch to your connection script. You can add the admin domain permanently to your script, and be done with it. For me, however, I often end up connecting to multiple environments depending on the projects I’m working on, or supporting different clients, etc. I wanted a more elegant solution, so I came up with a way of automating that process so that I can connect to any environment just by putting in my logon credentials. The script will check and find the onmicrosoft domain, and then use that to connect to a new CSOnline session with that domain specified as the admin domain.

This is what the script looks like:

$credential = Get-Credential
Connect-MsolService -Credential $credential

# Find the root onmicrosoft.com tenant domain
Write-Host "Connected to MS Online Services, checking admin domain..." -ForegroundColor Yellow
$msolDomain = Get-MsolDomain | where {$_.Name -match "onmicrosoft.com" -and $_.Name -notmatch "mail.onmicrosoft.com"}

Write-Host "Admin domain found, connecting to $($msolDomain.Name)" -ForegroundColor Green

# Use this domain to connect to SFB Admin domain
$session = New-CsOnlineSession -Credential $credential -OverrideAdminDomain $msolDomain.Name
Import-PSSession $session

And there you go… connected properly, every time!

Feel free to download the script, and add it to your toolkit – hope it helps!

Resize Azure Managed Disk

I was recently trying to resize the OS disk of an Azure VM that I had just created, and ran into an error while using these instructions. In case you missed it, make sure you stop/deallocate your VM before trying to update the disk – otherwise it’ll just fail on you.
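If you want to stay in PowerShell for that step too, deallocating looks something like this with the AzureRM module (the resource group and VM names are placeholders, matching the resize snippet):

```powershell
# Stop and deallocate the VM so the managed disk can be modified
$rgName = 'My-Resource-Group'
$vmName = 'My-VM'
Stop-AzureRmVM -ResourceGroupName $rgName -Name $vmName -Force
```

The -Force switch just skips the confirmation prompt.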

For the record, this is what the PowerShell looked like that I was trying to use:

$rgName = 'My-Resource-Group'
$vmName = 'My-VM'
$vm = Get-AzureRmVM -ResourceGroupName $rgName -Name $vmName
$vm.StorageProfile.OsDisk.DiskSizeGB = 1023

Update-AzureRmVM -ResourceGroupName $rgName -VM $vm

No dice! 🙁

Update-AzureRmVM : Managed disk size via virtual machine ‘My-VM’ is not allowed. Please resize disk resource at /pathtomanageddisk/diskname.

Error code: ResizeDiskError

Now, the reason for this is that I had created this new VM using managed disks, and you can’t update those directly using the Update-AzureRmVM command. It took a little bit of digging to figure out how to update that managed disk, so I figured I’d post how I’d done it, in the hopes that it’ll help someone else out.

Since it’s a managed disk, running Get-AzureRmStorageAccount will not show you your disk – instead, you need to run Get-AzureRmDisk.

You can see that my disk size is 127GB, and not the glorious Terabyte I’m hoping to see:

Now that you’ve found your disk, go ahead and grab the name of your disk (or pull it into a variable if you prefer) and then simply update the size of the disk with the following command:

New-AzureRmDiskUpdateConfig -DiskSizeGB 1023 | Update-AzureRmDisk -ResourceGroupName $rgName -DiskName MyVM_OsDisk_1_crazylongnumbers

Verify that your disk looks correct by running

Get-AzureRmDisk -DiskName MyVM_OSDisk_1_crazylongnumbers

Next, Start your VM back up using the following command:

Start-AzureRmVM -ResourceGroupName $rgName -Name $vmName

(or use the Azure portal if you prefer, but c’mon… PowerShell!!)

Once back in your VM, you can see that your volume size is unchanged – Windows can see the newly available space, but it doesn’t auto-extend your drive into it.

Right click on the start menu, and select Disk Management. You can see your disk now has unallocated space to match what you specified in PowerShell:

Go ahead and select the disk and extend it to fill the usable space, and you’re good to go!
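If you’d rather skip the Disk Management GUI, the same extension can be done from an elevated PowerShell prompt inside the VM – a sketch, assuming your OS volume is C::

```powershell
# Extend the C: volume into all of the newly unallocated space
$size = Get-PartitionSupportedSize -DriveLetter C
Resize-Partition -DriveLetter C -Size $size.SizeMax
```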

Hope this helps – if this helped you, or you have any questions, feel free to shoot me a comment below.

MigrationPermanentException: Cannot find a recipient that has a mailbox GUID

Ran into the following error while attempting to offboard a mailbox (migrate it back on prem from Exchange Online):

MigrationPermanentException: Cannot find a recipient that has mailbox GUID ‘xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx’

This error is usually caused when a mailbox is created directly in the cloud (New – Office 365 Mailbox in the on-prem Exchange console). The Exchange mailbox GUID doesn’t actually get written back on prem in this case, so trying to offboard the user results in this error. You can confirm this by connecting to your on-premises Exchange Shell and running Get-RemoteMailbox username | FL ExchangeGUID.

To fix this problem, connect to Exchange Online and find the GUID that’s missing:

Then simply take that GUID and write it back to your on prem remote mailbox object:
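Put together, the fix looks something like this (username and the GUID are placeholders):

```powershell
# In Exchange Online PowerShell: find the mailbox GUID
Get-Mailbox username | Format-List ExchangeGUID

# In the on-prem Exchange Management Shell: stamp that GUID on the remote mailbox object
Set-RemoteMailbox username -ExchangeGUID "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
```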

Force a DirSync, and all is well with the world!

Exchange Online Hybrid: Fixing free/busy issues

Now, I’m just going to come out and say it – this is NOT the only fix for free/busy issues when configuring Exchange Online Hybrid with an on-prem Exchange server. If you’re reading this, then it’s more than likely that you (like me), have been reading countless TechNet articles, blog posts, forum posts, etc. Well, at the end of it all, this was the fix for my free/busy issues, and I thought others might benefit by finding this ahead of time, and hopefully cut out some of the Googling Binging… 😉

The Problem:

Pretty straightforward – users on prem could not see the free/busy status of users in Office 365. I worked my way through every setting I could think of, including (but not limited to) Autodiscover, DNS, permissions, certificate settings, Exchange CU level, to no avail!

Also, if you haven’t seen this before, the hybrid environment free/busy troubleshooter was actually a great help in systematically working your way through potential problem spots.

The Solution:

Eventually, I came across this TechNet blog post which gave me the answer I needed – now, I will say that I’ve never had to set this before, and never noticed this setting missing on previous hybrid configs, but anyway…

In my on-prem environment, the TargetSharingEpr setting was blank, like so:

Thankfully, the fix for this is simple – run the following from an elevated PowerShell prompt:

Get-OrganizationRelationship | Set-OrganizationRelationship -TargetSharingEpr https://outlook.office365.com/EWS/Exchange.asmx

This is what it should look like when you’re done:

I also checked my Exchange Online org settings and found that the TargetSharingEpr was also blank:

Now, I wasn’t having any issues with free/busy in this direction, but I thought I’d go ahead and update it anyway – just in case. Make sure that this time around, you’re connecting to Exchange Online, and not your on-prem Exchange, and point it back to your EWS endpoint:

Get-OrganizationRelationship | Set-OrganizationRelationship -TargetSharingEpr https://mail.yourdomain.com/EWS/Exchange.asmx

(I don’t have to tell you that needs to be updated to your own hybrid namespace, do I?) 😛

When you’re done, it should look like this:

There you have it – hope this helps someone else solve some free/busy issues without having to spend hours of frustration trying everything else!

Unable to connect to Exchange Online Shell

Access Denied (No soup for you!)

I’ve been using this script to streamline my connection to the Exchange Online Shell, and it’s been working well for me – until recently when I ran into this weird “Access Denied” error:

As you can imagine, I started out by troubleshooting issues with my account, trying to figure out why I was being denied access, including stuff like what this article talks about (bad username/password, not being an Exchange Online admin).

I knew, however, that this was not my issue – I had confirmed my password, and my account was a global admin in Office 365. Turns out, the issue was caused not by any specific access being denied on my user account, but because I was connecting to an Exchange Online tenant that was configured for Multi-Factor Authentication! If you’re getting the access denied error connecting the old way, it’s time for a bit of a change.

The Fix:

There are two ways to resolve this issue, depending on how you want to use your scripts – I use the connection script above quite frequently in my other scripts to connect to Exchange Online, and so I wanted to be able to keep using it.

Fix 1: Use an account that is not enabled for MFA

The first fix is more of a workaround than a fix – simply use a global admin account (or an account with the Exchange Admin role) that is not enabled for multi-factor authentication. This is a good place to set up a cloud-only admin account and connect using that, or simply use an on-prem account with MFA disabled – either/or.

Connect using MFA:

If instead you want to start connecting to the Exchange Online shell using MFA and Modern Auth, you’ll need to install the Exchange Online Remote PowerShell Module, and follow the instructions here.

You know you’re using the right module, because it has a blue Exchange icon, and it also gives you this information in yellow text when it loads.

Like it says, you initiate a connection by using Connect-EXOPSSession, like so:
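For example (the UPN here is a placeholder – use your own admin account):

```powershell
# Connects using Modern Auth and triggers the MFA flow configured on the account
Connect-EXOPSSession -UserPrincipalName admin@yourdomain.com
```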

You can see that you’re greeted by a Modern Auth prompt instead of your typical basic auth prompt:

Which then passes on to your MFA approval flow:

For the record, if you’re still getting this credential prompt:

You’re still using basic auth in your connection and are going to run into the “Access Denied” error.

So, there you go – the problem is not with your account, but how you’re connecting to the Exchange Online Shell. Using one of these two options here should get you up and running, and back into your admin shell. I haven’t yet updated my management scripts to leverage this new module, so I’m still using an account with MFA disabled – but that’s up next!

Uniqueness violation. Property: SourceAnchor

I’ve run into this error in Azure AD Connect (DirSync), and I thought I’d share how I fixed it – as is often the case with sync errors, the solution is not always obvious and requires some digging!

To start us off, this is what the error looks like: attributes associated with this object have values that may already be associated with another object in your local directory services.

Since the error message helpfully points out the duplicate proxy addresses, that seemed like a good place to start; however, clearing out the proxy addresses on prem (or changing them if you prefer) didn’t resolve the problem. Instead, it caused my error message to change from duplicate proxy addresses to duplicate UPN values!

Now, under normal circumstances, I’d just delete the synced object in Office 365, and let AAD Connect put things back together – however, in this situation, I was dealing with accounts that were already in production, and I needed to make sure I could match these accounts up with their on-prem counterparts without causing data loss. If you’re just setting things up, and you’ve got cloud accounts that were created by mistake, or even if they’re not in production yet, you can resolve these issues by deleting the offending cloud accounts, and resyncing from your Active Directory.

Looking back in Office 365/Azure AD, I can see my duplicate accounts – now to figure out which one is the rogue, and which one needs to be kept!

There is an excellent TechNet article that gives us a command we can use to generate the immutable ID on premises, like so:

ldifde -f export.txt -r "(Userprincipalname=*)" -l "objectGuid, userPrincipalName"

(hint: the objectGuid that is output by the command above is your ImmutableID in Office 365)

Using PowerShell, we can look for the account matching that Immutable ID, like so:
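As a sketch (the GUID is a placeholder), you can convert the on-prem objectGuid to its base64 ImmutableId form and then filter on it:

```powershell
# Convert the AD objectGuid to the base64 string Azure AD uses as the ImmutableId
$guid = [Guid]"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
$immutableId = [Convert]::ToBase64String($guid.ToByteArray())

# Find the cloud account carrying that ImmutableId
Get-MsolUser -All | Where-Object { $_.ImmutableId -eq $immutableId }
```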

Confirmed that this is NOT the account that has a mailbox attached:

Checked again, just to be sure:

Tried changing Immutable ID to null – no problems there:

Set-MsolUser -UserPrincipalName <upn> -ImmutableId "$null"

The meeting rooms did not have an immutable ID, but adding one would give me the dreaded uniqueness violation error.

Now that we know which account is which, go ahead and delete the duplicate account, and remove it from the Recycle Bin. Once the duplicate account is gone, I was able to update the Immutable ID on the production account, so that the DirSync could perform a hard match the next time it ran.

It’s possible that you might run into the issue reported here, and be unable to remove the object in Azure AD, due to it now being a lonely orphan – to get past this hurdle, you’ll need to disable DirSync on your tenant before you can clear out the objects.

Disable DirSync using the following command:

Set-MsolDirSyncEnabled -EnableDirSync $false

Note that you’ll need to wait a while (MS says up to 72 hours – it can happen quicker, but it can definitely take a while), so plan to do this in an outage window or over the weekend, when you can expect little or no changes to be taking place in your AD.

You can check the progress in PowerShell, which always seems to be quicker than the admin portal, by using the following commands:

(Get-MsolCompanyInformation).DirectorySynchronizationEnabled

This one gives you a True or False – obviously we’re looking for it to be False before we proceed.

(Get-MsolCompanyInformation).DirectorySynchronizationStatus

This one will return PendingDisabled, Enabled, or Disabled. Once DirSync has been disabled, you should be able to delete the offending account, update your immutable ID on the account that you need to keep, and then turn DirSync back on again.

Once this was done, my accounts synced up again, and I was back in business… hope this helps!

Add-AzureAccount fails – Your browser is currently set to block cookies

I recently ran into an issue on Server 2016 while attempting to connect to my Azure account through PowerShell – after installing the Azure PowerShell modules and running Add-AzureAccount, an authentication window should open, allowing you to connect to your Azure account. However, instead of seeing the logon window, I would only get the following error:

“Your browser is currently set to block cookies. You need to allow cookies to use this service.”

Figuring that Edge was blocking cookies due to the default security configuration in Server 2016, I attempted to open Edge so that I could unblock those sites, log in to my Azure account, and continue my server configuration. Seems like that’s a dead end as well – Edge won’t even launch under the built-in administrator account!

I hadn’t run into this before, but apparently it’s a known issue – I decided to just create another admin account rather than going down the route of editing my registry settings, as I didn’t really want to start poking holes in my brand new server. It might be completely safe, but I figured I’d just leave it as is – I didn’t really see much use for Edge on my default admin account anyway.

However, after creating a new admin account, logging in, and launching Edge, I found that cookies were indeed already enabled – and I was still getting the exact same error connecting to my Azure account in PowerShell. It turns out that the culprit is Internet Explorer, and not Edge at all! If you open Internet Explorer (Start – Run – iexplore.exe) and attempt to log in to the Microsoft sign-in page, you’ll receive a very similar error:

The answer to this strange little conundrum is just to go in and add the two Microsoft sign-in sites – https://login.microsoftonline.com and https://login.live.com – to your trusted sites in Internet Explorer:

Once this was done, I was able to connect to my Azure account using both my Microsoft Account and my Office 365 account. Knowing this, I went back to my built-in administrator account and added both those sites to my trusted sites in IE, and all was well with the world again.

Long story short… just add the Microsoft authentication sites above to your Trusted Sites in IE 11 (even on your built-in admin account), and you’ll be able to connect to your Azure account properly.

Hope this helps save you some time searching for an answer to this weird problem – good luck!