Master & Cmd-R

G-Suite to Office 365: Meeting Room Interop

One of the challenges when migrating from Google Suite to Office 365 is coexistence – mail routing is not that hard to configure, and free/busy (Calendar Interop) is now available and works fairly well. Google’s setup instructions are pretty straightforward. Just remember that you need to disable your users’ calendars in G Suite in order for Interop to head across to Office 365 and look up availability. Without that step, users will only ever see the Google calendars, and you’ll run into issues.


This is all well and good – however, Calendar Interop only works with users, not resource mailboxes.

The reason resource mailboxes are not supported (to the best of my knowledge) is that a resource calendar in G Suite is not a user object like it is in Exchange Online – instead, it’s a unique calendar object with an address that ends in… and there’s no way for an Exchange Online org to federate with that!

Now what?

Since I knew I wasn’t going to win with free/busy flowing towards that resource in Google, I turned my attention to Exchange Online, where I know exactly what my calendar interop options look like. As expected, if I create a meeting room in Exchange Online, and a user object in G Suite (calendar disabled), free/busy flows as expected – that’s at least a step in the right direction. Next up is making sure that this calendar can still accept and process meeting requests even though they’re coming from outside the org. Turns out it’s not that hard – here’s what you need to do.

First off, create new resource mailboxes in Exchange Online. If you’re planning to migrate the calendars, you can use a third-party tool like BitTitan MigrationWiz, or manually export them from your source and re-import them using Outlook. In this instance, I used MigrationWiz, and it migrated them over quickly and cleanly. I still exported them anyway, just so I have a backup – never hurts to have a way back if things go sideways!
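If you’re creating the rooms from scratch rather than migrating, the Exchange Online side is a couple of cmdlets per room – here’s a quick sketch, with a made-up room name:

```powershell
# Create the room mailbox in Exchange Online
New-Mailbox -Name "Boardroom" -Alias boardroom -Room

# Have it process requests automatically (tweak to match your booking policy)
Set-CalendarProcessing boardroom -AutomateProcessing AutoAccept
```

Run this from a connected Exchange Online PowerShell session.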

Now that the calendars have been migrated (and backed up), we need to configure the interop and booking options – start in G Suite by deleting your migrated meeting rooms, and then create new user accounts with the exact same name as your deleted resource calendars.

You can of course use different names, but this will make it easier for users to find the rooms they’re looking for without a lot of extra effort. As long as the email address matches the email of the resource mailbox in Exchange Online, free / busy lookup will work properly.

Next step is to configure mail routing – make sure that your resource mailbox in Exchange Online has a secondary SMTP address that you can route to. To keep things simple, I just use the tenant’s onmicrosoft.com domain. To configure mail routing in G Suite, go to Apps > G Suite > Gmail > Advanced Settings > Recipient Address Map and click Edit.

Under option 3, type in your source and target addresses, separated by a comma – for example (made-up addresses, use your own): boardroom@yourdomain.com, boardroom@yourtenant.onmicrosoft.com

Click Save, and then Save again to make sure your changes are properly applied. After this, anybody who sends an email from Gmail or even external (since your MX records should still be pointed at Google) will route properly to your mailbox in Office 365.

At this point, you should have free / busy and mail routing working correctly – all that’s left to do is to configure your calendar in Exchange Online to accept meeting requests from Google users.

Go ahead and log into Exchange Online through PowerShell, and run the following command:

Set-CalendarProcessing my_boardroom -ProcessExternalMeetingMessages $true

Confirm that your settings are properly applied by running this command:

Get-CalendarProcessing my_boardroom | fl AutomateProcessing,AllRequestInPolicy,ResourceDelegates,ProcessExternalMeetingMessages

Once Process External Meeting Messages, and Automate Processing are set correctly, your meeting requests will be properly processed and booked (or declined) based on the rules you configured in your calendar resource processing settings.

I’m sure you’re wondering why I’ve included the settings for All Request in Policy, and Resource Delegates – I’ve run into this a few times, so I figured I’d make sure to include it so that when I’m trying to figure out why my rooms aren’t auto-accepting meeting requests, I’ll come back here and remember what I need to do! 😀

Once you’ve assigned a delegate on a meeting room, All Request in Policy switches to False, and the delegate starts to receive meeting requests instead of auto approval. I’ve found this to be the case even when I’ve told the GUI to accept or decline booking requests automatically.

To be honest, I haven’t made it work yet with both a delegate and auto accept enabled at the same time – it’s only ever seemed to be one or the other. So, to get everything working properly, set AllRequestInPolicy to $true, clear out ResourceDelegates, and you’ll be good to go.
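Put together, that comes down to something like this (using the same my_boardroom example as above):

```powershell
# Allow in-policy requests to be auto-processed, and clear any delegates
Set-CalendarProcessing my_boardroom -AllRequestInPolicy $true -ResourceDelegates $null
```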

Rooms, how do I book?

The final piece to mention in all this is that once you’ve deleted these resource calendars in G Suite, users will no longer be able to pick them as rooms when they’re creating a meeting – this might seem obvious to you, but if you haven’t planned to communicate these changes with your users, you’re going to end up with more support calls, and unhappy people!

The first thing to remember is that your rooms will now be gone – when someone clicks on the Rooms tab in a meeting window, they’re going to only see the meeting rooms you’ve left behind (or none, if you’ve moved them all).

Instead, users can click on the Guests tab, and start typing in the name of the board room they want to book:

You see now why we used the same names for our new accounts as the rooms we deleted? Your users will love you for it!

Next, click on Find A Time, and you can see the availability of the room you want to select:

Once you have your meeting details set, click Save, then Send – within a few moments you should receive confirmation that your room has been booked in Office 365. Checking the calendar again, you’ll see your meeting booked as intended:

One last important change to remember is that if your users are used to adding the resource calendar to their calendar in Google, that won’t work any longer – all they’re going to get is this error message:

This is expected, since we’ve created an account without a calendar so that Interop works properly – simply show your users the New Way, and everybody will be happy(ish) again!

Hope this helps 😀

The Case of the Missing Mailbox Permissions

Just ran into this today where there was a discrepancy between the permissions that were showing up in the Office 365 Admin Portal, in the Exchange Admin Center, and in PowerShell.

From the Exchange Admin Center, you could only see a single user added with Full Access:

However, if you look at the Office 365 Portal, it shows that there are two users with the “Read and manage” permission:

Looking at the permissions in PowerShell, I noticed something interesting… the user that is not showing up in the EAC has a Deny: True attached to their permissions:

Even weirder still, trying to remove those mailbox permissions just gave me an error, like so:

I figured I’d try to see if I could update those permissions and change the Deny from True to False, but no success. I also tried adding the user back in to reset their permissions, and they only got added a second time, and now had both a Deny -eq True and a Deny -eq False entry!

Eventually this is what fixed it for me:

Remove-MailboxPermission -Identity user -User delegate -AccessRights FullAccess -Deny

Remember that in this cmdlet, “-Identity” is the mailbox you want to edit permissions on, and “-User” is the delegate whose permissions you’re adding or removing. As soon as I ran that command, it removed the Deny permissions and left the Allow permissions intact. Better still, the Admin Portal, Exchange Admin Center and PowerShell all told the same story again!

I don’t know how those Deny permissions got on there in the first place, but ultimately, remember this – if you come across a user with funky permissions, and the Deny -eq True… the Deny permissions are going to always overrule any Allow permissions that have been granted. Deal with those ones first, and all will be well with the world again.
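If you want to check a mailbox for these up front, a quick sketch (hypothetical mailbox name):

```powershell
# List any explicit Deny entries on the mailbox – these always win over Allows
Get-MailboxPermission -Identity user | Where-Object { $_.Deny -eq $true }
```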

Troubleshooting Hybrid Azure AD Join

Hybrid Azure AD Join and Conditional Access

One of the cool features of Azure AD Conditional Access Policies is being able to require that machines be domain joined, essentially locking down your access to corporate devices only, and preventing non-managed or non-trusted devices from being able to access your business data. You can see from the screenshot below, that there is a fair amount of flexibility involved: for instance, you could select multiple options like I’ve done below, and your users will be prompted for MFA, but only if their device is not domain joined. If the device is domain joined, the user doesn’t get prompted for MFA when accessing the cloud application you’ve specified.

Even better, if you add the option to require device to be marked as compliant, your user will only get prompted for MFA until they register their device in Azure AD / Intune, at which point their device will be considered trusted, and they’ll no longer be prompted for MFA. Cool, right?

Anyway, we’re here to talk about the third requirement – Hybrid Azure AD join. This is a great option for enforcing corporate compliance, as it requires a device to be joined both to your Active Directory on prem, as well as Azure AD. Note that simply taking a BYOD device and joining it to Azure AD does not fit this requirement – it has to be joined in both places in order for it to be considered hybrid AD joined. If you’re shooting for a more self-service option, this is not it – typically only admins can join a workstation to AD, so your end users will not be able to set themselves up and become a trusted device on their own. However, if you’re trying to lock down your environment and prevent personal devices from connecting to your corporate data, this is the option for you!

Setting up Hybrid Azure AD join is actually pretty straightforward – I won’t get into the details here, as the official documentation covers it well; go give it a read if you haven’t seen it yet. However, what happens when you have some devices that are not joining Azure AD automatically? This happened to me recently while working on a deployment project, and here’s what it took to fix it – at least in my case…

What happens when devices don’t join?

Troubleshooting this one was difficult at first, as we couldn’t find a pattern of what was causing some machines to fail, and we weren’t finding any error messages that were very helpful in tracking down the root cause. Windows 10 is also challenging, because the hybrid Azure AD join happens automatically – at least with Windows 7 devices, there’s an executable that gets deployed, which gives you a bit more flexibility for forcing the join and troubleshooting why it’s not happening. I discovered later that Windows 10 also has this ability, just done a bit differently – more on that in a bit.

At any rate, after doing a bit of digging, I was able to find the error messages showing why my machines weren’t joining. If you’re looking on the client machine, you’ll find these events in Event Viewer under Applications and Services Logs > Microsoft > Windows > User Device Registration > Admin.

If your device isn’t joining, you’re more than likely going to find Event ID 304 and Event ID 305, which are remarkably unhelpful:

I mean, seriously – I ALREADY KNOW that they’re failing at the join phase!

I spent a fair amount of time troubleshooting everything I could find – checking the Windows version (and getting all the updates done), installing the latest version of AAD Connect, checking for updates to ADFS, troubleshooting my claims rules, recreating them, etc.

The suggestions in this post were helpful, but something was still missing. Particularly useful, though, was this little tidbit of information: you can run the dsregcmd utility in Windows 10 with a number of different switches to report back on device join information (dsregcmd /status), and you can even use the same utility to force an immediate Azure AD join attempt and spit out the results to a text file to help with your troubleshooting. Note that dsregcmd needs to run as System, so you’ll need PsExec to get your commands running in the correct context.

psexec -i -s cmd.exe

dsregcmd /debug > c:\users\username\desktop\workstationJoin.txt

You can crack that text file open and start looking through it to see if you can find your answer. Sadly, though, all the digging I was doing wasn’t getting me anywhere, so I opened up a Premier support ticket to see if Microsoft could shed some light on my problem here. In all honesty, this is one of the few times when I’ve opened a Microsoft support ticket and got the answer to my problem quickly – so kudos to them this time around!

Anyway, you’re here to find out what the answer was, and here it is: I had two ADFS claims rules that supplied the immutable ID claim, and they were conflicting with each other.

Here’s what happened… when ADFS was originally deployed (not by me), federation was configured using the –SupportsMultipleDomains switch, which automatically creates an additional issuerid claims rule. This is the recommended approach to federation, as it allows you to easily add federated domains down the line – however, that extra rule was what was causing me problems.

This is the rule that was created:

c:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn"]

=> issue(Type = "http://schemas.microsoft.com/ws/2008/06/identity/claims/issuerid", Value = regexreplace(c.Value, ".+@(?<domain>.+)", "http://${domain}/adfs/services/trust/"));

And then this is the rule that gets created when you are supporting multiple domains for device registration (this is the version from Microsoft’s claims rule script – compare it against your own environment):

c1:[Type == "http://schemas.microsoft.com/ws/2012/01/accounttype", Value =~ "^(?i)User$"] &&
c2:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn"]
=> issue(Type = "http://schemas.microsoft.com/ws/2008/06/identity/claims/issuerid", Value = regexreplace(c2.Value, ".+@(?<domain>.+)", "http://${domain}/adfs/services/trust/"));

As you can see, the rule is a bit different, and this second rule contains the accounttype = “user” claim as well.

Basically, device registration won’t work with both those rules in place, and the second one is the only one you need. It also wouldn’t work with just the first rule in place (which is how I had set it up originally). When I configured my hybrid Azure AD, I set it up without the multiple domain support, because I didn’t realize that it had been set up that way in the beginning. Since the rule is missing the account type claim as well as the UPN (c1 && c2 above), the claims rule won’t allow device registration to work properly in a multi-domain environment. As you’d expect, I went back and added the claims rules for multiple domain support as part of my troubleshooting, but that still won’t resolve the issue when you still have the first claims rule in place. Thankfully, the solution was easy – delete the original claims rule, and keep only the second (the claims rule that supports device registration), and your devices will start to register.

TL;DR… give me the short version!

If you’re following these instructions to set up hybrid Azure AD, you’ll more than likely use the script to set up the claims rules – highly recommended, it works well. Just make sure to check beforehand if your federation was set up to support multiple domains so that you can configure your claims rules appropriately.

You can find out if your federation supports multiple domains by running Get-MsolFederationProperty -DomainName yourdomain.com – if the Federation Service Identifier is different between the ADFS Server and the Office 365 Service (screenshot below), then your federation is set up to support multiple domains. If they’re both the same, then you’re configured to only support a single domain.
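For example (substituting your own federated domain – the property names here are the ones I’d expect from the MSOnline module, so verify in your tenant):

```powershell
# Compare the Federation Service Identifier reported for ADFS vs. Office 365
Get-MsolFederationProperty -DomainName yourdomain.com |
    Format-List Source, FederationServiceIdentifier
```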

If your federation supports multiple domains, make sure to provision the correct rules using the script Microsoft provides, and delete your original claims rule – otherwise things won’t work properly afterwards.

After this was done, my workstations started joining Azure AD correctly on next reboot, and my pages and pages of error messages started going away. Good times were had by all!

Use PowerShell to find Mail Contacts

It seems like one of the tasks I do the most on projects is discovery and documentation of existing settings in a client’s environment. While there’s a number of reports in the Office 365 portal, I find that nothing beats PowerShell for getting just what I want when I want it – and this time was no exception!

Here’s a quick script you can use to find all the mail contacts in an environment and output their name and primary SMTP address to a CSV file – good for a report, or as step one in a migration, when you can then turn around and use this CSV to create contacts in the new environment.

$mailContacts = @()
$contacts = Get-MailContact -ResultSize Unlimited

foreach ($c in $contacts){
    $mc = New-Object System.Object
    $mc | Add-Member -Type NoteProperty -Name Name -Value $c.Name
    $mc | Add-Member -Type NoteProperty -Name Email -Value $c.PrimarySmtpAddress
    $mailContacts += $mc
}

$mailContacts | Export-Csv Mail-contacts.csv -NoTypeInformation

Let me break down what we’re doing here – this command creates an empty array for us to hold our data:

$mailContacts = @()

And then this command does a quick Get to pull all our mail contacts into a variable.

$contacts = Get-MailContact -ResultSize Unlimited

From there, I use a foreach statement to iterate through each contact, and add it to the array we’ve created:

foreach ($c in $contacts){
    $mc = New-Object System.Object
    $mc | Add-Member -Type NoteProperty -Name Name -Value $c.Name
    $mc | Add-Member -Type NoteProperty -Name Email -Value $c.PrimarySmtpAddress
    $mailContacts += $mc
}
And finally, you can take your variable and display it on the screen, or output it to a csv file using the final command:

$mailContacts | Export-Csv Mail-contacts.csv -NoTypeInformation

If you output it to your screen, this is what it’ll look like:

And here’s the csv output:

I think it might be helpful to break down this array a bit, because it was something I saw the first few times without really understanding what it was doing. When you build it, you start out with a variable of your choice (in this case, $mc) and use it to create a new PowerShell object. The next two lines take the same variable and add properties to it by piping the variable into the Add-Member cmdlet. You can put as many properties as you like into your object – just keep adding new lines, giving each property a unique name, and populating it with a value that you’ve gathered earlier. You can even use this method for reporting in your scripts, as you can populate these values however you see fit.

At the end of each loop iteration, you add the object to the array with $mailContacts += $mc. The result is a table with one row per contact, containing the Name and Email address of each mail contact in your $contacts variable.
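As a side note – and this is my own variation, not part of the original script – you can get the same output more efficiently with [PSCustomObject], letting PowerShell collect the loop output instead of growing the array one element at a time:

```powershell
# Capture the loop output directly instead of appending with +=
$mailContacts = foreach ($c in $contacts) {
    [PSCustomObject]@{
        Name  = $c.Name
        Email = $c.PrimarySmtpAddress
    }
}

$mailContacts | Export-Csv Mail-contacts.csv -NoTypeInformation
```

Appending to an array with += copies the whole array on every iteration, so this version scales much better on large contact lists.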

There you go – quick and easy, and more than likely the building block for the many different scripts you’ll create over time. Good luck, have fun!

Troubleshooting ADFS/CBA: Error 342

I ran into this error today while configuring Certificate Based Authentication (CBA), and it was a weird enough of an issue that I thought it would be useful to post it, and share the fix.

After configuring my CRL so that it was published publicly (this is required for both your Root CA, as well as your Enterprise CA), and installing my certificates on both my ADFS servers and WAP servers (again, both the Root CA certificate and the Enterprise CA certificate are required), CBA was still failing when trying to log in to the Office 365 Portal.

Well, we’re no stranger to error logs and troubleshooting, right? Off we go to the ADFS logs to see what’s going on.

The Error: Event ID 342

This error basically states that it couldn’t build the trust chain for the certificate, usually because it can’t properly access your CRL all the way up the line.

I knew this wasn’t the case, because I had already tested that using one of my issued certificates – the command to do this is:

certutil -f -urlfetch -verify certname.cer

(replace certname.cer with the name of your cert)

This command will go through and check all of the URLs listed on the cert and verify connectivity to them – it’s great for checking your CRL/CDP/AIA distribution points and making sure that they’re all accessible internally and externally.

Next, I checked all my certificates on the local computer certificate store to verify that I didn’t have any old certificates, duplicates with wrong information, etc. – everything was as it was supposed to be. I eventually found an answer indirectly on this forum post – it didn’t list my issue exactly, or provide the fix I used, but it DID provide me with the tools I needed to figure it out.

The Fix: clear out old certificates

It turns out that the issue was being caused by old certificates sitting in the NTAuth store on my ADFS servers – it’s bizarre, because I had deleted all my old certificates and replaced them with new ones containing updated CRL distribution points, etc. However, that did not clear them out of this certificate store, as these certificates are being pulled directly from Active Directory.

Here’s how you check for these little deviants, and how to get ’em all fixed up:

Start by running the following command:

certutil -viewstore -user -enterprise NTAuth

(like so)

This will pop up a view of your NTAuth certificate store: scroll through the list of certificates until you find the one relating to your Enterprise CA:

Now, you can see that the certificate is definitely still valid (not expired) – however, I know that I updated my CRL & AIA locations and the new certificate that I’ve installed on all my servers is valid from today’s date, not August 2017.

Next, open the certificate properties by clicking on the link below the date, and note the thumbprint of the certificate:

Next, open the registry, and match that certificate thumbprint against the certificates found in HKLM\Software\Microsoft\EnterpriseCertificates\NTAuth\Certificates.

Then I simply deleted the registry key that matched that thumbprint (always make a backup of your reg key before you delete it!). When I then checked my NTAuth store by running the command above, that Enterprise CA certificate was completely gone.
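For that backup-and-delete step, something along these lines from an elevated command prompt works (the thumbprint value is a placeholder – use the one you noted earlier):

```shell
:: Export the key as a backup before touching it
reg export "HKLM\SOFTWARE\Microsoft\EnterpriseCertificates\NTAuth\Certificates\<thumbprint>" ntauth-backup.reg

:: Then remove the stale certificate entry
reg delete "HKLM\SOFTWARE\Microsoft\EnterpriseCertificates\NTAuth\Certificates\<thumbprint>" /f
```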

Finally, to update the NTAuth store and pull in a new certificate, I ran the following command:

certutil -pulse

Now when I check my NTAuth store, I can see that it’s pulled in the correct certificate:

You can, of course, verify this by opening the certificate and making sure that the thumbprint matches your current certificate, and that the correct CRL & AIA distribution points are listed. Once this was done, my trust chains were able to build correctly, and certificate based authentication immediately started working. 😀

There you have it… if you’re struggling to get CBA configured, and you know you’ve updated all your certs with the correct CDP, give this a shot and see if it solves your problem!

PowerShell: Connect to Lync Online

The issue: unable to discover PowerShell endpoint URI

I don’t run into this error very often, but it’s happened enough times in the last few weeks that I really wanted to come up with a permanent/elegant solution. This error can happen when Lync/Skype is configured in a hybrid deployment, and autodiscover is pointing back on-prem – when trying to connect to Lync Online using the New-CSOnlineSession cmdlet, you receive the following error:

The Fix: Override Admin Domain

The solution is simple – all you need to do is add the -OverrideAdminDomain switch to your connection script. You can add the admin domain permanently to your script, and be done with it. For me, however, I often end up connecting to multiple environments depending on the projects I’m working on, or supporting different clients, etc. I wanted a more elegant solution, so I came up with a way of automating that process so that I can connect to any environment just by putting in my logon credentials. The script will check and find the onmicrosoft domain, and then use that to connect to a new CSOnline session with that domain specified as the admin domain.

This is what the script looks like:

$credential = Get-Credential
Connect-MsolService -Credential $credential

# Find the root (onmicrosoft.com) tenant domain
Write-Host "Connected to MS Online Services, checking admin domain..." -ForegroundColor Yellow
$msolDomain = Get-MsolDomain | Where-Object {$_.Name -match "onmicrosoft.com" -and $_.Name -notmatch "mail.onmicrosoft.com"}

Write-Host "Admin domain found, connecting to $($msolDomain.Name)" -ForegroundColor Green

# Use this domain to connect to the SfB admin domain
$session = New-CsOnlineSession -Credential $credential -OverrideAdminDomain $msolDomain.Name
Import-PSSession $session

And there you go… connected properly, every time!

Feel free to download the script, and add it to your toolkit – hope it helps!

Resize Azure Managed Disk

I recently was trying to resize the OS Disk of an Azure VM that I had just created, and ran into an error while using these instructions. In case you missed it, make sure you stop/deallocate your VM before trying to update the disk – otherwise it’ll just fail on you.

For the record, this is what the PowerShell looked like that I was trying to use:

$rgName = 'My-Resource-Group'
$vmName = 'My-VM'
$vm = Get-AzureRmVM -ResourceGroupName $rgName -Name $vmName
$vm.StorageProfile.OsDisk.DiskSizeGB = 1023

Update-AzureRmVM -ResourceGroupName $rgName -VM $vm

No dice! 🙁

Update-AzureRmVM : Managed disk size via virtual machine ‘My-VM’ is not allowed. Please resize disk resource at /pathtomanageddisk/diskname.

Error code: ResizeDiskError

Now, the reason for this is that I had created this new VM using managed disks, and you can’t update those directly using the Update-AzureRmVM command. It took a little bit of digging to figure out how to update that managed disk, so I figured I’d post how I’d done it, in the hopes that it’ll help someone else out.

Since it’s a managed disk, running Get-AzureRmStorageAccount will not show you your disk – instead, you need to run Get-AzureRmDisk.

You can see that my disk size is 127GB, and not the glorious Terabyte I’m hoping to see:

Now that you’ve found your disk, go ahead and grab the name of your disk (or pull it into a variable if you prefer) and then simply update the size of the disk with the following command:

New-AzureRmDiskUpdateConfig -DiskSizeGB 1023 | Update-AzureRmDisk -ResourceGroupName $rgName -DiskName MyVM_OsDisk_1_crazylongnumbers

Verify that your disk looks correct by running

Get-AzureRmDisk -DiskName MyVM_OSDisk_1_crazylongnumbers

Next, Start your VM back up using the following command:

Start-AzureRmVM -ResourceGroupName $rgName -Name $vmName

(or use the Azure portal if you prefer, but c’mon… PowerShell!!)

Once back in your VM, you’ll see that your volume size is unchanged – Windows can see the newly allocated space, but it doesn’t automatically extend your existing volume into it.

Right click on the start menu, and select Disk Management. You can see your disk now has unallocated space to match what you specified in PowerShell:

Go ahead and select the disk and extend it to fill the usable space, and you’re good to go!

Hope this helps – if this helped you, or you have any questions, feel free to shoot me a comment below.

MigrationPermanentException: Cannot find a recipient that has a mailbox GUID

Ran into the following error while attempting to offboard a mailbox (migrate it back on prem from Exchange Online):

MigrationPermanentException: Cannot find a recipient that has mailbox GUID ‘xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx’

This error is usually caused when a mailbox is created directly in the cloud (New > Office 365 Mailbox in the on-prem Exchange console). The Exchange mailbox GUID doesn’t actually get written back on-prem in this case, so trying to offboard the user results in this error. You can confirm this by connecting to your on-premises Exchange Shell and running Get-RemoteMailbox username | FL ExchangeGUID.

To fix this problem, connect to Exchange Online and find the GUID that’s missing:

Then simply take that GUID and write it back to your on prem remote mailbox object:

Force a DirSync, and all is well with the world!
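For reference, those two steps boil down to something like this (the username and GUID are placeholders – run the first command in Exchange Online, the second in your on-prem Exchange shell):

```powershell
# In Exchange Online: find the GUID of the cloud mailbox
Get-Mailbox username | Format-List ExchangeGUID

# On-prem: stamp that GUID onto the remote mailbox object
Set-RemoteMailbox username -ExchangeGUID "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
```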

Exchange Online Hybrid: Fixing free/busy issues

Now, I’m just going to come out and say it – this is NOT the only fix for free/busy issues when configuring Exchange Online Hybrid with an on-prem Exchange server. If you’re reading this, then it’s more than likely that you (like me), have been reading countless TechNet articles, blog posts, forum posts, etc. Well, at the end of it all, this was the fix for my free/busy issues, and I thought others might benefit by finding this ahead of time, and hopefully cut out some of the Googling Binging… 😉

The Problem:

Pretty straightforward – users on prem could not see the free/busy status of users in Office 365. I worked my way through every setting I could think of, including (but not limited to) Autodiscover, DNS, permissions, certificate settings, Exchange CU level, to no avail!

Also, if you haven’t seen this before, the hybrid environment free/busy troubleshooter was actually a great help in systematically working your way through potential problem spots.

The Solution:

Eventually, I came across this TechNet blog post which gave me the answer I needed – now, I will say that I’ve never had to set this before, and never noticed this setting missing on previous hybrid configs, but anyway…

In my on-prem environment, the TargetSharingEpr setting was blank, like so:

Thankfully, the fix for this is simple – run the following from an elevated PowerShell prompt:

Get-OrganizationRelationship | Set-OrganizationRelationship -TargetSharingEpr https://outlook.office365.com/EWS/Exchange.asmx

This is what it should look like when you’re done:

I also checked my Exchange Online org settings and found that the TargetSharingEpr was also blank:

Now, I wasn’t having any issues with free/busy in this direction, but I thought I’d go ahead and update it anyway – just in case. Make sure that this time around, you’re connecting to Exchange Online, and not your on-prem Exchange, and point it back to your EWS endpoint:

Get-OrganizationRelationship | Set-OrganizationRelationship -TargetSharingEpr https://mail.yourdomain.com/EWS/Exchange.asmx

(I don’t have to tell you that needs to be updated to your own hybrid namespace, do I?) 😛

When you’re done, it should look like this:

There you have it – hope this helps someone else solve some free/busy issues without having to spend hours of frustration trying everything else!

Add Azure AD Trusted Certificate Authority

Scott Duffey has put together some excellent articles (four parts in total) around setting up Azure AD based CBA, and deploying certificates to mobile devices. It’s worked really well as a guideline for me in setting up certificate based authentication in production environments – however, there’s one scenario that isn’t covered in these articles, and if you’re running a two-tier PKI architecture, you’re going to have some headaches.

Part 2 of the series discusses how to configure your Azure AD as a Certification Authority, but it only shows you how to add your root CA as your trusted certificate authority. If you have a Root CA and an Enterprise or Intermediate CA, you need to upload both certificates into Azure AD. Without this step, your CBA won’t work because your certificate trust chains won’t properly build out. Also, make sure that you publish all required CRLs – if you have a Root CA as well as an Intermediate or Enterprise CA, make sure that both CRLs are publicly available, as you’re going to be setting those URLs using the PowerShell script below.


# Find existing Certification Authorities
Get-AzureADTrustedCertificateAuthority | FL

# Install Root CA (AuthorityType=0). CRL Distribution Point should be the CRL of the Root CA
$rootcert = Get-Content -Encoding Byte "C:\users\username\Desktop\AzureCA\RootCA.cer"
$new_rootca = New-Object -TypeName Microsoft.Open.AzureAD.Model.CertificateAuthorityInformation
$new_rootca.AuthorityType = 0
$new_rootca.TrustedCertificate = $rootcert
$new_rootca.crlDistributionPoint = "http://crl.yourdomain.com/RootCA.crl"
New-AzureADTrustedCertificateAuthority -CertificateAuthorityInformation $new_rootca

# Install Enterprise CA (AuthorityType=1). CRL Distribution Point should be the CRL of the Enterprise CA
$entcert = Get-Content -Encoding Byte "C:\users\username\Desktop\AzureCA\EntCA.cer"
$new_entca = New-Object -TypeName Microsoft.Open.AzureAD.Model.CertificateAuthorityInformation
$new_entca.AuthorityType = 1
$new_entca.TrustedCertificate = $entcert
$new_entca.crlDistributionPoint = "http://crl.yourdomain.com/EntCA.crl"
New-AzureADTrustedCertificateAuthority -CertificateAuthorityInformation $new_entca

# Remove an existing Certification Authority – [0] for first cert, [1] for second, etc.
$c = Get-AzureADTrustedCertificateAuthority
Remove-AzureADTrustedCertificateAuthority -CertificateAuthorityInformation $c[1]

The key point above is using AuthorityType=0 for your Root CA, and AuthorityType=1 for your Enterprise CA (the CRL URLs shown are placeholders – use your own publicly published CRL locations). I also added a section that will allow you to clear out your certificates and start over if you need to – just use [0] to remove your first cert, [1] to remove your second, and so on.

Hope this helps!