Master & Cmd-R

G-Suite to Office 365: Meeting Room Interop

One of the challenges when migrating from G Suite to Office 365 is coexistence – mail routing is not that hard to configure, and free/busy (Calendar Interop) is now available and works fairly well. Google's instructions are pretty straightforward and are available here. Just remember that you need to disable your users' calendars in G Suite in order for Interop to head across to Office 365 and look up availability. Without that, users will only ever see their Google Calendars, and free/busy lookups will fail.

But…

This is all well and good – however, Calendar Interop only works with users, not resource mailboxes.


The reason why resource mailboxes are not supported (to the best of my knowledge) is because a resource calendar in G Suite is not a user object like it is in Exchange Online – instead it’s a unique calendar object that ends in @resource.calendar.google.com… there’s no way for an Exchange Online org to federate with that!


Now what?

Since I knew I wasn't going to win with free/busy flowing towards that resource in Google, I turned my attention to Exchange Online, where I know exactly what my calendar interop options look like. As expected, if I create a meeting room in Exchange Online and a matching user object in G Suite (calendar disabled), free/busy flows as expected – that's at least a step in the right direction. The next step is making sure that this calendar can still accept and process meeting requests even though they're coming from outside the org. Turns out it's not that hard – here's what you need to do.

First off, create new resource mailboxes in Exchange Online. If you're planning to migrate the existing calendars, you can use a third-party tool like MigrationWiz, or manually export them from your source and re-import them using Outlook. In this instance I used MigrationWiz, and it migrated them over quickly and cleanly. I still exported them anyway, just so I'd have a backup – never hurts to have a way back if things go sideways!
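If you prefer the shell over the admin portal for that first step, creating a room mailbox is a one-liner – the name and alias below are placeholders:

# Create a new room (resource) mailbox in Exchange Online
New-Mailbox -Name "My Boardroom" -Alias myboardroom -Room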

Now that the calendars have been migrated (and backed up), we need to configure the interop and booking options – start in G Suite by deleting your migrated meeting rooms, and then create new user accounts with the exact same name as your deleted resource calendars.


You can of course use different names, but this will make it easier for users to find the rooms they’re looking for without a lot of extra effort. As long as the email address matches the email of the resource mailbox in Exchange Online, free / busy lookup will work properly.

The next step is to configure mail routing – make sure that your resource mailbox in Exchange Online has a secondary SMTP address that you can route to. To keep things simple, I just use the onmicrosoft.com domain. To configure mail routing in G Suite, go to Apps > G Suite > Gmail > Advanced Settings > Recipient Address Map and click Edit.

Under option 3, type in the name of your source and target address, separated by a comma, like so:

myboardroom@domain.com,myboardroom@domain.onmicrosoft.com


Click Save, and then Save again to make sure your changes are properly applied. After this, any email sent from Gmail, or even externally (since your MX records should still be pointed at Google), will route properly to your mailbox in Office 365.
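On the Exchange Online side, if the room doesn't already have that secondary onmicrosoft.com address, you can add it from PowerShell – a quick sketch using the same placeholder addresses as above:

# Add a routable onmicrosoft.com alias to the room mailbox
Set-Mailbox myboardroom -EmailAddresses @{add="myboardroom@domain.onmicrosoft.com"}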

At this point, you should have free / busy and mail routing working correctly – all that’s left to do is to configure your calendar in Exchange Online to accept meeting requests from Google users.

Go ahead and log into Exchange Online through PowerShell, and run the following command:

Set-CalendarProcessing my_boardroom -ProcessExternalMeetingMessages $true

Confirm that your settings are properly applied by running this command:

Get-CalendarProcessing my_boardroom | fl AutomateProcessing,AllRequestInPolicy,ResourceDelegates,ProcessExternalMeetingMessages


Once ProcessExternalMeetingMessages and AutomateProcessing are set correctly, your meeting requests will be processed and booked (or declined) based on the rules you've configured in your resource calendar processing settings.

I'm sure you're wondering why I've included the settings for AllRequestInPolicy and ResourceDelegates – I've run into this a few times, so I figured I'd include them here so that the next time I'm trying to figure out why my rooms aren't auto-accepting meeting requests, I'll come back here and remember what I need to do! 😀

Once you've assigned a delegate on a meeting room, AllRequestInPolicy switches to False, and the delegate starts receiving the meeting requests instead of them being auto-approved. I've found this to be the case even when I've told the GUI to accept or decline booking requests automatically.



To be honest, I haven't managed to get it working with both a delegate and auto-accept enabled at the same time – it's only ever been one or the other. So, to get everything working properly, set AllRequestInPolicy to $true and clear out ResourceDelegates, and you'll be good to go.
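If you want to set all of that in one shot from PowerShell, a single line along these lines does the trick (my_boardroom is the same placeholder room name as above):

# Auto-accept, allow all in-policy requests, clear delegates, and process external requests
Set-CalendarProcessing my_boardroom -AutomateProcessing AutoAccept -AllRequestInPolicy $true -ResourceDelegates $null -ProcessExternalMeetingMessages $true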

Rooms, how do I book?

The final piece to mention in all this is that once you've deleted these resource calendars in G Suite, users will no longer be able to pick them as rooms when they're creating a meeting. This might seem obvious to you, but if you haven't planned to communicate these changes to your users, you're going to end up with more support calls and unhappy people!

The first thing to remember is that your rooms will now be gone – when someone clicks on the Rooms tab in a meeting window, they’re going to only see the meeting rooms you’ve left behind (or none, if you’ve moved them all).


Instead, users can click on the Guests tab, and start typing in the name of the board room they want to book:


You see now why we used the same names for our new accounts as the rooms we deleted? Your users will love you for it!

Next, click on Find A Time, and you can see the availability of the room you want to select:


Once you have your meeting details set, click Save, then Send – within a few moments you should receive confirmation that your room has been booked in Office 365. Checking the calendar again, you’ll see your meeting booked as intended:


One last important change to remember is that if your users are used to adding the resource calendar to their calendar in Google, that won’t work any longer – all they’re going to get is this error message:


This is expected, since we’ve created an account without a calendar so that Interop works properly – simply show your users the New Way, and everybody will be happy(ish) again!

Hope this helps 😀

MVP 2018-2019!

What used to be my annual New Year’s Day obsession has now moved to July 1st:


The Microsoft MVP community is an incredible bunch of folks who are both really smart and genuinely love sharing their knowledge with the broader community. It’s definitely an honor for me to be renewed once again, and I’m looking forward to another amazing year in Office 365!

The Case of the Missing Mailbox Permissions

Just ran into this today where there was a discrepancy between the permissions that were showing up in the Office 365 Admin Portal, in the Exchange Admin Center, and in PowerShell.

From the Exchange Admin Center, you could only see a single user added with Full Access:


However, if you look at the Office 365 Portal, it shows that there are two users with the “Read and manage” permission:


Looking at the permissions in PowerShell, I noticed something interesting… the user who isn't showing up in the EAC has Deny set to True on their permissions:

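If you want to surface those hidden entries yourself, a quick filter like this will do it (the mailbox name is a placeholder):

# List only the Deny entries on the mailbox
Get-MailboxPermission -Identity user | Where-Object { $_.Deny -eq $true } | Format-List User,AccessRights,Deny,IsInherited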

Even weirder still, trying to remove those mailbox permissions just gave me an error, like so:


I figured I'd try to see if I could update those permissions and change the Deny from True to False, but no success. I also tried adding the user back in to reset their permissions, but they just got added a second time, and then had both a Deny -eq True and a Deny -eq False entry!

Eventually this is what fixed it for me:

Remove-MailboxPermission -Identity user -User delegate -AccessRights FullAccess -Deny

Remember that in this cmdlet, "-Identity" is the mailbox you want to edit permissions on, and "-User" is the person whose permissions you're adding or removing. As soon as I ran that command, it removed the Deny permissions and left the Allow permissions intact. Better still, the Admin Portal, Exchange Admin Center, and PowerShell all told the same story again!


I don't know how those Deny permissions got there in the first place, but ultimately, remember this – if you come across a user with funky permissions and Deny -eq True, the Deny permissions will always overrule any Allow permissions that have been granted. Deal with those first, and all will be well with the world again.

Troubleshooting Hybrid Azure AD Join

Hybrid Azure AD Join and Conditional Access

One of the cool features of Azure AD Conditional Access policies is being able to require that machines be domain joined, essentially locking access down to corporate devices only and preventing non-managed or non-trusted devices from reaching your business data. You can see from the screenshot below that there's a fair amount of flexibility involved: for instance, you could select multiple options like I've done below, and your users will be prompted for MFA, but only if their device is not domain joined. If the device is domain joined, the user doesn't get prompted for MFA when accessing the cloud application you've specified.


Even better, if you add the option to require device to be marked as compliant, your user will only get prompted for MFA until they register their device in Azure AD / Intune, at which point their device will be considered trusted, and they’ll no longer be prompted for MFA. Cool, right?

Anyway, we’re here to talk about the third requirement – Hybrid Azure AD join. This is a great option for enforcing corporate compliance, as it requires a device to be joined both to your Active Directory on prem, as well as Azure AD. Note that simply taking a BYOD device and joining it to Azure AD does not fit this requirement – it has to be joined in both places in order for it to be considered hybrid AD joined. If you’re shooting for a more self-service option, this is not it – typically only admins can join a workstation to AD, so your end users will not be able to set themselves up and become a trusted device on their own. However, if you’re trying to lock down your environment and prevent personal devices from connecting to your corporate data, this is the option for you!

Setting up Hybrid Azure AD join is actually pretty straightforward – I won't get into the details here; the setup documentation covers it well, so go give it a read if you haven't seen it yet. However, what happens when you have some devices that are not joining Azure AD automatically? This happened to me recently while working on a deployment project, and here's what it took to fix it – at least in my case…

What happens when devices don’t join?

Troubleshooting this one was difficult at first, as we couldn't find a pattern for what was causing some machines to fail, and we weren't finding any error messages that were particularly helpful in tracking down the root cause. Windows 10 is also challenging because the hybrid AD join happens automatically – at least with Windows 7 devices there's an executable that gets deployed, which gives you a bit more flexibility in forcing the join and troubleshooting why it isn't happening. I discovered later that Windows 10 also has this ability, just done a bit differently – more on that in a bit.

At any rate, after doing a bit of digging, I was able to find the error messages showing why my machines weren’t joining. If you’re looking on the client machine, you’ll find these events in Event Viewer – expand the Applications and Services Logs, click on Microsoft, Windows, User Device Registration, and then Admin.



If your device isn’t joining, you’re more than likely going to find Event ID 304 and Event ID 305, which are remarkably unhelpful:



I mean, seriously – I ALREADY KNOW that they’re failing at the join phase!

I spent a fair amount of time troubleshooting everything I could find – the Windows version (getting all the updates done), the latest version of AAD Connect, checking for updates to ADFS, troubleshooting my claims rules, recreating them, etc.

The suggestions in this post were helpful, but something was still missing. Particularly useful, though, was this little tidbit of information: you can run the dsregcmd utility in Windows 10 with a number of different switches to report back on device join information (dsregcmd /status), and you can even use the same utility to force an immediate Azure AD join attempt and spit out the results to a text file to help with your troubleshooting. Note that dsregcmd needs to run as System, so you'll need PsExec to get your commands running in the correct context.

psexec -i -s cmd.exe

dsregcmd /debug > c:\users\username\desktop\workstationJoin.txt

You can crack that text file open and start looking through it to see if you can find your answer. Sadly, though, all the digging I was doing wasn’t getting me anywhere, so I opened up a Premier support ticket to see if Microsoft could shed some light on my problem here. In all honesty, this is one of the few times when I’ve opened a Microsoft support ticket and got the answer to my problem quickly – so kudos to them this time around!

Anyway, you’re here to find out what the answer was, and here it is: I had two ADFS claims rules that supplied the immutable ID claim, and they were conflicting with each other.


Here's what happened… when ADFS was originally deployed (not by me), the federation trust was created with the –SupportsMultipleDomains switch, which automatically adds a claims rule for the issuer ID. This is the recommended approach to federation, as it allows you to easily add federated domains down the line – however, it's that extra rule that was causing me problems.

This is the rule that was created:

c:[Type == "http://schemas.xmlsoap.org/claims/UPN"]
=> issue(Type = "http://schemas.microsoft.com/ws/2008/06/identity/claims/issuerid", Value = regexreplace(c.Value, ".+@(?<domain>.+)", "http://${domain}/adfs/services/trust/"));

And then this is the rule that gets created when you are supporting multiple domains for device registration:
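For reference, the multi-domain device registration rule looks roughly like the following – I'm reproducing it from memory of the Microsoft-provided script, so treat it as a sketch and compare it against the rule in your own ADFS console rather than copying it verbatim:

c1:[Type == "http://schemas.xmlsoap.org/claims/UPN"] &&
c2:[Type == "http://schemas.microsoft.com/ws/2012/01/accounttype", Value =~ "^(?i)User$"]
=> issue(Type = "http://schemas.microsoft.com/ws/2008/06/identity/claims/issuerid", Value = regexreplace(c1.Value, ".+@(?<domain>.+)", "http://${domain}/adfs/services/trust/"));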


As you can see, the rule is a bit different, and this second rule contains the accounttype = “user” claim as well.

Basically, device registration won't work with both of those rules in place – the second one is the only one you need. It also won't work with just the first rule in place (which is how I had originally set it up). When I configured hybrid Azure AD join, I set it up without multiple domain support, because I didn't realize the federation had been set up that way in the first place. Since the first rule checks only the UPN and not the account type (the c1 && c2 pair above), it won't allow device registration to work properly in a multi-domain environment. As you'd expect, I went back and added the claims rule for multiple domain support as part of my troubleshooting, but that still won't resolve the issue while the first claims rule is in place. Thankfully, the solution was easy – delete the original claims rule and keep only the second one (the rule that supports device registration), and your devices will start to register.

TL;DR… give me the short version!

If you’re following these instructions to set up hybrid Azure AD, you’ll more than likely use the script to set up the claims rules – highly recommended, it works well. Just make sure to check beforehand if your federation was set up to support multiple domains so that you can configure your claims rules appropriately.

You can find out if your federation supports multiple domains by running Get-MsolFederationProperty -DomainName mydomain.com – if the Federation Service Identifier is different between the ADFS Server and the Office 365 Service (screenshot below), then your federation is set up to support multiple domains. If they're both the same, then you're configured to support only a single domain.


If your federation supports multiple domains, make sure to provision the correct rules using the script Microsoft provides, and delete your original claims rule – otherwise things won’t work properly afterwards.

After this was done, my workstations started joining Azure AD correctly on next reboot, and my pages and pages of error messages started going away. Good times were had by all!

Use PowerShell to find Mail Contacts

It seems like one of the tasks I do the most on projects is discovery and documentation of existing settings in a client’s environment. While there’s a number of reports in the Office 365 portal, I find that nothing beats PowerShell for getting just what I want when I want it – and this time was no exception!

Here's a quick script that finds all the mail contacts in an environment and outputs their name and primary SMTP address to a CSV file – good for a report, or as step one in a migration, when you can turn around and use the CSV file to create contacts in the new environment.

$mailContacts = @()
$contacts = Get-MailContact -ResultSize Unlimited

foreach ($c in $contacts){

   $mc = New-Object System.Object
   $mc | Add-Member -type NoteProperty -Name Name -Value $c.Name
   $mc | Add-Member -type NoteProperty -Name Email -Value $c.PrimarySmtpAddress
   $mailContacts += $mc

}

$mailContacts | Export-Csv Mail-contacts.csv -NoTypeInformation

Let me break down what we’re doing here – this command creates an empty array for us to hold our data:

$mailContacts = @()

And then this command does a quick Get to pull all our mail contacts into a variable.

$contacts = Get-MailContact -ResultSize Unlimited

From there, I use a foreach statement to iterate through each contact, and add it to the array we’ve created:


foreach ($c in $contacts){

   $mc = New-Object System.Object
   $mc | Add-Member -type NoteProperty -Name Name -Value $c.Name
   $mc | Add-Member -type NoteProperty -Name Email -Value $c.PrimarySmtpAddress
   $mailContacts += $mc

}

And finally, you can take your variable and display it on the screen, or output it to a csv file using the final command:

$mailContacts | Export-Csv Mail-contacts.csv -NoTypeInformation

If you output it to your screen, this is what it’ll look like:


And here’s the csv output:


I think it might be helpful to break this down a bit, because it was something I saw a few times before I really understood what it was doing. For each contact, you start with a variable of your choice (in this case, $mc) and use it to create a new PowerShell object. The next two lines take that same variable and add properties to it by piping it into the Add-Member cmdlet. You can add as many properties as you like – just keep adding new lines, giving each property a unique name and populating it with a value you've gathered earlier. You can even use this method for reporting in your scripts, since you can populate these values however you see fit.

At the end of it all, you complete each pass through the loop with $mailContacts += $mc, which adds the object to the array. The result is a single row in the table, with the Name and Email address of each mail contact in your $contacts variable.
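As a side note, if you're on a more recent version of PowerShell, the same report can be built a little more tersely with [PSCustomObject] – a minimal sketch that produces the same CSV:

# Same report, using [PSCustomObject] instead of New-Object / Add-Member
$mailContacts = foreach ($c in Get-MailContact -ResultSize Unlimited) {
   [PSCustomObject]@{
      Name  = $c.Name
      Email = $c.PrimarySmtpAddress
   }
}
$mailContacts | Export-Csv Mail-contacts.csv -NoTypeInformation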

There you go – quick and easy, and more than likely the building block for the many different scripts you’ll create over time. Good luck, have fun!


Troubleshooting ADFS/CBA: Error 342

I ran into this error today while configuring Certificate Based Authentication (CBA), and it was a weird enough of an issue that I thought it would be useful to post it, and share the fix.

After configuring my CRL so that it was published publicly (this is required for both your Root CA, as well as your Enterprise CA), and installing my certificates on both my ADFS servers and WAP servers (again, both the Root CA certificate and the Enterprise CA certificate are required), CBA was still failing when trying to log in to the Office 365 Portal.

Well, we’re no stranger to error logs and troubleshooting, right? Off we go to the ADFS logs to see what’s going on.

The Error: Event ID 342

This error basically states that it couldn’t build the trust chain for the certificate, usually because it can’t properly access your CRL all the way up the line.


I knew this wasn’t the case, because I had already tested that using one of my issued certificates – the command to do this is:

certutil -f -urlfetch -verify certname.cer

(replace certname.cer with the name of your cert)

This command will go through and check all of the URLs listed on the cert and verify connectivity to them – it’s great for checking your CRL/CDP/AIA distribution points and making sure that they’re all accessible internally and externally.

Next, I checked all my certificates on the local computer certificate store to verify that I didn’t have any old certificates, duplicates with wrong information, etc. – everything was as it was supposed to be. I eventually found an answer indirectly on this forum post – it didn’t list my issue exactly, or provide the fix I used, but it DID provide me with the tools I needed to figure it out.

The Fix: clear out old certificates

It turns out that the issue was being caused by old certificates sitting in the NTAuth store on my ADFS servers – it’s bizarre, because I had deleted all my old certificates and replaced them with new ones containing updated CRL distribution points, etc. However, that did not clear them out of this certificate store, as these certificates are being pulled directly from Active Directory.

Here’s how you check for these little deviants, and how to get ’em all fixed up:

Start by running the following command:

certutil -viewstore -user -enterprise NTAuth

(like so)


This will pop up a view of your NTAuth certificate store: scroll through the list of certificates until you find the one relating to your Enterprise CA:


Now, you can see that the certificate is definitely still valid (not expired) – however, I know that I updated my CRL & AIA locations and the new certificate that I’ve installed on all my servers is valid from today’s date, not August 2017.

Next, open the certificate properties by clicking on the link below the date, and note the thumbprint of the certificate:


Next, open the registry, and match that certificate thumbprint against the certificates found in HKLM\Software\Microsoft\EnterpriseCertificates\NTAuth\Certificates.


Then I simply deleted the registry key that matched that thumbprint (always make a backup of your reg key before you delete it!). This time, when I checked my NTAuth store by running the command above, that Enterprise CA certificate was completely gone.
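If you'd rather handle the backup and delete from PowerShell instead of regedit, something along these lines works – the thumbprint value is a placeholder for the one you noted earlier:

# Back up the NTAuth certificates key, then remove the subkey matching the old thumbprint
reg export "HKLM\Software\Microsoft\EnterpriseCertificates\NTAuth\Certificates" "$env:USERPROFILE\Desktop\ntauth-backup.reg"
$oldThumbprint = 'PASTE-THE-OLD-THUMBPRINT-HERE'
Remove-Item -Path "HKLM:\Software\Microsoft\EnterpriseCertificates\NTAuth\Certificates\$oldThumbprint" -Recurse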

Finally, to update the NTAuth store and pull in a new certificate, I ran the following command:

certutil -pulse


Now when I check my NTAuth store, I can see that it’s pulled in the correct certificate:


You can, of course, verify this by opening the certificate and making sure that the thumbprint matches your current certificate, and that the correct CRL & AIA distribution points are listed. Once this was done, my trust chains were able to build correctly, and certificate based authentication immediately started working. 😀

There you have it… if you're struggling to get CBA configured, and you know you've updated all your certs with the correct CDP, give this a shot and see if it solves your problem!

Office ProPlus: Access to Group Files

This is something that we’ve been waiting to see since April 2017, but I’m so happy to see it’s actually there now!

I looked around and couldn’t find this change listed on the Office 365 Roadmap, or even on the Office Insider sites – kudos to Michiel van den Broek on the Microsoft Tech Community for calling it out.

Dude, where’s my files?

Ever since Teams was announced, I’ve been hoping for better integration between Office and Teams – especially around opening and saving your documents within your Team Sites. The navigation was always clunky, and looked pretty much like this:


Prior to this change, trying to navigate your Teams document repositories that weren’t either pinned or part of your recent documents meant that you’d have to try and navigate through the SharePoint sites, like this:


Needless to say, the only way that made sense was navigating to the files in Teams, and then opening them in Word, or Excel, or maybe doing the same thing through SharePoint. The reverse of that would be to save a new file somewhere local, and then copy it over/upload it to your Team.

New and Shiny!

After installing the latest Office Insiders build (Version 1712 – Build 8827.2131 Click to Run), I was greeted by the following:


TADA!! I can easily navigate all my Groups and Teams from within Office Applications!

Clicking on Open when Recent is selected still looks the same:


But if you click on Sites again, you can navigate your Teams sites with ease!


I know it’ll take a while before these changes get out of the Insiders Track and into general availability, but if you want to get early access to these and whatever other cool features are coming down the line, you can sign up to be an Office Insider here.

Error Opening Office 365 Meeting Requests

Ran into this issue recently, and it’s a bit of a weird one that you might possibly run into as you’re migrating to Office 365.

Here’s the scenario:

A cloud user sends a meeting invite to someone on prem – the on-prem user accepts the invitation, and everything's all good (so far). The cloud user then updates the meeting invitation, attaches a document, and sends the update out. All of a sudden, the on-prem user can no longer open the meeting or access the attached document. The error only seemed to happen when already-migrated users sent meeting invites back to on-prem users – sending meeting invites back and forth between cloud users wouldn't reproduce the issue, and on-prem users could also send back and forth just fine.

First thought of course, was that there was something messed up in the sharing settings as the invite crosses orgs, or possibly something broken in hybrid – wouldn’t be the first time some weird firewall rules caused issues in hybrid, right?

However, it wasn't a hybrid issue at all – it was something even weirder. The cloud-to-on-prem side of the scenario was actually a red herring… that was just the only way the problem surfaced. The issue was actually caused by the fact that all the migrated users had also been upgraded to Office ProPlus (2016), while the on-prem users were still on Office 2013!

The issue: HTML Calendaring

Turns out there’s a known issue that can happen with meeting requests that are created in Outlook 2016 and then opened in Outlook 2013. According to the support article, the error has to do with the way Outlook 2016 formats calendar requests, particularly meeting requests that have “table content, embedded images, and attachments”.

I tried to reproduce the issue in my own environment, and here’s what I found…

Starts with a regular calendar invite – everything works fine:


Then I went back and added a document, table, and image to the meeting request (might as well do them all, right?) and sent out the update:


Sure enough, opening the meeting request in Outlook 2013, the HTML elements (image, table, attachments) are either messed up, missing, or corrupted. If the message appears corrupted, the recipient won't be able to open it at all.


In my case, I was still able to open the meeting request and the attachment, but the content was definitely messed up.

Interestingly enough, the content looks fine in OWA:


The Fix:

Option 1: Upgrade to Office ProPlus

If you have the ability to upgrade your users to Office ProPlus, this is by far your best option – your users will be using the latest and greatest, and won’t run into issues like this again.

Option 2: Update Office + Registry Settings

According to the support bulletin, you need to install KB3127975, and then add these registry settings:

  1. Open your registry and navigate to HKEY_CURRENT_USER\Software\Microsoft\Office\15.0\Outlook\Options\Calendar, right-click, and choose New – DWORD (32-bit) Value:


  2. Name the value AllowHTMLCalendarContent, then edit it and change the value data to 1:


Exit the Registry Editor, restart Outlook, and you should be good to go.
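If you'd rather script that change than click through regedit – say, for more than one machine – here's a rough PowerShell equivalent for the current user:

# Create the Calendar key if it doesn't exist, then set AllowHTMLCalendarContent = 1
$calPath = 'HKCU:\Software\Microsoft\Office\15.0\Outlook\Options\Calendar'
if (-not (Test-Path $calPath)) { New-Item -Path $calPath -Force | Out-Null }
New-ItemProperty -Path $calPath -Name 'AllowHTMLCalendarContent' -PropertyType DWord -Value 1 -Force | Out-Null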

I’d still suggest upgrading to Office Pro Plus if you have the licensing to support it – in the meantime, however, this will at least make sure that your users are not having issues opening HTML meeting requests. Hope it helps!

Office 2013 and Modern Auth

Office 2013 and modern auth have a bit of a shaky relationship – once you're working with Office ProPlus, or even 2016, the experience is a whole lot smoother. Sadly, Office 2013 can feel a bit hit-and-miss when you're trying to nail down authentication errors, especially if you can't seem to reproduce a consistent experience with either Modern or Basic authentication. Here are a few things I've run into that will hopefully put you in a good place with Office 2013 and allow you to consistently see a modern auth prompt:

#1. Registry updates

In order to enable Modern Auth in Office 2013, you need to add or update the following registry keys:

[HKEY_CURRENT_USER\Software\Microsoft\Exchange]

“AlwaysUseMSOAuthForAutodiscover”=dword:00000001

[HKEY_CURRENT_USER\Software\Microsoft\Office\15.0\Common\Identity]

“Version”=dword:00000001

“EnableADAL”=dword:00000001
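If you're setting these by hand on a test machine (or just want to verify a GPO actually landed), here's a PowerShell equivalent for the current user – a sketch, assuming the keys may not already exist:

# Enable modern auth (ADAL) for Office 2013 for the current user
$exchange = 'HKCU:\Software\Microsoft\Exchange'
$identity = 'HKCU:\Software\Microsoft\Office\15.0\Common\Identity'

foreach ($path in $exchange, $identity) {
   if (-not (Test-Path $path)) { New-Item -Path $path -Force | Out-Null }
}

New-ItemProperty -Path $exchange -Name 'AlwaysUseMSOAuthForAutodiscover' -PropertyType DWord -Value 1 -Force | Out-Null
New-ItemProperty -Path $identity -Name 'Version' -PropertyType DWord -Value 1 -Force | Out-Null
New-ItemProperty -Path $identity -Name 'EnableADAL' -PropertyType DWord -Value 1 -Force | Out-Null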

Without these keys added, you’re dead in the water – you’re only ever going to see basic auth. If you’ve already added these registry keys, maybe even pushed them out via GPO, and you’re still seeing basic auth on some computers, it’s time to move on to number two.

#2. Office updates

So here's the kicker – for modern auth to be supported in Office 2013, you need to be patched up to the March 2015 update release. I know, I know… you're a WSUS/SCCM/Intune/Patch Manager wiz, and all your Office clients are 100% patched up to the latest version. Well, we both know that's not quite the case, as you're still reading ;). The reality is that even when you do your best to make sure all your systems are patched and updates are approved on time, there can still be stragglers out there that haven't been receiving their updates – and sometimes they can be YEARS behind!

Here's what to look for: Office 2013 SP1 installs as version 15.0.4569.1506. This is what you'd call a vanilla install – no patches applied yet.


From within the Office client itself, you'll see the version reported under Help – About.


After first round of updates: not there yet


After second round of updates: modern auth will work now


Final round of updates: why wouldn’t you patch all the way?


To get to the March 2015 release, you need to be at version 15.0.4701.1002 – I keep going and patch all the way to current levels, because that's just the kinda guy I am. I know there are sometimes valid reasons to stop at certain patch levels, so just make sure you at least get to the version listed here so that modern auth will work properly.
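If you need to spot-check a machine quickly, a one-liner like this reports the installed Outlook 2013 build – note the path assumes a default 32-bit install, so adjust it for your environment:

# Check the Outlook 2013 build on a machine (path is an assumption for a default 32-bit install)
(Get-Item 'C:\Program Files (x86)\Microsoft Office\Office15\OUTLOOK.EXE').VersionInfo.FileVersion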

#3. Know your auth prompts

When you’re testing or troubleshooting, it’s important to understand what kind of authentication prompt you’re actually getting – this is especially critical if you’re enabling MFA on your user accounts. If they’re not getting a modern auth prompt, they won’t get prompted for MFA, their username & password won’t work, and in fact, the only thing that WILL work is an app password… yuck!

Basic Auth:

Anytime you see this type of authentication window, this means you’re only using basic authentication:


As soon as you see this logon prompt, you know that MFA will fail, and that app passwords (or disabling MFA on that user account) are the only way to keep signing in like this. If you're getting a basic auth prompt, check that your reg keys are applied properly and that Office is fully patched.

Modern Auth:

This is the authentication window you want to see – notice that it's a web form, it'll show your logo if you've configured it, and you will now properly see MFA or whatever other conditional access policies you've put in place:


MFA, BABY!!


And now, finally… Outlook will connect and set up your user’s profile and email will begin to flow yet again.


Hope this helps you nail down your Office 2013 and modern auth experience, and ensures a consistent result, every single time!

PowerShell: Connect to Lync Online

The issue: unable to discover PowerShell endpoint URI

I don’t run into this error very often, but it’s happened enough times in the last few weeks that I really wanted to come up with a permanent/elegant solution. This error can happen when Lync/Skype is configured in a hybrid deployment, and autodiscover is pointing back on-prem – when trying to connect to Lync Online using the New-CSOnlineSession cmdlet, you receive the following error:


The Fix: Override Admin Domain

The solution is simple – all you need to do is add the -OverrideAdminDomain parameter to your connection script. You can hard-code the admin domain into your script and be done with it. For me, however, I often end up connecting to multiple environments depending on the projects I'm working on, supporting different clients, and so on, so I wanted something more elegant. I came up with a way of automating the process so that I can connect to any environment just by putting in my logon credentials – the script finds the tenant's onmicrosoft.com domain and then uses it to create a new CsOnline session with that domain specified as the admin domain.

This is what the script looks like:

$credential = Get-Credential
Connect-MsolService -Credential $credential

# Find the root (onmicrosoft.com) tenant domain
Write-Host "Connected to MS Online Services, checking admin domain..." -ForegroundColor Yellow
$msolDomain = Get-MsolDomain | where {$_.Name -match "onmicrosoft.com" -and $_.Name -notmatch "mail.onmicrosoft.com"}

Write-Host "Admin domain found, connecting to $($msolDomain.Name)" -ForegroundColor Green

# Use this domain to connect to the SFB admin domain
$session = New-CsOnlineSession -Credential $credential -OverrideAdminDomain $msolDomain.Name
Import-PSSession $session

And there you go… connected properly, every time!


Feel free to download the script, and add it to your toolkit – hope it helps!