
Modern public folders logging and when to use it


Hello again! In our last article, we discussed recommendations for deploying public folders and public folder mailboxes. In this post, we will discuss methods and tips for monitoring connections made to public folder mailboxes using the different log types available in Exchange Server 2013 and Exchange Server 2016. This article focuses on logging related to public folder mailbox activity and explains how to analyze these logs to understand public folder usage. Let’s get to it!

How do I log and report on different public folder connections?

As we discussed in the previous post, the ability to estimate the number of connections being made to public folder mailboxes is very helpful, as deployment guidance for public folders partially revolves around connection counts. As of today, the available logging methods will not reveal the individual names of the public folders clients are connecting to, but they do contain information about the public folder mailboxes being accessed by clients.

Depending on what information you are looking to gather, there are several flavors of logging you can consider.

  • Autodiscover logs – use these to learn which public folder mailboxes Outlook clients get sent to during the Autodiscover process.
  • Outlook Web App logs – use these to learn which default public folder mailboxes Outlook Web App clients get sent to during the connection process. As stated in our first article, the default public folder mailbox could either be one provided randomly to the requesting OWA client or a hard-coded default public folder mailbox assigned to a specific user’s mailbox.
  • RPC Client Access & MAPI Client Access logs on Exchange 2013 Mailbox servers – use these to find out which public folder mailboxes on a specific mailbox server users are connecting to over the RPC/HTTP and MAPI/HTTP protocols. These logs apply to Exchange 2013.
  • MAPI/HTTP logs on Exchange 2016 servers – use these to learn which public folder mailboxes your MAPI/HTTP clients are connecting to. These logs apply only to Exchange 2016.

Let’s get started! In the upcoming sections, we are going to make extensive use of the Log Parser Studio (LPS) tool to parse the logs and extract the required data. It is a great tool, and if you are not familiar with it, I recommend visiting the following links to get acquainted with it before continuing:

Autodiscover logs: Which public folder mailboxes are Outlook clients connecting to?

Why do Autodiscover logs need to be investigated?

The Autodiscover service is responsible for informing Outlook clients where and how to connect to a public folder mailbox. This may be so Outlook can display the public folder hierarchy tree, or to make a public logon connection to access content within a public folder mailbox.

Thus, the Autodiscover logs can be useful to administrators in determining which public folder mailboxes are being returned by the Autodiscover service. This information can be very helpful in large multi-site environments when trying to identify possible improvements in public folder mailbox or public folder locations.

To understand this better, let’s consider a common scenario an administrator might face. An administrator may need to determine which public folder mailboxes are being returned to end users when they connect from different sites using Outlook. This can be a challenging task if there are many sites and users, resulting in a huge data set. Rather than trying to analyze the data manually, an automated way of getting the desired outcome is needed.

This is where the Log Parser Studio (LPS) queries can be used to parse the Autodiscover logs on mailbox servers to get us the required data for further investigation and actions.

Where are Autodiscover logs located?

Autodiscover logs should be investigated on Mailbox servers and can be found in the following default path for Microsoft Exchange 2013/2016:

  • C:\Program Files\Microsoft\Exchange Server\V15\Logging\Autodiscover

(The location may change if the installation path is different from the default.)

Autodiscover Method 1, server-side.

At this point it is assumed Log Parser Studio has been installed.

1. Open Log Parser Studio by double-clicking the LPS.exe application file, as shown in the image below.

image

2. Once LPS launches, in the top-left corner select File and then click New Query, which will open a new query tab.

3. Copy the sample query from the example below into the query section and set the Log Type to EELXLOG.

/* New Query */
SELECT Count(*) As Hits,
EXTRACT_PREFIX(EXTRACT_SUFFIX(GenericInfo, 0, 'Caller='), 0, ';') as User-Name,
EXTRACT_PREFIX(EXTRACT_SUFFIX(GenericInfo, 0, 'ResolveMethod='), 0, ';') as Method,
EXTRACT_PREFIX(EXTRACT_SUFFIX(GenericInfo, 0, 'ExchangePrincipal='), 0, ';') as PF-MBX,
EXTRACT_PREFIX(EXTRACT_SUFFIX(GenericInfo, 0, 'epSite='), 0, ';') as Site-Name
FROM '[LOGFILEPATH]'
WHERE Method LIKE '%FoundBySMTP%'
GROUP BY User-name, Method, PF-MBX, Site-Name
/* End Query */

4. Lock the query to avoid any modifications by clicking the Lock icon once, as shown below.

image

5. Click the Log File Manager button in the top panel of LPS to add the required logs, as shown in the image below.

image

6. Specify the location of the required log files, select one file in the folder where the logs reside, click Open, and then click OK.

7. In this example, I have accessed and selected logs from a specific mailbox server by specifying the UNC path of the server and log location. It is possible to add multiple folders of the same log type from different servers and parse all of them at the same time.

image

8. The only thing left is to execute the query; to do so, just click the Execute Query button in the LPS panel. The output will be in a format similar to the one shown below.

image

Note: This LPS query will provide a report showing which users are connecting to which public folder mailboxes, along with the Active Directory site the mailbox resides in.

Why might this type of report be useful?

The output of this data may help an administrator determine whether a significant number of users in a geographic location would benefit from a public folder mailbox located closer to them. Depending on the results, the administrator can decide to deploy an additional Hierarchy Only Secondary Public Folder Mailbox (HOSPFM) in those geographic sites and then set the DefaultPublicFolderMailbox property on the user mailboxes so that they contact the public folder mailbox (HOSPFM) in their own site when fetching the public folder hierarchy, which in turn improves the user experience when accessing public folders.
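
As a minimal sketch, pinning a single user to a hierarchy mailbox in their own site could look like this (the user and mailbox names are illustrative):

# Point one user at a specific hierarchy-serving public folder mailbox
Set-Mailbox -Identity "user@contoso.com" -DefaultPublicFolderMailbox "HOSPFM-001"

Once set, that user's client will be directed to HOSPFM-001 for hierarchy connections instead of a randomly assigned public folder mailbox.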

One more point to note: only Exchange 2016 Autodiscover logs will show the site name. This functionality is not present in Exchange 2013 logging, so additional manual work will be required to figure out the site location of the mailbox.

Note: The example query will return additional Autodiscover log entries for non-public folder mailbox queries. If you have a standardized naming convention for your public folder mailboxes, you could enhance the query to only return results where the ExchangePrincipal value contains a portion of your naming convention, as shown below.
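
For example, if all your public folder mailbox names contain the string "HOSPFM" (purely an illustrative convention), the WHERE clause of the sample query could be extended like this:

WHERE Method LIKE '%FoundBySMTP%' AND PF-MBX LIKE '%HOSPFM%'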

Autodiscover Method 2, client-side.

You can also use the Test E-mail AutoConfiguration tool from within the Outlook client to perform a single-user test. This will show which public folder mailbox is being returned to a single end user by the Autodiscover service for hierarchy connections.

To start the Test E-mail AutoConfiguration tool, follow these steps:

  1. Start Outlook.
  2. Hold down the Ctrl key, right-click the Outlook icon in the notification area, and then click Test Email AutoConfiguration.
  3. Verify that the correct email address is in the E-mail Address box. You do not need to provide a password if you are running a test for the currently logged in user. If you are testing a different user account than the one currently logged into the machine, then you will need to provide both the email address and password for that account.
  4. In the Test Email AutoConfiguration window, click to clear the Use Guessmart check box and the Secure Guessmart Authentication check box.
  5. Click to select the Use Autodiscover check box, and then click Test.

Below is an excerpt from the XML file gathered from Test E-mail AutoConfiguration:

image

As you can see above, the user administrator@contoso.com will use the public folder mailbox HOSPFM-001@contoso.com to make a hierarchy connection.

Please note this is only an example; if you follow our guidance you will not have any users making connections to your primary public folder mailbox for hierarchy or content.

Outlook Web App logging: which default public folder mailboxes do Outlook Web App clients get sent to?

When users log into Outlook on the Web (OWA) in an environment with public folders, the public folder mailbox used for hierarchy information could be a static default public folder mailbox (if one has been set manually on the mailbox), or a random public folder mailbox. It should be noted Autodiscover is not utilized when accessing public folders using OWA. Instead, OWA uses its own function to return a default public folder mailbox to the requesting user. As such, you will not find OWA users in the previously mentioned Autodiscover logs.

Location of OWA logs

All logging data for Outlook on the Web (OWA), including public folder access, will be in the following folder on Exchange 2013 Client Access servers or Exchange 2016 Mailbox servers:

  • C:\Program Files\Microsoft\Exchange Server\V15\Logging\HttpProxy\Owa

Here is an example of a Log Parser Studio query to fetch data from OWA logs:

/* New Query */
SELECT COUNT(*) as hits,
AnchorMailbox AS PF-MBX,AuthenticatedUser,ProtocolAction,TargetServer,HttpStatus,BackEndStatus,Method,ProxyAction
FROM '[LOGFILEPATH]'
WHERE PF-MBX LIKE '%smtp%'
GROUP BY PF-MBX,AuthenticatedUser,ProtocolAction,TargetServer,HttpStatus,BackEndStatus,Method,ProxyAction
ORDER BY hits ASC

Set the Log Type to EELXLOG.

Fields used in the query:

  • AnchorMailbox – the default public folder mailbox being returned to the user
  • AuthenticatedUser – the users accessing the public folder mailbox
  • ProtocolAction – the action being taken by the user while accessing the public folder, such as GetFolder, GetItem, CreateItem, FindItem
  • TargetServer – the Exchange server the query is being redirected to in order to fetch the public folder mailbox
  • HttpStatus & BackEndStatus – the connection status for the public folder mailbox connection

Output is as follows:

In the output below the AnchorMailbox value is the public folder mailbox the end user is accessing for their hierarchy connection.

image

In the above sample result, the user “Administrator” is logged into OWA and is accessing public folder mailbox HOSPFM-001, which is returned as the default public folder mailbox. We know Administrator is using this public folder mailbox for a hierarchy connection because OWA logging currently does not capture information for public folder content access.

In Log Parser Studio, you can save this query and execute it in batches to collect data over time. You can also add an entire folder instead of individual logs, which makes it easier to parse existing and newly written logs. The number of hits logged against a specific public folder mailbox by each user will reveal which public folder mailboxes are most often used for fetching hierarchy information.

How can this logging be useful?

Since OWA does not use Autodiscover to fetch a default public folder mailbox, it may make sense to identify the public folder mailboxes being returned to users when they use OWA. Like our earlier example for Outlook, it may identify cases where OWA is using public folder mailboxes that are a less optimal performance choice. Keep in mind that for OWA, a better-performing hierarchy mailbox is one closer to the Exchange Mailbox server where OWA is being rendered, rather than one closer to where the user’s Outlook client sits. Depending on your Exchange deployment and where OWA is served, this may mean making choices about your public folder mailbox deployment based on which client is used more often in your environment, to give that client the more optimal experience.

As mentioned in my earlier post, the recommendation for users in geographically dispersed sites is to deploy additional Hierarchy Only Secondary Public Folder Mailboxes (HOSPFM) and set the DefaultPublicFolderMailbox property on the user mailboxes in those sites to ensure a public folder mailbox within the site is used by those users for hierarchy.
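
One possible way to do this in bulk, assuming the user mailboxes for a site are grouped into a dedicated mailbox database (the database and mailbox names are illustrative):

# Point every mailbox in a site-specific database at that site's hierarchy mailbox
Get-Mailbox -Database "EU-DB01" -ResultSize Unlimited | Set-Mailbox -DefaultPublicFolderMailbox "HOSPFM-EU-001"

Any grouping you can express with Get-Mailbox filters (database, organizational unit, custom attribute) would work the same way.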

RPC Client Access logs & MAPI Client Access logs on Mailbox Servers (Microsoft Exchange 2013)

While AutoDiscover logs can provide information about public folder mailboxes Outlook is learning about and may potentially connect to, the RPC Client Access (RPC/HTTP) & MAPI Client Access (MAPI/HTTP) logs will provide information about actual public folder mailbox connections established by users.

Both log types can be combined in a single LPS query and parsed to get some useful information on the public folder mailboxes being accessed.

Default location of logs:

  • MAPI Client Access: C:\Program Files\Microsoft\Exchange Server\V15\Logging\MAPI Client Access
  • RPC Client Access: C:\Program Files\Microsoft\Exchange Server\V15\Logging\RPC Client Access
Which public folder mailboxes on a specific server are users connecting to?

Consider a multi-site environment where the administrator is asked to determine which users are connecting to public folder mailboxes on a specific server. Let’s say E15-CLASS-MB1 is the Mailbox server hosting the public folder mailboxes and the administrator needs to find out who is making connections to them. Depending on the results, decisions can be made on whether it makes sense to move certain public folder mailboxes closer to a certain user location based on who actually uses that public folder mailbox. Below are the steps to follow:

1. Open LPS on the machine. Copy and paste the query below into the New Query window in LPS, as per the instructions mentioned earlier in the post.

/* Public Folder Mailboxes Hits */
SELECT Count(*) as Hits,
operation as Operation,
user-email as [SMTP Address],
EXTRACT_PREFIX(EXTRACT_SUFFIX(operation-specific, 0, 'Logon:'), 0, ';') as MailBox-LegacyExchangeDN,
EXTRACT_PREFIX(EXTRACT_SUFFIX(operation-specific, 0, 'on '), 0, ';') as Server
INTO '[OUTFILEPATH]\GeoReport.CSV'
FROM '[LOGFILEPATH]'
WHERE operation-specific LIKE '%Logon: Public%' AND Server LIKE '%E15-CLASS-MB1%'
GROUP BY Operation, Mailbox-LegacyExchangeDN, Server, [SMTP Address]
ORDER BY hits DESC

Fields used in the query:

  • Operation – used to extract the logons for public folder mailboxes
  • SMTP Address – email address of the users accessing the public folder mailbox
  • Mailbox-LegacyExchangeDN – the public folder mailboxes, in the form of their LegacyExchangeDN
  • Server – the server the connection requests are coming to

2. Set the Log File type to EELLOG. Add the required folders to parse from the respective mailbox servers and start the query by clicking the Execute Query button in the LPS panel.

3. The above sample query exports the results in CSV format. If no export location is specified in the query, the default export directory will be used.

4. Once the query has finished executing, it will export the output to a CSV file, which can be further formatted as a table.

5. To do so, open the CSV file. By default, the CSV file will not have any formatting and will show the output in a format similar to the one below.

image

6. Select all the cells which contain the data, then select the Insert tab and click Table, which will open the Create Table pop-up window. Click the OK button.

image

7. A new table will be created in a structured format to help sort and filter the data.

image

8. Filtering can be used to sort the data by the available fields, such as SMTP Address and Mailbox-LegacyExchangeDN.

If the LegacyExchangeDN output is trimmed and you cannot figure out the full public folder mailbox name, you can use the LegacyExchangeDN value of the public folder mailbox in Exchange PowerShell to find the name of the relevant mailbox, as shown below:

image
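
A rough sketch of that lookup in the Exchange Management Shell; the DN fragment below is a placeholder for the value copied from the report:

# Match public folder mailboxes on a (possibly partial) LegacyExchangeDN value
Get-Mailbox -PublicFolder | Where {$_.LegacyExchangeDN -like "*<trimmed LegacyExchangeDN value>*"} | FT Name,LegacyExchangeDN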

You now have information about which public folder mailboxes are being actively used by users on the server, and how frequently. The administrator can use this to make public folder deployment decisions.

MAPI/HTTP Logs (Exchange 2016 Only)

In Microsoft Exchange 2016 there is one additional folder, created specifically to log MAPI/HTTP protocol traffic. Recent updates to Exchange 2016 have removed MAPI/HTTP traffic from the MAPI Client Access log. If not all of your Outlook for Windows clients are connecting to Exchange 2016 via MAPI/HTTP, you may need to analyze both logs to get a full picture of your public folder mailbox connections until all Outlook for Windows clients are using MAPI/HTTP. All MAPI/HTTP activity is now logged to the MapiHttp folder.

The logs reside in the following default path:

  • C:\Program Files\Microsoft\Exchange Server\V15\Logging\MapiHttp\Mailbox

Exchange Server 2016 uses slightly different field names for MAPI/HTTP logging, and a query previously used with Exchange Server 2013 for parsing MAPI/HTTP traffic in the older MAPI Client Access logs will no longer work with Exchange Server 2016.

Which public folder mailboxes are your MAPI/HTTP clients connecting to?

MAPI/HTTP logs can be investigated for connections established to public folder mailboxes over the MAPI/HTTP protocol in Exchange Server 2016 using the query below in Log Parser Studio.

Ensure the Log Type is set to EELXLOG

/* New Query */
SELECT Count(*) as Hits,MailboxId AS PF-Mailbox, MDBGuid AS Database, ActAsUserEmail AS SMTP-Address, SourceCafeServer FROM '[LOGFILEPATH]'
WHERE OperationSpecific LIKE '%PublicLogon%'
GROUP BY PF-Mailbox,Database,SMTP-Address, SourceCafeServer
ORDER BY Hits DESC

Fields used in this query:

  • OperationSpecific – used to extract the logons for public folder mailboxes
  • SMTP-Address – email address of the users accessing the public folder mailbox
  • PF-Mailbox – the mailbox GUID of the public folder mailbox
  • SourceCafeServer – the front-end server the connection request came through
  • Database – the specific mailbox database hosting the public folder mailbox being connected to

Once the query is executed, it will gather the information and populate the results in the format shown below. The output can be exported to CSV, and more data can be gathered by running the query in batches.

Sample output:

image

In the Exchange 2016 MAPI/HTTP logs, the name of the public folder mailbox is not revealed, but the log does capture the mailbox GUID of the public folder mailbox, which can be used in a PowerShell command to fetch the actual public folder mailbox name.
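
A minimal sketch of that lookup, assuming the GUID from the MailboxId column corresponds to the ExchangeGuid of the public folder mailbox (the GUID below is a placeholder):

# Resolve a public folder mailbox name from the GUID captured in the log
Get-Mailbox -PublicFolder | Where {$_.ExchangeGuid -eq "<mailbox GUID from the log>"} | FT Name,ExchangeGuid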

Note: If there are any users hosted on Exchange 2016 who still use the RPC/HTTP protocol, the RPC/HTTP query shown previously can be used to fetch the data for those specific users.

How can this data be useful to administrators?

Administrators can run this report repeatedly in batches and gather the data in a CSV file. The data from different batches can be collated and investigated for public folder mailboxes being accessed frequently by users. From there, administrators should be able to find whether any public folder mailboxes are being used heavily and then decide to move specific public folder mailboxes, or maybe even specific public folders, closer to users in a specific location.

There are so many log types. When should I use what?

It is true there are many different logs in Exchange Server showing similar information. Depending on what protocol your users use, you may decide which log type to parse. Autodiscover logs will give a combined view of which public folder mailboxes users are at least trying to access. If you have content-only public folder mailboxes in your environment that are excluded from serving hierarchy and not directly assigned to users as their default, you may be able to determine whether some are never accessed and may contain content worthy of purging. If you need a more granular view of the world, and the ability to generate some sort of heat map, you may choose the more protocol-specific logs. These logs provide data each time a client creates a new connection to a public folder mailbox and let you determine not just whether the client learned about the mailbox through Autodiscover, but whether it is being used heavily by many users over time. The options are varied, and the choice is yours based on your need.

Summary

In this post, I have discussed and provided information on the different types of public folder logging and how this logging can be useful to administrators to identify heavily used public folder mailboxes, which in turn can be used in the planning and deployment of public folders in the environment. In upcoming posts, we will discuss topics related to public folder management and quotas.

I would like to thank Brian Day, Ross Smith IV and Nasir Ali for their input while reviewing this content and validating the guidance mentioned in the blog post. Special thanks to Kary Wall for providing input on the Log Parser Studio queries, and to Nino Bilic for helping to get this blog post ready!

Siddhesh Dalvi
Support Escalation Engineer


Announcing availability of 250,000 public folder Exchange 2010 hierarchy migrations to Exchange Online


Last September, we announced a beta program to validate onboarding of public folder data from Exchange 2010 on-premises to Exchange Online with large public folder hierarchies (100K – 250K public folders).

We are glad to announce that Exchange Online now officially supports public folder hierarchies of up to 250K public folders in the cloud – more than double the previously supported limit of 100K public folders!

In line with our efforts to help larger customers onboard to Exchange Online, we would like to additionally announce support for the migration of public folders from on-premises Exchange 2010 to Exchange Online, for customers with folder hierarchies up to 250K.

What does all this mean?

  • All existing customers using Exchange Online who would have been constrained by the limit of 100K public folders can now expand their Exchange Online public folder hierarchy up to 250K folders.
  • Any on-premises customers running Exchange 2010 with up to 250K public folders, who would like to onboard to Exchange Online, can now do so.

Note: At this point in time, Exchange 2013/2016 customers with over 100K folders can still only migrate up to 100K public folders to Exchange Online. However, once they have migrated to Exchange Online, they can expand their hierarchy up to 250K public folders. We are working to resolve this limitation for our Exchange 2013/2016 customers in the future.

Keep checking this blog for further updates on the subject.

Public folder team

Released: September 2017 Quarterly Exchange Updates


The latest set of Cumulative Updates for Exchange Server 2016 and Exchange Server 2013 are now available on the download center.  These releases include fixes to customer reported issues, all previously reported security/quality issues and updated functionality.

Minimum supported Forest Functional Level is now 2008R2

In our blog post, Active Directory Forest Functional Levels for Exchange Server 2016, we informed customers that Exchange Server 2016 would enforce a minimum 2008R2 Forest Functional Level requirement for Active Directory.  Cumulative Update 7 for Exchange Server 2016 will now enforce this requirement.  This change will require all domain controllers in a forest where Exchange is installed to be running Windows Server 2008R2 or higher.  Active Directory support for Exchange Server 2013 remains unchanged at this time.

Support for latest .NET Framework

The .NET team is preparing to release a new update to the framework, .NET Framework 4.7.1.  The Exchange Team will include support for .NET Framework 4.7.1 in our December Quarterly updates for Exchange Server 2013 and 2016, at which point it will be optional.  .NET Framework 4.7.1 will be required on Exchange Server 2013 and 2016 installations starting with our June 2018 quarterly releases.  Customers should plan to upgrade to .NET Framework 4.7.1 between the December 2017 and June 2018 quarterly releases.

The Exchange team has decided to skip supporting .NET 4.7.0 with Exchange Server.  We have done this not because of problems with the 4.7.0 version of the Framework, but rather as an optimization to encourage adoption of the latest version.

Known unresolved issues in these releases

The following known issues exist in these releases and will be resolved in a future update:

  • Online Archive Folders created in O365 will not appear in the Outlook on the Web UI
  • Information protected e-Mails may show hyperlinks which are not fully translated to a supported, local language

Release Details

KB articles that describe the fixes in each release are available as follows:

Exchange Server 2016 Cumulative Update 7 does include new updates to Active Directory Schema.  If upgrading from an older Exchange version or installing a new server, Active Directory updates may still be required.  These updates will apply automatically during setup if the logged on user has the required permissions.  If the Exchange Administrator lacks permissions to update Active Directory Schema, a Schema Admin must execute SETUP /PrepareSchema prior to the first Exchange Server installation or upgrade.  The Exchange Administrator should execute SETUP /PrepareAD to ensure RBAC roles are current.
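
As a hedged illustration, these preparation steps are typically run from the Exchange setup media along the following lines (switches as documented for Exchange 2013/2016 setup):

# Run from an elevated prompt in the folder containing the Exchange setup files
.\Setup.exe /PrepareSchema /IAcceptExchangeServerLicenseTerms
.\Setup.exe /PrepareAD /IAcceptExchangeServerLicenseTerms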

Exchange Server 2013 Cumulative Update 18 does not include updates to Active Directory, but may add additional RBAC definitions to your existing configuration. PrepareAD should be executed prior to upgrading any servers to Cumulative Update 18. PrepareAD will run automatically during the first server upgrade if Exchange Setup detects this is required and the logged on user has sufficient permission.

Additional Information

Microsoft recommends all customers test the deployment of any update in their lab environment to determine the proper installation process for your production environment. For information on extending the schema and configuring Active Directory, please review the appropriate TechNet documentation.

Also, to prevent installation issues you should ensure that the Windows PowerShell Script Execution Policy is set to “Unrestricted” on the server being upgraded or installed. To verify the policy settings, run the Get-ExecutionPolicy cmdlet from PowerShell on the machine being upgraded. If the policies are NOT set to Unrestricted you should use the resolution steps in KB981474 to adjust the settings.
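
A quick way to check, and (where the policy is not enforced through Group Policy) adjust, the setting from an elevated PowerShell session might look like this:

# Check the effective script execution policy, then relax it if needed
Get-ExecutionPolicy
Set-ExecutionPolicy Unrestricted

If the policy is being enforced by Group Policy, Set-ExecutionPolicy alone will not override it; follow the steps in KB981474 instead.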

Reminder: Customers in hybrid deployments where Exchange is deployed on-premises and in the cloud, or who are using Exchange Online Archiving (EOA) with their on-premises Exchange deployment are required to deploy the most current (e.g., 2013 CU18, 2016 CU7) or the prior (e.g., 2013 CU17, 2016 CU6) Cumulative Update release.

For the latest information on Exchange Server and product announcements please see What's New in Exchange Server 2016 and Exchange Server 2016 Release Notes.  You can also find updated information on Exchange Server 2013 in What’s New in Exchange Server 2013, Release Notes and product documentation available on TechNet.

Note: Documentation may not be fully available at the time this post is published.

The Exchange Team

Migrate your public folders to Office 365 Groups


Over the last few months, we ran a TAP Program where our customers tested the batch migration process to move their public folders (both online and on-premises) to Office 365 Groups. We want to thank all of the customers who helped us out with the testing by sharing their experiences with us. The TAP program proved successful, and so we are now making this process available worldwide.

We encourage you to read Migrate your public folders to Office 365 Groups to learn about the advantages Office 365 Groups offers over public folders in a number of scenarios. Hopefully, you’ll want to migrate your own public folders to Office 365 Groups.

If you’ve already decided to migrate, you can click one of the following links to understand the step-by-step details of the migration process, which is dependent upon the current version of your Exchange environment.

Enjoy!

Public folder team

Ask the Perf Guy: Update to scalability guidance for Exchange 2016


I’m happy to announce a significant update to our scalability guidance for Exchange 2016. Effective immediately, we are increasing our maximum recommended memory for deployments of Exchange 2016 from 96 GB to 192 GB.

This change is now reflected within our Exchange 2016 Sizing Guidance, as well as the latest release of the Exchange Server Role Requirements Calculator.

We have received ongoing feedback that the previous recommended maximum memory size of 96 GB was far too limiting, and that it was difficult to purchase modern hardware with memory of that size. We are aware that this has led to many difficult architectural choices, and we have been evaluating multiple types of larger hardware in our Exchange Online deployments to get to a significant level of comfort that customers will not experience issues with utilization of memory up to this size.

At this time, we are not raising the recommended maximum processor core count. While we are evaluating hardware with core counts dramatically larger than 24, we have additional work to do within the Exchange product to be able to safely recommend those core counts.

In summary, the updated Exchange 2016 processor and memory scalability guidance is as follows:

  • Recommended Maximum Processor Core Count: 24
  • Recommended Maximum Memory: 192 GB

Hopefully this helps to resolve some of the architectural challenges we have been hearing about.

Jeff Mealiffe
Principal PM Manager
Office 365 Customer Experience

TAP: Outlook mobile support for Exchange on-premises with Microsoft Enterprise Mobility + Security


As announced at Ignite 2017, Outlook for iOS & Android will soon be fully powered by the Microsoft Cloud for hybrid Exchange on-premises customers. These updates will also provide support for management via Microsoft Intune, included in Enterprise Mobility + Security (EMS). This article outlines what the changes will provide for customers and how to apply to participate in the Technology Adoption Program (TAP) for this new architecture.

Overview of the new Microsoft Cloud architecture for Exchange Server customers

For Exchange Server mailboxes, Outlook mobile’s new architecture will be similar in design to our legacy architecture. However, as the service is now built directly into the Microsoft Cloud (using Office 365 and Azure) customers receive the additional benefits of security, privacy, built-in compliance and transparent operations that Microsoft commits to in the Office 365 Trust Center and Azure Trust Center.

Hybrid

Data passing from Exchange Online to the Outlook app is passed via a TLS-secured connection. The protocol translator running on Azure serves to route data, commands and notifications, but has no ability to read the data itself.

The Exchange ActiveSync connection between Exchange Online and the on-premises environment enables synchronization of the user's on-premises data (4 weeks of email, all calendar data, all contact data, and out-of-office status) into your Exchange Online tenant. This data will be removed automatically from Exchange Online after 30 days of inactivity.

Data synchronization between the on-premises environment and Exchange Online happens independent of user behavior. This ensures that we can send new messages to the devices very quickly.

Benefits of the new Microsoft Cloud-based architecture

In order to deliver the best possible experience for our customers, we built Outlook for iOS & Android as a cloud-backed application. This means your experience consists of a locally installed app powered by a secure and scalable service running in the Microsoft Cloud.

Processing information in the Microsoft Cloud enables advanced features and capabilities, such as the categorization of email for the Focused Inbox, customized experience for travel and calendar, improved search speed and more. It enhances Outlook’s performance and stability, relying on the cloud for intensive processing and minimizing the resources required from users' devices. Lastly, it allows Outlook to build features that work across all email accounts, regardless of the technological capabilities of the underlying servers (e.g. different versions of Exchange, Office 365, etc.).

Specifically, this new architecture has the following improvements:

    1. EMS Support: Customers can take advantage of Microsoft Enterprise Mobility + Security (EMS) including Microsoft Intune and Azure Active Directory Premium to enable Conditional Access and Intune App Protection policies to control and secure corporate messaging data on the mobile device.
    2. Fully powered by Microsoft Cloud: The mailbox cache is moved off AWS, and is now built natively in Exchange Online. It provides the benefits of security, privacy, compliance and transparent operations that Microsoft commits to in the Office 365 Trust Center.
    3. OAuth protects user’s passwords: Outlook will leverage OAuth to protect user’s credentials. OAuth provides Outlook with a secure mechanism to access the Exchange data without ever touching or storing a user’s credentials. At sign in, the user authenticates directly against an identity platform (either Azure AD or an on-premises identity provider like ADFS) and receives an access token in return, which grants Outlook access to the user’s mailbox or files. At no time does the service have access to the user’s password in any form.
    4. Provides Unique Device IDs: Each Outlook connection will be uniquely registered in Microsoft Intune and be able to be managed as a unique connection.
    5. Unlocks new features on iOS & Android: This update will enable the Outlook app to take advantage of native Office 365 features that are not supported in Exchange on-premises today, such as leveraging full Exchange Online search and Focused Inbox. These features will only be available when using the Outlook apps for iOS & Android.

Note: Device management through the Exchange Admin Center will not be possible; Intune is required to manage mobile devices.

Other notes about Outlook mobile, Exchange Server & EMS

  • Managing mobile devices: Microsoft Intune is the only way to manage the devices and perform wipe operations. Individual device IDs will not be manageable in the on-premises Exchange environment.
  • Support for Exchange Server 2007: Users with an Exchange Server 2007 mailbox will be unable to access their email and calendar in Outlook for iOS & Android as Exchange Server 2007 is not in mainstream support.
  • Support for Exchange Server 2010: Exchange Server 2010 SP3 is out of mainstream support and will not work with Intune-managed Outlook mobile. In this architecture, Outlook mobile utilizes OAuth as the authentication mechanism. One of the on-premises configuration changes performed enables the OAuth endpoint to the Microsoft Cloud as the default authorization endpoint. When this change is made, clients can start negotiating the use of OAuth. As this is an organization-wide change, Exchange 2010 mailboxes fronted by either Exchange 2013 or 2016 will incorrectly think they can perform OAuth and will end up in a disconnected state as Exchange 2010 does not support OAuth as an authentication mechanism.

Technical and licensing requirements

Our new architecture will have the following technical requirements:

  1. Exchange on-premises setup:
    • A minimum cumulative update (CU) deployed on all Exchange servers: Exchange Server 2016 CU6 or Exchange Server 2013 CU17.
    • All Exchange 2007 or Exchange 2010 servers must be removed from the environment.
  2. Active Directory Synchronization: Active Directory synchronization with Azure Active Directory via Azure AD Connect. Ensure the following attributes are synchronized:
    • Office 365 ProPlus
    • Exchange Online
    • Exchange Hybrid writeback
    • Azure RMS
    • Intune
  3. Exchange hybrid setup: Requires full hybrid relationship between Exchange on-premises with Exchange Online.
    • An Office 365 tenant configured in full hybrid configuration mode and set up as specified in the Hybrid Configuration guide is required.
    • Requires an Office 365 Enterprise, Business or Education tenant.
    • The mailbox data will be synchronized in the same datacenter region where that Office 365 tenant is set up. For more about where Office 365 data is located, visit the “Where is my data?” section of the Office 365 Trust Center.
    • Use of Office 365 US Government Community and Defense, Office 365 Germany and Office 365 China operated by 21Vianet tenants will not be supported at launch.
    • The external URL hostname for EAS must be published as a service principal to AAD through the Hybrid Configuration Wizard.
    • Autodiscover and EAS namespaces must be accessible from the Internet and support anonymous connections.
  4. EMS setup: Both cloud only and hybrid deployment of Intune is supported (MDM for Office 365 is not supported).
  5. Office 365 licensing*: One of the following Office 365 licenses for each user that includes the Office client applications required for Outlook for iOS & Android commercial use:
    • Commercial: Enterprise E3, Enterprise E5, ProPlus or Business licenses
    • Government: U.S. Government Community G3, U.S. Government Community G5
    • Education: Office 365 Education E3, Office 365 Education E5
  6. EMS licensing*: One of the following licenses for each user:
    • Intune standalone + Azure Active Directory Premium standalone
    • Enterprise Mobility + Security E3, Enterprise Mobility + Security E5

*Microsoft Secure Productive Enterprise (SPE) includes all licenses necessary for Office 365 and EMS.

Data Security, Access, and Auditing Controls

Data within Exchange Online is protected via a variety of mechanisms. The Content Encryption whitepaper discusses how BitLocker is used for volume-level encryption. Service Encryption with Customer Key as discussed in the Content Encryption whitepaper will be supported in this architecture, but note that the user must have an Office 365 Enterprise E5 (or the corresponding versions of those plans for Government or Education) license to have an encryption policy assigned.

By default, Microsoft engineers have zero standing administrative privileges and zero standing access to customer content in Office 365. The Admin Access whitepaper discusses personnel screening, background checks, Lockbox and Customer Lockbox, and more.

ISO Audited Controls on Service Assurance documentation provides the status of audited controls from global information security standards and regulations that Office 365 has implemented.

Participating in the Technology Adoption Program (TAP)

Prior to rolling this updated architecture out to all customers, we are looking for customers to participate in the TAP. The TAP will allow Microsoft to work closely with customers to deploy the solution, and validate that it meets the needs and requirements of our customers.

What is in it for TAP customers:

  • Direct engagement and support from product engineering
  • Deployment assistance and support
  • Early product training
  • Regular conference calls
  • Opportunity to provide input and feedback that will be integrated into the product

What do customers have to commit to in order to participate in the TAP:

  • Must sign a non-disclosure agreement with Microsoft
  • Willing to work closely with Microsoft during TAP program, share any issues, bugs and feedback
  • Code Deployment: Must deploy pre-production Exchange Server software in production.
  • Willing to deploy more than 25 devices utilized by real-world users
  • Deploy to production mailboxes that vary in size (medium, large, and very large)

To nominate yourself for the TAP, please work with your account team.

Additional technical requirements for participating in the TAP

In addition to the evergreen technical requirements outlined above, these additional requirements are necessary during the TAP program period:

  • Authentication support: OAuth is the only supported authentication mechanism.
  • Exchange mobile device access policies (also known as ABQ policies): these are not supported. ABQ policies from Exchange Server on-premises will block syncs from the cloud. ABQ policies set up in Office 365 will not be enforced.
  • Exchange mobile device mailbox policies (also known as EAS policies): these will not be enforced by Outlook mobile. This means users must be managed by Intune to receive security policies.

If you have any questions, please let us know.

Ross Smith IV
Principal Program Manager
Office 365 Customer Experience

Exchange Server 2019


We wanted to post a quick note on our blog to mention to all that at Microsoft Ignite 2017 we have announced that we will be releasing Exchange Server 2019 as an on-premises release to our customers.

We are looking forward to sharing more details about this release with you in calendar year (CY) 2018. We expect to release a preview in mid CY 2018 with the final release near the end of CY 2018. Please review our TAP program post, as we will be looking for more customers to help us validate this release!

The Exchange Team

Looking back at Microsoft Ignite 2017


Ignite 2017 was busy and fun! We loved talking to many of you, answering many of your questions and listening to your feedback. Many teams are still collecting their thoughts into action items and following up with many of you. We also walked. A lot. You know what we mean if you were there!

Most of the sessions are now online. As we usually do, we picked some of the sessions that are closely related to subjects we often talk about and provided the list below. There are many more sessions available than the following list:

Keynotes:

Core Exchange / Exchange Online:

Hybrid:

Groups:

Protection:

Outlook:

See you next year!

We have already announced that Ignite 2018 is going to be back in Orlando! Pre-registration is available!

The Exchange Team


Why is my Address Rewriting not working as expected?


Address Rewriting is a feature provided by transport agents that run on the Edge Transport server role. It enables the modification of sender and recipient addresses on messages that enter and leave your Exchange organization. First introduced in Exchange 2007, Address Rewriting is used by customers to present a consistent email address appearance on messages sent to external recipients. Two TechNet articles published here and here document the inbound and outbound Address Rewrite agents, various situations where they are applicable, and the commands that can be used to configure and control them. However, based on my experience in the Support team, I have seen scenarios where Address Rewriting is not working as expected, and I wanted to work through these.
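
For context, a minimal sketch of creating a rewrite entry on the Edge Transport server (the entry name and domain names are illustrative) looks like this:

# Rewrite sender addresses from the internal domain to the externally visible domain
New-AddressRewriteEntry -Name "Contoso to Fabrikam" -InternalAddress contoso.local -ExternalAddress fabrikam.com

Existing entries can be reviewed with Get-AddressRewriteEntry.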

A potential scenario with Address Rewrite is Exchange treating certain messages as inbound when your expectation is that the Address Rewrite outbound agent should work on that particular message. In other words, you were expecting the “From” address to change, but it is not happening. I have also seen cases where the inbound agent is working fine but not the outbound, or vice versa. Then there are situations where it works for MAPI-submitted messages but not when an application is relaying mail through your Exchange environment. In this post, we will discuss how Exchange decides when the Address Rewrite inbound agent should work and when the Address Rewrite outbound agent should work. We will also try to simplify the scenarios with various examples so that we understand it better.

There are two Address Rewrite agents:

  1. Address Rewrite Inbound Agent – works on inbound messages and changes the RCPT TO/TO
  2. Address Rewrite Outbound Agent – works on outbound messages and changes the MAIL FROM/FROM

How does your Edge Transport server decide which Address Rewrite agent will work on a particular message? This is based on a combination of the three rules below:

  1. Whether the sender domain (MAIL FROM address) is part of an Accepted Domain (Authoritative or Internal Relay; an External Relay domain is treated as external).
  2. Whether the mail is submitted anonymously or with authentication.
  3. Whether the recipient's address is part of an Accepted Domain.

If the “Mail From” address is part of an Accepted Domain and the session is also authenticated, the mail will be treated as outbound and the Address Rewrite Outbound agent will work. If the “Mail From” address is not part of an Accepted Domain, or the session is not authenticated, the mail will be treated as inbound and the Address Rewrite Inbound agent will work. We also have to remember that the Address Rewrite Inbound agent (priority 2) runs before the Address Rewrite Outbound agent (priority 10).

Let’s discuss various scenarios and which of the Address Rewrite Agents will work on each of these situations. These scenarios are true for both on-premises and Hybrid environments:

Scenario: Message is submitted from an internal address (the sender's address is part of the Accepted Domains) to another internal address (the recipient's address is also part of an Accepted Domain).
Result: Neither the Address Rewrite Inbound nor the Address Rewrite Outbound agent will work on this message. As the sender address is internal, the Address Rewrite Inbound agent will be skipped. As the recipient has an internal address, the Address Rewrite Outbound agent will be skipped as well.

Scenario: Message is submitted from an internal user to an external recipient, but the sender's primary SMTP address is not part of the Accepted Domains (something that can happen in a company merger/takeover scenario).
Result: The message is treated as sent by an external sender because the sender's SMTP address is not part of an Accepted Domain. The mail will therefore be treated as inbound and the Address Rewrite Inbound agent will work, even though the recipient is external.

Scenario: Message is submitted from an internal address to an external recipient, but the session was not authenticated. For example, mail is anonymously sent from an application through a relay-allowed Receive Connector to the Internet.
Result: The message is treated as sent by an external sender because the session was not authenticated. The mail will therefore be treated as inbound and the Address Rewrite Inbound agent will work.

Scenario: Message is submitted from an external address (the sender's address is not part of an Accepted Domain) to an internal address (the recipient's address is part of an Accepted Domain).
Result: The Address Rewrite Inbound agent will work, as Exchange treats this mail as originating from an external source. The Address Rewrite Outbound agent will not work because the sender is treated as external.

Scenario: Message is sent from an external address (not part of an Accepted Domain) to a recipient whose address is also external (not part of an Accepted Domain).
Result: The message will be treated as inbound because the sender address is external, and the Address Rewrite Inbound agent will work. As the mail is sent from an external address, Exchange will not treat the mail as outbound and the Address Rewrite Outbound agent will not work in this scenario.

Scenario: Message is submitted from an authenticated source (from Outlook/Outlook on the web, through SMTP with authentication, or to an externally secured connector), the sender's address is internal (part of an Accepted Domain), and the recipient's address is also internal (part of an Accepted Domain).
Result: Neither rewrite agent will trigger. The Address Rewrite Inbound agent will not work because the sender is internal, and the Address Rewrite Outbound agent will not work because the recipient is internal.

Scenario: Message is submitted from an authenticated source (from Outlook/Outlook on the web, through SMTP with authentication, or to an externally secured connector), the sender's address is internal (part of an Accepted Domain), and it is sent to an external address (the recipient's address is not part of an Accepted Domain).
Result: The mail is sent from an internal address and from an authenticated source, so the sender will be treated as internal and the mail will be treated as outbound. The Address Rewrite Inbound agent will not work in this case. The Address Rewrite Outbound agent will work, and the Mail From/From address will be changed.

 

Based on the above scenarios, it is clear the Address Rewrite Outbound agent will work only when the sender's SMTP address is internal and the session is authenticated. There might be situations where mail is submitted from an application or third-party source using an internal address, but it cannot authenticate against Exchange, and you want the Address Rewrite Outbound agent to work on these messages. You can force Exchange to treat the message as submitted from an authenticated source by creating a Receive Connector with the “ExternalAuthoritative” authentication mechanism. Make sure you only have the IP address of the application or third-party source in the remote IP address range of this Receive Connector. This is important: when you select ExternalAuthoritative for authentication, you are telling Exchange to completely trust the IP addresses or subnets specified in the RemoteIPRanges parameter of that connector, allowing those IP addresses to relay through your server.

You can run the commands below to create a connector with ExternalAuthoritative authentication enabled:

New-ReceiveConnector -Name "Application relay" -RemoteIPRanges 192.168.0.1 -Usage Custom -AuthMechanism Tls -PermissionGroups AnonymousUsers, ExchangeUsers, ExchangeServers -Bindings 0.0.0.0:25
Set-ReceiveConnector -Identity "Application relay" -AuthMechanism ExternalAuthoritative

After running the above commands, mail received from IP address 192.168.0.1 will be treated as authenticated and trusted, and if the sender address is part of an Accepted Domain, the Address Rewrite Outbound agent will work on it.
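
To double-check the connector configuration afterwards, something along these lines can be used (you may need to prefix the connector name with the server name, for example "SERVER\Application relay"):

# Review the authentication and relay settings on the new connector
Get-ReceiveConnector "Application relay" | FL Name,AuthMechanism,PermissionGroups,RemoteIPRanges,Bindings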

In this post, I tried to cover as many scenarios as possible. However, if you have something which does not match any of those scenarios and you are facing an issue setting up the Address Rewrite, please leave details in the comment section.

Arindam Thokder

Understanding modern public folder quotas


As a part of our ‘demystifying modern public folders’ series we have so far discussed the modern public folder deployment best practices and available logging for monitoring public folder connections. In this blog post, we are going to discuss public folder quotas. Let’s get to it!

Public folder mailboxes and quotas

Mailbox quotas are not a new thing. Planning and setting quotas has always been important for Exchange administrators and is equally important when it comes to the deployment of public folders. Here is an illustration of the types of quotas impacting public folders in Microsoft Exchange 2013/2016 and Exchange Online:

image

Organizational quotas

Those quota settings can be seen by running the command Get-OrganizationConfig | fl *defaultpublic*

image

The DefaultPublicFolderProhibitPostQuota parameter specifies the size of a public folder at which users are notified that the public folder is full. Users can't post to a folder whose size is larger than the DefaultPublicFolderProhibitPostQuota value. The default value of this attribute on-premises is unlimited.

Organizational quotas in Exchange Online are not unlimited and have predefined values: the default value of DefaultPublicFolderIssueWarningQuota is 1.7 GB, and DefaultPublicFolderProhibitPostQuota is set at 2 GB.

image

What happens when DefaultPublicFolderProhibitPostQuota is reached in Exchange Online?

The error below will be shown if someone tries to post content to a public folder that exceeds the DefaultPublicFolderProhibitPostQuota value.

image

If a user tries to email a folder which has exceeded the DefaultPublicFolderProhibitPostQuota limit, they will get a “554 5.2.2 mailbox full” non-delivery report.

If any public folders exceed those values, the public folder migration to Exchange Online will encounter problems: the mailbox size quota will be exceeded and the migration will fail.

Though the values can be modified using Set-OrganizationConfig before the start of the migration (see the example below), we do not encourage this practice, as our recommendation and official guidance is to keep public folders below 2 GB before migrating. If any public folder in your organization is greater than 2 GB, we recommend either deleting content from that folder or splitting it into multiple public folders.
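
For completeness, such a change would be made with Set-OrganizationConfig; the values below are purely illustrative and, again, raising them is not the recommended approach:

# Raise the organization-wide public folder quotas (not recommended before migration)
Set-OrganizationConfig -DefaultPublicFolderIssueWarningQuota 2.5GB -DefaultPublicFolderProhibitPostQuota 3GB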

The details of other additional parameters can be found here.

Mailbox database level quota and public folder mailboxes (on-premises only)

Mailbox database level quotas apply to public folder mailboxes, not to public folders themselves. Because public folder mailboxes are architecturally normal Exchange mailboxes, these values can come into play, as they limit how large public folder mailbox content can grow or how long it will be kept.

image

  • RecoverableItemsQuota: determines how much content can be stored within the Recoverable Items folder of a public folder mailbox. If this quota is reached, no items can be deleted and the following error will be shown:

image

  • RecoverableItemsWarningQuota: defines when a public folder mailbox will start to warn that it is reaching its Recoverable Items quota. Warning events 10024 and 1077 will be logged on the respective mailbox server (where the mailbox database hosting those public folder mailboxes is active), and the event will contain the GUID of the public folder mailbox. Since a public folder mailbox is not a user mailbox and there is no active logon into it, keeping track of the event logs is important to keep an eye on public folder mailboxes approaching the storage limit.

image

image
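
One way to watch for these warnings is to query the Application log on the mailbox server for the event IDs mentioned above; a rough sketch (adjust the log name if the events are recorded elsewhere in your environment):

# Pull the most recent warning events 10024 and 1077 from the Application log
Get-WinEvent -FilterHashtable @{LogName='Application'; Id=10024,1077} -MaxEvents 50 | FT TimeCreated,Id,Message -Wrap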

Details of additional parameters can be found here.

Note: In Exchange Online, public folder mailboxes do not use the quota settings set at the database level. If you try to set a public folder mailbox to use the database level quota, it will error out as shown below:

image

Public folder mailbox level quotas

By default, a mailbox will use the values set forth by the mailbox database the mailbox resides in. Optionally you may turn off this inheritance and set specific values for the public folder mailbox in question if you need to utilize values outside of your defaults.

image

Note: Keep in mind that the quota settings on the public folder mailbox should always be greater than the values specified at the individual public folder level.

Only one parameter to discuss here as the others have been covered in prior sections.

UseDatabaseQuotaDefaults: a Boolean attribute that determines whether the public folder mailbox uses the inherited values of its mailbox database ($true) or the values specified on the public folder mailbox itself ($false).
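
A minimal sketch of turning off the inheritance and setting explicit values on a public folder mailbox (the mailbox name and quota values are illustrative):

# Override the database defaults for a single public folder mailbox
Set-Mailbox -PublicFolder -Identity "PFMBX-Content-001" -UseDatabaseQuotaDefaults $false -IssueWarningQuota 90GB -ProhibitSendReceiveQuota 100GB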

Public folder level quota

The public folder level quota applies to the individual public folder itself and can be configured to use different values than the ones specified on the public folder mailbox. If you create any new child public folders, the quota settings will be inherited from the parent public folder.

If the quota settings on an existing parent public folder are modified, the values will not be "pushed down" to existing child public folders.
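
If you do want to set explicit quota values on an individual public folder, a hedged sketch would be (the folder path and values are illustrative):

# Set warning and prohibit-post quotas on a single public folder
Set-PublicFolder -Identity "\Sales\Reports" -IssueWarningQuota 1.7GB -ProhibitPostQuota 2GB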

Retention settings on individual public folders

There is an option to set the retention settings on individual public folders. These settings can be inherited by existing child public folders, and new child public folders created under the parent folder will inherit them as well.

image

The inheritance can also be applied to the public folder age limit. If AgeLimit inheritance is applied at the parent public folder, the setting will apply to existing child public folders. If you want the setting on new child public folders, you need to either use PowerShell to configure the value on the individual public folder, use the GUI to configure it as shown below, or select the highlighted option on the parent public folder again to push the change down to the new child public folders.

image

This inheritance option is only available on the parent public folder itself; it will not be available on the child public folders.

If those settings are not set, the organization level quotas will be used.

Also remember: if inheritance has been enabled in the Retention settings of a parent public folder, it will automatically apply to existing child public folders and to new child public folders created in the future.

Note: In Exchange Online, the deleted item retention and age limit settings must be configured at the individual public folder level, using PowerShell cmdlets only.
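As a minimal sketch (the folder path and values are examples only), configuring these settings on an individual public folder with PowerShell could look like this:

# Set an age limit and a deleted item retention period on a single public folder (example values)
Set-PublicFolder -Identity "\Marketing\Campaigns" -AgeLimit 365.00:00:00 -RetainDeletedItemsFor 30.00:00:00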

How to find a list of public folders which exist in a public folder content mailbox?

To find out the content mailbox location for each public folder, you should run:

Get-PublicFolder -Recurse -ResultSize Unlimited | FT Name,*ContentMailboxName*

The result can be exported to Excel, where you can filter the data to find the number of public folders present in a specific public folder mailbox.
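If you prefer to go straight to a file you can open in Excel, a variation like the following works too (the output path is an example):

# Export the folder-to-content-mailbox mapping to CSV for filtering in Excel (output path is an example)
Get-PublicFolder -Recurse -ResultSize Unlimited | Select-Object Name,ContentMailboxName | Export-Csv C:\Temp\PFMapping.csv -NoTypeInformation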

How to find public folder mailboxes which are not using the DatabaseQuotaDefaults?

The following command can be run to check for those mailboxes:

Get-Mailbox -PublicFolder | Where {$_.UseDatabaseQuotaDefaults -ne $true}

Method 1:

To calculate the size of the individual public folders within a given public folder mailbox (and from that, the size of the mailbox itself), the following command can be run:

Get-PublicFolder -Identity "\" -Recurse -ResultSize Unlimited | Where {$_.ContentMailboxName -eq "mailbox name"} | Get-PublicFolderStatistics | FT Name,@{Label="MB"; Expression={$_.TotalItemSize.ToMB()}}

The output can be exported to CSV or TXT file and then opened in Excel.

While this method is handy, we can get more out of it by exporting the data at a larger scale for all public folder mailboxes in the organization and then filtering it in Excel, as shown below.

Example:

Get-PublicFolder -Identity "\" -Recurse | Where {$_.MailboxOwnerId -ne $null} | Get-PublicFolderStatistics | FT Name,*MailboxOwnerId*,*Path*,@{Label="MB"; Expression={$_.TotalItemSize.ToMB()}} -AutoSize > C:\total4.txt

Using the specific filter MailboxOwnerId and then exporting the data to TXT or CSV file and opening in Excel gives us this:

image

From here, it is easy to filter and sum up as needed.

Method 2:

While the above method provides information on individual public folders, it does not provide information about DeletedItems or TotalDeletedItemSize.

To sum up the TotalItemSize of the public folder mailbox and fetch the DeletedItems and TotalDeletedItemSize information, the following command can be run:

Get-Mailbox -PublicFolder | Get-MailboxStatistics | FT DisplayName,*Item* -Wrap

image

If you want to look only at the public folder mailboxes hosted on a specific mailbox server, the following command can be run to get the list of those mailboxes; the previous command can then be used to check the public folders hosted in them.

image
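The screenshot isn't reproduced here, but a command along these lines (the server name is an example) would return the public folder mailboxes whose database is currently active on a given server:

# List public folder mailboxes hosted on a specific mailbox server (server name is an example)
Get-Mailbox -PublicFolder | Where-Object {$_.ServerName -eq "MBX01"} | Format-Table Name,Database,ServerName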

The output can be exported to CSV or TXT file as needed and then the values can be summed up to ensure that public folders present within the mailbox are not reaching the quota.

How do all those settings work together?

A specific public folder limit (whether defined at the organization level or explicitly on the public folder) should never exceed the limits applied to the public folder mailbox containing it.

For example, you should never set a public folder limit of 30 GB if the underlying public folder mailbox has a quota of 15 GB. Your goal should be that all public folders contained within a single public folder mailbox do not add up to a limit greater than the public folder mailbox they are contained in. To keep track of the size of the mailboxes you can use the method discussed earlier in this post.

The following image was created as an attempt to visualize how the various quota settings relate:

image

A mailbox database named Database contains four of the organization’s public folder mailboxes. Three of the public folder mailboxes (green) use the mailbox database size limits via inheritance, while the fourth public folder mailbox (yellow) uses a non-inherited, explicitly defined smaller value. Each public folder mailbox contains one or more public folders. Six public folders (purple) use the organization-defined limits, and five public folders (red) all use different custom values.

Public folder mailbox sizing best practices

This is one of the most frequently asked questions when it comes to deploying public folder mailboxes and setting sizes for them. Our simple advice is to follow the supported guidelines and best practices for public folders as described in our published guidance:

Limits for public folders

You should monitor the size of public folder mailboxes and see which public folders might be getting more use than others (as they might need to be moved to a different mailbox). Use Get-PublicFolderStatistics to track the number of items being added to public folders.
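For example, a quick (purely illustrative) way to spot the busiest folders is to sort by item count:

# Show the largest public folders by item count (top 20 is an arbitrary example)
Get-PublicFolderStatistics -ResultSize Unlimited | Sort-Object ItemCount -Descending | Select-Object -First 20 Name,ItemCount,TotalItemSize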

I would like to thank Brian Day, Nasir Ali, Ross Smith IV, Scott Oseychik and Bhalchandra Atre for reviewing this blog post and providing input, and Nino Bilic for helping to get this blog post ready!

Siddhesh Dalvi
Support Escalation Engineer

PAW your way into Office 365 Migrations


We have had lots of questions regarding what PAW is when it comes to MRS Migrations, so let’s take a few minutes to explain PAW benefits to you. First off, what is this PAW we keep speaking of? PAW, or Protocol Agnostic Workflow, is new functionality within the Migration Service that really enhances the experience of migrating your data to Office 365. From an Exchange Administrator’s perspective, you should see differences such as the following while managing your migrations.

Feature comparison, Pre-PAW (Legacy) versus PAW:

• Start/Stop/Remove. Pre-PAW: Only allowed at certain times, making it difficult for admins to start, stop, and remove batches. PAW: Allows start, stop, and remove at any time for the batch.
• Failure retry behavior. Pre-PAW: Restarts the whole batch and all users within it from the beginning of the migration process. PAW: Restarts each failed user from the beginning of the step where it left off.
• Failure retry management. Pre-PAW: Administrator must use Start-MigrationBatch to retry failures, unless the batch has completed, in which case they must use Complete-MigrationBatch. PAW: Administrator always uses Start-MigrationBatch to retry failures.
• Completion options. Pre-PAW: Choose between AutoComplete or Manual Completion. PAW: Choose between AutoComplete, Manual Completion, or Scheduled Completion.
• Completion semantics. Pre-PAW: Administrator must choose between "AutoComplete" and "Manual Completion" at the beginning. PAW: Administrator can convert between any completion option at any time before completion has occurred.
• User management. Pre-PAW: Administrator can only remove Synced/Stopped users. PAW: Administrator can remove a user from a batch at any time, and can start/stop/modify individual users.
• Duplicate users. Pre-PAW: Results in "Validation Warnings" that are hard to notice, resulting in batches that are confusingly of size 0. PAW: Results in two MigrationUser objects, only one of which can be active at a time. If the first one was Completed, it will process the second one. Otherwise, it will fail the second one with a message indicating that the first one is being processed. That failed user can later be resumed and complete successfully.
• Throttling. Pre-PAW: Handled by the Migration Service, leading to inefficient resource utilization (the throttling limit is never reached). PAW: Handled by MRS, which is already used to handling resource utilization (the throttling limit is usually reached).
• Reports. Pre-PAW: Only Initial Sync and Completion reports. PAW: Initial Sync reports, Completion reports, and periodic status reports.
• Counts. Pre-PAW: Not exactly accurate (delayed by ~15 minutes). PAW: Almost always accurate (and cheaper to generate).

As you can see, we have introduced things like the ability to start, stop, and remove whole batches or individual users within a batch at any time while the batch is being processed. Our retry behavior will now process just the failed users instead of the whole batch, and we can now schedule completion of a batch in advance. We also gain improvements in throttling and reporting, to name just a few.

Here is an example of the new scheduled completion option for migration. This is great for those who want to complete the migrations over a weekend without the administrator having to be there to press the button.

image

One thing to be aware of: if you do not have PAW enabled in your tenant, you may get a warning message like the one below when creating a new migration batch:

Warning

One of the required migration functions (PAW) isn’t enabled.
On December 1st, 2017 you will no longer be able to create batches until you upgrade which features are enabled. Remove all existing batches to trigger an upgrade of the available features.

To check if PAW is enabled in your tenant you will first need to connect to Exchange Online PowerShell and then run Get-MigrationConfig to check what features are enabled.

PS C:\PowerShell> Get-MigrationConfig | Format-List
RunspaceId              : d0ee8150-d417-44fb-bd42-50c04e25232b
Identity                : contoso.onmicrosoft.com
MaxNumberOfBatches      : 100
MaxConcurrentMigrations : 300
Features                : MultiBatch, PAW
CanSubmitNewBatch       : True
SupportsCutover         : False
IsValid                 : True
ObjectState             : Unchanged

In the above example, we see MultiBatch and PAW as the Migration Features that are enabled for our tenant. MultiBatch is our older way of processing migrations within MRS and PAW is our new way. If you do not have PAW listed, have no fear; you probably just have some existing migration batches hanging around from either mailbox or public folder migrations. Just run Get-MigrationBatch to confirm whether all batches are completed.

PS C:\PowerShell> Get-MigrationBatch
Identity Status    Type               TotalCount
-------- ------    ----               ----------
AlexD    Completed ExchangeRemoteMove 1

If any of your batches are not complete, complete them. Then remove all completed migration batches so that running the cmdlet returns no results.
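As a minimal sketch, the cleanup could look like this (run it only once you are sure nothing in those batches is still needed):

# Remove migration batches that have already completed so the tenant can be upgraded to PAW
Get-MigrationBatch | Where-Object {$_.Status -eq "Completed"} | Remove-MigrationBatch -Confirm:$false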

Once all the migration batches have been removed, your tenant should automatically be updated to have the most recent features available.  You can run Get-MigrationConfig to check if the PAW feature is enabled.  Then you can continue your migrations using the latest migration technology available in Exchange Online.

Rob Whaley
Beta Engineer for Exchange and Office 365

Announcing Hybrid Modern Authentication for Exchange On-Premises


We’re very happy to announce support for Hybrid Modern Authentication (HMA) with the next set of cumulative updates (CU) for Exchange 2013 and Exchange 2016, that’s CU8 for Exchange Server 2016, and CU19 for Exchange Server 2013.

What is HMA?

HMA (not HAM, which Word keeps trying to correct it to for me) provides users the ability to access on-premises applications using authorization tokens obtained from the cloud. For Exchange (that’s why you’re here, right?), this means on-premises mailbox users get the ability to use these tokens (OAuth tokens specifically) to authenticate to on-premises Exchange. Sounds thrilling, I know, but what exactly are these tokens? And how do users get hold of them?

Rather than repeat many things here, I’m going to suggest you take a break and read the How Hybrid Authentication Really Works post, and if you really want to help boost my YouTube viewing numbers, watch this Ignite session recording too. They will respectively give you a pretty solid grounding in OAuth concepts and help you understand what HMA is really all about.

See how much space we saved in this post by sending you somewhere else?

If you ignored my advice, the tl;dr version is this: HMA enables Outlook to obtain Access and Refresh OAuth tokens from Azure AD (either directly for password hash sync or Pass-Through Auth identities, or via their own STS for federated identities), and Exchange on-premises will accept them and provide mailbox access.

How users get those tokens, what they have to provide for credentials, is entirely up to you and the capabilities of the identity provider (iDP) – it could be simple username and password, or certificates, or phone auth, or fingerprints, blood, eyeball scanning, the ability to recite poetry, whatever your iDP can do.

Note that the user’s identity has to be present in AAD for this to work, and there is some configuration required that the Exchange Hybrid Configuration Wizard does for us. That’s why we put the H in HMA, you need to be configured Hybrid with Exchange Online for this feature.

It’s also worth knowing that HMA shares much of the same technology as the upcoming Outlook mobile support for Exchange on-premises with Microsoft Enterprise Mobility + Security feature, which as you’ll see from the blog post also requires Hybrid to be in place. Once you have that figured out you’ll be able to benefit from both of these features with very little additional work.

How Does HMA Work?

The video linked above goes into detail, but I’ll share some details here for anyone without the time to watch it.

Here’s a diagram that explains HMA when the identity is federated.

hma1

I think that picture is pretty clear, I spent a lot of time making it pretty clear so I don’t think I need to add much to it other than to say, if it’s not clear, you might want to try reading it again.

Why Should I Enable HMA?

Great question. There are a few good reasons, but mainly this is a security thing.

HMA should be considered ‘more secure’ than the authentication methods previously available in Exchange. That’s a nebulous statement if there ever was one (I could have said it’s more ‘Modern’ but I know you weren’t going to fall for that) but there are a few good arguments as to why that’s true.

When you enable HMA you are essentially outsourcing user authentication to your iDP, Exchange becomes the consumer of the resulting authorization tokens. You can enforce whatever authentication the iDP can do, rather than teach Exchange how to handle things like text messaged based MFA, blood analysis or retina scanning. If your iDP can do that, Exchange can consume the result. Exchange doesn’t care how you authenticated, only that you did, and came away with a token it can consume.

So it’s clearly ‘more secure’ if you choose to enforce authentication types or requirements stronger than those that come free with Exchange, but even if you stick to usernames and passwords it’s also more secure as passwords are no longer being sent from client to server once the user is authenticated (though of course that depends on whether you are using Basic, NTLM or Kerberos). It’s all token based, the tokens have specific lifetimes, and are for specific applications and endpoints.

One other interesting and important benefit to all this is that your auth flow is now exactly the same for both your cloud and on-premises users. Any MFA or Conditional Access policies you have configured are applied the same, regardless of the mailbox location. It’s simpler to stay secure.

HMA also results in an improved user experience, as there will be fewer authentication prompts. Once the user logs in once to AAD they can access any app that uses AAD tokens – that’s anything in O365 and even Skype for Business on-premises configured for HMA (read more about Skype for Business’s HMA support here).

And don’t forget there’s the fact it’s more ‘Modern’. It’s newer and we put the word Modern on it. So it must be better, or at the very least, newer. Excellent, moving on.

Will It Cost Me?

Not if you just want to use free Azure IDs or federated identities and do MFA at your iDP. If you want to take advantage of advanced Azure features, then yes, you’ll have to pay for those. But to set this up, the tenant admin needs only an Exchange and an Azure license assigned to run the tools and enable the config.

What do I need to enable HMA?

There are some pre-requisites.

  1. The following Identity configurations with AAD are supported
    1. Federated Identity with AAD with any on-premises STS supported by Office 365
    2. Password Hash Synchronization
    3. Pass Through Authentication
  2. In all cases, the entire on-premises directory must be synchronized to AAD, and all domains used for logon must be included in the sync configuration.
  3. Exchange Server
    1. All servers must be Exchange 2013 (CU19+) and/or Exchange 2016 (CU8+)
    2. No Exchange 2010 in the environment
    3. MAPI over HTTP enabled. It is usually already enabled (True) for new installs of Exchange 2013 Service Pack 1 and above.
    4. OAuth must be enabled on all Virtual Directories used by Outlook (/AutoDiscover, /EWS, /Mapi, /OAB)
  4. You must use clients that support ADAL (the client-side library that allows the client to work with OAuth tokens) to use the Modern Auth enabled features. Outlook 2013 requires the EnableADAL registry key be set, Outlook 2016 has this key set by default, Outlook 2016 for Mac works as it is, support for Outlook mobile (iOS and Android) is coming.
  5. Ensure AAD Connect between on-premises AD and the O365 tenant has the “Exchange hybrid deployment” setting enabled in the Optional Features settings of Azure AD Connect.
  6. Ensure SSL offloading is not being used between the load balancer and Exchange servers.
  7. Ensure all user networks can reach AAD efficiently.

Let’s pick a few of those apart.

No Exchange 2010 in the environment. That’s right, if you have E2010 you can’t enable HMA. Why? Because worst case is everyone with a mailbox on E2010 will be cut off from email. You don’t want that. It’s because OAuth happens anonymously upon initial connection. We send the user to AAD to get authenticated before we know where their mailbox is – and if that mailbox is on E2010, when they return with a token we’ll refuse to proxy from E2013/16 to E2010. Game over. Please insert coins.

So we have drawn a line here and are stating no support for E2010, and the HCW won’t let you enable OAuth if E2010 exists. Don’t try and make it work, remember that scene from Ghostbusters, the whole crossing the streams thing? It’ll be like that, but worse.

Next, MAPI/HTTP – you need to be using MAPI/HTTP not RPC/HTTP (Outlook Anywhere). This feature only works with MAPI/HTTP, and anyway, it’s time to get off RPC/HTTP. That’s very old code and as you might know we ended support for its use in O365, so it would be good to switch. It just works.
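If you want to check (and, if needed, enable) MAPI over HTTP at the organization level, something like this works; note this is a sketch, and enabling it changes how clients connect, so plan it like any other client-facing change:

# Check whether MAPI over HTTP is enabled for the organization
Get-OrganizationConfig | Format-List MapiHttpEnabled
# Enable it if it is still False
Set-OrganizationConfig -MapiHttpEnabled $true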

Then there’s the ‘everyone should be in AAD’ thing. That’s because when you enable HMA, it’s Org wide. It affects every user connecting to Exchange. So, all users trying to access Exchange from a client that support Modern Auth will be sent to AAD. If you only have some users represented in AAD, only those users will be able to auth. The rest will come find you at lunch and make your life a misery. Unless you like misery, I wouldn’t recommend that route.

Needing clients that support Modern Auth clearly makes sense. And you need to make sure all the Exchange VDirs have OAuth enabled on them. Sounds obvious, and they are enabled by default, but some admins like to tinker… so it’s worth checking, and I’ll explain how later.

SSL offloading works by terminating the SSL/TLS encryption on the load balancer and transmitting the request as HTTP. In the context of OAuth, using SSL offloading has implications because if the audience claim value specifies an HTTPS record, then when Exchange receives the decrypted request over HTTP, the request is considered not valid. By removing SSL offloading, Exchange will not fail the OAuth session due to a change in the audience claim value.

Lastly, the ‘ensure all user networks can reach AAD’ comment. This change affects all connectivity from supported clients to Exchange, internal and external. When a user tries to connect to Exchange, whether that server is 10 feet away under the new guy’s desk or in a datacenter on the other side of the planet, the HMA flow will kick in. If the user doesn’t have a valid token the traffic will include a trip to AAD. If you are one of those customers with complex networking in place, consider that.

How do I Enable HMA?

You’ve checked the pre-reqs, and you think you’re good to go. You can do a lot of this up front without impacting clients; I’ll point out where clients begin to see changes, so you can be prepared.

We do recommend trying HMA in your test or lab environment if you can before doing it in production. You are changing auth; it’s something you need to be careful doing, as cutting everyone off from email is never a good thing.

Here’s what to do. First, we have some Azure Active Directory Configuration to do.

You need to register all the URL’s a client might use to connect to on-premises Exchange in AAD, so that AAD can issue tokens for those endpoints. This includes all internal and external namespaces, as AAD will become the default auth method for all connections, internal and external. Here’s a tip – look at the SSL certificates you have on Exchange and make sure all those names are considered for inclusion.

Run the following cmdlets to gather the URL’s you need to add/verify are in AAD.

Get-MapiVirtualDirectory | FL server,*url*
Get-WebServicesVirtualDirectory | FL server,*url*
Get-OABVirtualDirectory | FL server,*url*

Now you need to ensure all URL’s clients may connect to are listed as https service principal names (SPN’s):

    1. Connect to your AAD tenant using these instructions.
    2. For Exchange-related URL’s, execute the following command (note the AppId ends …02):

      Get-MsolServicePrincipal -AppPrincipalId 00000002-0000-0ff1-ce00-000000000000 | select -ExpandProperty ServicePrincipalNames

      The output will look similar to the following:

      [PS] C:\WINDOWS\system32> Get-MsolServicePrincipal -AppPrincipalId 00000002-0000-0ff1-ce00-000000000000 | select -ExpandProperty ServicePrincipalNames
      https://autodiscover.contoso.com/
      https://mail.contoso.com/
      00000002-0000-0ff1-ce00-000000000000/*.outlook.com
      00000002-0000-0ff1-ce00-000000000000/outlook.com
      00000002-0000-0ff1-ce00-000000000000/mail.office365.com
      00000002-0000-0ff1-ce00-000000000000/outlook.office365.com
      00000002-0000-0ff1-ce00-000000000000/contoso.com
      00000002-0000-0ff1-ce00-000000000000/autodiscover.contoso.com
      00000002-0000-0ff1-ce00-000000000000/contoso.mail.onmicrosoft.com
      00000002-0000-0ff1-ce00-000000000000/autodiscover.contoso.mail.onmicrosoft.com
      00000002-0000-0ff1-ce00-000000000000/mail.contoso.com
      00000002-0000-0ff1-ce00-000000000000

    3. If you do not already have your internal and external MAPI/HTTP, EWS, OAB and AutoDiscover https records listed (i.e., https://mail.contoso.com and https://mail.corp.contoso.com), add them using the following command (replacing the fully qualified domain names with the correct namespaces and/or deleting the appropriate addition line if one of the records already exists):

      $x= Get-MsolServicePrincipal -AppPrincipalId 00000002-0000-0ff1-ce00-000000000000
      $x.ServicePrincipalnames.Add("https://mail.corp.contoso.com/")
      $x.ServicePrincipalnames.Add("https://owa.contoso.com/")
      Set-MSOLServicePrincipal -AppPrincipalId 00000002-0000-0ff1-ce00-000000000000 -ServicePrincipalNames $x.ServicePrincipalNames

    4. Repeat step 2 and verify the records were added. We’re looking for https://namespace entries for all the URL’s, not 00000002-0000-0ff1-ce00-000000000000/namespace entries.

For example,

[PS] C:\WINDOWS\system32> Get-MsolServicePrincipal -AppPrincipalId 00000002-0000-0ff1-ce00-000000000000 | select -ExpandProperty ServicePrincipalNames
https://autodiscover.contoso.com/
https://mail.contoso.com/
https://mail.corp.contoso.com
https://owa.contoso.com
00000002-0000-0ff1-ce00-000000000000/*.outlook.com
00000002-0000-0ff1-ce00-000000000000/outlook.com
00000002-0000-0ff1-ce00-000000000000/mail.office365.com
00000002-0000-0ff1-ce00-000000000000/outlook.office365.com
00000002-0000-0ff1-ce00-000000000000/contoso.com
00000002-0000-0ff1-ce00-000000000000/autodiscover.contoso.com
00000002-0000-0ff1-ce00-000000000000/contoso.mail.onmicrosoft.com
00000002-0000-0ff1-ce00-000000000000/autodiscover.contoso.mail.onmicrosoft.com
00000002-0000-0ff1-ce00-000000000000/mail.contoso.com
00000002-0000-0ff1-ce00-000000000000

Then we need to validate that the EvoSts authentication provider is present, using the Exchange Management Shell (this is created by the Hybrid Configuration Wizard):

Get-AuthServer | where {$_.Name -eq "EvoSts"}

HMA2

If it is not present, please download and execute the latest version of the Hybrid Configuration Wizard. Note that this authentication provider is not created if Exchange 2010 (this includes Edge Transport servers) is detected in the environment.

Now let’s make sure OAuth is properly enabled in Exchange on all the right virtual directories Outlook might use.

Run the following cmdlets (and a tip: don’t use -ADPropertiesOnly, as that sometimes tells little white lies; try it and see if you don’t believe me):

Get-MapiVirtualDirectory | FL server,*url*,*auth*
Get-WebServicesVirtualDirectory | FL server,*url*,*oauth*
Get-OABVirtualDirectory | FL server,*url*,*oauth*
Get-AutoDiscoverVirtualDirectory | FL server,*oauth*

You are looking to make sure OAuth is enabled on each of these VDirs; it will look something like this (the key things to look at are highlighted):

[PS] C:\Windows\system32>Get-MapiVirtualDirectory | fl server,*url*,*auth*
Server : EX1
InternalUrl : https://mail.contoso.com/mapi
ExternalUrl : https://mail.contoso.com/mapi
IISAuthenticationMethods : {Ntlm, OAuth, Negotiate}
InternalAuthenticationMethods : {Ntlm, OAuth, Negotiate}
ExternalAuthenticationMethods : {Ntlm, OAuth, Negotiate}

[PS] C:\Windows\system32> Get-WebServicesVirtualDirectory | fl server,*url*,*auth*
Server : EX1
InternalNLBBypassUrl :
InternalUrl : https://mail.contoso.com/EWS/Exchange.asmx
ExternalUrl : https://mail.contoso.com/EWS/Exchange.asmx
CertificateAuthentication :
InternalAuthenticationMethods : {Ntlm, WindowsIntegrated, WSSecurity, OAuth}
ExternalAuthenticationMethods : {Ntlm, WindowsIntegrated, WSSecurity, OAuth}
LiveIdNegotiateAuthentication :
WSSecurityAuthentication : True
LiveIdBasicAuthentication : False
BasicAuthentication : False
DigestAuthentication : False
WindowsAuthentication : True
OAuthAuthentication : True
AdfsAuthentication : False

[PS] C:\Windows\system32> Get-OabVirtualDirectory | fl server,*url*,*auth*
Server : EX1
InternalUrl : https://mail.contoso.com/OAB
ExternalUrl : https://mail.contoso.com/OAB
BasicAuthentication : False
WindowsAuthentication : True
OAuthAuthentication : True
InternalAuthenticationMethods : {WindowsIntegrated, OAuth}
ExternalAuthenticationMethods : {WindowsIntegrated, OAuth}

[PS] C:\Windows\system32>Get-AutodiscoverVirtualDirectory | fl server,*auth*
Server : EX1
InternalAuthenticationMethods : {Basic, Ntlm, WindowsIntegrated, WSSecurity, OAuth}
ExternalAuthenticationMethods : {Basic, Ntlm, WindowsIntegrated, WSSecurity, OAuth}
LiveIdNegotiateAuthentication : False
WSSecurityAuthentication : True
LiveIdBasicAuthentication : False
BasicAuthentication : True
DigestAuthentication : False
WindowsAuthentication : True
OAuthAuthentication : True
AdfsAuthentication : False

Once you have checked these over, you might need to add OAuth here and there. It’s important to make sure all the servers are consistent; there’s really nothing harder to troubleshoot than when one server out of ten is wrong…

(Top Nerd Note: I hope you know why we didn’t include *url* in the Get-AutodiscoverVirtualDirectory cmdlet? Answers in the comments section if you do. There are no prizes to be won!)

If you need to add an Auth method, here’s a tip. For all except /Mapi, just set the -OAuthAuthentication property to $True. Done.
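For example (a sketch only; check your own output first), enabling OAuth across the org for the EWS, OAB and Autodiscover virtual directories could look like this:

# Enable OAuth on the non-MAPI virtual directories across all servers (verify current settings before making changes)
Get-WebServicesVirtualDirectory | Set-WebServicesVirtualDirectory -OAuthAuthentication $true
Get-OabVirtualDirectory | Set-OabVirtualDirectory -OAuthAuthentication $true
Get-AutodiscoverVirtualDirectory | Set-AutodiscoverVirtualDirectory -OAuthAuthentication $true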

But for /Mapi you need to add it explicitly, and not using some fancy @Add PowerShell thing you learned in some online course or from that smart guy in the office who tells everyone he doesn’t use ECP as it’s for kids and dogs. Because I've learned that sometimes that doesn’t work the way it should.

If you needed to add OAuth to all the Mapi VDirs in the org, do it like this:

Get-MapiVirtualDirectory | Set-MapiVirtualDirectory -IISAuthenticationMethods Ntlm, OAuth, Negotiate

Up to this point no clients should have been impacted (unless you messed the VDir auth up – and if you did, you should only have been adding OAuth, not taking others away… you know that now, don’t you). So next we start to impact clients – this is the bit you want to do out of normal business hours. For career reasons.

So, make sure you validate the following:

  1. Make sure you have completed the steps above in the Azure AD Configuration section. All the SPN’s you need should be in there.
  2. Make sure OAuth is enabled on all virtual directories used by Outlook.
  3. Make sure your clients are up to date and HMA capable by validating you have the minimal version as defined in our supportability requirements.
  4. Make sure you have communicated what you are doing.
  5. Set the EvoSts authentication provider as the default provider (this step affects Outlook 2016 for Mac and native EAS clients that support OAuth right away):

    Set-AuthServer EvoSTS -IsDefaultAuthorizationEndpoint $true

  6. Enable the OAuth client feature for Windows Outlook:

    Set-OrganizationConfig -OAuth2ClientProfileEnabled $True

That’s it. All the prep you did means it comes down to two cmdlets. Wield the power wisely.

How do I Know I’m Using HMA?

After HMA is enabled, the next time a client needs to authenticate it will use the new auth flow. Just turning on HMA may not immediately trigger a re-auth for any client.

To test that HMA is working after you have enabled it, restart Outlook. The client should switch to use the Modern Auth flow.

You should see an ADAL generated auth dialog, from Office 365. Once you enter the username you might be redirected to your on-premises IDP, like ADFS (and might not see anything at all if Integrated auth is configured), or you might need to enter a password. You might have to do MFA, it depends on how much stuff you’ve set up in AAD already.

Once you get connected (and I hope you do), check Outlook’s Connection Status dialog (Ctrl-Right Click the Outlook tray icon) and you will see the word Bearer in the Authn column – which is the sign that it’s using HMA.

hma3

Well done you. Check everyone else is ok before heading home though, eh?

Something Went Wrong. How do I Troubleshoot HMA?

Ah, you’re reading this section. It’s panic time, right? I was thinking of not publishing this section until next year, just for giggles. Mine, not yours. But I didn’t. Here’s what to think about if stuff isn’t working like I said it would.

Firstly, make sure you did ALL the steps above, not some, not just the ones you understood. We’ve all seen it, 10 steps to make something work, and someone picks the steps they do like it’s a buffet.

If you’re sure you’ve done them all, let’s troubleshoot this together.

If you need to simply turn this back off, just run the last two cmdlets we ran again, but setting them to False this time. You might need to run IISReset on Exchange more than once; we cache settings all over the place for performance reasons. But those two will put you back to where you were if all hope is lost (hopefully you still have a chance to capture a trace, as detailed in a moment, before you do this, as it will help identify what went wrong).
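For reference, the rollback is simply those two cmdlets with the values flipped:

# Disable the OAuth client feature for Windows Outlook and stop advertising AAD as the default auth endpoint
Set-OrganizationConfig -OAuth2ClientProfileEnabled $false
Set-AuthServer EvoSts -IsDefaultAuthorizationEndpoint $false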

If you aren’t reverting the settings just yet, you clearly want to troubleshoot this a bit.

First thing is – is the client seeing any kind of pop up warning dialog? Are they seeing any certificate errors? Trust or name mismatches, that sort of thing? Anything like that will stop this flow in its tracks. The clients don’t need anything more than trusting the endpoints they need to talk to – Exchange, AAD (login.windows.net and login.microsoftonline.com) and ADFS or your iDP of choice if in use. If they trust the issuer of the certs securing those sites, great. If you have some kind of name translation thing going on somewhere, that might cause a warning, or worse, a silent failure.

Here’s an example of this I saw recently. Exchange was published using Web Application Proxy (WAP). You can do that, but only in pass-through mode. The publishing rule for AutoDiscover in this case was using autodiscover.contoso.com to the outside world, but the WAP publishing rule was set up to forward that traffic to mail.contoso.com on the inside. That causes this to fail, as Outlook heads to AAD to get a token for the resource called https://autodiscover.contoso.com and it does. Then it hands that to WAP, who then forwards to Exchange using the https://mail.contoso.com target URI – the uri used in the token isn’t equal to the uri used by WAP… kaboom. So, don’t do that. But I’ll show you later how an error like that shows up and can be discovered.

Assuming certificates are good, we need to get deeper. We need to trace the traffic. The tool I prefer to use for this is Fiddler, but there are others out there that can be used.

Now, Fiddler or the like can capture everything that happens between client and server – and I mean everything. If you are doing Basic auth, Fiddler will capture those creds. So, don’t run a Fiddler trace capturing everything going on and share it with your buddies or Microsoft. We don’t want your password. Use a test account or learn enough about Fiddler to delete the passwords.

I’ll leave it to the Telerik people who create Fiddler to tell you how to install and really use their tool, but I’ll share these few snippets I’ve learned, and how I use it to debug HMA.

Once installed and with the Fiddler root certs in the trusted root store (Fiddler acts as a man-in-the-middle proxy) it will capture traffic from whatever clients you choose. You need to enable HTTPS decryption (Tools, Options, HTTPS), as all our traffic is encased in TLS.

If you have ADFS and you don’t want to see what happens there, you can configure Fiddler to skip decryption for the ADFS URL. If you do want to see it, you will have to relax the security stance of ADFS a bit to allow the traffic to be properly captured. Only do this while capturing the traffic for debug purposes, then set it back. Start by bypassing decryption for the iDP first, and come back to this if you suspect that is the issue.

To set the level of extended protection for authentication supported by the federation server to none (off):

Set-AdfsProperties -extendedprotectiontokencheck none

Then to set it back to the default once you have the capture:

Set-AdfsProperties -extendedprotectiontokencheck Allow

Read more about all that clever ADFS stuff here.

Now you run the capture. Start Fiddler first, then start Outlook. I suggest closing all other apps and browsers, so as not to muddy the Fiddling waters. Keep an eye on Fiddler and Outlook, try and log in using Outlook, or repro the issue, then stop tracing (F12).

Now we shall try to figure out what’s going on. I prefer the view where I have the traffic listed in the left hand pane, and on the right the top section is the request and the lower right is the response. But you do whatever works for you. Fiddler shows each frame, then splits each into the Request and the Response. That’s how you need to orient yourself.

So the flow you’ll see will be something like this;

Client connects to Exchange, sending an empty ‘Bearer‘ header. This is the hint to tell Exchange it can do OAuth but does not yet have a token. If it sends Bearer and a string of gobbledygook, that’s your token.

Here are two examples of this. The header section to look at is Security. This is using Fiddler’s Header view. Do you see how the Security header says just Bearer on the left, but shows Bearer + Token on the right?

hma4 hma5

Exchange responds (lower pane of the same packet in Fiddler, raw view) with ‘here’s where you can get a token’ (a link to AAD).

hma6

If you scroll all the way to the right you’ll see the authorization_uri (AAD).

hma7

Normally, Outlook goes to that location, does Auth, gets a token, comes back to Exchange, and then tries to connect using Bearer + Token as above. If it’s accepted, it’s 200’s and beers all round and we’re done.

Where could it go wrong?

Client Failure

Firstly, the client doesn’t send the empty Bearer header. That means it isn’t even trying to do Bearer. This could be a few things.

It could be that you are testing with Outlook 2010 which doesn’t support Bearer (so stop trying and upgrade).

Maybe you are using Outlook 2013 but forgot to set the EnableADAL registry keys? See the link below for those.

But what if this is Outlook 2016, which has EnableADAL set by default and it is still not sending the Header…. Huh?

Most likely cause, someone has been tinkering around in the registry or with GPO’s to set registry keys. I knew a guy who edited the registry once and three days later crashed his car. So, do not tell me you were not warned.

You need to make sure keys are set as per https://support.office.com/en-us/article/Enable-Modern-Authentication-for-Office-2013-on-Windows-devices-7dc1c01a-090f-4971-9677-f1b192d6c910

Outlook 2016 for Mac can also have MA disabled (though it’s enabled by default). You can set it back to the default by running this from Terminal:

defaults write com.microsoft.Outlook DisableModernAuth -bool NO

That’s how we deal with the client not sending the Header. Check again and see the Header in all its Header glory.

Auth_URI Failures

Next thing that might happen is the server doesn’t respond with the authorization-uri, or it’s the wrong one.

If there’s no authorization_uri at all, then the EvoSts AuthServer does not have IsDefaultAuthorizationEndpoint set to $true. Recheck that you ran:

Set-AuthServer EvoSts -IsDefaultAuthorizationEndpoint $true

If it comes back, but with some other value than expected, make sure the right AuthServer is set as the default; we only support using AAD for this flow. If you think setting this to your on-premises ADFS endpoint will make this work without AAD… you’re wrong, as you discovered when you tried. If you are thinking of trying it, don’t bother. That’s an Exchange 2019 thing. Oh, did I just let that out of the bag?

If HMA is enabled at the org level, but connections still don’t elicit the authorization_uri you expect, it’s likely OAuth isn’t enabled on the Virtual Directory Outlook is trying to connect to. You simply need to make sure you have OAuth enabled on all VDirs, on all servers. Go back to the How Do I Enable section and check those VDirs again.

Now, sometimes that all comes back ok but the client still doesn’t take the bait. If so, check for the following in the response:

HTTP/1.1 401 Unauthorized
Content-Length: 0
Server: Microsoft-IIS/8.5 Microsoft-HTTPAPI/2.0
request-id: a8e9dfb4-cb06-4b18-80a0-b110220177e1
Www-Authenticate: Negotiate
Www-Authenticate: NTLM
Www-Authenticate: Basic realm="autodiscover.contoso.com"
X-FEServer: CONTOSOEX16
x-ms-diagnostics: 4000000;reason="Flighting is not enabled for domain 'gregt@contoso.com'.";error_category="oauth_not_available"
X-Powered-By: ASP.NET
WWW-Authenticate: Bearer client_id="00000002-0000-0ff1-ce00-000000000000", trusted_issuers="00000001-0000-0000-c000-000000000000@f31f3647-5d87-4b69-a0b6-73f62aeab14c", token_types="app_asserted_user_v1 service_asserted_app_v1", authorization_uri="https://login.windows.net/common/oauth2/authorize"
Date: Thu, 13 Jul 2017 18:22:13 GMT
Proxy-Support: Session-Based-Authentication

Now this response is interesting because it says, go get a token (www-authenticate), but in x-ms-diagnostics it says, no, don’t. Is Exchange unsure?

This means OAuth is enabled, but not for Outlook for Windows. So, you ran only one of the two commands above (or you ran them both but not enough time has passed for them to kick in).

Verify that the OAuth2ClientProfileEnabled property is set to $true by checking:

(Get-OrganizationConfig).OAuth2ClientProfileEnabled

Other Failures

We have a token, we know OAuth is enabled at the Org level in Exchange, we know all the Vdirs are good. But it still won’t connect. Dang, what now?

Now you’ll have to start to dig into server responses more closely, and start looking for things that look like errors. The errors you’ll see are usually in plain English, though of course that doesn’t mean they make sense. But here are some examples.

Missing SPNs

Client goes to AAD to get a token and gets this:

Location: urn:ietf:wg:oauth:2.0:oob?error=invalid_resource&error_description=AADSTS50001%3a+The+application+named+https%3a%2f%2fmail.contoso.com%2f+was+not+found+in+the+tenant+named+contoso.com.++This+can+happen+if+the+application+has+not+been+installed+by+the+administrator+of+the+tenant+or+consented+to+by+any+user+in+the+tenant.++You+might+have+sent+your+authentication+request+to+the+wrong+tenant.%0d%0aTrace+ID%3a+cf03a6bd-610b-47d5-bf0b-90e59d0e0100%0d%0aCorrelation+ID%3a+87a777b4-fb7b-4d22-a82b-b97fcc2c67d4%0d%0aTimestamp%3a+2017-11-17+23%3a31%3a02Z

Name Mismatches

Here’s one I mentioned earlier. There’s some device between client and server changing the names being used. Tokens are issued for specific uri’s, so when you change the names…

HTTP/1.1 401 Unauthorized
Content-Length: 0
WWW-Authenticate: Bearer client_id="00000002-0000-0ff1-ce00-000000000000", trusted_issuers="00000001-0000-0000-c000-000000000000@8da56bec-0d27-4cac-ab06-52ee2c40ea22,00000004-0000-0ff1-ce00-000000000000@contoso.com,00000003-0000-0ff1-ce00-000000000000@8da56bec-0d27-4cac-ab06-52ee2c40ea22", token_types="app_asserted_user_v1 service_asserted_app_v1", authorization_uri="https://login.windows.net/common/oauth2/authorize", error="invalid_token"
Server: Microsoft-IIS/8.5 Microsoft-HTTPAPI/2.0
request-id: 5fdfec03-2389-42b9-bab9-c787a49d09ca
Www-Authenticate: Negotiate
Www-Authenticate: NTLM
Www-Authenticate: Basic realm="mail.contoso.com"
X-FEServer: RGBMSX02
x-ms-diagnostics: 2000003;reason="The hostname component of the audience claim value 'https://autodiscover.contoso.com' is invalid";error_category="invalid_resource"
X-Powered-By: ASP.NET
Date: Thu, 16 Nov 2017 20:37:48 GMT

SSL Offloading

As mentioned in the previous section, tokens are issued for a specific URI and that value includes the protocol ("https://"). When the load balancer offloads SSL, the request Exchange receives comes in via HTTP, resulting in a claim mismatch because the protocol value is now "http://":

Content-Length →0
Date →Thu, 30 Nov 2017 07:52:52 GMT
Server →Microsoft-IIS/8.5
WWW-Authenticate →Bearer client_id="00000002-0000-0ff1-ce00-000000000000", trusted_issuers="00000001-0000-0000-c000-000000000000@00c118a9-2de9-41d3-b39a-81648a7a5e4d", authorization_uri="https://login.windows.net/common/oauth2/authorize", error="invalid_token"
WWW-Authenticate →Basic realm="mail.contoso.com"
X-FEServer →CTSINPUNDEVMB02
X-Powered-By →ASP.NET
request-id →2323088f-8838-4f97-a88d-559bfcf92866
x-ms-diagnostics →2000003;reason="The hostname component of the audience claim value is invalid. Expected 'https://mail.contoso.com'. Actual 'http://mail.contoso.com'.";error_category="invalid_resource"

Who’s This?

Perhaps you ignored my advice about syncing all your users to AAD?

HTTP/1.1 401 Unauthorized
Cache-Control: private
Server: Microsoft-IIS/7.5
request-id: 63b3e26c-e7fe-4c4e-a0fb-26feddcb1a33
Set-Cookie: ClientId=E9459F787DAA4FA880A70B0941F02AC3; expires=Wed, 25-Oct-2017 11:59:16 GMT; path=/; HttpOnly
X-CalculatedBETarget: ex1.contoso.com
WWW-Authenticate: Bearer client_id="00000002-0000-0ff1-ce00-000000000000", trusted_issuers="00000001-0000-0000-c000-000000000000@cc2e9d54-565d-4b36-b7f0-9866c19f9b17"
x-ms-diagnostics: 2000005;reason="The user specified by the user-context in the token does not exist.";error_category="invalid_user"
X-AspNet-Version: 4.0.30319
WWW-Authenticate: Basic realm="mail.contoso.com"
WWW-Authenticate: Negotiate
WWW-Authenticate: NTLM
X-Powered-By: ASP.NET
X-FEServer: E15
Date: Tue, 25 Oct 2016 11:59:16 GMT
Content-Length: 0

Password Changed?

When the user changes their password they must re-authenticate to get a new Refresh/Access token pair.

HTTP/1.1 400 Bad Request
Cache-Control: no-cache, no-store
Pragma: no-cache
Content-Type: application/json; charset=utf-8
Expires: -1
Server: Microsoft-IIS/8.5
Strict-Transport-Security: max-age=31536000; includeSubDomains
X-Content-Type-Options: nosniff
x-ms-request-id: f840b3e7-8740-4698-b252-d759825e0300
P3P: CP="DSP CUR OTPi IND OTRi ONL FIN"
Set-Cookie: esctx=AQABAAAAAABHh4kmS_aKT5XrjzxRAtHz3lyJfwgypqTMzLvXD-deUmtaub0aqU_17uPZe3xCZbgKz8Ws99KNxVJSM0AglTVLUEtzTz8y8wTTavHlEG6on2cOjXqRtbgr2DLezsw_OZ7JP4M42qZfMd1mR0BlTLWI3dSllBFpS9Epvh5Yi0Of5eQkOHL7x97IDk_o1EWB7lEgAA; domain=.login.windows.net; path=/; secure; HttpOnly
Set-Cookie: x-ms-gateway-slice=008; path=/; secure; HttpOnly
Set-Cookie: stsservicecookie=ests; path=/; secure; HttpOnly
X-Powered-By: ASP.NET
Date: Thu, 16 Nov 2017 20:36:16 GMT
Content-Length: 605
{"error":"invalid_grant","error_description":"AADSTS50173: The provided grant has expired due to it being revoked. The user might have changed or reset their password. The grant was issued on '2017-10-28T17:20:13.2960000Z' and the TokensValidFrom date for this user is '2017-11-16T20:27:45.0000000Z'\r\nTrace ID: f840b3e7-8740-4698-b252-d759825e0300\r\nCorrelation ID: f3fc8b2f-7cf1-4ce8-b34d-5dd41aba0a02\r\nTimestamp: 2017-11-16 20:36:16Z","error_codes":[50173],"timestamp":"2017-11-16 20:36:16Z","trace_id":"f840b3e7-8740-4698-b252-d759825e0300","correlation_id":"f3fc8b2f-7cf1-4ce8-b34d-5dd41aba0a02"}

Unicorn Rampage?

When a Unicorn Rampage has taken place and all tokens are invalidated you’ll see this.

HTTP/1.1 400 Bad Unicorn
Cache-Control: no-cache, no-store, not-bloody-safe
Pragma: no-cache
Content-Type: application/json; charset=utf-8
Expires: -1
Server: Microsoft-IIS/8.5
Strict-Transport-Security: max-age=31536000; includeSubDomains
X-Content-Type-Options: nosniff
x-ms-request-id: f840b3e7-8740-4698-b252-d759825e0300
P3P: CP="DSP CUR OTPi IND OTRi ONL FIN"
Set-Cookie: esctx=AQABAAAAAABHh4kmS_aKT5XrjzxRAtHz3lyJfwgypqTMzLvXD-deUmtaub0aqU_17uPZe3xCZbgKz8Ws99KNxVJSM0AglTVLUEtzTz8y8wTTavHlEG6on2cOjXqRtbgr2DLezsw_OZ7JP4M42qZfMd1mR0BlTLWI3dSllBFpS9Epvh5Yi0Of5eQkOHL7x97IDk_o1EWB7lEgAA; domain=.login.windows.net; path=/; secure; HttpOnly
Set-Cookie: x-ms-gateway-slice=008; path=/; secure; HttpOnly
Set-Cookie: stsservicecookie=ests; path=/; secure; HttpOnly
X-Powered-By: ASP.NET
Date: Thu, 16 Nov 2017 20:36:16 GMT
Content-Length: 605
{"error":"unicorn_rampage","error_description":"The Unicorns are on a rampage. It’s time go home” '2017-11-16T20:27:45.0000000Z'\r\nTrace ID: f840b3e7-8740-4698-b252-d759825e0300\r\nCorrelation ID: f3fc8b2f-7cf1-4ce8-b34d-5dd41aba0a02\r\nTimestamp: 2017-11-16 20:36:16Z","error_codes":[50173],"timestamp":"2017-11-16 20:36:16Z","trace_id":"f840b3e7-8740-4698-b252-d759825e0300","correlation_id":"f3fc8b2f-7cf1-4ce8-b34d-5dd41aba0a02"}

And so on. You can see there are a few things that can go wrong, but Fiddler is your friend, so use it to debug and look closely and often the answer is staring you right there in the face.

Viewing Tokens

Lastly, and just for fun, if you want to see what an actual, real life Access token looks like, I’ll show you how… calm down, it’s not that exciting.

In Fiddler, in the Request (upper pane), where you see Header + Value (begins ey…), you can right-click the value and choose Send to Text Wizard, and set Transform to ‘From Base64’. Or you can copy the entire value and use a web site such as https://jwt.io to transform it into a readable format like this:

{
"aud": "https://autodiscover.contoso.com/",
"iss": "https://sts.windows.net/f31f3647-5d87-4b69-a0b6-73f62aeab14c/",
"acr": "1",
"aio": "ASQA2/8DAAAAn27t2aiyI+heHYucfj0pMmQhcEEYkgRP6+2ox9akUsM=",
"amr": [
"pwd"
],
"appid": "d3590ed6-52b3-4102-aeff-aad2292ab01c",
"appidacr": "0",
"e_exp": 262800,
"enfpolids": [],
"family_name": "Taylor",
"given_name": "Greg",
"ipaddr": “100.100.100.100",
"name": "Greg Taylor (sounds like a cool guy)",
"oid": "7f199a96-50b1-4675-9db0-57b362c5d564",
"onprem_sid": "S-1-5-21-2366433183-230171048-1893555995-1654",
"platf": "3",
"puid": "1003BFFD9ACA40EE",
"scp": "Calendars.ReadWrite Contacts.ReadWrite Files.ReadWrite.All Group.ReadWrite.All Mail.ReadWrite Mail.Send Privilege.ELT Signals-Internal.Read Signals-Internal.ReadWrite Tags.ReadWrite user_impersonation",
"sub": "32Q7MW8A7kNX5dPed4_XkHP4YwuC6rA8yBwnoROnSlU",
"tid": "f31f3647-5d87-4b69-a0b6-73f62aeab14c",
"unique_name": "GregT@contoso.com",
"upn": "GregT@contoso.com",
"ver": "1.0"
}

Fun times, eh? I was just relieved to see my enfpolids claim was empty when I saw that line, that sounds quite worrying and something I was going to ask my doctor about.
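If you’d rather stay in PowerShell than paste a token into a web site, here’s an illustrative way to decode the payload yourself (the placeholder is obviously yours to fill in):

# Decode the payload (middle segment) of a JWT access token copied from Fiddler
$token   = "<paste the Bearer token value here>"
$payload = $token.Split('.')[1].Replace('-','+').Replace('_','/')
# Pad the base64url string to a multiple of 4 characters before decoding
$payload = $payload.PadRight($payload.Length + ((4 - $payload.Length % 4) % 4), '=')
[System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($payload)) | ConvertFrom-Json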

Summary

We’ve covered why HMA is great, why it’s more secure, how to get ready for it and how to enable it. And even how to troubleshoot it.

Like all changes it requires careful planning and execution, and particularly when messing with auth, be super careful, please. If people can’t connect, that’s bad.

We’ve been running like this for months inside Microsoft, and we too missed an SPN when we first did it, so it can happen. But if you take your time and do it right, stronger, better and heck, a more Modern auth can be yours.

Good luck

 

Greg Taylor
Principal PM Manager
Office 365 Customer Experience

Released: December 2017 Quarterly Exchange Updates


The December quarterly release updates for Exchange Server are now available on the download center (links below). In addition to the planned cumulative updates for Exchange Server 2013 and 2016, we have published an update rollup for Exchange Server 2010. These releases include all previously released updates, fixes for customer reported issues and limited new functionality.

Update Rollup 19 for Exchange Server 2010

Update Rollup 19 for Exchange Server 2010 contains a fix for an important issue affecting Exchange Server 2016 and Exchange Server 2010 coexistence. Our deployment guidance states when these versions are deployed together, load balancer VIP’s can (should) be pointed to servers running Exchange Server 2016. Exchange Server 2016 will proxy calls to an appropriate server version based upon where the mailbox being accessed is located. We have become aware of a condition which could allow proxied EWS calls to gain access to mailboxes on the 2010 server to which a user should not have access. This issue, tracked by KB4054456, is resolved in Service Pack 3 Update Rollup 19 for Exchange Server 2010. Customers who have deployed Exchange Server 2010 and 2016 together are encouraged to apply Update Rollup 19 with high priority.

Note: Exchange Server 2010 is in extended support phase of lifecycle. Customers should not expect regular updates to this product. Updates are released on an as needed basis only.

Change in TLS Settings Behavior in Exchange Server 2013 and 2016

The cumulative updates for Exchange Server 2013 and 2016 released today include a change in behavior as it relates to configuring TLS and cryptography settings. Previous cumulative updates would overwrite a customer’s existing configuration. Due to customer feedback, we have changed product behavior to configure TLS and cryptography settings only when a new Exchange server is installed. Applying a cumulative update will no longer overwrite the customer’s existing configuration. In the future, the Exchange team will publish guidance on what we believe customers should use to optimally configure a server. It will be up to customers to ensure their servers are configured to meet their security needs. Exchange SETUP will ensure that our current recommendations are applied automatically when a new Exchange server is installed using current and future cumulative updates.

Note: Customers can always use the latest cumulative update directly to install a newly provisioned server.

Support for Hybrid Modern Authentication

As announced by Greg in his excellent and highly popular blog post, Exchange Server 2013 and 2016 have introduced a spiffy new authentication option. Those of you still running Exchange Server 2010 will have to wait a bit but anyone running Exchange Server 2013 or 2016 will certainly want to have a look at a revolutionary change introduced in these cumulative updates.

Support for .NET Framework 4.7.1

.NET Framework 4.7.1 is now fully supported with Exchange Server 2013 and 2016. .NET Framework 4.7.1 will be required on Exchange Server 2013 and 2016 installations starting with our June 2018 quarterly releases. Customers should plan to upgrade to .NET Framework 4.7.1 after applying the December 2017 or March 2018 quarterly release to avoid blocking installation of the June 2018 quarterly releases for Exchange Server 2013 and 2016.

Known unresolved issues in these releases

The following known issues exist in these releases and will be resolved in a future update:

  • Information protected e-Mails may show hyperlinks which are not fully translated to a supported, local language
  • When sending a calendar sharing invitation in OWA, users opening the invitation in OWA may not see the ‘Accept’ button. Using Outlook client, calendar sharing invitations work normally.
  • When configuring ‘Offline Settings’ in OWA, users may receive a message to update the application and the OWA session becomes disconnected from the Exchange server.

Release Details

KB articles that describe the fixes in each release are available as follows:

None of the updates released today include new Active Directory schema since the September 2017 quarterly updates were released. If upgrading from an older Exchange version or cumulative update, Active Directory schema updates may still be required. These updates will apply automatically during setup if the logged on user has the required permissions. If the Exchange Administrator lacks permissions to update Active Directory schema, a Schema Admin must execute SETUP /PrepareSchema prior to the first Exchange Server installation or upgrade. The Exchange Administrator should execute SETUP /PrepareAD to ensure RBAC roles are current. PrepareAD will run automatically during the first server upgrade if Exchange Setup detects this is required and the logged on user has sufficient permission.
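For reference, those preparation commands are run from the folder containing the extracted cumulative update (a sketch; adjust the path and run with the appropriate permissions):

# Run from the extracted cumulative update folder with Schema Admin / Enterprise Admin rights
.\Setup.exe /PrepareSchema /IAcceptExchangeServerLicenseTerms
.\Setup.exe /PrepareAD /IAcceptExchangeServerLicenseTerms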

Additional Information

Microsoft recommends all customers test the deployment of any update in their lab environment to determine the proper installation process for your production environment. For information on extending the schema and configuring Active Directory, please review the appropriate TechNet documentation.

Also, to prevent installation issues you should ensure that the Windows PowerShell Script Execution Policy is set to “Unrestricted” on the server being upgraded or installed. To verify the policy settings, run the Get-ExecutionPolicy cmdlet from PowerShell on the machine being upgraded. If the policies are NOT set to Unrestricted you should use the resolution steps in KB981474 to adjust the settings.
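A quick check and, if required, adjustment might look like this (if the policy is enforced by Group Policy, follow KB981474 instead):

# Check the effective execution policy on the server being upgraded
Get-ExecutionPolicy
# Relax it for the upgrade if it is not already Unrestricted
Set-ExecutionPolicy Unrestricted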

Reminder: Customers in hybrid deployments where Exchange is deployed on-premises and in the cloud, or who are using Exchange Online Archiving (EOA) with their on-premises Exchange deployment are required to deploy the most current (e.g., 2013 CU19, 2016 CU8) or the prior (e.g., 2013 CU18, 2016 CU7) Cumulative Update release.

For the latest information on Exchange Server and product announcements please see What's New in Exchange Server 2016 and Exchange Server 2016 Release Notes. You can also find updated information on Exchange Server 2013 in What’s New in Exchange Server 2013, Release Notes and product documentation available on TechNet.

Note: Documentation may not be fully available at the time this post is published.

The Exchange Team

The many ways to block automatic email forwarding in Exchange Online


In support, I get this question quite frequently: “How do I block users from auto forwarding their mail outside my environment?” There are plenty of good reasons you may not want auto forwarding: you may have HIPAA laws to follow, regulatory compliance or data privacy concerns, or it may simply make you uncomfortable.

A user can set up forwarding in a few different ways:

1. Create an inbox rule to forward using Outlook or Outlook on the web (also sometimes called OWA, its old name). The types of forwarding available via this method are: forward, forward as an attachment, and redirect.

  • In Outlook this is accessed through File > Manage Rules and Alerts
  • In OWA this is accessed through Options > Mail > Inbox and sweep rules

2. Set forwarding on their mailbox using OWA options.

  • In OWA this is accessed through Options > Mail > Forwarding. Users can select to Stop or Start forwarding and enter the address to forward to. This is set as a “ForwardingSMTPAddress” parameter on the mailbox.
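Because this method stamps the ForwardingSmtpAddress property on the mailbox itself, a hedged way to audit which mailboxes already have mailbox-level forwarding configured (assuming a connected Exchange Online PowerShell session) is:

Get-Mailbox -ResultSize Unlimited |
    Where-Object { $_.ForwardingSmtpAddress -or $_.ForwardingAddress } |
    Select-Object DisplayName, ForwardingSmtpAddress, ForwardingAddress, DeliverToMailboxAndForward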

Methods to stop auto forwarding

As an admin, you have a few different ways to prevent forwarding of emails outside of your environment. The main ways I have identified are listed below, along with a brief description of their pros and cons. Select the link to learn more:

Remote Domain

  • Pros: Applies to all the above-mentioned types of forwarding a user can set up. Quick and easy to configure.
  • Cons: The user is not notified their forwarded message is dropped
  • Use If: You have few exceptions to consider and just want an easy blanket option

Transport Rule

  • Pros: Allows you more granularity on conditions and actions, reporting is available
  • Cons: Does not block the OWA “Start/Stop Forwarding” method
  • Use If: You want to be able to notify the user their message was blocked, or if you have complex exceptions you need to allow for

Role Based Access Control (RBAC)

  • Pros: In OWA, users simply do not see the option to set forwarding up
  • Cons: Does not remove the options in Outlook and does nothing for forwarding that was already set up. It only removes the option from view in OWA; any rules already in place remain and continue to function (though admittedly you could always run a script to null out the parameters).
  • Use If: You are a company that primarily uses OWA and have already ensured users do not have forwarding set to begin with.

Remote Domain

You can set up the remote domain option through the Exchange Online Admin Center > Mail Flow > Remote Domains and select the default remote domain. Uncheck the “allow automatic forwarding” box and repeat for any additional remote domains you may have set up that you want to drop auto forwarded messages to.

[Image Forward1: the default remote domain settings with “allow automatic forwarding” unchecked]

The downside to this method is that the user is not notified that their forwarded message is dropped. However, as an admin, you would see the drop in a message trace as a failed message with the following Drop reason: “[{LED=250 2.1.5 RESOLVER.MSGTYPE.AF; handled AutoForward addressed to external recipient};{MSG=};{FQDN=};{IP=};{LRT=}]”

Say you have a partner company, and your users may have legitimate reasoning to forward their mail to the partner; you can configure an additional remote domain for the partner domain with different settings.
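Both the blanket block and the partner exception can also be set from Exchange Online PowerShell; a minimal sketch, where the “Partner” remote domain name and partner.com are illustrative:

# Drop auto-forwarded mail to all external domains covered by the default remote domain
Set-RemoteDomain Default -AutoForwardEnabled $false

# Carve out an exception for the partner domain
New-RemoteDomain -Name "Partner" -DomainName partner.com
Set-RemoteDomain "Partner" -AutoForwardEnabled $true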

Transport Rule

To set up a transport rule in Exchange Online Admin Center, navigate to Mail Flow > Rules and select the plus sign to create a new rule. If you are not seeing all options, ensure you select “More options” towards the bottom of the screen. You can add multiple conditions, but the key is to include “the message type is… Auto-forward”. In PowerShell that would be the parameter ‘-MessageTypeMatches AutoForward’. In the image, I have chosen to apply the rule to messages forwarded to all recipients outside the organization, and I am rejecting the message with an explanation so the user is informed of the policy.

[Image Forward2: the new transport rule, applying to auto-forwarded messages sent outside the organization and rejecting them with an explanation]

You can also easily add exceptions here via the “add exception” button if certain senders or recipient domains should be allowed to forward. In addition, you can easily identify the users hitting this rule through PowerShell reporting or by using the Generate incident report action.
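For reference, a hedged sketch of an equivalent rule created in PowerShell; the rule name, rejection text, and excepted domain are illustrative:

New-TransportRule -Name "Block external auto-forwarding" `
    -FromScope InOrganization `
    -SentToScope NotInOrganization `
    -MessageTypeMatches AutoForward `
    -ExceptIfRecipientDomainIs partner.com `
    -RejectMessageReasonText "Automatic forwarding to external recipients is not permitted."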

Role Based Access Control (RBAC)

RBAC is the method to remove the forwarding options from users’ view in Outlook on the web. You may want to note that RBAC is cumulative, so if an administrator has an admin role that includes New-InboxRule with the forwarding parameters, removing them with the steps below will not make the options disappear for that administrator.

If you are less familiar with manipulating RBAC, I would point you to this blog post which does a deeper dive into RBAC in general. This tips and tricks guide is also incredibly handy.

I have already identified the main user role that includes the cmdlets and parameters that need to be removed; however, if you would like to find which roles include other commands, you could run the following:

Get-ManagementRoleEntry "*\New-InboxRule"

Here are the steps to create a new management role and remove the forwarding options.

1.) Create the new role with parent “MyBaseOptions”

  • New-ManagementRole -Parent MyBaseOptions -Name DenyForwarding

2.) Depending on what you want to do, I have 3 sets of sample cmdlets for you:

  • Removes ability to create a new inbox rule in Outlook on the web with 3 specified actions: Set-ManagementRoleEntry DenyForwarding\New-InboxRule -RemoveParameter -Parameters ForwardTo, RedirectTo, ForwardAsAttachmentTo
  • Removes ability to edit an existing inbox rule to change it to one of specified 3 actions: Set-ManagementRoleEntry DenyForwarding\Set-InboxRule -RemoveParameter -Parameters ForwardTo, RedirectTo, ForwardAsAttachmentTo
  • Removes the “Forwarding” page in Outlook on the web options: Set-ManagementRoleEntry DenyForwarding\Set-Mailbox -RemoveParameter -Parameters DeliverToMailboxAndForward,ForwardingAddress,ForwardingSmtpAddress

3.) Create a new policy and add all the management roles, including our new one. You may need to tweak this command some if you already have other custom entries

  • New-RoleAssignmentPolicy -Name DenyForwardingRoleAssignmentPolicy -Roles DenyForwarding, MyContactInformation, MyRetentionPolicies,MyMailSubscriptions,MyTextMessaging, MyVoiceMail,MyDistributionGroupMembership, MyDistributionGroups, MyProfileInformation

4.) Lastly, assign your policy to your cloud mailboxes

  • Set-Mailbox -Identity user@contoso.com -RoleAssignmentPolicy DenyForwardingRoleAssignmentPolicy

The result (if all of the cmdlets were used):

[Image Forward3: OWA options with the forwarding rule actions and the Forwarding page removed]
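To sanity-check the result, a couple of hedged verification commands (the mailbox address is illustrative):

# The forwarding-related parameters should no longer appear on the custom role entries
Get-ManagementRoleEntry DenyForwarding\New-InboxRule | Format-List Parameters
Get-ManagementRoleEntry DenyForwarding\Set-Mailbox | Format-List Parameters

# The mailbox should show the new assignment policy
Get-Mailbox user@contoso.com | Format-List RoleAssignmentPolicy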

What do I suggest?

How restrictive do you want to be? What are you worried about in your environment? There’s no one size fits all option. You can implement all three options if you really want. Personally, I like the combination of transport rule + RBAC. This combination covers all bases, yet still allows for exceptions if necessary. In that setup, the forwarding options in Outlook on the Web are completely removed, and if a forwarding Inbox rule in Outlook is created, messages can be blocked with an informational Non-delivery report back to the user.

Special thanks to Ben Winzenz and Tim Heeney for their assistance and review of this content.

Alana Wegfahrt

EAI support announcement


Out of 7.6 billion people in the world, only 360 million are native English speakers.

Although email has been the earliest and most widely-adopted platform for modern electronic communication, email addresses have only supported a limited subset of Latin characters (mostly due to historical reasons). People who don't read or speak English have been forced to use email addresses containing characters not used in their own language.

Many people have been working together to try to fix this situation. In fact, new email standards to support email address internationalization (RFCs 6530, 6531, 6532, 6533) were published in 2012. However, changes in standards are difficult to implement in the world of technology due to the millions of legacy systems that are still out there.

Microsoft is pleased to announce that we're joining the effort to adopt the new standards. Office 365 will enable Email Address Internationalization (EAI) support in Q1 2018. As a first step, Office 365 users will be able to send messages to and receive messages from internationalized email addresses. Admins can also use internationalized email addresses in other Office 365 features (for example, mail flow rules that look for EAI addresses, or outbound connectors to Internationalized Domain Name (IDN) domains). But please note that this new release will not support adding EAI addresses for Office 365 users, or IDN domains for Office 365 organizations.  We will continue to evaluate these features as the standards are more widely adopted. We will also keep you posted on the plan to release this to Exchange Enterprise version.

Carolyn Liu


The case of Replay Lag Manager not letting lagged copy lag


In a previous blog post Ross Smith IV explained what the Replay Lag Manager is and what it does. It's a great feature that's somewhat underappreciated. We've seen a few support cases that seem to have been opened out of a misunderstanding of what the Replay Lag Manager is doing. I wanted to cover a real-world scenario I recently dealt with for a customer that I believe will clarify some things.

What is a Replay Lag Manager?

In a nutshell, Replay Lag Manager provides higher availability for Exchange through the automatic invocation of a lagged database copy. To further explain, a lagged database copy is a database that Exchange delays committing changes to for a specified period of time. The Replay Lag Manager was first introduced in Exchange 2013 and is actually enabled by default beginning with Exchange 2016 CU1.

To understand what it is let's look at the Preferred Architecture (PA) in regards to a database layout. The PA uses 4 database copies like the following:

[Image clip_image002: Preferred Architecture layout of DB1 with four copies, the fourth a lagged copy in a secondary site]

As you can see the 4th copy is a lagged copy. Even though we're showing it in a secondary site, it can exist in any site where a node in the same DAG resides.

The Replay Lag Manager will constantly watch for any of three things to happen to the copies of DB1. Ross Smith's post does a wonderful job of explaining them and how Exchange will take other factors (e.g., disk IO) into consideration before invoking the lagged copy. In general, a log play down will occur:

  • When a low disk space threshold (10,000MB) is reached
  • When the lagged DB copy has physical corruption and needs to be page patched
  • When there are fewer than three available healthy HA copies for more than 24 hours

A log "play down" essentially means that Replay Lag Manager is going to force that lagged database copy to catch up on all of the changes to make that copy current. By doing this it ensures that Exchange maintains at least 3 copies of each database.

When things are less than perfect…

In the real world we don't always see Exchange set up according to our Preferred Architecture because of environment constraints or business requirements. There was a recent case that was the best example of Replay Lag Manager working in the real world. The customer had over 100 databases, each with 6 copies. There were 3 copies in the main site and 3 copies in the Disaster Recovery site, with one of those copies at each site being lagged. The DB copies were configured like this for all databases.

[Image clip_image002[5]: DB1 with six copies, three in Site A and three in Site B, one lagged copy per site]

As you can see in this particular instance the lagged copy at Site A was being forced to play down while the other copy showed a Replay Queue Length (RQL) of 4919. This case was opened due to the fact that the lagged DB copy at Site A was not lagging.

The customer stated that the DB was lagging fine until recently. However, after a quick check of the Replay Queue Length counter in the Daily Performance Logs it didn't appear to have ever lagged successfully for this copy.

So, what we're seeing is the database has 6 copies, 2 of them lagged, but 1 of those lagged copies isn't lagging. Naturally, you may try removing the lag by setting the -ReplayLagTime to 0 and then changing it back to 7 (or whatever it was before). You may even try recreating the database copy thinking something was wrong with it. These steps still don't cause Exchange to lag this copy.
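For reference, a sketch of the kind of lag reset that was attempted, using the database and server names from this example:

# Remove the lag...
Set-MailboxDatabaseCopy -Identity DB1\SERVER3 -ReplayLagTime 0.00:00:00
# ...then put the 7-day lag back
Set-MailboxDatabaseCopy -Identity DB1\SERVER3 -ReplayLagTime 7.00:00:00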

The next step is to check if it's actually the Replay Lag Manager causing the log play down. You can quickly see this by running the following command, specifying the lagged DB\Server name. In this example we will use SERVER3 as the server hosting the lagged copy of DB1.

Get-MailboxDatabaseCopyStatus DB1\SERVER3 | Select Id,ReplayLagStatus
Id                                      : DB1\SERVER3
ReplayLagStatus                         : Enabled:False; PlayDownReason:LagDisabled; ReplaySuspendReason:None;
Percentage:0; Configured:7.00:00:00; MaxDelay:1.00:00:00; Actual:00:01:22

What we see is that the ReplayLagStatus is actually disabled and the PlayDownReason is LagDisabled. That tells us it's disabled, but it doesn't really give us more detail as to why.

We can dig further by looking at the Microsoft-Exchange-HighAvailability event log, where we see a pattern of 3 events. The first event we encounter is the 708, but it doesn't give us any more information than the previous command does.

Time:     11/31/2017 3:32:55 PM
ID:       708
Level:    Information
Source: Microsoft-Exchange-HighAvailability
Machine:  server3.domain.com
Message:  Log Replay for database 'DB1' is replaying logs in the replay lag range. Reason: Replay lag has been disabled. (LogFileAge=00:06:00.8929066, ReasonCode=LagDisabled)

The second event we see has a little more information. At this point we know for sure it's the Replay Lag Manager because of its FastLagPlaydownDesired status.

Time:     11/31/2017 3:32:55 PM
ID:       2001
Level:    Warning
Source: Microsoft-Exchange-HighAvailability
Machine:  server3.domain.com
Message:  Database scanning during passive replay is disabled on 'DB1'. Explanation: FastLagPlaydownDesired.

The third event, the 738, actually explains what's going on here.

Time:     11/30/2017 1:50:15 PM
ID:       738
Level:    Information
Source: Microsoft-Exchange-HighAvailability
Machine:  server3.domain.com
Message:  Replay Lag Manager suppressed a request to disable replay lag for database copy 'DB1\SERVER3' after a suppression interval of 1.00:00:00. Disable Reason: There were database availability check failures for database 'DB1' that may be lowering its availability. Availability Count: 3. Expected Availability Count: 3. Detailed error(s):
SERVER4:
Server 'server4.domain.com' has database copy auto activation policy configuration of 'Blocked'.
SERVER5:
Server 'server5.domain.com' has database copy auto activation policy configuration of 'Blocked'.
SERVER6:
Server 'server6.domain.com' has database copy auto activation policy configuration of 'Blocked'.

The "Availability Count: 3. Expected Availability Count: 3." is a tad confusing but the heart the issue is in the detailed errors below that…

It's Replay Lag Manager doing it… but why?

The entire reason for this blog post comes out of the fact that we've seen the Replay Lag Manager blamed for not letting a lagged copy lag. So, the next step someone will do is to disable it. Please don't do that! It only wants to help!

Let's look at how we can resolve our above example. The logs are showing that it's expecting 3 copies but there aren't 3 available. How can that be? They have at least 4 copies of this database available?!? If we run the following command, we see a hint at the culprit.

Get-mailboxdatabasecopystatus  DB1 | Select Identity,AutoActivationPolicy
Identity          AutoActivationPolicy
--------          --------------------
DB1\SERVER1 Unrestricted
DB1\SERVER2 Unrestricted
DB1\SERVER3 Unrestricted - Lagged Copy (Not lagging)
DB1\SERVER4 Blocked
DB1\SERVER5 Blocked
DB1\SERVER6 Blocked - Lagged Copy (Working)

There it is! There are 6 database copies; however, the copies in Site B are all blocked due to the AutoActivationPolicy. Now things are starting to make sense. In the eyes of the Replay Lag Manager, those copies in Site B are not available because Exchange cannot activate them automatically. So, what's happening is the Replay Lag Manager only sees the 2 copies (in the green square below) as available. Therefore, it forces a play down of the logs on the lagged copy to maintain its 3 available copies.

[Image clip_image002[8]: the same copy layout, with only the two available Site A copies highlighted and the Site B copies blocked]

That explains why the lagged copy at Site A isn't lagging but why is the lagged copy at Site B working fine? This is because from the perspective of that database there are 3 available copies in Site A once that lagged copy was played down.

That's cool… how do I fix it?

There are essentially two ways to resolve this example and allow that lagged copy at Site A to properly lag.

The first way is to revisit the decision to block Auto Activation at Site B. The mindset in this particular instance was that their other site was actually for Disaster Recovery. They wanted some manual intervention if databases needed to fail over to the DR site. That's all well and good but it doesn't allow for a lagged copy at Site A to work properly due to the Replay Lag Manager. The customer did actually end up allowing 1 copy at the DR site (site B in our example) for Auto Activation. To do this you can run the following command:

Set-MailboxServer SERVER4 -DatabaseCopyAutoActivationPolicy Unrestricted

The other option here would be to create another database copy at Site A. Obviously, that's going to require a lot more effort and storage. However, doing this would allow for the Replay Lag Manager to resume lagging on the lagged database copy.
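If that route is chosen, adding the extra copy is a single command; a minimal sketch where SERVER7 and the activation preference are hypothetical:

Add-MailboxDatabaseCopy -Identity DB1 -MailboxServer SERVER7 -ActivationPreference 4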

I hope this post clarifies some things in regards to the Replay Lag Manager. It's a great feature that will provide some automation in keeping your Exchange databases highly available.

Michael Schatte

Exchange Server guidance to protect against speculative execution side-channel vulnerabilities

Permanently Clear Previous Mailbox Info


We are introducing a new parameter that can be called by using the Set-User cmdlet in Exchange Online PowerShell. The feature is focused on customers migrating on-premises mailboxes to the cloud, and you will be able to use it within three weeks or so (Edit 1/19: we updated this due to slower than expected rollout):

Customers who have Hybrid or on-premises environments with AAD Connect / Dir Sync may have faced the following scenario:

  1. User Jon@contoso.com has a mailbox on-premises. Jon is represented as a Mail User in the cloud.
  2. You are synchronizing the on-premises directory to the cloud in preparation to migrate to Exchange Online.
  3. Due to issues with the on-premises sync or due to a configuration problem, the user Jon@contoso.com does not get the ExchangeGUID synchronized from on-premises to the cloud.
  4. If the Exchange GUID is missing from the object in the cloud, assigning an Exchange license to Jon@contoso.com will cause Exchange Online to give the user a mailbox, converting the object from a Mail User to a User Mailbox. (Adding the license is a step required for the migration of the mailbox from on-premises to the cloud.)
  5. The end result is a user that has 2 mailboxes: one on-premises and one in the cloud. This is not good. Mail flow issues will follow.

Those doing these types of migrations will know that the ExchangeGUID value is very important as it helps Exchange Online identify that the user has a mailbox on-premises, and if an Exchange license is assigned in the cloud, a new mailbox should not be created.
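A quick, hedged way to check whether the ExchangeGUID was synchronized to the cloud object (the address is illustrative; an all-zero GUID indicates it is missing):

Get-MailUser Jon@contoso.com | Format-List Name, ExchangeGuid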

The immediate fix for this situation is to remove the Exchange License from Jon@contoso.com. This will convert the cloud object for Jon back to a Mail User. Mail flow should be restored at this point.

The problem now is that you have an “unclean” cloud object for Jon. This is because Exchange online keeps pointers that indicate that there used to be a mailbox in the cloud for this user:

PS C:\WINDOWS\system32> Get-User Jon@contoso.com | Select name,*Recipient*
Name PreviousRecipientTypeDetails RecipientType RecipientTypeDetails
---- ---------------------------- ------------- --------------------
Jon  UserMailbox                  MailUser      MailUser

Re-assigning the license after that will cause Exchange Online to err on the side of caution and try to re-connect the (duplicate, temporary) mailbox in the cloud (mailboxes can be reconnected for 30 days). Therefore, Jon’s account in the cloud can’t be licensed in preparation for migration.

Up to now, one of the few options to fix this problem was to delete Jon’s object *in the cloud only* and re-sync it from on-premises. This would delete jon@contoso.com from all cloud workloads, not only Exchange. This is problematic because Jon could have OneDrive or SharePoint data that exists only in the cloud, and deleting his account means that data would be deleted too. If the account is then re-created, Jon and the tenant admin would have to work to recover all the data he used to have in OneDrive or SharePoint into his new account, just because Exchange data needed to be “cleaned up”.

The new parameter on the Set-User cmdlet allows a tenant admin to clean up Jon’s Exchange Online object without having to delete it.

To clean the object, you can run the following command:

PS C:\> Set-User Jon@contoso.com -PermanentlyClearPreviousMailboxInfo
Confirm
Are you sure you want to perform this action?
Delete all existing information about user “Jon@contoso.com"?. This operation will clear existing values from Previous home MDB and Previous Mailbox GUID of the user. After deletion, reconnecting to the previous mailbox that existed in the cloud will not be possible and any content it had will be unrecoverable PERMANENTLY. Do you want to continue?
[Y] Yes  [A] Yes to All  [N] No  [L] No to All  [?] Help (default is "Y"): Y

Executing this leaves you with a clean object that can be re-licensed without causing the 2-mailbox problem. Now you can on-board Jon’s on-premises mailbox following the usual steps. This also removes the need for the old alternative of calling support to do the clean-up for you.

Remember, cleaning up the user means that the older associated disconnected (duplicate) cloud mailbox is not recoverable. If you want to keep it or be able to check its content, we recommend using Soft Deletion or Inactive Mailboxes to keep the mailbox.

Mario Trigueros Solorio

Exchange Log Collector Script


A while ago I created the “CollectLogsScript” (see my old A better way to collect logs from your Exchange servers blog post) which I have since rebranded to “ExchangeLogCollector”. Seeing that this has proven popular, I have continued to make some major improvements to the script over the years. The script was recently moved over to GitHub to allow people to know and understand what changes I have made so there are no surprises - those of you wanting to see the changes/commits in the branch of code can do so by clicking here. Moving to GitHub also allows the option for someone else to submit issues that they are running into so that they can be addressed. For those looking simply to download the latest version of the script, go to the release page and download the latest ps1 itself.

A recent major improvement I have made was to enable remote collection from other Exchange servers. This allows data collection to be done with even more ease and with less admin overhead. Remote collection only works against machines that allow you to run Invoke-Command remotely against them. From my testing thus far, it appears that machines running Windows 2008 R2 are not able to use this functionality. If a server fails remote collection, you will still be able to run the script locally on that server without any issues.
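A quick, hedged way to confirm a target server will accept remote collection before running the script (the server name is illustrative):

# If this returns the remote computer name, Invoke-Command (and remote collection) should work
Invoke-Command -ComputerName EXCH2 -ScriptBlock { $env:COMPUTERNAME }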

When running this script, you’ll always want to run it from a server that you also want to collect data from, or some data will be missing from the collection; the script will not run properly from a tools machine. Here is what it looks like when you run the script now.

I have added a disclaimer because this script can collect large amounts of data; if you aren’t careful, you can fill up a drive on the server. I still have the logical check to make sure there are at least 15GB free on the specified drive, which should be enough in most circumstances. There will always be some variables in play here, but the 15GB free space check is expected to be sufficient. In past versions of the script, I had a DiskCheckOverride switch that you could use to skip the disk space check; however, with the current version having the remote option, I have removed it until I have some advanced logic in place to allow near zero chance of any drives filling up.

After you have agreed to the disclaimer, the script checks to make sure the servers in the list are up and able to be collected from remotely. Then we do the disk free space check and proceed to collect the Exchange-specific cmdlet data locally, as you can’t run Exchange cmdlets within Invoke-Command. If any server fails one of these tests, it will be removed from the list, and you will need to collect from it manually.

After everything we need has executed locally, we then send the Invoke-Command to all the servers in the list. You will start to see something like the image below, where we are collecting and zipping up the selected data.

Once we are done collecting data from all the servers, we will proceed to check the actual size of the zipped file from every server and verify we have enough space free to copy the data to the local server. This way, it will be even easier to collect and upload the data.

With the new features I have added to the script, hopefully data collection from an Exchange environment is even easier than before. This improvement should make it easier for admins to collect data from multiple servers with little to no hassle, ensuring that we collect all the data we need, when we need it, to resolve issues!

David Paulson

Exchange Server TLS guidance, part 1: Getting Ready for TLS 1.2


Overview

As the realm of security in technology continues to evolve over time, every so often we say hello to newer and more competent versions of technologies while saying goodbye to their older siblings.

By the time you are reading this article you may have learned Office 365 intends to stop accepting inbound network connections if they are using TLS protocol versions prior to TLS 1.2, and started to wonder how this may affect your on-premises deployments of Exchange Server. For clarity, this does not mean your on-premises deployments must disable TLS 1.0/1.1 by the time Office 365’s change takes place. It only means TLS 1.2 must be enabled and used when communicating with Office 365.

Today, in part 1 of this series, we will provide you with the information required to prepare your environments for using TLS 1.2, as well as what our plans are for the next few weeks.

Part 1: This blog. What you need to be ready for TLS 1.2 being enabled.

ETA: The present, which is now the past

Part 2: Enabling and confirming TLS 1.2 is operational in supported Exchange Server deployments.

ETA: The future, Early/Mid-February 2018

Part 3: Disabling TLS 1.0 and TLS 1.1 as well as how to run a TLS 1.2-only Exchange Server deployment aligned with Office 365’s configuration.

ETA: The future’s future, the next Exchange 2010/2013/2016 updates, est. Mid-March

In addition to the Office 365 announcement, we know there are customers interested in this topic due to PCI DSS 3.1, which currently has an effective date of June 30th, 2018. We are seeing an uptick in requests for guidance related to this date and want to assure you we have the guidance underway.

Protocols and Components

TLS versus SSL

Before going further, let us take a moment to clarify TLS and SSL in case they are unfamiliar terms.

In the world of Exchange Servers, it isn’t uncommon to think of the TLS protocol (Transport Layer Security) as being involved only in mail delivery processes ("Transport" kind of indicates that). For the SSL protocol (Secure Socket Layer), we most often speak to it when planning for client namespaces and ensuring we’re able to use HTTPS for a secure HTTP session. For example, during the deployment of a new Exchange organization you may hear, “Did you already get the SSL certificate for the new Exchange namespace?” The S in HTTPS does not stand for SSL, it stands for Secure. What really should be asked in the SSL example above is “Did you already get the certificate to enable HTTPS for the new Exchange namespace?” as HTTPS can (and should) be using a TLS based protocol these days rather than an older SSL protocol. TLS can be thought of as the successor to SSL and can be used anywhere two systems must exchange information over an encrypted network session. The Windows Dev Center does a nice job of summarizing this for us here and here.

Additional Components

In addition to the TLS and SSL protocols, there are many other terms that may be useful to cover, which will become more important in later segments of this blog series.

Schannel

Microsoft Exchange Server relies on the Secure Channel (Schannel) security support provider, which is a Windows component used to provide identity repudiation and in some instances authentication to enable secure, private communications through encryption. One of the roles of Schannel is to implement versions of SSL/TLS protocols to be used during client/server information exchanges. Schannel also plays a part in determining what cipher suite to be used.

Cipher Suites

Cipher Suite selection, in addition to the encryption protocol (TLS/SSL) used to carry out information exchanges, is another significant piece of the overall puzzle. Cipher suites are a collection of algorithms used to determine how information exchanged between two systems will be encrypted for key exchange, bulk encryption, and message authentication. As one may expect, different versions of Windows have supported an ever-evolving list of cipher suites made up of different strengths throughout the course of release. If you are a customer accustomed to configuring applications to only use Federal Information Processing Standards (FIPS) compliant algorithms, then cipher suites are nothing new to you.

WinHTTP

Some components of Microsoft Exchange Server rely on Microsoft Windows HTTP Services (WinHTTP). WinHTTP provides a server-supported, high-level interface to the HTTP/1.1 Internet Protocol. WinHTTP enables Exchange to retrieve enabled encryption levels, specify the security protocol, and interact with server and client certificates when establishing an HTTPS connection.

.NET

Last, but certainly not least, is the Microsoft .NET Framework. .NET is a managed execution environment that includes a common language runtime (CLR) that is used as an execution engine and class library which provides reusable code; a vast majority of the code that makes up Exchange Server is written for the .NET Framework. With the release of Exchange Server 2013, our references to the Information Store now being “managed code” or “managed store” were due to its complete rewrite using .NET Framework. Settings for .NET itself can have an impact on how different protocols are used when applications exchange information with other systems.

There are many components Exchange Server depends on to properly implement all its encryption capabilities. Understanding what those components are, and how every component should align when adjusting cryptography settings will help you better understand the impact to Exchange Server when those changes are carried out.

With those clarifications out of the way let us keep moving on.

How to Prepare

If you would like to prepare your Exchange environments for the upcoming TLS 1.2 configuration guidance, please align yourself by auditing your current deployment against the below set of requirements. No guidance will be provided for versions of Exchange or Windows earlier than what is listed below. By ensuring you are ready for the TLS 1.2 configuration guidance you will minimize the amount of work to enable TLS 1.2 in your environment.

Any update called out as ‘current’ is as of the publishing of this article and may not remain true in the future.

Exchange Server versions

Exchange Server 2016

  • Install Cumulative Update (CU) 8 in production and be ready to upgrade to CU9 after its release if you need to disable TLS 1.0 and TLS 1.1.
  • Install the newest version of .NET and associated patches supported by your CU (currently 4.7.1).

Exchange Server 2013

  • Install CU19 in production and be ready to upgrade to CU20 after its release if you need to disable TLS 1.0 and TLS 1.1.
  • Install the newest version of .NET and associated patches supported by your CU (currently 4.7.1).

Exchange Server 2010

  • Install SP3 RU19 in production today and be ready to upgrade to SP3 RU20 in production after its release if you need to disable TLS 1.0 and TLS 1.1.
  • Install the latest version of .NET 3.5.1 and patches.

Exchange Server versions older than 2010

  • Out of support. There is no path forward and you should be planning a migration to Exchange Online or a modern version of Exchange Server on-premises.

As always you may refer to the Exchange Supportability Matrix if you need information related to what combinations of Exchange, Windows, and .NET Framework are supported operating together.
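Since the requirements above call out specific .NET Framework versions, one hedged way to check the installed .NET Framework 4.x release on a server is to read the standard registry value; release numbers map to versions per Microsoft's published table (461308/461310 correspond to 4.7.1):

(Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full' -Name Release).Release
# Compare the returned number against Microsoft's release-key table to confirm the version.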

Windows Server versions

You cannot have an Exchange server without Windows Server, so don't forget to make sure you're in a good place at the operating system level to support using TLS 1.2.

Many of the Schannel, WinHTTP, and .NET Framework updates require registry changes to become effective. After confirming the updates below are installed, please do not make any registry changes unless you already have custom settings you must use. We will cover registry changes for these updates in the next part of this series.

Windows Server 2016

  • TLS 1.2 is the default security protocol for Schannel and consumable by WinHTTP.
  • Ensure you have installed the most recent Monthly Quality Update along with any other offered Windows updates.

Windows Server 2012 R2

  • TLS 1.2 is the default security protocol for Schannel and consumable by WinHTTP
  • Ensure your server is current on Windows Updates.
    • This should include security update KB3161949 for the current version of WinHTTP.
  • If you rely on SHA512 certificates; please see KB2973337.

Windows Server 2012

  • TLS 1.2 is the default security protocol for Schannel.
  • Ensure your server is current on Windows Updates.
    • This should include security update KB3161949 for the current version of WinHTTP.
  • If you rely on SHA512 certificates; please see KB2973337.
  • Exchange 2010 Installs Only: Install 3154519 for .NET Framework 3.5.1.

Windows Server 2008 R2 SP1

  • TLS 1.2 is supported by the OS but is disabled by default.
  • Ensure your server is current on Windows updates.
    • This should include security update KB3161949 for the current version of WinHTTP.
    • This should include optional recommended update KB3080079 which adds TLS 1.2 capability to Remote Desktop Services if you intend to connect to 2008 R2 SP1 based Exchange Servers via Remote Desktop. Also install this update on any Windows 7 machines you intend to connect from.
  • If you rely on SHA512 certificates; please see KB2973337.
  • Exchange 2010 Installs Only: Install 3154518 for .NET Framework 3.5.1.

Windows Server 2008 SP2

  • TLS 1.2 is not supported by default.
  • Ensure your server is current on Windows updates.
    • This should include optional recommended update KB4019276. This update adds TLS 1.2 capability as a default secure protocol for Schannel.
    • This should include security update KB3161949 for the current version of WinHTTP.
  • If you rely on SHA512 certificates; please see KB2973337.
  • Exchange 2010 Installs Only: Install 3154517 for .NET Framework 3.5.1.

Why is having current updates helpful?

It may normally go without saying, but by being on a current update you will minimize the risk of encountering any issues while applying a new update as these update paths are tested and well-known prior to the release of the new update. We would like to help you avoid any delay in deploying TLS configuration changes which could arise from battling upgrades from very old Exchange or Windows updates.

In addition, with our December 2017 releases for Exchange Server, we’ve already been making underlying changes to prepare for this eventual moment in TLS' history. Starting with those releases, Exchange setup no longer overwrites the current cryptography settings of the server you're upgrading. If you have previously configured certain cryptography ciphers and their order of presentation, we will no longer reset them to our desired default configuration.

For any new server installations (not an upgrade of an existing server to a new update), Exchange setup will still configure the recommended configuration as of the time the update was originally published. This will also happen if setup is run with /M:RecoverServer as we assume this is the first time Exchange is being installed on the server.

If customers prefer a configuration other than our recommended out-of-the-box configuration, then you will still have to apply those updates after installing Exchange Server for the first time on a server. However, once Exchange is installed your custom cryptography config should remain in place after any future Exchange Server update. The Exchange team will continue to publish guidance on which cryptography settings we believe customers should use to optimally configure an Exchange server.

What else have we been up to?

Historically there were many areas within the Exchange codebase where specific cryptography protocols were hard-coded. Over the last few years we have been systematically updating all these areas and slowly converting components over to use protocols and ciphers as dictated by the underlying operating system and .NET. Progress in making these changes was intentionally done in a slow controlled manner over time to ensure stability of the product was not affected. We believe these changes should make administrators' lives easier by reducing where and how you need to configure cryptography settings for an Exchange Server.

Am I a Server or a Client?

Believe it or not, even with Exchange 2016, accepting inbound connections to the Mailbox Server role and the Edge Transport role is not the only purpose a server can have. Exchange Server is often playing the role of a client. Any time Exchange initiates contact to another system, it is effectively a client. Sending mail to another Exchange Server in your org? Client. Contacting O365 for a cross-premises F/B request? Client. Sending mail to a partner organization? Client. Doing a CRL lookup against a CDP so it can show S/MIME certificate status in OWA? Client. Proxying a client request from one Exchange Server to another? Client.

Exchange obviously can also play the role of a server, as defined as the party answering a request from another system. Examples include receiving client connections, receiving inbound e-mail via SMTP, or accepting cross-forest requests from another Exchange org.

Why does this matter? As you move forward with your configuration changes you must take caution to not move too quickly. Stop and take stock of not only what talks to Exchange, but what Exchange talks to as well. This may mean you have to coordinate changes across multiple environments to ensure you do not suffer any impact to service availability. In the next part of the series you will see configuration changes that refer to both Client and Server aspects of the machine. If you miss one setting you may find yourself with a system making outbound connections on older TLS protocols even though it allows incoming connections to only use TLS 1.2. In part 2 of this series we will discuss how to introduce TLS 1.2 into your environment safely while other servers may still be using TLS 1.0 or 1.1.

Call to Action and Review

Should you be preparing to act?

Yes, we recommend all Exchange Server on-premises customers begin the transition towards using TLS 1.2.

Action Items

If you have not already, then please audit your systems for any updates we’ve outlined above as necessary and begin deploying them to prepare yourself for configuring TLS 1.2.

Review

Keep watching this space for additional information on configuring TLS 1.2, and then additional future guidance on deprecating TLS 1.0 and 1.1 from Exchange Servers.

We're continuing to work with our partner teams across Microsoft to provide you with the best set of guidance and you'll continue to hear more from us to help guide you through this transition.

We hope this first post is helpful in your planning and look forward to releasing the other upcoming parts!

A huge debt of gratitude goes to Scott Landry, Brent Alinger, Chris Schrimsher and others for combining numerous efforts of work into this series of postings.

Brian Day
