Channel: Microsoft Dynamics AX Solution Architecture

Authenticate with Dynamics 365 for Finance and Operations web services in on-premises


This blog explains how to take the standard examples for Dynamics 365 for Finance and Operations integration from GitHub and authenticate to an on-premises instance of Finance and Operations. At the end you'll also find some troubleshooting tips in case it doesn't work the first time, which can be useful for any scenario where something is trying to authenticate to services.

Environment prerequisites

There are a few items required before you start:
- Visual Studio 2017 installed (the Enterprise edition was used here, but the edition does not appear to matter)
- The examples downloaded from GitHub: https://github.com/Microsoft/Dynamics-AX-Integration
- The ServicesSamples.sln solution opened
- Within Visual Studio, go to Tools > NuGet Package Manager > Manage NuGet Packages for Solution; it will recognise that some packages are missing – click Restore in the top right and they download automatically.

ADFS Setup

Now, on the ADFS server in my on-premises environment, I need to add a client application. From “AD FS Management” open Application Groups, then open the default application group for Dynamics 365 called “Microsoft Dynamics 365 for Operations On-Premises”; it will look something like this:
ADFS application group

Click “Add application…” at the bottom, add a new server application:
Add new application

Add the redirect URL – this should be the URL for the D365 application. Take a note of the client identifier, as you’ll need to use it in your client application later:
Add the redirect URL

Select the option to generate a shared secret – you must copy it now, as it will not be shown again – your client app needs this detail to connect to D365:
Generate shared secret

Summary

Completed

Next, back in the application group window, edit the “Microsoft Dynamics 365 for Operations On-premises – Web API” item:
Edit application group

On the “Client permissions” tab add a new record for the server application created in the previous step:
Add a new record for server application

Code

Within the ServicesSamples.sln solution, open the ClientConfiguration.cs source file and modify it similar to the example below (using the values from your ADFS configuration above):

public static ClientConfiguration OneBox = new ClientConfiguration()
{
    UriString = "https://ax.d365ffo.zone1.saonprem.com/namespaces/AXSF/", // the normal URL for logging into D365
    UserName = "not used",
    Password = "",

    // Note that the AOS config XML is on AOS machines in: C:\ProgramData\SF\AOS_10\Fabric\work\Applications\AXSFType_App84\AXSF.Package.1.0.xml

    ActiveDirectoryResource = "https://ax.d365ffo.zone1.saonprem.com/", // this is the value of AADValidAudience from the AOS config XML
    ActiveDirectoryTenant = "https://dax7sqlaoadfs1.saonprem.com/adfs", // this is the value of AADIssuerNameFormat from the AOS config XML (minus the placeholder {0}, suffixed with "/adfs" instead)
    ActiveDirectoryClientAppId = "6c371040-cf6b-4154-b9c4-75e613fb5104", // the client app ID from ADFS management - the application group configured above
    ActiveDirectoryClientAppSecret = "MO-tVemKqAjVLj1NdcCs3mfiWw2X3ZNyjuFe0UYg", // the secret from ADFS management - same place as the client app ID

    // Change the TLS version of the HTTP request from the client here
    // Ex: TLSVersion = "1.2"
    // Leave it empty if you want to use the default version
    TLSVersion = "",
};

AX Setup

You also need to register the application within the AX application, under System administration > Setup > Azure Active Directory applications, using the client ID you put into your client code:
AX Setup

Troubleshooting:

ADFS group creation fails

If ADFS group creation fails as shown below with the error "MSIS7613: Each identifier must be unique across all relying party trusts in AD FS configuration", it means that the URL entered for the Web API is already registered in another group – probably the default D365 group. To resolve this, see below.
ADFS group creation fails

Locate the standard Microsoft Dynamics 365 for Operations On-premises ADFS application group and open it.
ADFS configuration

Forbidden

The error below occurs if the setup within the AX application hasn’t been completed under System administration > Setup > Azure Active Directory applications. This error was reported back to the calling client application.
0:025> !pe
Exception object: 0000029e2b48e1d8
Exception type: System.ServiceModel.Web.WebFaultException`1[[System.ComponentModel.Win32Exception, System]]
Message: Forbidden
InnerException: <none>
StackTrace (generated):
<none>
StackTraceString: <none>
HResult: 80131501
0:025> !clrstack
OS Thread Id: 0x1cb4 (25)
Child SP IP Call Site
000000f0381bc3e8 00007ff86e233c58 [HelperMethodFrame: 000000f0381bc3e8]
000000f0381bc4d0 00007ff808fe8642 Microsoft.Dynamics.Ax.Services.ServicesSessionProvider.ThrowSessionCreationException(Microsoft.Dynamics.Ax.Services.ServicesSessionCreationErrorCode)
000000f0381bc520 00007ff808fe45b0 Microsoft.Dynamics.Ax.Services.ServicesSessionProvider.GetSession(Boolean, Boolean, System.String, System.String, System.String, System.String, System.Security.Claims.ClaimsIdentity)
000000f0381bc690 00007ff808fe4014 Microsoft.Dynamics.Ax.Services.ServicesSessionManager.InitThreadSession(Boolean, Microsoft.Dynamics.Ax.Xpp.AxShared.SessionType, Boolean, System.String, System.String, System.String, System.String, System.Security.Claims.ClaimsIdentity)
000000f0381bc730 00007ff808fe3ea6 Microsoft.Dynamics.Platform.Integration.Common.SessionManagement.ServicesAosSessionManager.InitializeSession(Boolean, System.String, System.Security.Claims.ClaimsIdentity)
000000f0381bc7a0 00007ff808fe366a Microsoft.Dynamics.Platform.Integration.Common.SessionManagement.OwinRequestSessionProvider.CreateSession(System.Security.Claims.ClaimsIdentity)
000000f0381bc7f0 00007ff808fe34cc Microsoft.Dynamics.Platform.Integration.Common.SessionManagement.ServicesRequestSessionHelper.EnsureRequestSession(Microsoft.Dynamics.Platform.Integration.Common.SessionManagement.IServicesRequestSessionProvider, System.Security.Claims.ClaimsIdentity)
000000f0381bc830 00007ff808fe2a86

Audience validation failed

The error below occurs if the value you’re using in your client application for ActiveDirectoryResource (from ClientConfiguration.cs in the example apps) doesn’t match the value in the AOS configuration for AADValidAudience. The AOS configuration is here: C:\ProgramData\SF\AOS_10\Fabric\work\Applications\AXSFType_App84\AXSF.Package.1.0.xml
Note that the error passed back to the client application is not as detailed as this – this error was from catching the exception directly on the AOS machine using WinDbg.

0:029> !pe
Exception object: 00000166f07da608
Exception type: System.IdentityModel.Tokens.SecurityTokenInvalidAudienceException
Message: IDX10214: Audience validation failed. Audiences: 'http://tariqapp.saonprem.com'. Did not match: validationParameters.ValidAudience: 'null' or validationParameters.ValidAudiences: 'https://ax.d365ffo.zone1.saonprem.com, 00000015-0000-0000-c000-000000000000, https://ax.d365ffo.zone1.saonprem.com/'
InnerException: <none>
StackTrace (generated):
<none>
StackTraceString: <none>
HResult: 80131501

Strong name validation failed on first client application run

You might try to run OdataConsoleApplication and have it fail with the error: Could not load file or assembly 'Microsoft.OData.Client, Version=6.11.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. Strong name validation failed. (Exception from HRESULT: 0x8013141A)

The root cause can be that the project is looking for version 6.11 but 6.15 is the version installed – from the NuGet package manager in Visual Studio you can change the version, then build and run again successfully.

ADFS error log

To help troubleshoot ADFS errors you can use the event viewer on the ADFS server; as shown below, it’s under Applications and Services Logs > AD FS > Admin.
ADFS error log


Oh AOS why have you forbidden me


Sometimes when services are trying to authenticate to an AOS in Dynamics 365 for Finance and Operations, both in the cloud version and the on-premises version, the calling application may receive the error message "Forbidden" back from the AOS. This message is deliberately vague, because we don't want a calling application to be able to poke the AOS and learn how to get in, but unfortunately that vagueness can make it difficult to figure out what is actually wrong. In this post we'll discuss what's happening in the background and how to approach troubleshooting.

Anything which is calling web services could receive this "Forbidden" error - for example an integrated 3rd party application, or Financial Reporting (formerly Management Reporter).

First let's talk about how authentication to Finance and Operations works; there are two major stages to it:

1. Authentication to AAD (in cloud) or ADFS (in on-premises) - this happens directly between the caller and AAD/ADFS - the AOS isn't a part of it.
2. Session creation on the AOS - here the caller is giving the token from AAD/ADFS to the AOS, then AOS attempts to create a session.

The "forbidden" error occurs during the 2nd part of the process - when the AOS is attempting to create a new session. The code within the AOS which does this has a few specific cases when it will raise this:

- Empty user SID
- Empty session key
- No such user
- User account disabled
- Cannot load user groups for user

For all of these reasons the AOS is looking at the internal setup of the user in the USERINFO table - it's not looking at AAD/ADFS. In a SQL Server based environment (so Tier 1 or on-premises) you can run SQL Profiler to capture the query it's running against the USERINFO table and see what it's looking for (a sketch of such a check follows the examples below).

Examples:

- Financial Reporting (Management reporter) might report "Forbidden" if the FRServiceUser is missing or incorrect in USERINFO. This user is created automatically, but could have been modified by an Administrator when trying to import users into the database.
- When integrating 3rd party applications, the error occurs if the record in "System administration > Setup > Azure Active Directory applications" is missing.
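
As a rough sketch of that kind of check, you can look at the user record directly in SSMS. This assumes the classic USERINFO columns and uses the FRServiceUser from the example above; the ENABLE column name is an assumption here, so verify against your schema:

select ID, ENABLE, SID, NETWORKDOMAIN, NETWORKALIAS, IDENTITYPROVIDER
from USERINFO
where ID = 'FRServiceUser' --or whichever user the failing caller runs as

If the record is missing, disabled, or has an unexpected SID or identity provider, that matches the session creation failure reasons listed above.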

Disable any reliance on the internet in Finance and Operations on-premises


There are some features within Dynamics 365 for Finance and Operations on-premises which rely on an internet connection.

This means that the on-premises version DOES by default have a dependency on some cloud services - BUT you can turn that off, so there is no dependency.

As an example, last week there was an AAD outage, which affected on-premises customers' ability to log into the application. What was happening was: you'd log in as normal, see the home page for a moment, then it would redirect to the AAD login page, which was down, so the user would be stuck.

In the background this relates to the Skype presence feature - after the user logs in, in the background the system is contacting the Skype service online - which is what triggers that redirect to AAD when AAD is unavailable.

There is a hotfix available which allows a System Administrator to turn off all cloud/internet related functions in the on-prem version; details are available here:
Disable internet connectivity

How to select the document management storage location


In Dynamics 365 for Finance and Operations the document management feature allows you to attach documents (files and notes) to records within the application. There are several different options for storage of those documents – in this document we will explain the advantages and disadvantages of each option.

Document storage locations

There are 3 possible options for document storage:

• Azure storage: In the cloud version of Finance and Operations this will store documents in Azure blob storage; in the on-premises version this will store documents in the file share given in the environment deployment options in LCS*
• Database: stores documents in the database
• SharePoint: stores documents in SharePoint Online; this is currently only supported for the cloud version. Support for on-premises SharePoint is planned to be added in the future

Each document storage option can be configured per document type – meaning that it’s possible to configure a type of document “scanned invoices” and choose storage “Database”, and configure another type of document “technical drawings” and choose storage “Azure storage”.

Classes

When configuring document types there are 3 different classes of document available, each class of document only allows certain storage locations:
- Attach file: this allows selection of “Azure storage” or “SharePoint” locations
- Attach URL: this allows only “Database” location
- Simple note: this allows only “Database” location

Document storage location options

Azure storage

This type of storage can be configured for the “attach file” class of document only.

As mentioned earlier in this document, in the cloud version of Finance and operations this will store documents in Azure blob storage, in the on-premises version this will store documents in the file share given in the environment deployment options in LCS.

In the cloud version an Azure storage account is automatically created when an environment is deployed. No direct access to the storage account is given; access is only via the application. This is a highly available geo-replicated account, so there are no additional considerations required to ensure business continuity for this component.

In the on-premises version an SMB 3.0 file share is specified at environment deployment time. High availability and disaster recovery options must be considered to ensure availability of this file share; the application accesses it using its UNC path – ensure this UNC path is available at all times.

Files stored in this way are not readable by directly accessing the file share location – they are intended only to be accessed through Finance and Operations – specifically files stored will be renamed to a GUID type name, and their file extension is removed. Within Finance and Operations a database table provides the link between the application and the file stored on the file system.
No direct access to this folder should be allowed for users, access for the Finance and Operations server process is controlled through the certificate specified during environment deployment.

Database

Database storage will be used automatically for document types using classes “Attach URL” or “Simple note”. The “Attach file” class of documents will not be stored in the database.
Documents stored in the database will be highly available by virtue of the SQL high availability options which are expected to be in place already as a requirement of Finance and Operations.

SharePoint

This type of storage can be configured for the “Attach file” class of document only.

For the cloud version of Finance and Operations, SharePoint Online is supported but currently SharePoint on-premises is not supported. For the on-premises version SharePoint Online is also not supported currently.

SharePoint Online is a highly available and resilient service; we recommend reviewing our documentation for more information.

Cloud versus On-premises

In the cloud version of Finance and Operations, for file storage, either SharePoint Online or Azure blob storage can be used.
In the on-premises version, for file storage, only the “Azure storage” option can be used – which will store files in a network file share as defined in the environment deployment options.
*The screenshot below shows the setting for file share storage location used by on-premises environments when selecting “Azure storage”.

On-premises deployment storage options

Troubleshooting on-premises environment deployment in D365FFO


This document contains tips for troubleshooting on-premises Dynamics 365 for Finance and Operations environment deployment failures, based on my own experiences when troubleshooting this process for the first time.

Types of failures

The first type of failure covered here happened during a simple redeploy of the environment. Originally I was trying to deploy a custom package, but it failed and I didn’t know why, so I deleted the environment and was redeploying with vanilla – no custom bits, just the base – and it still failed. In LCS, after it runs for approx. 50 minutes, I see the state change to Failed. There is no further log information in LCS itself; that information is within the respective machines in the on-premises environment.

Orchestrators

The orchestrator machines trigger the deployment steps. In the base topology there are 3 orchestrators, which are clustered/load balanced – often the first one will pick up work, but don’t rely on that: any of them can pick up tasks, and more than one may be involved in a given deployment run – for example server 1 picks up some of the tasks and server 2 picks up others – so always check the event logs on all of them to avoid missing anything useful.

To make it easier to check them you can add a custom view of the event logs on each orchestrator machine, to give you all the necessary logs in one place, like this:
Create custom event log view

Select events

I found in my case that server 2 was showing an error, as below. It’s basically saying it couldn’t find the AOS metadata service, and I notice the URL is wrong – I’d entered something wrong in the deployment settings in LCS:
Example error

AOS Machines

There are also useful logs on the AOS machines – the orchestrators call deployment scripts, but AX-specific functions are still run by the AOSes – for example database synchronize is run by an AOS. Again the AOSes are clustered, so we need to check all of them, as tasks could be executed by any of them. Similar to the orchestrators, I create a custom event log view to show me all Dynamics related events in one place. This time I am selecting the Dynamics category, and I have unchecked “verbose” to reduce noise.

AOS event log

Here’s an example of a failure I had from a Retail deployment script which was trying to adjust a change tracking setting. For an issue such as this, once I know the error I can work around the problem by manually disabling change tracking on the problem table from SQL Server Management Studio (a sketch of that follows below) and then starting the deployment again from LCS.

AOS example error
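
As a rough sketch of that workaround (the table name here is only a placeholder – substitute whichever table your deployment error names):

use AXDB
go
--Disable change tracking on the problem table named in the deployment error
--(RETAILTRANSACTIONTABLE is a hypothetical example)
ALTER TABLE RETAILTRANSACTIONTABLE
DISABLE CHANGE_TRACKING;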

ADFS Machines

The ADFS servers will show authentication errors – a typical cause of this kind of failure is a “bad” setting entered in the deployment settings in LCS – for example I entered the DNS address for the AX instance incorrectly, and then saw an ADFS error after deployment when trying to log into AX:

ADFS error example

If you see an error as above, you can understand more about it by reviewing the Application group setup in “ADFS Management” on the ADFS machine; open it from Server Manager:

ADFS

Under application groups you’ll see one for D365; double click it to see the details.

ADFS setup

If you’re familiar with the cloud version of D365, then you’ll probably know that AAD requires application URLs to be configured against it to allow you to log in – in cloud the deployment process from LCS does this automatically, and you can see it if you review your AAD setup via the Azure portal. In the on-prem version, this ADFS management tool shows you the same kind of setup, and here too the deployment process creates these entries automatically for you. Click on one of the native applications listed and then the edit button to see what’s been set up:

ADFS application group setup

The authentication error I mentioned previously:
MSIS9224: Received invalid OAuth authorization request. The received 'redirect_uri' parameter is not a valid registered redirect URI for the client identifier: 'f06b0738-aa7a-4a50-a406-5c1e486c49be'. Received redirect_uri: 'https://dax7sqlaodc1.saonprem.com/namespaces/AXSF/'.

We can now see from the configuration above that for client 'f06b0738-aa7a-4a50-a406-5c1e486c49be' the requested URL isn’t configured. If we believed that the URL was correct, then we could add it here and ADFS would then allow the request to go through successfully. In my case the URL was the mistake, so I didn’t change the ADFS settings; I corrected the URL in LCS and started the deployment again.

Package deployment failures

When reconfiguring an environment, and including a custom package, if the deployment fails, check the orchestrator machine event logs, as described above – use a custom event log view to check all the logs on a machine at once.

I have had a situation where I was getting failures related to package dependencies even though my package did not contain the failing dependency. I will explain. The error was:

Package [dynamicsax-demodatasuite.7.0.4679.35176.nupkg has missing dependencies: [dynamicsax-applicationfoundationformadaptor;dynamicsax-applicationplatformformadaptor;dynamicsax-applicationsuiteformadaptor]]

My package does not contain demodatasuite, so the error is a mystery. Turns out that because my package has the same filename as a previously deployed package, the system is not downloading my package and just attempting to deploy an old package with the same name. Packages can be found in the file share location, as below:
\\DAX7SQLAOFILE1\SQLFileShare\assets

The first part, \\DAX7SQLAOFILE1\SQLFileShare, is my file share (so will differ in different environments – it’s a setting given when the environment was created), the assets folder is constant.

In here I see that my current package “a.zip” (renamed to a short name to work around a deployment failure caused by a path that is too long) is from several weeks ago and is much larger than the package I expect. To get past this I rename my package to b.zip and attempt deployment again. Note that after PU12 for on-premises this issue no longer occurs.

Package deployment process

During the package deployment process, the combined packages folders will be created in this folder:

\\DAX7SQLAOFILE1\SQLFileShare\wp\Prod\StandaloneSetup-109956\tmp\Packages

Error when environment left in Configuration mode

When running a redeployment, the error below can occur if the environment has been left in configuration mode (for changing config keys); turn off configuration mode, restart the AOSes and then re-run the deployment.

MachineName SQLAOSF1ORCH2
EnvironmentId c91bafd5-ac0b-43dd-bd5f-1dce190d9d49
SetupModuleName FinancialReporting
Component Microsoft.Dynamics.Performance.Deployment.Commands.AX.AddAXDatabaseChangeTracking
Message An unexpected error occurred while querying the Metadata service. Check that all credentials are correct. See the deployment log for details.
Detail Microsoft.Dynamics.Performance.Deployment.Common.DeploymentException: An unexpected error occurred while querying the Metadata service. Check that all credentials are correct. See the deployment log for details. ---> System.ServiceModel.FaultException: Internal Server Error Server stack trace: at System.ServiceModel.Channels.ServiceChannel.HandleReply(ProxyOperationRuntime operation, ProxyRpc& rpc) at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout) at

Error when FRServiceUser is missing

This error can also happen when the FRServiceUser is missing in USERINFO – the AOS metadata service is trying to create an AX session as this user.
This user is normally created by the DB synch process. If the user is incorrect in USERINFO then deleting that user and re-running DB synch should recreate it – you can also set USERINFO.ISMICROSOFTACCOUNT to 0 in SSMS and then re-run DB synch to recreate the user (sketched below). DB synch can be triggered in PU12+ by clearing the SF.SYNCLOG table and then killing AXService.exe – when it automatically starts back up it will run a DB synch. Then you should see the FRServiceUser created back in USERINFO.
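
A minimal sketch of that cleanup in SSMS, assuming the ID of the user in USERINFO is literally 'FRServiceUser' and that you are comfortable editing these tables directly (illustrative only, not a supported procedure):

use AXDB
go
--Reset the flag on the incorrect user record so DB synch will recreate/fix it
UPDATE USERINFO SET ISMICROSOFTACCOUNT = 0 WHERE ID = 'FRServiceUser'

--Clear the synchronization log so that a DB synch runs when AXService.exe starts back up (PU12+)
DELETE FROM [SF].[SYNCLOG]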


How authentication works in Dynamics 365 for Finance and Operations On-premises


In this article I'm going to explain the moving parts to authentication in on-premises Dynamics 365 for Finance and Operations. The intention of this article is to provide some background to how the process works, so that if you have issues you can work through them to figure out what's wrong.

First off - there's one option you provide during environment deployment, the URL for AD FS, which looks something like this:

https://dax7sqlaoadfs1.saonprem.com/adfs/.well-known/openid-configuration

You'll find that mentioned in the deployment instructions here

During deployment this is going to be used to set various options in the AOS xml config files on each AOS machine. You'll find the AOS config in a folder similar to below - note that the numbers vary from machine to machine:

C:\ProgramData\SF\AOS_10\Fabric\work\Applications\AXSFType_App218\AXSF.Package.1.0.xml

Within this config file (which is on each AOS machine) you'll find a few sections which are set from the LCS deployment setting for AD FS – this bit:


<Section Name="Aad">
<Parameter Name="AADIssuerNameFormat" Value="https://dax7sqlaoadfs1.saonprem.com/{0}/" />
<Parameter Name="AADLoginWsfedEndpointFormat" Value="https://dax7sqlaoadfs1.saonprem.com/{0}/wsfed" />
<Parameter Name="AADMetadataLocationFormat" Value="https://dax7sqlaoadfs1.saonprem.com/FederationMetadata/2007-06/FederationMetadata.xml" />
<Parameter Name="AADTenantId" Value="adfs" />
<Parameter Name="AADValidAudience" Value="https://ax.d365ffo.zone1.saonprem.com/" />
<Parameter Name="ACSServiceEndpoint" Value="https://accounts.accesscontrol.windows.net/tokens/OAuth/2" />
<Parameter Name="ACSServicePrincipal" Value="00000001-0001-0000-c000-000000000000" />
<Parameter Name="FederationMetadataLocation" Value="https://dax7sqlaoadfs1.saonprem.com/FederationMetadata/2007-06/FederationMetadata.xml" />
<Parameter Name="Realm" Value="spn:00000015-0000-0000-c000-000000000000" />
<Parameter Name="TenantDomainGUID" Value="adfs" />
<Parameter Name="TrustedServiceAppIds" Value="913c6de4-2a4a-4a61-a9ce-945d2b2ce2e0" />
</Section>

Also this section:


<Section Name="OpenIDConnect">
<Parameter Name="ClientID" Value="f06b0738-aa7a-4a50-a406-5c1e486c49be" />
<Parameter Name="Metadata" Value="https://dax7sqlaoadfs1.saonprem.com/adfs/.well-known/openid-configuration" />
</Section>
<Section Name="Provisioning">
<Parameter Name="AdminIdentityProvider" Value="https://dax7sqlaoadfs1.saonprem.com/adfs" />
<Parameter Name="AdminPrincipalName" Value="admin@exampleDomain.com" />
</Section>

The AOS uses these config values to know where to redirect to when a user tries to hit the application URL - so the user hits the URL, the AOS redirects to the AD FS login page (using the values from this config), the user enters their credentials, and gets redirected to the application URL again.

If values in the AOS config file are incorrect - then that typically means the value given for ADFS during environment deployment was wrong - the easiest thing is to delete and redeploy the environment from LCS with the right value - it is possible to manually edit the config files, but to be safe, do a redeploy. If you do edit the config files then you need to restart the AOS services for the change to take effect - either from SF explorer (right click the AOS node under Nodes, choose restart, then wait a minute or so for its status to go back to green) or reboot the machine.

One example of an error caused by this: if I had entered the AD FS URL in the LCS deployment incorrectly (as below - note the missing hyphen), then I would get a server error 500 when going to the application URL, because the AOS no longer knows how to redirect to AD FS properly:

https://dax7sqlaoadfs1.saonprem.com/adfs/.wellknown/openid-configuration

 

The second piece of the authentication process is ADFS itself. On the ADFS server, if you open "AD FS Management" (from Control Panel\System and Security\Administrative Tools) and look under "Application groups", you'll find a group called "Microsoft Dynamics 365 for Operations On-premises" - within this group the settings for AD FS for Dynamics are kept - specifically the application URLs, the same one you specified during environment deployment as the URL for the application. Here's an example:

AD FS application group setup

AD FS uses the Client ID and the URLs to decide whether the request for access is ok or not. You will notice that the Client ID is also listed in the AOS config (it's in the section I pasted above) - if the client ID and the URL don't both match what the AOS is requesting, then AD FS will deny the token - if that happens you'll find an error in the Event Log on the ADFS server - there's a dedicated event log for AD FS under "Application and Services logs\AD FS\Admin"

AD FS event log error

In the case that any of the AD FS application group setup is wrong, you're likely to see an error in its event log which explains the value it was looking for, so you can figure out what is set incorrectly.

Debug a Dynamics 365 for Finance and Operations on-premises instance without Visual Studio


In this post I'm going to explain how to debug an error occurring in Dynamics 365 for Finance and Operations on-premises - directly in the on-premises environment, where Visual Studio isn't available, by using a free tool called WinDbg.

This approach gives a fast way to catch exceptions occurring in the environment, identify the call stack, see a more detailed error message (for example inner exceptions), and see the values of variables at the time of the exception. You can use this approach not only for debugging the AOS itself, but for any component in Windows which is running .NET type code - for example if SSRS was throwing an exception, you could do the same thing to debug SSRS itself.

It does not give the full X++ debugging experience you would normally have using Visual Studio with the Dynamics dev tools installed - I will be making another post soon explaining how to hook up Visual Studio to debug your on-premises instance.

Overview

WinDbg is a very powerful debugging tool and can be used in many different scenarios - for example debugging an exception occurring in any Windows software or analyzing memory dumps (also known as crash dumps) from a Windows process.

In this document we'll look at one particular scenario to give an introduction to the tool and how it can be helpful in conjunction with Dynamics 365 for Finance and Operations on-premises to troubleshoot exceptions.

The example scenario here is:
- I have an external application trying to call into Finance and Operations web services
- The call is failing with "Unauthorized" in the calling application
- There is no error in the AD FS event log - AD FS is issuing a token fine, but the AOS is denying the call.
- I want to know why I am "Unauthorized" because it seems AOS should be allowing me

Prepare

First install WinDbg, this is available from the Windows SDK here

Note: there is a newer version of WinDbg currently in preview available in the Windows Store here, but my post here is only dealing with the old current released version.

Most of the install tabs you can click next-next - but when choosing which options to install, uncheck everything except the "Debugging tools for Windows" as shown below:

Once the installer completes you will find WinDbg on your Windows start menu - both x64 and x86 versions (and ARM and ARM64) will be installed. The rule for debugging .NET code with WinDbg is to match the version of WinDbg to the architecture of the process - 32 bit process, 32 bit WinDbg; 64 bit process, 64 bit WinDbg. As we are going to debug the AOS, which is 64 bit, we'll need to open WinDbg x64 - MAKE SURE to run it as Administrator, otherwise it won't let you attach to the process.

In a typical on-premises environment there will be 3 AOS instances - when we're debugging we're not sure which of the 3 AOSes we'll hit, so we want to turn off the other two; then we know everything will hit the remaining one, and we can debug that one. There are two options to do that:
1. Shut down the other two AOS machines in Windows.
2. From SF explorer, disable the AOS application for the other two AOS - if you take this route then you need to check that AXService.exe has actually stopped on both of those AOS machines in task manager - because I've found that it doesn't always stop immediately, it'll sit there for a while and requests will continue to go to them.

Debug

Now we have the tool installed we're ready to debug something. In WinDbg go to "File"->"Attach to process..", a dialog will open showing all the current running processes on that machine - select "AXService.exe" and click ok. It's easier to find in the list if you select the "by executable" radio button, which will alphabetize the list.

WinDbg is a command line debugger; at the bottom of the window there is a box where you can enter commands for it to execute - that's primarily how you get it to do anything.

As we're going to debug .NET code, we'll first load an extension for WinDbg which will help us to decode .NET related information from the process. This extension exists on any machine which has the .NET framework installed. Enter this command and hit enter:

.load C:\Windows\Microsoft.NET\Framework64\v4.0.30319\sos.dll

Next we're going to tell WinDbg that when a .NET exception occurs it should stop the process on a breakpoint, because we don't have source code available in an on-premises environment, the easy way for us to set a breakpoint is to base it on exceptions. The command for WinDbg to break on exception is "sxe" and the exception code is "e0434352", we always use the same exception code here, because that is the native Windows code representing all .NET type exceptions.

sxe e0434352

Now we need to let the process run again - because when we attached to the process WinDbg automatically put a "break" on it - we can tell if the process is running or not - if it's running it says "Debuggee is running.." in the command prompt. To let the process run again enter "g" meaning go.

g

After entering "g" you see it is running again:

Ok, now we're ready to reproduce our issue, so I'm just going to my client application and making the error happen, then in WinDbg I see this. Note that the client application will seem to "hang" - this is because WinDbg is stopping the AOS on a breakpoint and not letting it complete the request:

We can run a command to show us the exception detail "!pe". This command comes from the sos.dll extension we loaded earlier, the use of "!" denotes it's coming from an extension. Note that WinDbg is case sensitive on everything you enter.

Here I can see the exception from within the AOS - it's hard to see in the screenshot, so here's the full text:

0:035> !pe
Exception object: 000002023b095e38
Exception type: System.IdentityModel.Tokens.SecurityTokenInvalidAudienceException
Message: IDX10214: Audience validation failed. Audiences: 'https://ax.d365ffo.zone1.saonprem.com/namespaces/axsf/'. Did not match: validationParameters.ValidAudience: 'null' or validationParameters.ValidAudiences: 'https://ax.d365ffo.zone1.saonprem.com, 00000015-0000-0000-c000-000000000000, https://ax.d365ffo.zone1.saonprem.com/'
InnerException:
StackTrace (generated):
StackTraceString:
HResult: 80131501

I'm not going to explain the example error message in this post - but if you're interested it is explained here

Next we can see the call stack leading to this exception by running "!clrstack". It's worth noting that the first time you run this command on a machine where you've never used WinDbg before, it might spin for a couple of minutes - that happens because WinDbg is looking for symbols - after the first time it'll run straight away. This command is useful to understand what the AOS was trying to do when the exception occurred - it's not necessary to have all of the source code to make sense of the call stack - most times I am looking at this I am simply reading the method names and making an educated guess about what it was doing based on the names (of course it's not always that simple, but often it is).

!clrstack

The last command for this post shows the running .NET variables relating to the call stack we just saw. This command is useful to understand what values the AOS was running with - similar to my approach with !clrstack, I am simply looking through this list of human readable values for something I recognize - for example if it was an exception in a purchase order process I'd be looking for something which looks like a vendor account number or PurchId. This is particularly useful when the value the AOS is running with isn't the value you expect it should have been running with.

!dso

That's all for now, happy debugging!

How to view which permissions a security role really has in Dynamics 365 for Finance and Operations


The key word in the title of this post is "really" - this isn't about how to look in the AOT or how to open the security forms in the browser - this is about how to check what an AOS is picking up as security permissions for a given role under the hood.

Why would I want to do that, I hear you ask? It's useful for me when I'm developing new security elements - because the AOS doesn't see them until I do a full build and database synchronize (sometimes just a synch if everything is already built), and I can't always remember when I last did a build and database synchronize - so it gives me a simple way to check what the AOS actually sees for a security role. Also, if you're troubleshooting something wrong with security in a deployed environment, it gives a way to see what the AOS is seeing.

How can I do it?

Earlier, I created a new privilege, which granted a form control permission to a control called "GroupFinancialDimensionLine" on Forms\SalesTable, then I created a role extension on the Accounts Receivable Clerk role, and granted it my new privilege.

What I want to do now, is see if my AOS knows about it or not - or if I need to run a full build/synch.

Querying my AXDB, first I'm looking up the RecId for the role I modified, then I'm using that RecId to check what permissions are set for SalesTable for that Role - looking in the SecurityRoleRuntime table.


select recId, * from SECURITYROLE where name = 'Accounts receivable clerk'

--query returned recId=13 for that record

SELECT T1.SECURITYROLE,T1.NAME,T1.CHILDNAME,T1.TYPE,T1.CREATEACCESS,T1.READACCESS,T1.UPDATEACCESS,T1.DELETEACCESS,
T1.CORRECTACCESS,T1.INVOKEACCESS,T1.PASTCREATEACCESS,T1.PASTREADACCESS,T1.PASTUPDATEACCESS,T1.PASTDELETEACCESS,T1.PASTCORRECTACCESS,
T1.PASTINVOKEACCESS,T1.CURRENTCREATEACCESS,T1.CURRENTREADACCESS,T1.CURRENTUPDATEACCESS,T1.CURRENTDELETEACCESS,T1.CURRENTCORRECTACCESS,
T1.CURRENTINVOKE,T1.FUTURECREATEACCESS,T1.FUTUREREADACCESS,T1.FUTUREUPDATEACCESS,T1.FUTUREDELETEACCESS,T1.FUTURECORRECTACCESS,
T1.FUTUREINVOKEACCESS,T1.RECVERSION,T1.RECID
FROM SECURITYROLERUNTIME T1
WHERE (SECURITYROLE=13) AND NAME = 'SALESTABLE'
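
If you want to start from a user rather than a role, a small extension of the same idea is sketched below - this assumes the SECURITYUSERROLE table links users to roles via USER_ and SECURITYROLE columns (column names assumed, verify against your schema):

--Find the roles assigned to a user, then check the runtime permissions for those roles
SELECT UR.USER_, R.NAME
FROM SECURITYUSERROLE UR
JOIN SECURITYROLE R ON R.RECID = UR.SECURITYROLE
WHERE UR.USER_ = 'Admin'

SELECT T1.SECURITYROLE, T1.NAME, T1.CHILDNAME, T1.READACCESS, T1.UPDATEACCESS
FROM SECURITYROLERUNTIME T1
WHERE T1.SECURITYROLE IN (SELECT SECURITYROLE FROM SECURITYUSERROLE WHERE USER_ = 'Admin')
AND T1.NAME = 'SALESTABLE'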

A couple of things to note:

- It's database synchronize that populates SECURITYROLERUNTIME.
- The AOS uses SECURITYROLERUNTIME as its definition of the detail of each role - this is how it knows what to allow a user to see/do and what not to.
- The AOS only reads from the table on startup**, and then it's cached.
- When you're deploying a package to an environment, no further action should be needed - the table will be populated if package deployment completes successfully.

In my example, after a database synchronize, I can see my new permission is there, and then when I log in with a user with that permission it works:

**I said that an AOS only reads the table on startup - that's not strictly true, it just made a nicer bullet point. There is a cache synchronizing mechanism between AOS - so that if someone modifies a role/permission in the UI, the other AOSes will pick up the change by re-reading the table:

- Each running AOS has in its memory a global user role version ID.
- It gets this from a special record in Tables\SysLastValue.
- Periodically (every few minutes) it checks the SysLastValue record to see if the ID has changed - meaning another AOS has made a role change and notified the others by incrementing the global user role version ID stored in this table.
- If it has changed, it flushes its cache and re-reads all the role information from SecurityRoleRuntime.

It's a similar type of mechanism to the one AOSes use to check their server configuration, batch configuration and EntireTable cache settings/values.


Debug Dynamics 365 for Finance and Operations on-premises with Visual Studio remote debugger


In this article I’m going to explain how to use Visual Studio Remote Debugger to debug a Dynamics 365 for Finance and Operations AOS in an on-premises environment. Why would you want to do that? Well, if you have an issue occurring in an on-premises environment that you can't reproduce on your developer (also known as Tier1/onebox/dev box) environment, this allows you to attach Visual Studio from the developer environment to the on-premises AOS and debug X++ code.

There's another related article on here, to debug an on-premises AOS without Visual Studio, which may be useful depending on your circumstances.

Overview

The basic gist of this process is:
1. Use a D365 developer environment which is on the domain (and of course the network) with the AOS machine
2. Copy the remote debugging tools from developer environment to the AOS
3. Run the remote debugger on the AOS
4. Open Visual Studio on the developer environment and attach to the remote debugger on the AOS
5. From this point debug as normal

First let’s talk about why I’m using a developer environment which is joined to the domain: the remote debugger has a couple of authentication options – you can either set it to allow debugging from anyone (basically no authentication), or to use Windows authentication. It’s a bit naughty to use the no-authentication option; although the remote debugger wouldn’t be accessible from the internet, it would still allow access to the machine from the network without any control on it. So we’ll use the Windows authentication option, which means we need to be on the domain.

There’s nothing special about adding a developer environment to the domain, join as you would any other machine - I won't go into that here.

Copy the remote debugger to the AOS

On the developer environment you'll find "Remote Debugger folder" on the Windows start menu:

Copy the x64 folder from there, and paste it onto the AOS you're going to debug. Note that if you have multiple AOS in your on-premises environment, turn off all but one of them - so that all requests will go to that one AOS that you're debugging. Within the folder double click msvsmon.exe:

The remote debugger will open, and look something like this, take note of the machine name and port, in my case it's SQLAOSF1AOS1:4020.

Configure the developer environment

Now move over to the developer environment, log on as an account which is an Administrator of both the developer machine and the AOS machine you want to debug. Open Visual Studio and go to Tools, Options, set the following options:

Dynamics 365, Debugging: Uncheck "Load symbols only for items in the solution"
Debugging, General: Uncheck "just my code"
Debugging, Symbols: add paths for all packages you want to debug, pointing to the location of the symbols files on the AOS you want to debug, because my account is an Administrator on the AOS box I can use the default C$ share to add those paths, like this:

Close the options form, and then go to Debug, Attach to process.., in the window that appears set the qualifier to the machine and port we saw earlier in the remote debugger on the AOS machine, in my case it was SQLAOSF1AOS1:4020. Then at the bottom click "Show processes from all users" and select the AXService.exe process, this is the AOS.

You'll get a warning, click attach.

On the AOS machine, you'll see in the remote debugger that you've connected:

Now open some code and set a breakpoint, in my case I'm choosing Form\CustTable.init(), then open the application in the browser and open the form to hit your breakpoint.

Switching between source code files

When you try to step into a different source file - for example if I want to step from CustTable.init() down into TaxWithholdParameters_IN::find() - I need to open the code for TaxWithholdParameters_IN manually from the Application Explorer (AOT) before I step into it. If you don't do that, you'll get a pop-up window asking you where the source code file is - if that happens, you can just cancel the dialog asking for the source file, go open it from the AOT, and then double click on the current row in the call stack to force it to realize you've now got the source file.

Happy debugging!

How to copy a database from cloud tier 1 to on-premises in Dynamics 365 for Finance and Operations


In this post I'm going to explain how to copy a Dynamics 365 for Finance and Operations database from a cloud Tier 1 environment (also known as a onebox, or demo environment) to an on-premises environment. This might be useful if you're using a Tier 1 to create your golden configuration environment, which you'll use to seed the on-premises environments later.

I will post how to move a database in the other direction soon.

Overview

This process is relatively simple compared to the cloud version, because we're not switching between Azure SQL and SQL Server - it's all SQL Server. The basic gist of the process is:
1. Backup the database on the Tier 1 (no preparation needed)
2. Restore the database to the on-premises SQL instance
3. Run a script against the restored DB to update some values
4. Start an AOS in the on-premises environment and wait for it to automatically run database synchronize and deploy reports

Process

First back up the database on the Tier 1 environment and restore it to the on-premises environment - don't overwrite the existing on-premises database, keep that one and restore the new one with a different name - because we're going to need to copy some values across from the old DB to the new DB.

Now run this script against the newly restored DB, make sure to set the values for the database names correctly:


--Remove the database level users from the database
--these will be recreated after importing in SQL Server.
use AXDB_onebox --******************* SET THE NEWLY RESTORED DATABASE NAME****************************

declare
@userSQL varchar(1000)
set quoted_identifier off
declare userCursor CURSOR for
select 'DROP USER ' + name
from sys.sysusers
where issqlrole = 0 and hasdbaccess = 1 and name != 'dbo' and name != 'NT AUTHORITY\NETWORK SERVICE'
OPEN userCursor
FETCH userCursor into @userSQL
WHILE @@Fetch_Status = 0
BEGIN
exec(@userSQL)
FETCH userCursor into @userSQL
END
CLOSE userCursor
DEALLOCATE userCursor

--now recreate the users copying from the existing database:
use AXDB --******************* SET THE OLD ON-PREMISES DATABASE NAME****************************
go
IF object_id('tempdb..#UsersToCreate') is not null
DROP TABLE #UsersToCreate
go
select 'CREATE USER [' + name + '] FROM LOGIN [' + name + '] EXEC sp_addrolemember "db_owner", "' + name + '"' as sqlcommand
into #UsersToCreate
from sys.sysusers
where issqlrole = 0 and hasdbaccess = 1 and name != 'dbo' and name != 'NT AUTHORITY\NETWORK SERVICE'
go
use AXDB_onebox --******************* SET THE NEWLY RESTORED DATABASE NAME****************************
go
declare
@userSQL varchar(1000)
set quoted_identifier off
declare userCursor CURSOR for
select sqlcommand from #UsersToCreate
OPEN userCursor
FETCH userCursor into @userSQL
WHILE @@Fetch_Status = 0
BEGIN
exec(@userSQL)
FETCH userCursor into @userSQL
END
CLOSE userCursor
DEALLOCATE userCursor

--Storage isn't copied from one environment to another because it's stored outside
--of the database, so clearing the links to stored documents
UPDATE T1
SET T1.STORAGEPROVIDERID = 0
, T1.ACCESSINFORMATION = ''
, T1.MODIFIEDBY = 'Admin'
, T1.MODIFIEDDATETIME = getdate()
FROM DOCUVALUE T1
WHERE T1.STORAGEPROVIDERID = 1 --Azure storage

--Clean up the batch server configuration, server sessions, and printers from the previous environment.
TRUNCATE TABLE SYSSERVERCONFIG
TRUNCATE TABLE SYSSERVERSESSIONS
TRUNCATE TABLE SYSCORPNETPRINTERS

--Remove records which could lead to accidentally sending an email externally.
UPDATE SysEmailParameters
SET SMTPRELAYSERVERNAME = ''
GO
UPDATE LogisticsElectronicAddress
SET LOCATOR = ''
WHERE Locator LIKE '%@%'
GO
TRUNCATE TABLE PrintMgmtSettings
TRUNCATE TABLE PrintMgmtDocInstance

--Set any waiting, executing, ready, or canceling batches to withhold.
UPDATE BatchJob
SET STATUS = 0
WHERE STATUS IN (1,2,5,7)
GO

--SysFlighting is empty in on-premises environments, so clean it up
TRUNCATE TABLE SYSFLIGHTING

--Update the Admin user record, so that I can log in again
UPDATE USERINFO
SET SID = x.SID, NETWORKDOMAIN = x.NETWORKDOMAIN, NETWORKALIAS = x.NETWORKALIAS,
IDENTITYPROVIDER = x.IDENTITYPROVIDER
FROM AXDB..USERINFO x --******************* SET THE OLD ON-PREMISES DATABASE NAME****************************
WHERE x.ID = 'Admin' and USERINFO.ID = 'Admin'

Now the database is ready, we're going to rename the old on-premises database from AXDB to AXDB_old, and the newly restored database from AXDB_onebox to AXDB. This means we don't have to change the AOS configuration to point to a new database - we're using the same users and the same database name.
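
A sketch of that rename in T-SQL (run from master, with nothing else connected to either database; adjust the names to your environment):

use master
go
--Swap the databases: keep the old one as AXDB_old and promote the restored copy to AXDB
ALTER DATABASE AXDB MODIFY NAME = AXDB_old;
GO
ALTER DATABASE AXDB_onebox MODIFY NAME = AXDB;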

All we need to do is restart all the AOS processes (either reboot the machines or restart the AOS apps from service fabric explorer).

When the AOSes restart one of them will run a database synchronize & deploy reports - because they can tell the database changed. You can watch progress in the AOS event log – create a custom event log view for all events under “Services and applications\Microsoft\Dynamics”. When this is finished you’ll see a record appear in SF.SYNCLOG in the AXDB.
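
A quick way to check for that record from SSMS (a sketch - the SYNCLOG columns aren't documented here, so just select everything):

--A row appearing here indicates the automatic database synchronize has completed
SELECT * FROM AXDB.SF.SYNCLOG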

Notes

A few other things to note:
- Only the Admin user can log in - because I'm assuming that the users from the onebox environment were all AAD cloud users, and that's not what the on-premises environment uses. The script above fixed the Admin user, but left the others as-is.
- To get Management Reporter working again, perform a reset.
- Storage (things like document handling documents) isn't kept in the database, so copying the database hasn't copied those things across. In the script above we cleared the links in the DocuValue table, so that we don't try to open docs from Azure storage which aren't there.
- The script has withheld all batch jobs, to stop anything running which shouldn't.
- Data stored in fields that were encrypted in the Tier1 environment, won't be readable in the restored database - there aren't many fields that are like this, details are in the "Document the values of encrypted field" section here.

How to copy a database from on-premises to cloud Tier 1 in Dynamics 365 for Finance and Operations


In this post I'll explain how to copy a database from an on-premises environment and restore it to a Tier 1 (also known as onebox, or dev box) environment. Why would you want to do that? Well, typically so that you have some realistic data to develop against, or to debug a problem that you can only reproduce with that data.

If you've already read this post about copying a database in the other direction - tier1 to on-premises, then this process will be very familiar.

Overview

This process is relatively simple compared to the cloud version, because we're not switching between Azure SQL and SQL Server - it's all SQL Server. The basic gist of the process is:
1. Backup the database on the on-premises environment (no preparation needed)
2. Restore the database to the Tier 1 environment
3. Run a script against the restored DB to update some values
4. Open Visual Studio and run a database synchronize

Process

First back up the database on the on-premises environment and restore it to the Tier 1 environment - don't overwrite the existing Tier 1 database, keep that one and restore the new one with a different name - because we're going to need to copy some values across from the old DB to the new DB.

Now run this script against the newly restored DB, make sure to set the values for the database names correctly:


--Remove the database level users from the database
--these will be recreated after importing in SQL Server.
use AXDB_onpremises --******************* SET THE NEWLY RESTORED DATABASE NAME****************************

declare
@userSQL varchar(1000)
set quoted_identifier off
declare userCursor CURSOR for
select 'DROP USER [' + name +']'
from sys.sysusers
where issqlrole = 0 and hasdbaccess = 1 and name != 'dbo' and name != 'NT AUTHORITY\NETWORK SERVICE'
OPEN userCursor
FETCH userCursor into @userSQL
WHILE @@Fetch_Status = 0
BEGIN
exec(@userSQL)
FETCH userCursor into @userSQL
END
CLOSE userCursor
DEALLOCATE userCursor

--now recreate the users copying from the existing database:
use AXDB --******************* SET THE OLD TIER 1 DATABASE NAME****************************
go
IF object_id('tempdb..#UsersToCreate') is not null
DROP TABLE #UsersToCreate
go
select 'CREATE USER [' + name + '] FROM LOGIN [' + name + '] EXEC sp_addrolemember "db_owner", "' + name + '"' as sqlcommand
into #UsersToCreate
from sys.sysusers
where issqlrole = 0 and hasdbaccess = 1 and name != 'dbo' and name != 'NT AUTHORITY\NETWORK SERVICE'
go
use AXDB_onpremises --******************* SET THE NEWLY RESTORED DATABASE NAME****************************
go
declare
@userSQL varchar(1000)
set quoted_identifier off
declare userCursor CURSOR for
select sqlcommand from #UsersToCreate
OPEN userCursor
FETCH userCursor into @userSQL
WHILE @@Fetch_Status = 0
BEGIN
exec(@userSQL)
FETCH userCursor into @userSQL
END
CLOSE userCursor
DEALLOCATE userCursor

--Storage isn't copied from one environment to another because it's stored outside
--of the database, so clearing the links to stored documents
UPDATE T1
SET T1.STORAGEPROVIDERID = 0
, T1.ACCESSINFORMATION = ''
, T1.MODIFIEDBY = 'Admin'
, T1.MODIFIEDDATETIME = getdate()
FROM DOCUVALUE T1
WHERE T1.STORAGEPROVIDERID = 4 --Files stored in local on-premises storage

--Clean up the batch server configuration, server sessions, and printers from the previous environment.
TRUNCATE TABLE SYSSERVERCONFIG
TRUNCATE TABLE SYSSERVERSESSIONS
TRUNCATE TABLE SYSCORPNETPRINTERS

--Remove records which could lead to accidentally sending an email externally.
UPDATE SysEmailParameters
SET SMTPRELAYSERVERNAME = ''
GO
UPDATE LogisticsElectronicAddress
SET LOCATOR = ''
WHERE Locator LIKE '%@%'
GO
TRUNCATE TABLE PrintMgmtSettings
TRUNCATE TABLE PrintMgmtDocInstance

--Set any waiting, executing, ready, or canceling batches to withhold.
UPDATE BatchJob
SET STATUS = 0
WHERE STATUS IN (1,2,5,7)
GO

--Update the Admin user record, so that I can log in again
UPDATE USERINFO
SET SID = x.SID, NETWORKDOMAIN = x.NETWORKDOMAIN, NETWORKALIAS = x.NETWORKALIAS,
IDENTITYPROVIDER = x.IDENTITYPROVIDER
FROM AXDB..USERINFO x --******************* SET THE OLD TIER 1 DATABASE NAME****************************
WHERE x.ID = 'Admin' and USERINFO.ID = 'Admin'

Now the database is ready, we're going to rename the old Tier 1 database from AXDB to AXDB_old, and the newly restored database from AXDB_onpremises to AXDB. This means we don't have to change the AOS configuration to point to a new database - we're using the same users and the same database name.

Note that to do the rename you'll need to stop the Management Reporter, batch, IIS and/or iisexpress services - otherwise it'll say the database is in use.
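
If something still holds a connection after stopping those services, here's a sketch of forcing the rename through (assuming you're happy to roll back any open transactions on these databases):

use master
go
--Kick out remaining connections, rename the old Tier 1 database, then promote the restored copy
ALTER DATABASE AXDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE AXDB MODIFY NAME = AXDB_old;
ALTER DATABASE AXDB_old SET MULTI_USER;
GO
ALTER DATABASE AXDB_onpremises SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE AXDB_onpremises MODIFY NAME = AXDB;
ALTER DATABASE AXDB SET MULTI_USER;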

Then open Visual Studio and run a database synchronize. A Tier 1 environment doesn't have the same auto-DB-synch mechanism that the on-premises environment does, so you have to run it yourself.

Notes

A few other things to note:
- Only the Admin user can log in - because I'm assuming that the users from the onebox environment were all AAD cloud users, and that's not what the on-premises environment uses. The script above fixed the Admin user, but left the others as-is.
- To get Management Reporter working again, perform a reset.
- Storage (things like document handling documents) isn't kept in the database, so copying the database hasn't brought those files across. In the script above we cleared the links in the DocuValue table, so that we don't try to open documents from local on-premises storage which aren't there.
- The script has set all batch jobs to withhold, to stop anything running which shouldn't - a quick query to review them is just below.
- Data stored in fields that were encrypted in the Tier 1 environment won't be readable in the restored database. There aren't many fields like this - details are in the "Document the values of encrypted field" section here.
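If you want to check which jobs the script withheld (and decide which ones to re-enable), a simple sketch like this will list them - column names are from the standard BatchJob table:

--Status 0 = Withhold, as set by the script above
select RECID, CAPTION, STATUS
from BATCHJOB
where STATUS = 0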

How to use Environment Monitoring View Raw Logs


This document explains how to use the "view raw logs" feature in LCS environment monitoring for your cloud Dynamics 365 for Finance and Operations environments. It lets you look at some of the telemetry data we record from your environments (for example slow queries), giving you insight into issues you might have - or, crucially, letting you react proactively before anyone notices there's an issue.

So what is this view raw logs?

Physically "view raw logs" is a button in LCS which shows you various telemetry data taken from your environment, things like long running queries. In the background this is surfacing telemetry data gathered from the environment - for all Microsoft-hosted cloud environments we're gathering telemetry data constantly. This is via instrumentation in our application, we are gathering a huge number of rows per hour from a busy environment. We store this in a Big Data solution in the cloud, this is more than just a SQL Database somewhere, as we are quite literally gathering billions and billions of rows per day, it's a pretty special system.

Timings - how quickly does it show in LCS and how long is it kept for?

There is approximately a 10 minute delay between capturing this data from an environment and being able to view it in LCS.

Data is available in LCS for 30 days - so you always have a rolling last 30 days.

A few limitations/frequently asked questions

- Is it available for on-premises? Not available for on-premises and not on the roadmap yet. This feature relies on uploading telemetry data to Microsoft cloud, so it doesn't feel right for on-premises.
- Is it available for ALL other environments? It's available for environments within your Implementation project - so Tier1-5 environments and Production. It's not available for environments you download or host on your own Azure subscription.
- Doesn't Microsoft monitor and fix everything for me so I don't need to look at anything? This can be a sensitive subject; Microsoft are monitoring production, and will contact you if they notice an issue which you need to resolve (that's quite new). Customers and partners still own their code, and Microsoft won't change your code for you. During implementation and testing, when you're trying to make sure all is good before go-live, this data is useful too. So the reality is it's a little bit on all parties.
- Is there business data shown/stored in telemetry? No. From a technical perspective this does mean things like user name and infolog messages are not shown, which as a Developer is annoying, but understandable.

Where to find view raw logs?

From your LCS project, click "Full details" next to the environment you want to view; a new page opens. Scroll to the bottom of the page and click the "Environment monitoring" link; another page opens. Click the "view raw logs" button (towards the right hand side) and you're on the View raw logs page!

Here's a walkthrough:



Explanation of the search fields

See below:

How to use "search terms" for a query?

This field allows you to search for any value in any column in the report. A common example would be looking for an Activity ID from an error message you get, for example:

An activity ID can be thought of as the ID which links together the log entries for a particular set of actions a user was taking – like confirming a PO. If you add a filter on this in the “All logs” query, as below, then you’ll see all logs for the current activity the user was performing – this is showing all events tagged with that activityId.

Tell me what each query does!

All logs

This query can be used to view all events for a given user's activity ID. If a user had a problem and saved the activity ID for you, then you can add it in the "search terms" box in this query and see all events for the process they were performing at the time. The exceptionMessage and exceptionStacktrace columns are useful for a developer to understand what may have caused a user's issue; these are populated when TaskName = AosXppUnhandledException.

All error events

This is a version of "All logs" which is filtered to only show TaskName=CliError, meaning Infolog errors. The only column available on this report which isn't already in "All logs" is eventGroupId, which serves no practical purpose. It is not possible to identify which users had the errors (the user isn't captured directly on this TaskName), and it is not possible to see the Infolog message shown to the user (because it could have contained business data, so it can't be captured). The callstack column shows the code call stack leading to the error.

User login events

This shows when user sessions logged on and off. The user IDs have been anonymized as GUIDs; to track them back to actual users, look at the "telemetry ID" field in the "Users" form inside Dynamics, or on environments where you have database access look at OBJECTID in the USERINFO table. The report is pre-filtered to show 7 days of activity from the end date of your choosing, and there is a maximum limit of 10,000 rows - the report isn't usable if you have more than 10k rows in that period.
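As a side note, where you do have database access (on a Tier 1 box, for example), a simple lookup like this sketch - my own example, not part of the LCS report - maps the anonymized GUID back to a user:

--The GUID in the telemetry matches OBJECTID on the USERINFO table
select ID, NETWORKALIAS, OBJECTID
from USERINFO
where OBJECTID = '<telemetry GUID from the report>'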

This report could be useful for producing statistics about the number of unique users using the system per day/week/month, by dumping the results to Excel and aggregating them.

Error events for a specific form

This shows all TaskName=CliError (Infolog errors) for a specific form name you search for. The form name is the AOT name of the form, e.g. TrvExpenses, not the name you see on the menu, e.g. Expenses.
The call stack is visible for any errors found. This can be useful when users are reporting problems with a particular form but they haven't given you an ActivityId from an error message - using this query you can still find errors/call stacks related to the form.

Slow queries

This shows all slow queries for a time period. SQL query and call stack is shown. The Utilization column is the total estimated time (AvgExecutionTimeinSeconds * ExecutionCount) – we're calling it "estimated" because it’s using the average execution time and not the actual time. Queries over 100ms are shown.

This one is one of my favourites, it's very useful to run after a set of testing has completed to see how query performance was. A Developer can easily see where long queries might be coming from (because SQL and call stack are given) and take action.

SQL Azure connection outages

Shows when SQL Azure was unavailable. This is very rare though, I've never seen it show any data.

Slow interactions

Ironically, the "slow interactions" query takes a long time to run! The record limit isn't respected on this query - it shows all records regardless. This means that if you try to run it for longer periods it'll fail with a "query failed to execute" error message because the result set is too large; run it in small date ranges to prevent this.
This one includes the slow query data I mentioned earlier, and also more form related information, so what this one can give you is a rough idea of the buttons a user pressed (or I should say form interactions to be more technically correct) leading up to the slow query. If you're investigating a slow query, looking for it here will give you a bit more context about the form interactions.

Is batch throttled

This shows whether batches were throttled. The batch throttling feature temporarily prevents new batches from starting if the resource limits set within the feature are exceeded - this limits the resources that batch processing can use, to ensure that sufficient resources remain available for user sessions. The infoMessage column in this report shows which resource was exceeded.
Generally speaking you shouldn't hit the throttling limits - if you see data in here, it's likely you have a runaway batch job on your hands - find out which one and look at why it's going crazy.

Financial reporting daily error summary

Shows an aggregated summary of errors from the Financial Reporting processing service (previously called Management Reporter). This gives you a fast view of whether anything is wrong with Financial Reporting; it is hard-coded to filter for today, but as processing runs every 5 minutes in the background that is ok. Typically you'd use this if a user reports something is wrong/missing in Financial Reporting, to get a quick look at whether any errors are being reported there.

Financial reporting long running queries

This shouldn't return any data normally - it might do if a reset has been performed on Financial Reporting and it's rebuilding all of its data. Generally for customers and partners I would recommend not worrying about this one; it's more for Microsoft's benefit.

Financial reporting SQL query failures

Again, this one shouldn't return data normally. It helps to catch issues such as change tracking being re-enabled after copying databases between environments, which causes Financial Reporting to throw errors when it queries against change tracking.

Financial reporting maintenance task heartbeat

The Financial reporting service reports a heartbeat once a minute to telemetry to prove it's running ok. This report shows that data summarized - so it has 1 row per hour, and should show 60 count for each row (i.e. one per minute). This allows you to see if the service is running and available. Note that the report doesn't respect the row limit, but as it's aggregated it doesn't cause a problem.

Financial reporting data flow

For those of you familiar with the old versions of Financial Reporting (or management reporter), this is similar to the output you used to get in the UI of the server app, where you can see the various integration tasks and whether they ran ok, and how many records they processed. This is useful for checking if the integration is running correctly or if one of the jobs is failing. Note that this query also ignores the row limit, so run it for a shorter time period or it'll run for a long time.

Financial reporting failed integration records

I'd skip this one, it's showing just the timestamp and name for each integration task (similar to the "Financial reporting data flow" query above, but with less information), the name suggests it shows only failures, but actually it shows all rows regardless. Use the "Financial reporting data flow" query instead.

All events for activity

You can skip over this one - it's very similar to the "All logs" query, but it also has the SQL server name and SQL database name, which are irrelevant as you've already chosen an environment to view, so you know which server and database it is.

All crashes

This shows AOS crashes. It tells you how many crashes there were, but it's not directly actionable from here. If you have data here, log a support ticket with Microsoft - on the Microsoft side we have more data available about the crash, which makes it actionable. Microsoft are proactively investigating crash issues we see through telemetry, and keeping up to date on platform updates helps prevent crashes.

All deadlocks in the system

The title of this query is odd - "in the system"? Ah, thanks for clarifying, I thought it was all deadlocks in the world. This shows SQL deadlocks, and gives the SQL statement and call stack. You can use it similarly to the "Slow queries" query: for example, after a round of testing has completed, review this log to check whether the tested code was generating deadlocks anywhere - and if it is, investigate the related X++ code.

Error events for activity

This is a filtered version of the query "All events for activity" showing only errors (which itself is a version of "All logs"). It means that if you've been given an ActivityId you can use this one to jump straight to only the error events relating to that activity, whereas "All logs" would show you errors plus other data.

Distinct user sessions

This one shows, for each user, how many sessions they’ve had during a time period. You could use this to look at user adoption of the environment - the number of unique users per day/week/month - see if users are actually using it. It is similar to "User login events", just aggregated.

All events for user

This one is named in a confusing way – really it is showing user interaction events for a user – so it’ll show you everything a user pressed in forms in the time period. The tricky thing is that user IDs are obfuscated so you need to find the GUID for the user first – look it up in the "Users" form inside Dynamics. You might use this to see what a particular user was doing during a period, if you're trying to reproduce something they've reported and the user isn't being very forthcoming with information. The information shown here is a little difficult to interpret, it's very much Developer focused.

All events for browser session

This allows you to look up results using the session ID from an error message - remember the screenshot right at the beginning of this article about using the ActivityId from an error message? That message also contained a "Session ID", and this query lets you show the logs for that session. Think of an Activity ID as a set of related events within a session, and a Session ID as the overarching session containing everything while the user was logged in that time.

Find the official page on Monitoring and Diagnostics here.

How to link SQL SPID to user in Dynamics 365 for Finance and Operations on-premises


Quick one today! How to link a SQL SPID back to a Dynamics user in Dynamics 365 for Finance and Operations on-premises. You use this when, for example, you have a blocking SQL process and you want to know which user in the application triggered it - this lets you look up the blocking SPID and find out which user it belongs to.

Run this SQL:


select cast(context_info as varchar(128)) as ci,* from sys.dm_exec_sessions where session_id > 50 and login_name = 'axdbadmin'

The first column shows the Dynamics user. It's much like it was in AX 2012, except you don't need to go and set a registry key first.
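For the blocking scenario mentioned above, a variation on the same idea (just a sketch, using the standard DMVs) shows each blocked session alongside the Dynamics user behind the session doing the blocking:

--List blocked sessions and the Dynamics user (context_info) of the blocker
select r.session_id as blocked_spid,
       r.blocking_session_id,
       cast(s.context_info as varchar(128)) as blocking_dynamics_user
from sys.dm_exec_requests r
join sys.dm_exec_sessions s on s.session_id = r.blocking_session_id
where r.blocking_session_id <> 0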

You can do the same thing in the cloud version, but there you don't need to do it in T-SQL, because in LCS you can go to Monitoring and the "SQL Now" tab, where you can see the SPID-to-user mapping for running SQL.

How to connect to SQL on BI servers in a Dynamics 365 for Finance and Operations environment


Another quick one - I had trouble this week connecting to the local SQL Server instance on the BI server in my Dynamics 365 for Finance and Operations cloud environment.

I was investigating an SQL Server Reporting Services (SSRS) issue and I wanted to be able to look at the execution logs in the SSRS database.

Looking at the SSRS configuration on the box it appeared that SSRS itself was connecting to the database as Network Service, but that didn't help me when trying to connect using SQL Server Management Studio (SSMS) myself, so I was doubting whether I could ever access the local SQL instance there.

In the end I realized there is a very simple solution - if you're logged on as the local Admin account, and run SSMS normally, login to SQL will fail, but if you run SSMS as Administrator, then login to SQL will work fine (just using Windows authentication, the local Admin account is a SQL admin).

How to scale out Dynamics 365 for Finance and Operations on-premises


In this post I’m going to explain how to scale out Dynamics 365 for Finance and Operations on-premises by adding new VMs to your instance.

 

Overview

The process is quite straightforward, and Service Fabric does the rest of the work once a new node is added to the Service Fabric cluster. In this post I'm going to showcase it by adding a new AOS node to an existing Dynamics 365 for Finance and Operations 7.3 with Platform Update 12 on-premises instance. Basically, the procedure is as follows.

  1. Update Dynamics 365 for Finance and Operations on-premises configurations for new AOS node
  2. Setup new AOS machine for Dynamics 365 for Finance and Operations on-premises
  3. Add new AOS machine as an AOS node in Service Fabric Cluster
  4. Verify new AOS node is functional

Prerequisites

  1. The new AOS machine must fulfill the system requirements documented here
  2. Basic configuration of the new AOS machine (domain join, IP assignment, enabling file and printer sharing, and so on) is done

Procedures

Update Dynamics 365 for Finance and Operations on-premises configurations for new AOS node

  1. Update the ConfigTemplate to include the new AOS node. For detailed instructions, please refer to the documentation here.
    a. Identify which fault and upgrade domain the new AOS node will belong to
    b. Update the AOSNodeType section to include the new AOS machine
  2. Add an A record for the new AOS node in the DNS zone for Dynamics 365 for Finance and Operations on-premises. For detailed instructions, please refer to the documentation here.
  3. Run the cmdlet Update-D365FOGMSAAccounts to update the Grouped Service Accounts. For detailed instructions, please refer to the documentation here.
  4. Grant Modify permissions on the file share aos-storage to the new AOS machine. For detailed instructions, please refer to the documentation here.

Setup new AOS machine for Dynamics 365 for Finance and Operations on-premises

  1. Install prerequisites. For detailed instructions, please refer to the documentation here:
    a. Integration Services
    b. SQL Client Connectivity SDK
  2. Add the gMSA svc-AXSF$ and the domain user AxServiceUser to the local administrators group
  3. Set up the VM. For detailed instructions, please refer to the documentation here:
    a. Copy the D365FFO-LBD folder from an existing AOS machine, then run the steps below in PowerShell as an administrator from the D365FFO-LBD folder

    NOTE: The D365FFO-LBD folder is generated by the Export-Scripts.ps1 cmdlet when deploying Dynamics 365 for Finance and Operations on-premises, per the document here

    b. Run Configure-PreReqs.ps1 to install the prerequisite software on the new AOS machine
    c. Run the cmdlets below to complete the prerequisites on the new AOS machine
    .\Add-GMSAOnVM.ps1
    .\Import-PfxFiles.ps1
    .\Set-CertificateAcls.ps1

  4. Run Test-D365FOConfiguration.ps1 to verify all setup is done correctly on the new AOS machine
  5. Install the ADFS certificate and SQL Server certificate:
    a. Install the ADFS SSL certificate into the Trusted Root Certification Authorities store of the Local Machine
    b. Install the SQL Server certificate (the .cer file) into the Trusted Root Certification Authorities store of the Local Machine

Add new AOS machine as an AOS node in Service Fabric Cluster

  1. Full instructions on how to add or remove a node in an existing Service Fabric cluster can be found here. The steps below are performed on the new AOS machine.
  2. Download, unblock, and unzip the same version of the standalone package of Service Fabric for Windows Server as the existing Service Fabric cluster
  3. Run PowerShell with elevated privileges, and navigate to the location of the package unzipped in the step above
  4. Run the cmdlet below to add the new AOS machine as an AOS node in the Service Fabric cluster:

  .\AddNode.ps1 -NodeName <AOSNodeName> -NodeType AOSNodeType -NodeIPAddressorFQDN <NewNodeFQDNorIP> -ExistingClientConnectionEndpoint <ExistingNodeFQDNorIP>:19000 -UpgradeDomain <UpgradeDomain> -FaultDomain <FaultDomain> -AcceptEULA -X509Credential -ServerCertThumbprint <ServiceFabricServerSSLThumbprint> -StoreLocation LocalMachine -StoreName My -FindValueThumbprint <ServiceFabricClientThumbprint>

    Note the following elements in the above cmdlet:

    AOSNodeName – node name in the Service Fabric cluster. Refer to the configuration file or Service Fabric Cluster Explorer to see how the existing AOS nodes are named
    AOSNodeType – the node type that the new node belongs to
    NewNodeFQDNorIP – FQDN or IP of the new node
    ExistingNodeFQDNorIP – FQDN or IP of an existing node
    UpgradeDomain – upgrade domain specified in the ConfigTemplate for the new node
    FaultDomain – fault domain specified in the ConfigTemplate for the new node
    ServiceFabricServerSSLThumbprint – thumbprint of the Service Fabric server certificate, star.d365ffo.onprem.contoso.com
    ServiceFabricClientThumbprint – thumbprint of the Service Fabric client certificate, client.d365ffo.onprem.contoso.com
    LocalMachine, My – where the certificates are installed

    NOTE: Internet access is required, as the AddNode.ps1 script downloads the Service Fabric runtime package

  5. Once the new node is added, set anti-virus exclusions to exclude the Service Fabric directories and processes
  6. Get and edit the existing Service Fabric configuration once the new node has synced:
    a. Run the cmdlet below to connect to the Service Fabric cluster

    $ClusterName= "<ExistingNodeFQDNorIP>:19000"
    $certCN ="<ServiceFabricServerCertificateCommonName>"
    Connect-ServiceFabricCluster -ConnectionEndpoint $ClusterName -KeepAliveIntervalInSec 10 -X509Credential -ServerCommonName $certCN -FindType FindBySubjectName -FindValue $certCN -StoreLocation LocalMachine -StoreName My

    Note the following elements in the above cmdlet:

    ExistingNodeFQDNorIP – FQDN or IP of an existing node
    ServiceFabricServerCertificateCommonName – common name of the Service Fabric server certificate, *.d365ffo.onprem.contoso.com
    LocalMachine, My – where the certificate is installed

    b. Run the cmdlet Get-ServiceFabricClusterConfiguration and save the output as a JSON file
    c. Update ClusterConfigurationVersion with a new version number in the JSON file
    d. Remove the WindowsIdentities section from the JSON file
    e. Remove EnableTelemetry
    f. Remove FabricClusterAutoupgradeEnabled
  7. Start the Service Fabric configuration upgrade:
    a. Run the cmdlet below to start the Service Fabric configuration upgrade

    Start-ServiceFabricClusterConfigurationUpgrade -ClusterConfigPath <Path to Configuration File>

    b. Run the cmdlet below to monitor upgrade progress

    Get-ServiceFabricClusterUpgrade

Verify the new AOS node is functional

  1. Confirm the new AOS machine has been added as an AOS node successfully - compare the node list in Service Fabric Explorer before and after adding it
  2. Validate the new AOS is functional as expected

Cleanup routines in Dynamics 365 for Finance and Operations


In Dynamics 365 for Finance and Operations, cleanup routines are available across various modules within the product. It is important to note that these cleanup routines should only be executed after detailed analysis and confirmation from the business that the data is no longer needed. Also, always test each routine in a test environment before executing it in production. This article provides an overview of what is available today.
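Before scheduling any of these jobs it is worth checking which of the underlying tables are actually large. A rough sketch is below - the table names are just examples taken from this article, so adjust the list to the routines you are interested in:

--Row counts for a few of the tables targeted by the cleanup routines below
select t.name as table_name, sum(p.rows) as row_count
from sys.tables t
join sys.partitions p on p.object_id = t.object_id and p.index_id in (0,1)
where t.name in ('BATCHJOBHISTORY', 'EVENTINBOX', 'WHSWORKCREATEHISTORY', 'INVENTSUM', 'WMSLOCATIONLOAD')
group by t.name
order by row_count desc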

 

System administration

- Periodic tasks > Notification clean up: Used to periodically delete records from the EventInbox and EventInboxData tables. If you don't use the alert functionality, the recommendation is also to disable the alert batch job.
- Periodic tasks > Batch job history clean-up: The regular version of batch job history clean-up lets you quickly clean all history entries older than a specified timeframe (in days). Any entry created before that point is deleted from the BatchJobHistory table, as well as from linked tables with related records (BatchHistory and BatchConstraintsHistory). This form performs better because it doesn't have to execute any filtering.
- Periodic tasks > Batch job history clean-up (custom): The custom batch job history clean-up form should be used only when specific entries need to be deleted. It lets you clean up selected types of batch job history records, based on criteria such as status, job description, company, or user. Other criteria can be added using the Filter button.

 

Data management

- Data management workspace > "Staging cleanup" tile: The data management framework makes use of staging tables when running data migration. Once data migration is complete, the staging data can be deleted using the "Staging cleanup" tile.

 

Warehouse management

- Periodic tasks > Clean up > Work creation history purge: Deletes work creation history records from the WHSWorkCreateHistory table, based on the number of days of history to keep specified in the dialog.
- Periodic tasks > Clean up > Containerization history purge: Deletes containerization history from the WHSContainerizationHistory table, based on the number of days of history to keep specified in the dialog.
- Periodic tasks > Clean up > Wave batch cleanup: Cleans up batch job history records related to the wave processing batch group.
- Periodic tasks > Clean up > Cycle count plan cleanup: Cleans up batch job history records related to cycle count plan configurations.
- Periodic tasks > Clean up > Mobile device activity log cleanup: Deletes mobile device activity log records from the WHSMobileDeviceActivityLog table, based on the number of days of history to keep specified in the dialog.
- Periodic tasks > Clean up > Work user session log cleanup: Deletes work user session records from the WHSWorkUserSessionLog table, based on the number of hours to keep specified in the dialog.

 

Inventory management

- Periodic tasks > Clean up > Calculation of location load: The WMSLocationLoad table is used for tracking the weight and volume of items and pallets. The Summation of load adjustments job can be run to reduce the number of records in the WMSLocationLoad table and improve performance.
- Periodic tasks > Clean up > Inventory journals cleanup: Used to delete posted inventory journals.
- Periodic tasks > Clean up > Inventory settlements cleanup: Used to group closed inventory transactions or delete canceled inventory settlements. Cleaning up closed or deleted inventory settlements can help free system resources. Do not group or delete inventory settlements too close to the current date or fiscal year, because part of the transaction information for the settlements is lost. Closed inventory transactions cannot be changed after they have been grouped, because the transaction information for the settlements is lost. Canceled inventory settlements cannot be reconciled with finance transactions if the canceled inventory settlements are deleted.
- Periodic tasks > Clean up > Inventory dimensions cleanup: Used to maintain the InventDim table by deleting unused inventory dimension combination records that are not referenced by any transaction or master data. The records are deleted regardless of whether the transaction is open or closed. An inventory dimension combination record that is still referenced cannot be deleted, because once an InventDim record is deleted the related transactions cannot be reopened.
- Periodic tasks > Clean up > Dimension inconsistency cleanup: Used to resolve dimension inconsistencies on inventory transactions that have been financially updated and closed. Inconsistencies might be introduced when the multisite functionality was activated during or before the upgrade process. Use this batch job only to clean up the transactions that were closed before the multisite functionality was activated. Do not use this batch job periodically.
- Periodic tasks > Clean up > On-hand entries cleanup: Used to delete closed and unused entries for on-hand inventory that is assigned to one or more tracking dimensions. Closed transactions contain the value of zero for all quantities and cost values, and are marked as closed. Deleting these transactions can improve the performance of queries for on-hand inventory. Transactions will not be deleted for on-hand inventory that is not assigned to tracking dimensions.
- Periodic tasks > Clean up > Warehouse management on-hand entries cleanup: Deletes records in the InventSum and WHSInventReserve tables. These tables are used to store on-hand information for items enabled for warehouse management processing (WHS items). Cleaning up these records can lead to significant improvements of the on-hand calculations.
- Periodic tasks > Clean up > On-hand entries aggregation by financial dimensions: A tool to aggregate InventSum rows with zero quantities. This basically extends the previous cleanup tool by also cleaning up records which have the Closed field set to true (see the illustrative query after this list). The reason this is needed is that in certain scenarios you might have no more quantities in InventSum for a certain combination of inventory dimensions, but there is still a value. In some cases these values will disappear, but the current design does allow values to remain from time to time. If, for example, you use batch numbers, each batch number (and the combined site, warehouse, etc.) creates a new record in InventSum. When the batch number is sold, you will see the quantity fields set to 0. In most cases the financial/physical value field is also set to 0, but in standard cost revaluation or other scenarios the value field may still show some amount. This is valid, and is the way Dynamics 365 for Finance and Operations handles costs at the financial inventory level, e.g. site level. Inventory value is determined in Dynamics 365 for Finance and Operations by records in InventSum, and in some cases inventory transactions (InventTrans) when reporting inventory values in the past. In the above scenario, this means that when you run inventory value reports, Dynamics 365 for Finance and Operations initially looks at InventSum, aggregates all records to site level, and reports the value for the item per site; the data from the individual records at batch number level is never used. The tool therefore goes through all InventSum records and finds the ones where there is no more quantity (the "No open quantities" field is true). There is no reason to keep these records, so Dynamics 365 for Finance and Operations finds the record in InventSum for the same item with the same site, copies the values from the batch number level to the site level, and deletes the record. When you now run inventory value reports, Dynamics 365 for Finance and Operations still finds the same correct values. This reduces the number of InventSum records significantly in some cases, and can have a positive impact on the performance of any function which queries this table.
- Periodic tasks > Clean up > Cost calculation details: Used to clean up cost calculation details.
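To make the aggregation scenario above concrete, here is a rough illustrative query - my own sketch, not part of the cleanup job - that finds InventSum records with no remaining quantity but a value still left on them, which are the records the aggregation tool deals with:

--Assumption: standard InventSum field names (ClosedQty, PhysicalValue, PostedValue)
select ITEMID, INVENTDIMID, PHYSICALVALUE, POSTEDVALUE
from INVENTSUM
where CLOSEDQTY = 1 --"No open quantities" is true
and (PHYSICALVALUE <> 0 or POSTEDVALUE <> 0) --but a value remains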

 

General ledger

- Periodic tasks > Clean up ledger journals: Deletes general ledger, accounts receivable, and accounts payable journals that have been posted. When you delete a posted ledger journal, all information that's related to the original transaction is removed. You should delete this information only if you're sure that you won't have to reverse the ledger journal transactions.

 

Sales and marketing

- Periodic tasks > Clean up > Delete sales orders: Deletes the selected sales orders.
- Periodic tasks > Clean up > Delete quotations: Deletes the selected quotations.
- Periodic tasks > Clean up > Delete return orders: Deletes the selected return orders.
- Periodic tasks > Clean up > Sales update history cleanup: Deletes old update history transactions. All updates of confirmations, picking lists, packing slips, and invoices generate update history transactions. These transactions can be viewed in the History on update form.
- Periodic tasks > Clean up > Order events cleanup: Cleanup job for order events. The next step is to clear the check boxes for order events that are not needed in the Order event setup form.

 

Production control

- Periodic tasks > Clean up > Production journals cleanup: Used to delete unused journals.
- Periodic tasks > Clean up > Production orders cleanup: Used to delete production orders that are ended.
- Periodic tasks > Clean up > Clean up registrations: It is recommended to clean up registrations periodically. The clean-up function does not delete data that has not been processed. Make sure that you do not delete registrations that may be required later for documentation purposes.
- Periodic tasks > Clean up > Archive future registrations: Used to remove future registrations from the raw registrations table.

 

Procurement and sourcing

- Periodic tasks > Clean up > Purchase update history cleanup: Used to delete old update history transactions. All updates of confirmations, picking lists, product receipts, and invoices generate update history transactions.
- Periodic tasks > Clean up > Delete requests for quotations: Used to delete requests for quotation (RFQs) and RFQ replies. The corresponding RFQ journals are not deleted, but remain in the system.
- Periodic tasks > Clean up > Draft consignment replenishment order journal cleanup: Used to clean up draft consignment replenishment order journals.

 

Introduction to troubleshooting Dynamics 365 Operations Mobile Application


Recently I had to look into an issue with the Dynamics 365 for Finance and Operations mobile application, specifically I was looking at the "normal" mobile application, not the special warehousing one, so I thought I'd share what I learnt.

My initial impression coming to the mobile app was that, when publishing mobile workspaces and then running them on my mobile device, most of the code would be running on the mobile device, and that I'd have to do something fancy to see what it was doing.

That was completely wrong! All X++ logic is still on the AOS (sounds obvious now!); the mobile application is just displaying results back to the user. That means my first tip is to use TraceParser to trace what the application is doing - same as looking at an issue in the desktop browser, the trace will show the X++ running, SQL queries, timings, etc.

My second tip is related to the first - attach to the related AOS and debug the X++ logic behind the mobile workspace. Using TraceParser first will show which forms/classes it's using, so you can set your breakpoints in the right places.

The particular issue I was looking into wasn't related to X++ logic though - the problem was that if I logged into the mobile application it worked fine, but if I signed out, I couldn't log in again without uninstalling and reinstalling the app. For this issue I wanted to see how the mobile app was communicating with the outside world - with ADFS (this happened to be on-premises) and with the AOS. Normally in the desktop browser I'd use Fiddler to see what calls were being made and pick up errors in communication. The good news is that you can do the same with a mobile device: you just connect the device and your laptop to the same WiFi and then set the device to use Fiddler on your laptop as a proxy (as described here). This setup lets you make tests on your device and see the results immediately in Fiddler on your laptop, just like you would with the desktop browser.

It is also possible to debug the code running on the device itself, but I didn't need to do that for my issue, so saving that for a rainy day.
