The following steps take you through configuring your Dynamics 365 Business Central (BC) as well as Azure resources to enable the feature.
You will need an Azure credential to connect BC to the Azure Data Lake Storage account, something we will configure later on. The general process is described at Quickstart: Register an app in the Microsoft identity platform | Microsoft Docs. The one I created for the demo looks like this:
Take particular note of the a) and b) fields on it. Also note that you will need to generate a client secret c) by following the steps detailed under Option 2: Create a new application secret. Add a redirect URI d), https://businesscentral.dynamics.com/OAuthLanding.htm, so that BC can connect to Azure resources, such as Blob storage, using this credential.
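To make the relationship between these values concrete, here is a minimal Python sketch that acquires an app-only token for Azure Storage using the msal library. The placeholders stand for the a), b) and c) values above; BC performs the equivalent OAuth flow internally once the setup page is configured, so this is purely illustrative.

```python
# Minimal sketch, assuming the msal package (pip install msal).
# The placeholders correspond to the a), b) and c) values on the app registration.
import msal

app = msal.ConfidentialClientApplication(
    client_id="<application-client-id>",                        # a) Application (client) ID
    authority="https://login.microsoftonline.com/<tenant-id>",  # b) Directory (tenant) ID
    client_credential="<client-secret>",                        # c) the secret created above
)

# App-only token for the Azure Storage resource (used for Blob uploads).
result = app.acquire_token_for_client(scopes=["https://storage.azure.com/.default"])
if "access_token" not in result:
    raise RuntimeError(result.get("error_description"))
print("Token acquired; expires in", result["expires_in"], "seconds")
```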
For communication with Microsoft Fabric, one more permission needs to be added to the service principal. This is done on the API permissions tab: click Add a permission, select Azure Storage and then Delegated permissions, search for user_impersonation and select it, and click Add permissions to add it to the service principal.
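One way to check that the delegated permission is in place (a hedged sketch, not part of the tool) is an interactive sign-in against the same app registration requesting the user_impersonation scope. Note that interactive flows require a public-client redirect URI such as http://localhost to be enabled on the registration first.

```python
# Minimal sketch, assuming msal; opens a browser for an interactive sign-in.
# Requires "Allow public client flows" / a http://localhost redirect URI
# on the app registration. Placeholders as before.
import msal

app = msal.PublicClientApplication(
    client_id="<application-client-id>",
    authority="https://login.microsoftonline.com/<tenant-id>",
)
result = app.acquire_token_interactive(
    scopes=["https://storage.azure.com/user_impersonation"]
)
print("Delegated token acquired" if "access_token" in result else result.get("error"))
```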
In Microsoft Fabric you need to create a lakehouse. Go to the appropriate workspace, click on New and select Lakehouse (preview). Give it a name and click on Create. This creates a lakehouse with a default configuration.
For moving the delta files to tables you need to create a notebook. Go to the appropriate workspace and choose Home. Click on New, select Import notebook, and upload the notebook from the fabric folder.
Read more at: How to use notebooks - Microsoft Fabric | Microsoft Learn
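For orientation, the heart of such a notebook is usually a small PySpark step that reads the files staged by the extension in the lakehouse Files area and writes them into a managed table. The paths, table name and staged file format below are assumptions for illustration; the notebook in the fabric folder is the authoritative version.

```python
# Minimal sketch, assuming a Fabric PySpark notebook where the `spark`
# session is predefined. Paths, names and the Parquet source format are
# placeholders; consult the shipped notebook for the real logic.
source_path = "Files/deltas/Customer-18"   # hypothetical staging folder
target_table = "Customer_18"               # hypothetical lakehouse table

df = spark.read.parquet(source_path)
(df.write
   .mode("append")                         # the real notebook may merge/upsert instead
   .format("delta")
   .saveAsTable(target_table))
```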
You can also schedule the notebook to run at a specific time. Click on Schedule in the ribbon and select the time and frequency.
The service principal that you created in step 1 needs to be added to the workspace. Go to the workspace, click on Manage access, search for your service principal, select it, and click Add.
If you cannot see the service principal, go to the admin tenant settings and enable the setting "Allow service principals to use Power BI APIs".
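Adding the service principal can also be scripted through the Power BI REST API (Groups - Add Group User), which is handy for automated setups. A minimal sketch, assuming the requests package and an access token with admin rights on the workspace; all IDs are placeholders:

```python
# Minimal sketch, assuming the requests package and a Power BI API access token
# (scope https://analysis.windows.net/powerbi/api/.default). IDs are placeholders.
import requests

workspace_id = "<workspace-guid>"
sp_object_id = "<service-principal-object-id>"  # object ID, not the app (client) ID
token = "<access-token>"

resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/groups/{workspace_id}/users",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "identifier": sp_object_id,
        "principalType": "App",
        "groupUserAccessRight": "Member",
    },
)
resp.raise_for_status()
print("Service principal added to the workspace")
```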
Install the extension into BC using the code in the businessCentral folder, following the general guidance for developing extensions in Visual Studio Code.
The app exposes permission sets for the different user roles that work with this app. Remember to assign the right permission set to each user, based on the scope of their tasks:
- ADLSE - Setup: the permission set to be used when administering the Azure Data Lake Storage export tool.
- ADLSE - Execute: the permission set to be used when running the Azure Data Lake Storage export tool.
Once you have the Azure Data Lake Storage Export extension deployed, open Page 82560 - Export to Azure Data Lake Storage. To export data from inside BC to the data lake, you will need to add a configuration that makes BC aware of the location in the data lake.
Let us take a look at the settings shown in the sample screenshot below:
- Storage Type The type of storage to export to. Choose "Microsoft Fabric".
- Tenant ID The tenant ID in which the app registration created above resides (refer to b) in the picture at Step 1).
- Workspace The workspace in your Microsoft Fabric environment where the lakehouse is located. This can also be a GUID.
- Lakehouse The name or GUID of the lakehouse inside the workspace.
- Max payload size (MiBs) The size of the individual data payload that constitutes a single REST API upload operation to the data lake. A bigger size means fewer uploads but may consume more memory on the BC side. Note that each upload creates a new block within the blob in the data lake (a sketch of this block mechanism follows this list). The size of such blocks is constrained as described at Put Block (REST API) - Azure Storage | Microsoft Docs.
- Skip row version sorting Allows records to be exported in the order they are fetched through SQL. This can help avoid query timeouts when a large number of records must be exported from a table, say, during the first export. Normally, records are sorted ascending by their row version so that, in case of a failure, the next export can resume by exporting only those records with a row version higher than that of the last exported record; this lets incremental updates reach the lake in the same order the updates were made. Enabling this check may therefore cause a subsequent export job to re-send records that were already exported, degrading performance on the next run. It is recommended to use this cautiously for only a few tables (while disabling export for all other tables), and to disable it once all the data has been transferred to the lake.
- Emit telemetry The flag to enable or disable operational telemetry from this extension. It is set to True by default.
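To illustrate what a single upload operation amounts to (see Max payload size above), the sketch below stages one block of a block blob with the azure-storage-blob Python SDK, mirroring the Put Block / Put Block List REST calls. It targets classic Blob storage; OneLake in Fabric is reached through a different endpoint, so treat this purely as an illustration of the block semantics. All names are placeholders.

```python
# Minimal sketch, assuming the azure-storage-blob package. Stages one block
# (one payload-sized upload) and commits the block list. Names are placeholders.
import uuid
from azure.storage.blob import BlobBlock, BlobClient

blob = BlobClient(
    account_url="https://<account>.blob.core.windows.net",
    container_name="<container>",
    blob_name="deltas/Customer-18/part-0001.parquet",  # hypothetical path
    credential="<token-or-key>",
)

payload = b"..."  # one chunk of at most "Max payload size" MiB
block_id = uuid.uuid4().hex                             # same length for all blocks
blob.stage_block(block_id=block_id, data=payload)       # Put Block
blob.commit_block_list([BlobBlock(block_id=block_id)])  # Put Block List
```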