Using Synapse Link in Data Warehouse Architecture for D365
- TBIS
- Dec 16, 2024
If you work with Dynamics 365 Finance and Supply Chain Management (D365) and want to build reports and dashboards, you know how difficult it is to connect to and extract data from the system. This is because Microsoft does not allow direct connections to the underlying D365 database.
The most common method of connecting to D365 is through its OData API or the standard data connectors. However, depending on the size of your database and your objectives, this approach can become slow.
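As a rough illustration of why this can be slow, the sketch below pulls an entity over OData page by page. The environment URL, app registration details, and entity name are placeholders, not any specific customer's setup.

```python
# Hypothetical sketch: reading D365 F&SCM data over OData.
# Environment URL, Azure AD app details, and entity name are placeholders.
import msal
import requests

ENV_URL = "https://yourenv.operations.dynamics.com"   # D365 environment (placeholder)
TENANT = "your-tenant-id"
CLIENT_ID = "your-app-registration-id"
CLIENT_SECRET = "your-app-secret"

# Acquire an app-only token for the D365 environment
app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(scopes=[f"{ENV_URL}/.default"])

# Page through an OData entity; large entities mean many round trips,
# which is exactly where this approach becomes slow.
url = f"{ENV_URL}/data/SalesOrderLines?$top=1000"
headers = {"Authorization": f"Bearer {token['access_token']}"}
rows = []
while url:
    resp = requests.get(url, headers=headers, timeout=60)
    resp.raise_for_status()
    payload = resp.json()
    rows.extend(payload["value"])
    url = payload.get("@odata.nextLink")  # follow server-driven paging

print(f"Retrieved {len(rows)} sales order lines")
```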
The Business Problem
Austworld is a proudly Australian-owned plumbing wholesaler, supplying over 2,500 customers nationwide with a catalog of more than 3,000 high-quality products. They previously built reports and dashboards in Power BI by directly connecting to the data sources through data connectors. One significant challenge was creating reports based on their sales order lines, as the sales order lines table was quite large. They often couldn’t load the data into Power BI due to the data source size limit imposed by their licensing, or the dataset would take so long to load that it eventually timed out.
Another challenge was the availability of new data, especially sales and production information, where the business needed the data refreshed every one to two hours.
In his role as IT Projects and Systems Lead, Andre Nunes structured a project to provide a proper infrastructure for delivering data to the business through the construction of a data warehouse in the Azure cloud.
The Solution
Just a few days before the project began, we became aware of a Microsoft solution that allows direct connections to the D365 database and exposes the actual data tables, rather than relying on OData virtual entities. This was very handy at the time. The solution is known as Synapse Link for Dataverse.
Other tools and components we evaluated for the architecture include Azure Data Factory and Microsoft Fabric.
Because their main systems were Microsoft-based (ERP and CRM) and they also used Dataverse and Power Apps, we thought it made sense to incorporate Synapse Link for Dataverse into our data warehouse architecture. Our concern, however, was the cost of running a Synapse Analytics instance, which can be significant.
Data Warehouse Architecture

Synapse Link is a native integration between Microsoft Dataverse (the underlying data platform for Microsoft Dynamics 365 CE and Power Apps) and Azure Synapse Analytics. It continuously exports Dataverse data into Azure Synapse Analytics in near real time. By doing so, it bridges the gap between transactional and analytical workloads, enabling advanced analytics, machine learning, and data warehousing tasks on your business data without negatively impacting the performance of your core applications.
Synapse Link for Dataverse connects your Dataverse environment to an Azure Data Lake Gen2 storage account and then to Synapse Analytics workspaces. Once enabled, Synapse Link automatically and continuously replicates your Dataverse and D365 F&SCM tables and entities as analytical data tables, ensuring that your Synapse environment is always up to date with the latest changes from the business application.
Whenever records are created, updated, or deleted in the source systems, these changes are propagated to the analytical store in near real time. The replicated data can be directly queried within Azure Synapse Analytics using serverless SQL pools, dedicated SQL pools, or Spark environments. From there, you can create data models, run complex queries, integrate with Power BI, or leverage other Azure services to build rich analytics, dashboards, and machine learning models.
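As a minimal sketch of what querying the replicated data looks like, the snippet below connects to a Synapse serverless SQL pool from Python. The workspace endpoint, database name, table, columns, and credentials are placeholders.

```python
# Hypothetical sketch: querying the Dataverse/D365 tables exposed by
# Synapse Link through a Synapse serverless SQL pool.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=yourworkspace-ondemand.sql.azuresynapse.net;"  # serverless endpoint (placeholder)
    "Database=dataverse_yourenvironment;"                  # lake database created by Synapse Link
    "Uid=your-sql-user;Pwd=your-password;"
    "Encrypt=yes;TrustServerCertificate=no;"
)

# Replicated tables can be queried like ordinary SQL tables;
# 'salesline' and its columns stand in for whichever D365 table you exported.
cursor = conn.cursor()
cursor.execute("SELECT TOP 10 salesid, itemid, lineamount FROM dbo.salesline")
for row in cursor.fetchall():
    print(row.salesid, row.itemid, row.lineamount)
conn.close()
```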
However, to avoid the costs associated with running Synapse Analytics for analysis, queries, transformations, and so on, we decided to transfer the data to a serverless Azure SQL Database. We used Synapse Analytics pipelines to retrieve data from D365 via the Azure Data Lake, and then pushed this data directly to the Azure SQL Database.

Azure Synapse Pipelines
To implement this strategy, we utilised the powerful Synapse Analytics pipelines feature, which allows high-performance parallel job execution. We developed a pipeline capable of moving all the required data in just a few seconds. Data transformation and orchestration now occur within the Azure SQL Database.
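The production solution uses Synapse pipeline Copy activities for this movement; purely as an illustration of the equivalent step (Data Lake parquet into an Azure SQL staging table), here is a Python sketch. All paths, names, and credentials are placeholders.

```python
# Illustrative only: the real data movement is done by Synapse pipeline Copy
# activities. This sketches the equivalent lake-to-SQL load in Python.
import pandas as pd
from sqlalchemy import create_engine

# Read a table that Synapse Link landed in the Data Lake (requires adlfs)
df = pd.read_parquet(
    "abfs://dataverse-container@yourlake.dfs.core.windows.net/salesline/",
    storage_options={"account_key": "your-storage-key"},
)

# Push the raw rows into a staging table in the serverless Azure SQL Database
engine = create_engine(
    "mssql+pyodbc://your-sql-user:your-password@yourserver.database.windows.net/"
    "yourdwh?driver=ODBC+Driver+18+for+SQL+Server"
)
df.to_sql("stg_salesline", engine, schema="dbo", if_exists="replace", index=False)
```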
Data Warehouse Layers
The raw data is first stored in staging tables, then cleansed, transformed, and saved in the operations tables, before finally being loaded into enterprise tables as facts and dimensions. These three table layers are orchestrated by stored procedures, resulting in a conventional data warehouse structure.
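A minimal sketch of that orchestration is shown below, assuming hypothetical stored procedure names; the real procedures, their order, and their error handling will differ.

```python
# Hypothetical orchestration of the warehouse layers via stored procedures.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=yourserver.database.windows.net;Database=yourdwh;"
    "Uid=your-sql-user;Pwd=your-password;Encrypt=yes;"
)
conn.autocommit = True
cursor = conn.cursor()

# Staging -> operations -> enterprise (facts and dimensions), in order
for proc in (
    "dbo.usp_load_operations_from_staging",
    "dbo.usp_load_dim_customer",
    "dbo.usp_load_fact_salesline",
):
    cursor.execute(f"EXEC {proc}")

conn.close()
```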
Cost of Ownership
An internal feasibility analysis showed that, given Austworld's usage patterns and workload, using only Synapse Analytics pipelines and a serverless Azure SQL Database would save at least 30% of the total cost of ownership compared to using Synapse Analytics alone, while still delivering the expected results.
In many scenarios where workloads are intermittent or not running at full capacity around the clock, such as theirs, an Azure SQL Database serverless model can be significantly more cost-effective than maintaining a dedicated Synapse environment.
Typical Considerations
Usage Patterns
Synapse Analytics (Dedicated SQL Pool): You pay for provisioned capacity whether you’re actively running queries or not.
Azure SQL Database (Serverless): You pay primarily for the compute resources used while queries are running. When usage is low, the service can automatically pause or scale down, dramatically reducing costs during idle periods.
In Austworld's use case, there was no data warehouse activity between 7:00 PM and 6:00 AM Brisbane time, when neither their head office in Australia nor their China office was operating.
Typical Savings Ranges
For workloads that do not require continuous high-level compute, organisations often see a significant cost reduction, sometimes in the range of 30–70% or more, when switching to a serverless Azure SQL Database model. This saving comes from eliminating the constant “always-on” costs and paying only when the workload is active.
Data Volume and Complexity
If your data size and query complexity are moderate and queries are run intermittently, you could achieve savings at the higher end of this spectrum. On the other hand, if you need continuous, high-end analytics at scale, Synapse might be more suitable, and the cost difference would likely be less pronounced.
After running this platform for three months, we confirmed the expected costs and were very happy with the outcome.
The last piece of the puzzle was not an innovation for us: Austworld continued using Microsoft Power BI, a great data analytics platform, to present the output of the data warehouse to end users.
If you want to know more about this solution, or how a data warehouse and business intelligence can benefit your business, do not hesitate to get in touch.