Introducing the Fabric Accelerator!

The Fabric Accelerator is a collection of reusable code artifacts integrated with an orchestration framework for Microsoft Fabric. This accelerator helps you build, deploy, and run data platforms on Microsoft Fabric in a consistent and repeatable manner. It leverages the popular ELT (Extract, Load, Transform) framework for metadata-based orchestration. The ELT Framework is widely used with Azure Synapse and Azure Databricks, and it has now been extended to support Microsoft Fabric.

Key Features

  • Ease of Use: Simple setup and configuration to get you started quickly.
  • Scalability: Designed to handle batch workloads of any size from Minimum Viable Product (MVP) to enterprise-scale data platforms.
  • Architecture: Implements a medallion architecture following industry best practices.
  • Flexibility: Supports various data sources using metadata-driven ELT orchestration, making it adaptable to your specific needs.
  • Reusable Artifacts: No need to reinvent the wheel. Ready-to-use, configurable Data Factory pipelines, Spark notebooks, and PySpark libraries are available for common data extraction and transformation tasks.
  • Automation: Everything is automated, with no click-ops: from Infrastructure as Code (IaC) to Lakehouse monitoring and real-time insights into batch jobs.
  • Documentation: Detailed Wiki to help you every step of the way.

Why Use Fabric Accelerator?

  • Accelerate Development: Reduce the time and effort needed to build your data platform.
  • Repeatability: A framework to replicate your data platform implementations with minimal configuration.
  • Best Practices: Incorporates industry best practices to keep your platform reliable and efficient while minimizing tech debt.
  • Streamline Operations: A consistent pattern that keeps the data platform running smoothly and recovers from failures without manual patching.
  • Community Support: Join a growing community of users and contributors to share knowledge and get help.

Architecture Overview

The accelerator uses a medallion architecture. The medallion layers can be configured as files, a Lakehouse, or a Data Warehouse in OneLake. In this setup, the bronze layer consists of files, the silver layer is a Lakehouse, and the gold layer is a Data Warehouse.

  1. The Data Factory pipelines ingest data from both cloud and on-premises sources into the bronze layer of OneLake. On-premises sources require an on-premises data gateway (OPDG).
  2. Data lands in the bronze layer of OneLake as files, as-is and without transformation, in Parquet format where possible.
  3. Spark notebooks then transform the raw data from the bronze layer and store the curated output as Lakehouse tables in the silver layer of OneLake. Here, the data is cleansed, flattened, and standardized while maintaining its grain. Bronze data can be transformed into Lakehouse tables one-to-one or one-to-many (a minimal PySpark sketch of this step follows the list).
  4. Data Warehouse stored procedures apply business rules to the Lakehouse tables in the silver layer and land the results as DW tables in the gold layer of OneLake. Typical activities include applying custom business rules, creating snapshots, merging data from multiple tables, and building hub-and-spoke star schemas. A silver-layer Lakehouse table can be transformed into gold-layer DW tables one-to-one, one-to-many, or many-to-one.
  5. Semantic models built on the gold-layer DW tables serve as the analytics layer, sometimes referred to as the diamond layer. Here the relationships between tables are established.
  6. Orchestration is underpinned by the ELT Framework, a metadata-driven tool that streamlines the ingestion and transformation pipelines.
  7. The ELT Framework uses a serverless Azure SQL database for its metadata, mirrored into the Fabric workspace. Semantic models built on this metadata provide real-time reporting through Direct Lake (an illustrative metadata sketch also follows the list).
  8. Power BI serves as the reporting and analytics front end, supported by Copilot in Power BI for self-service capabilities.
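
To make step 3 a little more concrete, here is a minimal PySpark sketch of a bronze-to-silver transformation. It is illustrative only: the paths, column names, and target table are hypothetical placeholders, not the accelerator's actual notebooks.

```python
# Illustrative bronze-to-silver transformation (step 3). All paths, columns,
# and table names below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # a Fabric notebook provides this session automatically

# Read the raw Parquet files landed in the bronze layer, as-is.
bronze_df = spark.read.parquet("Files/bronze/sales/orders/")

# Cleanse and standardize while keeping the original grain (one row per order line).
silver_df = (
    bronze_df
    .dropDuplicates(["OrderID", "LineNumber"])          # guard against accidental re-loads
    .withColumn("OrderDate", F.to_date("OrderDate"))     # standardize data types
    .withColumn("CustomerName", F.trim(F.col("CustomerName")))
    .withColumn("LoadTimestamp", F.current_timestamp())  # audit/lineage column
)

# Persist the curated data as a Delta table in the silver Lakehouse.
silver_df.write.mode("overwrite").format("delta").saveAsTable("silver_lakehouse.orders")
```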

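The value of steps 6 and 7 is that pipelines and notebooks are driven by configuration rows rather than hard-coded logic. The toy sketch below shows the general idea of such a metadata-driven dispatcher; the dataclass fields and names are invented for illustration and are not the ELT Framework's actual schema or API.

```python
# Illustrative only: a toy metadata-driven dispatcher, not the ELT Framework's actual schema or API.
from dataclasses import dataclass

@dataclass
class IngestionTask:
    source_system: str   # e.g. an on-premises SQL Server or a SaaS API
    source_object: str   # table or endpoint to extract
    target_layer: str    # "bronze", "silver", or "gold"
    pipeline_name: str   # Data Factory pipeline or notebook to run
    enabled: bool

# In the real framework these rows would come from the Azure SQL metadata database.
tasks = [
    IngestionTask("erp", "dbo.Orders", "bronze", "PL_Copy_To_Bronze", True),
    IngestionTask("crm", "contacts", "bronze", "PL_Copy_To_Bronze", True),
    IngestionTask("erp", "dbo.Orders", "silver", "NB_Transform_Orders", True),
]

for task in (t for t in tasks if t.enabled):
    # A real orchestrator would invoke the pipeline or notebook via the Fabric APIs
    # and record run status back into the metadata database.
    print(f"Dispatching {task.pipeline_name} for {task.source_system}.{task.source_object} -> {task.target_layer}")
```
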
Getting Started

Interested in collaborating?

You can collaborate in various ways, including:

  • Submit pull requests.
  • Update or enrich the Wiki documentation.
  • Raise issues when you spot one.
  • Answer questions in the discussion forum.

Let’s build awesome data platforms with Microsoft Fabric, powered by the Fabric Accelerator!

3 thoughts on “Introducing the Fabric Accelerator!”

  1. Love this concept; I have used metadata models for prior DW development.
    Is this accelerator something you expect to expand, maintain and support going forward?

    1. Yes Robert, this is just the start. The roadmap is long term. I’m working on exciting new features like the observability capability that went live a few weeks back. Please connect with me on LinkedIn, where I will be posting updates.

      1. Sent an invite on LinkedIn, would love to hear about the long-term roadmap.
