Lessons learned from a one-year migration towards Fabric & dbt

By Amaury Fouville, Corentin De Ro, Florian Thoen

Modern data platforms are expected to deliver fast, reliable insights while remaining flexible enough to adapt to evolving business needs. In reality, many platforms accumulate complexity more quickly than they generate value. This was the situation that triggered a year-long migration to Microsoft Fabric, with dbt at the centre of the transformation layer. This article shares what that migration actually taught us about Fabric in terms of cost, capacity management, security and overall maturity.

When architecture stops serving the business

The starting point was a platform built around a centralised data lake feeding a Data Vault model, which then powered all data marts and the reporting layer, with all transformations in SQL. While Data Vault had originally been introduced for traceability and historical logging, it brought significant complexity without delivering the expected business value.

Over time, several structural issues emerged:

  • Development and production shared the same environment, meaning changes were made directly in production.
  • Documentation and data lineage were lacking, making it difficult to understand dependencies or assess the impact of changes.
  • Semantic models had grown organically, with little governance, resulting in overlapping KPIs and inconsistent reporting.
  • Most critically, the time to insight became too long, leading business teams to lose trust in the platform.

As trust decreased, business teams began building their own pipelines and datasets. Shadow IT emerged, with multiple versions of the same tables and metrics coexisting.

Why incremental fixes were no longer enough

At this point, improving isolated components was not enough. The combination of architectural complexity and organisational trust issues made a clean reset unavoidable.

The decision was made to start from a blank page, using Microsoft Fabric as the core analytics platform and dbt to structure transformations. The goal was not to replicate the old situation, but to rebuild a platform that could restore confidence through clearer ownership, faster delivery, and better control over operations and cost.

Designing a platform with clear responsibilities

In the target architecture, ingestion patterns largely remained unchanged, but everything downstream was redesigned.

An ingestion layer acts as a controlled buffer for incoming data from multiple sources. On top of that, the new architecture built around Fabric and dbt produces business-ready tables. Warehouses are used to materialize transformations, while lakehouses enable shortcuts to shared data without duplication.

Gold tables are exposed to business domains so teams can build their own analyses while relying on centrally governed foundations. This design deliberately balances autonomy and control: core logic is owned centrally, while domain teams retain flexibility where it adds value. The freedom they previously exercised through shadow IT has not been removed; it has been properly redefined within a new framework.
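
To make this concrete, a gold table in this setup can be an ordinary dbt model materialized in the warehouse, reading upstream data that lakehouse shortcuts expose without copying it. A minimal sketch, with hypothetical model and source names:

    -- models/gold/fct_sales.sql (hypothetical model)
    -- Materialized as a physical table in the Fabric warehouse;
    -- the staging model behind ref() reads shared data through a
    -- lakehouse shortcut, so nothing is duplicated.
    {{ config(materialized='table') }}

    select
        order_id,
        customer_id,
        order_date,
        amount
    from {{ ref('stg_sales') }}

Because gold tables are plain warehouse tables, domain teams can query them directly or build semantic models on top without touching the central transformation logic.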

Migration reality: scale, ambiguity, and refactoring

The migration itself was neither small nor clean. Hundreds of tables and views existed, with unclear usage, inconsistent naming, and undocumented dependencies. Determining what was still used, what was not, and what needed to be rebuilt was the first significant challenge to address.

This was not a lift-and-shift. The Data Vault was deliberately abandoned, which meant rebuilding lineage and refactoring logic to ensure the resulting data remained consistent and trustworthy. For this, right from the start of the migration, special care went into leveraging dbt capabilities and best practices for clean code, better collaboration, and easier maintenance.
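
One common dbt practice that supports this kind of refactoring (the model and column names here are hypothetical) is declaring tests and descriptions next to each model, so that rebuilt logic can be checked for consistency on every run:

    # models/gold/fct_sales.yml (hypothetical)
    version: 2
    models:
      - name: fct_sales
        description: "Business-ready sales facts, rebuilt from legacy views."
        columns:
          - name: order_id
            description: "Primary key of the sales fact."
            tests:
              - unique
              - not_null
          - name: customer_id
            tests:
              - not_null

Tests like these can run in CI, surfacing regressions before they reach business users.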

Progress tracking, validation, and status reporting became just as important as development work.

Cost and capacity become operational concerns

Operating Fabric at scale introduced new challenges, particularly around capacity consumption. Fabric’s ability to burst compute provides flexibility, but it also introduces risk. A small number of poorly scoped queries can exhaust capacity and block all workloads for extended periods.

To regain control, several safeguards were introduced:

  • Automated termination of long-running queries in dev
  • Extensive use of dbt defer functionality to avoid unnecessary recomputation
  • Default row limits during local development to prevent accidental overload (see the sketch after this list)
  • Dedicated monitoring to gain visibility beyond native Fabric tooling (see FUAM, Fabric Unified Admin Monitoring)
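
As an example of the row-limit safeguard, a small macro along these lines (the macro name and default cap are illustrative) can restrict reads whenever the active target is a development environment; since Fabric warehouses speak T-SQL, the cap is expressed with TOP:

    -- macros/dev_top.sql (illustrative)
    {% macro dev_top(row_count=1000) %}
        {#- Emit a TOP clause only on the dev target;
            production builds read everything -#}
        {% if target.name == 'dev' %} top ({{ row_count }}) {% endif %}
    {% endmacro %}

A model then starts its select with {{ dev_top() }}, and heavier dev runs require an explicit override. Deferral works from the command line: a run such as dbt build --select state:modified+ --defer --state path/to/prod-artifacts resolves unchanged upstream models against production artefacts instead of recomputing them in dev.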

These protection measures fundamentally shifted responsibility. Safe defaults were enforced by the platform, while developers consciously opted into heavier workloads when needed.

Security trade-offs in a maturing platform

Handling PII data exposed limitations in Fabric’s current security model. We could not restrict Viewers of a Workspace to see only non-PII data. Workspace-level permissions proved too coarse, while item-level controls alone were insufficient.

The resulting solution combined:

  • Separation of execution and data storage workspaces
  • Item-level sharing for controlled access
  • SQL-based data grants aligned with identity groups and time-bound access (sketched below)
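
A sketch of what such SQL-based grants can look like in a Fabric warehouse (schema, table, column, and group names are illustrative):

    -- Column-level grant: the reader group sees only non-PII columns
    GRANT SELECT ON OBJECT::gold.customers
        (customer_id, country, segment)
        TO [sg-analytics-readers];

    -- Time-bound access: revoked on schedule once the agreed access
    -- window ends (the scheduling itself lives outside the warehouse)
    REVOKE SELECT ON OBJECT::gold.customers
        FROM [sg-analytics-readers];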

This approach was pragmatic rather than elegant, reflecting the reality of working with a platform that is still evolving.

dbt as a structuring force

dbt played a role far beyond SQL transformations. It enforced development discipline through CI/CD, environment isolation, deferred builds, and shared macros. This structure proved essential to keep costs under control, to standardize code around best practices, and to enable collaboration both inside the data platform team and between central and domain teams.
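
For instance, environment isolation falls out of dbt's target mechanism; a trimmed-down profile (names are illustrative, connection details omitted) might look like:

    # profiles.yml (illustrative; connection details omitted)
    fabric_dw:
      target: dev            # developers land in the safe environment by default
      outputs:
        dev:
          type: fabric       # dbt-fabric adapter
          schema: dbt_dev    # isolated development schema
          # server, database, authentication, ... per environment
        prod:
          type: fabric
          schema: gold       # production schema, written only via CI/CD
          # server, database, authentication, ... per environment

Macros such as the dev_top() cap shown earlier key off target.name, so the same code runs safely in dev and at full scale in prod.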

In practice, dbt became a key mechanism for control, not by restricting teams, but by making good practices the default.

What this migration ultimately taught us about Fabric

A few lessons stand out:

  • Shortcuts are central pieces of architecture in Fabric
  • Fabric capacity usage and the query optimizer need monitoring and guardrails to avoid surprises
  • Fabric's rapid rhythm of preview releases makes it difficult to avoid continuous engineering on the platform
  • Any modern data platform without best practices at every level is just an old mess in a new environment

All in all, our feedback about Fabric:

Fabric is definitely mature enough to be implemented as a cornerstone of data platforms in 2026: it bundles best-in-class Microsoft data tools into one efficient platform. But as of early 2026, Fabric is not yet mature enough to fulfil its promise of low engineering effort and business-user friendliness. Engineers, as with any platform, remain a must to build and maintain anything data-related in complex environments.

If you are interested in:

  • learning how we tackled this migration
  • learning how we implemented security for PII data in Fabric
  • discussing any challenges or questions you encounter with Microsoft Fabric

Please get in touch with us :)

-Amaury Fouville | Corentin De Ro | Florian Thoen