Microsoft is retiring auto-generated default semantic models

Microsoft is making a big move with Fabric: it’s phasing out the legacy feature that automatically created semantic models, the structured metadata layer that defines and gives meaning to raw data. These Default Semantic Models helped teams quickly spin up reports, validate datasets, and test analytics in a lightweight way. That convenience is going away, and by December 2025, creating semantic models will be entirely manual.

Here’s why that matters: Microsoft is signaling that speed isn’t everything. Correctness, traceability, and governance now take the lead. If you’ve scaled your data operations, you’ve already run into issues with shadow modeling or data ownership conflicts. Auto-generated models might be fast, but they also obscure the lineage and accountability teams need as they grow. When you’re pushing workloads across AI, BI, and machine learning environments, clarity and control over your models make all the difference.

This is part of a long-term shift across cloud ecosystems, with stronger emphasis on data stewardship. That makes sense. Bad data decisions at scale mislead strategy. Microsoft wants Fabric to be taken seriously in mature environments. And serious environments don’t auto-generate key structural logic.

There’s a clear trade-off in usability, but the upside is tighter governance, better audit trails, and more accurate analytics outcomes. The change forces teams to understand what they’re modeling. That’s a positive shift. If you’re operating at scale or planning to, you’ll want those controls in place anyway.

According to Robert Kramer, Principal Analyst at Moor Insights & Strategy, this move helps “organize workspaces and make reporting more transparent.” He called out improvements in “audit trails, clearer lineage, and more accurate usage data.” If you’ve built systems that need to comply with regulatory frameworks or you’re injecting AI into your analytics stack, those improvements aren’t just nice-to-haves, they’re essential.

So, what should leadership teams take away from this? Simplicity is useful when you’re experimenting. But when you’re operationalizing insights enterprise-wide, across departments, countries, or product lines, automation that hides complexity becomes a liability. Microsoft is eliminating that liability. It’s a change that nudges your teams toward long-term discipline. If you’ve got the right architecture and people in place, you won’t just keep up, you’ll accelerate.

The shift to manual semantic model creation may slow down rapid prototyping for enterprises

There’s a real cost to removing the automation layer, especially for teams relying on speed. Until now, Microsoft Fabric’s Default Semantic Models allowed technical teams to jumpstart analytics workflows without heavy upfront design. That’s ending. From now on, every semantic model must be built explicitly, no automatic scaffolding, no shortcuts.

For teams running proofs-of-concept, experimenting with new datasets, or moving fast to validate KPIs, this introduces more friction. What used to be instant now needs planning. It changes the rhythm of how data science and analytics teams operate. Startups and midsize companies will feel this more, particularly those that leaned heavily on the default models for quick insights.

But here’s the thing: while you lose some convenience, you gain structural integrity. When models are built manually, you gain clarity in how metrics are calculated, how data flows, and who owns what. That precision matters when your reports influence millions in budget or product decisions. Rapid prototyping is valuable, but too often, it carries forward errors baked into early, unchecked assumptions.

Microsoft knows that removing automation slows the process up front. Robert Kramer, Principal Analyst at Moor Insights & Strategy, noted that while this change “introduces an additional step for enterprise users,” it also ensures that the models being used are high-quality and intentionally designed.

To minimize the disruption, Microsoft is offering a migration script. If your teams haven’t used it yet, it’s time. That script helps you move away from defaults without breaking functionality, giving you a foundation to build on. The earlier you do it, the less chaos you’ll face later, especially when the old models are officially unsupported.

If you’re managing a data engineering or business intelligence team, your priority now should be governance readiness. This is about upskilling your teams, enforcing clearer model definitions, and allocating time for more deliberate structure. Don’t underestimate the resource shift.

Competitive hyperscalers are also moving toward curated, governed data models

Microsoft’s decision to retire auto-generated semantic models isn’t a stand-alone move. It reflects a broader shift across major cloud platforms. Leaders at AWS and Google are already putting emphasis on curated, explicitly defined data models. The goal is consistent: structured data layers that are explainable, auditable, and optimized for responsible AI use.

Google’s Looker requires developers to define models using LookML, its proprietary modeling language. More recently, it integrated Gemini, an AI-powered system that offers model suggestions and enables users to interact with data through prompts instead of writing full queries. That enhances productivity, sure, but the core remains manual: the logic behind the models must still be user-directed and governed.

AWS, on the other hand, lets machine learning recommend fields for inclusion in QuickSight Topics, but final model curation stays in human hands. It’s intentional. These companies want control over model integrity because they know the quality of your data layers directly affects what AI sees, predicts, and reports.

Robert Kramer, Principal Analyst at Moor Insights & Strategy, pointed out that Microsoft is pushing toward “curated, explainable models that work well with AI and keep data governance in check.” In other words, what the hyperscalers are really competing on isn’t just speed or ease of use, it’s trust in the data foundation.

For C-level leaders, the takeaway is clear. Regardless of platform, the message is the same: AI-enhanced analytics only delivers value if the structure beneath is stable, transparent, and compliant. That means fewer shortcuts and more investment in controlled modeling processes. If your enterprise hasn’t prioritized this yet, now is the time to do so.

The real risk is failing to modernize your data infrastructure while your competitors move ahead with governance-ready AI and scalable dashboards built on top of solid data models. Microsoft’s strategy aligns with that reality. So do its competitors’. You need a plan that does, too.

Enterprises must proactively transition existing default semantic models to adapt to the new framework

Microsoft isn’t just changing forward-looking behavior, it’s changing how existing systems are handled. Default Semantic Models already in use within Fabric won’t vanish, but they will be decoupled from their parent data assets. That means they lose automatic updates and become standalone artifacts that require explicit maintenance. If your teams don’t intervene, you’ll be stuck managing fragile models that no longer align with evolving source data environments.

The right move is to take action now. Use Microsoft’s Fabric Admin APIs to locate all current Default Semantic Models. For each, decide whether to keep, merge, or retire. Once you’ve identified the ones you want to keep, rebuild them using modern, governable formats: Power BI Project (PBIP) or Tabular Model Definition Language (TMDL). These formats support version control, structured collaboration, and scalable governance, essentials for large-scale enterprise environments.
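
To make the inventory step concrete, here is a minimal Python sketch of what discovery could look like, assuming the Fabric Admin “List Items” REST endpoint and a pre-acquired Microsoft Entra ID access token. The endpoint path, the `SemanticModel` type filter, and the response field names are assumptions based on the publicly documented API shape, not details from this announcement, and you would still need to cross-reference each result with its parent lakehouse or warehouse to flag which models are defaults.

```python
import requests

# Assumed Fabric Admin "List Items" endpoint; verify against current Microsoft documentation.
FABRIC_ADMIN_ITEMS = "https://api.fabric.microsoft.com/v1/admin/items"


def list_semantic_models(token: str) -> list[dict]:
    """Page through the Admin Items API and collect semantic models tenant-wide."""
    headers = {"Authorization": f"Bearer {token}"}
    params = {"type": "SemanticModel"}  # assumed item-type filter
    items: list[dict] = []
    url = FABRIC_ADMIN_ITEMS
    while url:
        resp = requests.get(url, headers=headers, params=params)
        resp.raise_for_status()
        payload = resp.json()
        # Field names below are assumed from the documented response shape.
        items.extend(payload.get("itemEntities", []))
        # Follow the continuation URI when the tenant has more items than one page.
        url = payload.get("continuationUri")
        params = None  # the continuation URI already encodes the query
    return items


if __name__ == "__main__":
    token = "<service-principal-or-user-token>"  # acquire via MSAL or azure-identity in practice
    for model in list_semantic_models(token):
        # Export these into your tagging sheet to drive keep / merge / retire decisions.
        print(model.get("workspaceId"), model.get("id"), model.get("name"))
```

A sketch like this is only the starting point; the value comes from attaching an owner and a keep, merge, or retire decision to every model it surfaces before the old defaults lose support.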

Beyond just technical updates, this transition involves a significant behavioral shift across teams. Your analysts and engineers need to understand how to design efficient, transparent models. That includes mastering concepts like star schema design, not just for performance, but for clarity, maintainability, and compliance. If your team isn’t well-versed in semantic fundamentals, this transition is going to be inefficient. With the right preparation, it becomes an opportunity to elevate internal standards.

Robert Kramer, Principal Analyst at Moor Insights & Strategy, advises organizations to start tagging and categorizing existing models now. In his view, cleaning up these assets early is key to avoiding later issues, both in report accuracy and system performance. He also points to Microsoft Purview as a smart integration point, giving enterprises a way to catalog and govern metadata more effectively.

For business leaders, this means dedicating resources not only to migration projects, but also to training and governance oversight. This is a decision layer that must be owned at the leadership level, because the complexity and impact will scale with your organization. Models drive metrics. Metrics guide decisions. If those models aren’t built well or maintained with discipline, data-driven decisions lose their value.

Enterprises that treat this shift as a one-off tooling update will fall short. Those that use it to embed stronger accountability across BI and AI teams will outperform, both in efficiency and trust in their data. This is where governance becomes a competitive advantage.

Key takeaways for decision-makers

  • Microsoft tightens data governance in Fabric: Leaders should prepare for the removal of auto-generated Default Semantic Models by ensuring teams are equipped to build models manually, boosting accountability, data transparency, and governance at scale.
  • Prototyping will slow, but quality will rise: Executives should expect a short-term impact on prototyping speed, but this tradeoff will be offset by more consistent, auditable, and high-quality data models driving analytics and reporting.
  • Industry trend supports curated modeling: Microsoft aligns with AWS and Google in mandating curated semantic layers; leaders should view this as a long-term competitive move toward AI-ready, explainable data architectures.
  • Transition planning is critical: Organizations must identify, rebuild, and version key semantic models ahead of the 2025 cutoff. Prioritize model governance, team training, and metadata management now to avoid operational disruption later.

Alexander Procter

September 8, 2025