The potential AI bubble burst is driven by unsustainable business models

We’re in the middle of another wave of technological over-optimism. Right now, artificial intelligence is being treated like the new electricity. Everyone wants in. That’s not necessarily bad; it’s how progress happens. But it also creates distortion. Companies with software that barely works are raising hundreds of millions of dollars on the strength of an idea. That’s a dead-end path.

The real risk here isn’t that AI technology will fail. It won’t. GenAI works well, and it will keep improving. The problem is financial. Business models propped up by venture capital with no clear revenue path are fragile. They rely on the belief that scale will eventually save them. At some point, someone has to pay for the compute, the data, the talent. If there’s no obvious way to recover those costs, the model collapses, no matter how strong the underlying tech is.

This is what happened in the dot-com crash over 20 years ago. Companies with no real product, just a pitch deck and fragile infrastructure, got wiped out. E-commerce survived, of course. Amazon came through stronger. But dozens of others vanished. The ones built on hype went first. The same pattern could repeat with AI.

Brian Jackson, Principal Research Director at Info-Tech Research Group, said, “The AI business model today is a highly subsidized one.” Think about that. If the economic foundation is artificial, survival becomes a matter of timing. Not innovation. Not even product performance.

For C-suite leaders, the question is simple: Are the AI tools you’re relying on supported by sustainable revenue? Or are they kept alive by burn rates and big promises? If it’s the latter, you’re depending on something that might not be here in six months.

Large, diversified tech giants are considered the safest bets

Let’s talk about the big players. Microsoft, Google, Amazon, Apple, IBM: these companies aren’t going anywhere. They’ve built AI into their ecosystems, supported by existing revenue streams in cloud infrastructure, enterprise software, hardware, and ads. This diversification protects them. If one area suffers, another picks up the slack.

Right now, these players are pricing AI services attractively. Call that a strategy or a temporary investment; either way, it’s not sustainable. The AI stack is expensive to run. At some point, the business model shifts from growth-at-all-costs to profit-at-scale. That’s when your cost structures change, fast. CIOs should treat price increases as a default assumption.

Brian Jackson from Info-Tech warns this will come as “business model change.” That’s tech industry shorthand for margin recovery. Translation: the cheap experimental phase is closing. The stable, enterprise-centric phase begins. And that phase gets expensive.

If your company is building products or workflows that depend on low-cost generative AI APIs, now’s the time to step back and do the math under new assumptions. What happens when costs go up 3x or 5x? Do the tools still deliver ROI?
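That math can be sketched directly. The figures below are illustrative assumptions, not benchmarks: a hypothetical tool delivering $50k/month in value on $8k/month of API spend, re-evaluated under 3x and 5x pricing.

```python
# Hypothetical cost-sensitivity check for a genAI-backed workflow.
# All dollar figures are illustrative assumptions, not benchmarks.

def roi(monthly_value: float, monthly_api_cost: float, cost_multiplier: float) -> float:
    """Return ROI as (value - cost) / cost under a given API price multiplier."""
    cost = monthly_api_cost * cost_multiplier
    return (monthly_value - cost) / cost

# Assumed: a tool delivering $50k/month in value on $8k/month of API spend.
value, base_cost = 50_000, 8_000

for multiplier in (1, 3, 5):
    print(f"{multiplier}x pricing -> ROI = {roi(value, base_cost, multiplier):.2f}")
# 1x pricing -> ROI = 5.25
# 3x pricing -> ROI = 1.08
# 5x pricing -> ROI = 0.25
```

Even in this toy model, a 5x price hike turns a clear win into a marginal one. Running the same exercise with your real numbers tells you which deployments survive a repricing.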

Here’s the upside: big tech won’t disappear. Partnering with hyperscalers gives you continuity. But it also gives them leverage. You have to design for optionality, build in the ability to switch models, renegotiate terms, or scale down your dependencies. Because when the tech stays but the pricing explodes, your flexibility becomes your survival strategy.

Smaller, specialized AI vendors may weather the storm

Most startups in AI aren’t built to last. The market is saturated with companies that look interesting today but won’t survive once funding dries up. There are exceptions, and those are the ones focused on very specific use cases. Companies that solve one problem clearly and do it better than anyone else. Not general-purpose platforms. Not another chatbot API. Think tightly scoped functions with measurable value.

Ricardo Carreon, Head of Technology at Almacenes Distribuidores de la Frontera, pointed this out clearly: “With the small guys, there are some that do specialized stuff, such as genAI for legal or aerospace… As long as they solve a narrow problem and that niche is valuable, they can be pretty successful.”

His point is critical. The companies that survive won’t be the loudest. They’ll be quiet operators sitting in sectors most venture investors don’t fully understand. If they build tech that solves a real pain point, delivers high ROI for a particular type of customer, and the market isn’t overrun by copycats, they can thrive, even during a broader shakeout.

CIOs need to get better at identifying which partners fall into that category. You’re not betting on size, you’re betting on relevance. Evaluate your suppliers and AI vendors based on the problem they solve and whether anyone else can do it better. If your AI partner brings unique value to a crucial function (procurement, legal document automation, compliance risk), keep investing. If they claim they “can serve every sector and every use case,” it’s time to reassess.

Specialized vendors don’t have the safety net of diversified revenue. What they have is focus. That’s their survival lever.

CIOs must evaluate and mitigate dependency on external AI models

A lot of enterprise teams don’t know how much they depend on a single AI model until something breaks. That’s a dangerous blind spot. Whether it’s OpenAI’s GPT, Google’s Gemini, or Anthropic’s Claude, if your systems are overly tied to one vendor’s API or pricing model, you’re at a disadvantage.

Smart CIOs are already doing the work. They’re running internal audits to map dependencies and pressure-test how easily their solutions can switch to new models. That means building modular systems, decoupling apps from model-specific behavior, and creating infrastructure that can reload, re-train, or redirect if needed.
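The decoupling those audits push toward can be sketched as a thin abstraction layer. The vendor classes below are hypothetical stand-ins, not real SDK clients; the point is that application code depends only on an interface, so swapping providers is a configuration change rather than a rewrite.

```python
# Minimal sketch of decoupling application logic from any one model vendor.
# VendorAModel / VendorBModel are hypothetical placeholders, not real SDKs.
from typing import Protocol


class TextModel(Protocol):
    """The only contract application code is allowed to depend on."""
    def complete(self, prompt: str) -> str: ...


class VendorAModel:
    """Stand-in for one provider's API client (hypothetical)."""
    def complete(self, prompt: str) -> str:
        return f"[vendor-a] {prompt}"


class VendorBModel:
    """Stand-in for an alternate provider (hypothetical)."""
    def complete(self, prompt: str) -> str:
        return f"[vendor-b] {prompt}"


def summarize(model: TextModel, document: str) -> str:
    # Application logic sees only the TextModel interface, never a vendor SDK,
    # so a pricing shock or outage means changing one constructor call.
    return model.complete(f"Summarize: {document}")


primary: TextModel = VendorAModel()
fallback: TextModel = VendorBModel()
print(summarize(primary, "Q3 report"))   # [vendor-a] Summarize: Q3 report
print(summarize(fallback, "Q3 report"))  # [vendor-b] Summarize: Q3 report
```

In a real system the stand-ins would wrap actual provider SDKs, and the swap test is exactly the pressure test described above: route traffic through the fallback and see what breaks.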

Srini Pagidyala, Co-founder of AI platform Aigo.ai, puts it plainly: test now or risk losing control later. Enterprises should use this window to replace or reroute elements of their pipeline just for the sake of knowing how resilient their architecture really is. Once models are swapped and workflows retested, most companies find out they’re not as agile as they thought.

Kjell Carlsson, VP Analyst at Gartner, agrees on the need for model flexibility but cautions that, while redirecting an API call sounds easy in theory, “switching from one model to another is a lot more painful than that.” In practice, you’re rebuilding code, revalidating logic, and absorbing the overhead of testing, integration, and downtime.

Here’s what this means for leadership: Don’t wait for a model to go offline or a pricing structure to change to discover that you’re locked in too tightly. The solution isn’t to avoid external models, but to guard against overcommitment. Long-term resilience depends on making sure your AI deployments are just as swappable and modular as the rest of your stack.

Push your teams to test. If they can’t switch models without rewriting half the application, something needs to change.

Open source AI models offer control and cost benefits but introduce legal and operational complexity

There’s growing momentum behind open-source AI for a reason. It gives companies more control: over data, over infrastructure, and over integration. When you build on open-source models, you’re not waiting for someone else’s roadmap. You deploy on your own hardware, run your own updates, and keep your data within your ecosystem. That matters at enterprise scale.

Carreon puts it clearly: “By building on top of an open-source model, you can own your future. You can deploy it in your infrastructure and have complete control of your data.” For many organizations, especially those with regulatory or sovereignty demands, that kind of control is essential.

But open source doesn’t mean frictionless. The belief that it’s just free technology with no constraints is wrong. Brian Jackson, Principal Research Director at Info-Tech Research Group, warns, “Open source is not just a free pass to anything and everything.” In his view, companies need to be cautious about licensing risks. Some models don’t allow for certain kinds of commercial derivatives without royalties. In real terms, that means you could build something valuable, and end up paying the original model owner to operate or distribute it.

On top of that, the operational overhead is real. Open-source projects often lack enterprise-grade documentation or support. You’ll need your own team to manage it. That’s not a blocker; it’s just a requirement. You have to be ready to take full ownership.

There’s also legal unpredictability. The recent legal battles over WordPress control serve as a reminder that community-led projects aren’t immune to corporate disputes. Enterprise leaders should understand where contributions come from, who owns the IP, and whether long-term governance can be trusted.

If you want full control, open source is a legitimate route. But going that route responsibly requires legal diligence, skilled teams, and a clear architectural strategy. Without those in place, you’re not reducing risk, you’re just shifting where it comes from.

Proactive preparation for an AI market adjustment provides strategic advantage

Prepping for disruption isn’t just about surviving downside scenarios. It’s about improving execution before the market demands it. Companies that treat dependency testing, vendor assessment, and deployment audits as standard practice, not reactionary cleanup, build structural advantages faster than their competitors.

Take the current conversation about potential AI market collapse. We don’t know when or if it will happen. Doesn’t matter. What matters is whether your systems can keep working if your primary model provider folds, changes pricing, or shifts priorities. If the answer is no, you’re exposed. That’s avoidable.

Brian Jackson of Info-Tech Research Group said it best: if OpenAI were to go out of business tomorrow, working with diversified AI providers and building apps around models you control would make your systems resilient. Planning for that now ensures continuity, no matter what.

This is also about leverage. When you know which systems depend on external APIs, and how tightly those dependencies lock you in, you can negotiate contracts with more confidence. The ability to say, “We don’t need you, this system can run without your stack,” changes the balance. It’s not about walking away. It’s about being able to.

Beyond negotiations, there’s internal value. Running vendor flexibility and dependency experiments helps tech teams uncover architectural bottlenecks, discover optimization opportunities, and improve documentation. You’re not just preparing for a collapse. You’re becoming smarter about how your AI infrastructure operates.

You also position yourself to scale faster. Organizations that understand their model mix, cost exposure, and switching tolerance can move more quickly when the market opens up new capabilities. The moment a better model appears, faster, cheaper, or more accurate, you can adopt it without costly refactoring.

No crisis needed. Just clearer visibility and better execution. That’s good leadership. That’s building for the future on purpose.

Key takeaways for leaders

  • Assess AI vendor business models now: Many AI startups are operating on unsustainable, venture-backed models with no clear path to profitability. Leaders should review their AI partners’ financial viability to avoid disruption from sudden market exits.
  • Expect pricing pressure from large AI providers: While hyperscalers like Microsoft and Google offer stability, their current pricing is unlikely to last. Executives should prepare for cost hikes and evaluate whether AI deployments will continue to drive ROI at scale.
  • Niche AI vendors can be reliable if their value is focused: Not all small vendors are at risk, those providing deep value in specific use cases may survive and grow. Leaders should identify vendors who solve core business problems uniquely and evaluate their long-term position accordingly.
  • Eliminate AI model lock-in risk: Overreliance on a single AI model limits agility and increases operational risk. CIOs should prioritize system designs that enable smooth transitions between models like GPT, Gemini, and Claude.
  • Open source AI offers control but requires due diligence: Open-source models give enterprises freedom over deployment and data, but they come with legal and operational complexity. Leaders should have legal and technical safeguards in place before committing.
  • Preparation yields advantage regardless of market outcome: Testing dependencies, pricing scenarios, and model alternatives now gives companies leverage in negotiations and resilience in disruptions. Proactive assessment should be routine, not reactive.

Alexander Procter

September 12, 2025

10 Min