The AI landscape isn’t one bubble but several, each with its own clock

There isn’t one overarching “AI bubble.” That framing is outdated. What we’re seeing is more like parallel arcs: distinct market segments moving at different speeds toward entirely different outcomes. There’s plenty of hype and speculation, but it isn’t uniform. Some pieces will hold. Others won’t.

If you’re sitting in the C-suite, you need to stop asking whether the AI market will collapse and start asking which part might collapse, and when. The AI economy is not one layer but three: application wrappers, foundation model providers, and core infrastructure. Each layer has its own economics, exit risks, and long-term value profile.

The application layer, mostly composed of wrapper companies, is first in line for a correction. These are firms building light-touch tools that call OpenAI’s or Anthropic’s APIs, repackage the output inside a slick interface, and charge a subscription for it. Rapid growth? Sure. Sustainable? No. They’re skating on rented ice.

Foundation model providers (OpenAI, Anthropic, Mistral) are on firmer ground, but far from invincible. Their models are powerful, but building and running them is capital-heavy, with huge stakes tied to compute infrastructure. As model performance across the market converges, differentiation will depend on deep engineering skill, efficiency, and deployment optimization. You’ll see consolidation here. Maybe three major players survive.

Infrastructure is where the value locks in. Nvidia, cloud providers, and data center operators may be overbuilt for now, but the buildout stays useful. It’s the only layer that holds long-term asset value regardless of which AI applications end up winning. Smart leaders recognize this.

Now for the numbers. There are more than 1,300 AI startups globally with valuations over $100 million. Nearly 500 are “unicorns” valued above $1 billion. They won’t all make it. Probably not even close.

Understanding which layer your company, or your capital, sits in is critical. A single crash? Unlikely. But staged corrections, each on its own timeline? Those are already in motion.

Wrapper companies look flashy but don’t last; they’ll be the first to drop

Let’s talk about wrapper companies. These are startups that don’t build their own models. They call APIs from OpenAI or Anthropic, wrap the output in some UX polish, and hope users will pay $30, $50, sometimes more, per month. They’re quick to launch, but they’re built for convenience, not longevity.
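To make “light-touch” concrete, here is a minimal sketch of a wrapper product’s core in Python, using OpenAI’s official client. The prompt, model choice, and function name are illustrative, not any specific company’s code; a real product layers a UI, onboarding, and billing on top, but the value-creating step is essentially this one call.

```python
# Minimal sketch of a wrapper product's core: one prompt template around a
# third-party API call. Everything else (UI, billing, onboarding) is packaging.
# The model name and prompt are illustrative, not a specific company's product.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a marketing copywriter. Rewrite the user's draft as a short, "
    "upbeat product announcement."
)

def polish_copy(draft: str) -> str:
    """The entire 'proprietary' feature: a templated call to someone else's model."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # rented capability, swappable for any provider's model
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(polish_copy("Our new app syncs notes across devices."))
```

Anything this easy to build is just as easy for a platform owner to bundle, which is the defensibility problem in a single code block.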

Jasper.ai is a good example. It was fast out of the gate, hitting $42 million in recurring revenue in its first year. But that kind of velocity doesn’t mean staying power. When bigger players like Microsoft or Google bundle similar features into Office or Gmail for free, these wrappers get absorbed almost overnight. Your $50 tool becomes yesterday’s menu option.

Then there’s the switching-cost problem, or rather the lack of switching costs. Most of these companies don’t lock users in with proprietary data, infrastructure, or ecosystem stickiness. You can move from one to another, or just back to ChatGPT, in a few clicks. No loyalty, no friction, no moat.

Margins? They look good today, but they’re evaporating. As model costs drop, the barrier to cloning a wrapper drops with them, and as capabilities flatten out across providers, wrapper companies face a perfect storm. No tech moat. Zero defensibility. And competitors with near-infinite distribution leverage.

White-label products are built on the same shaky ground: vendor lock-in, limited API flexibility, and no real ownership of the experience. It’s a risky position when you don’t even control the foundation your business is sitting on.

There are exceptions. Cursor stands out. It doesn’t just wrap a model; it integrates deeply into developer workflows, builds custom configurations, and provides real value beyond the API call. Most wrappers aren’t doing this. That’s the difference between a tool and a product.

For leaders investing in or running these businesses, the message is clear: move fast or move out. If you’re a wrapper, evolve. Build proprietary workflows. Lock in user behavior. Create switching costs. If you don’t, you’re not just at risk of being replaced. You’re already outdated.

Expect to see this segment break first. That timeline isn’t far off: the shakeout is likely to play out between late 2025 and mid-2026. The platforms are consolidating, and user expectations are catching up.

Foundation model providers are stronger, but they face a tight race

Now let’s look at the middle layer: foundation model companies. OpenAI, Anthropic, and Mistral are the labs building the core large language models most others rely on. Their position is stronger than the wrappers’, but don’t mistake that for stability. Their runway depends on execution, engineering efficiency, and capital endurance.

These companies have real technical advantages: training expertise, top-tier talent, and unique infrastructure partnerships. But the gap is closing. The quality difference between models is shrinking fast, and as capabilities converge, models become harder to differentiate. At that point you’re not selling breakthrough performance; you’re selling infrastructure access at a smaller margin.

What happens next is a race. It’s not about who has the biggest training run anymore. It’s about who gets inference cost down, who boosts throughput, and who can serve faster results with less compute. Engineering matters more than hype. Optimizations like KV caching, smarter batching and token streaming, and tighter memory management will separate who stays independent from who gets acquired.
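To ground the inference-cost point, here is a toy NumPy sketch of the idea behind KV caching. It is not how any provider actually implements attention (production systems do this in fused GPU kernels with batching, paging, and quantization); it just shows why caching each token’s key and value projections turns repeated quadratic re-projection work into linear work during decoding. All dimensions and the FLOP estimate are illustrative.

```python
# Toy single-head attention decode loop with a KV cache.
# Without a cache, every decode step would re-project the entire prefix
# (O(t) projections per step, O(T^2) total). With a cache, each step projects
# only the newest token and reuses everything already computed (O(T) total).
import numpy as np

d = 64  # head dimension (illustrative)
rng = np.random.default_rng(0)
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

def attend(q, K, V):
    """Softmax attention of a single query over the cached keys and values."""
    scores = K @ q / np.sqrt(d)           # shape (t,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V                    # shape (d,)

tokens = rng.standard_normal((100, d))    # stand-in for embedded input tokens
K_cache, V_cache, flops_saved = [], [], 0

for t, x in enumerate(tokens, start=1):
    q = Wq @ x
    K_cache.append(Wk @ x)                # project the new token once...
    V_cache.append(Wv @ x)                # ...and reuse all earlier projections
    out = attend(q, np.stack(K_cache), np.stack(V_cache))
    # K and V re-projections avoided this step: (t - 1) tokens x 2 matrices x ~2*d*d FLOPs
    flops_saved += (t - 1) * 2 * (2 * d * d)

print(f"projection FLOPs avoided by caching over 100 steps: ~{flops_saved:,}")
```

The production versions of this idea, paged KV caches, continuous batching, quantized caches, are exactly the kind of unglamorous engineering that decides who serves tokens cheaply enough to survive.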

Then there’s the capital structure. Some of these firms are operating with funding commitments that don’t line up with current revenue. OpenAI is involved in deals approaching $1 trillion, including a roughly $500 billion plan to build out data center infrastructure, backed by partners including Nvidia, SoftBank, and Oracle. Yet it’s only projected to generate about $13 billion in annual revenue, meaning the planned buildout alone is nearly forty times what the company currently brings in. That level of imbalance draws attention. It’s a bubble dynamic, even if the core tech is real.

There’s also a circular dependency. Nvidia funds infrastructure for OpenAI, and OpenAI uses that infrastructure to generate demand for Nvidia’s chips. This kind of flywheel can overstate actual market demand and distort risk. That doesn’t make these companies bad bets, but it does raise questions about sustainability without an external correction.

Looking ahead, expect consolidation between 2026 and 2028. The best-capitalized model providers will stay. Others will fold or be acquired. For C-level leaders, the key move here is focusing on efficiency. Whoever wins won’t just have the best models; they’ll have the leanest, fastest, and most affordable way to deploy them at scale.

Infrastructure is the foundation that will hold

Now let’s focus on the infrastructure layer. This is where the ground is solid.

When people ask where the lasting value in AI is, it’s here: data centers, chips, cloud capacity, high-bandwidth memory systems, and storage layers tuned for AI workloads. This stack exists no matter what happens above it. It serves every layer, from foundation models to wrappers, and its utility doesn’t vanish when any one product fails.

Critics point to spending. And yes, it’s massive. Global AI-related capital expenditures already exceed $600 billion. Gartner projects total AI spending topping $1.5 trillion in 2025. That might look excessive, but serious growth has always relied on early over-commitment. Even if some data centers run under capacity in the short term, this hardware doesn’t lose its value overnight. It gets repurposed. It gets updated. It keeps running.

Then look at actual demand signals. Nvidia’s Q3 FY2026 revenue hit $57 billion, up 62% year-over-year. Its data center business alone delivered $51.2 billion. These aren’t soft bets. Companies are deploying infrastructure for real workloads, not just proofs of concept or demo pilots.

And today’s infrastructure isn’t just dumb storage and compute. It spans the full memory hierarchy: HBM on GPUs, DRAM buffering, and high-speed storage tiers beneath them. These designs aren’t interchangeable with legacy systems. They’re purpose-built, and they’ll continue to serve workloads long after today’s front-end tools are replaced.

For enterprise leaders, this layer is where long-term bets make sense. Yes, some facilities may be underutilized in the near term. Yes, not every chip order will be maximally efficient in the first wave. But as the ecosystem matures, the infrastructure layer scales with it, whether the winners are chatbots, agentic systems, or something entirely unforeseen.

Bottom line: the infrastructure isn’t just necessary; it’s non-discretionary. It’s not speculative hype. It’s the operating backbone of AI, and it’s here to stay.

The AI market correction will be staggered

Don’t expect the AI sector to crash all at once. That won’t happen. The correction is coming in waves, and it’s already underway.

Phase one is already visible: wrapper companies are beginning to feel pressure. Their margins are shrinking. Users are questioning the value. Larger platforms are replicating their features. Hundreds of startups built on thin differentiation, without proprietary tech or workflow ownership, are running out of room. Some will fold; others will exit at a fraction of their valuation.

Phase two will follow soon after: foundation model providers will face consolidation. You’ll see smaller, mid-tier players across North America and Europe either absorbed or shut down. The strongest, best-capitalized labs, those with end-to-end model optimization and key industry partnerships, will take more share. Expect three to five significant acquisitions between 2026 and 2028 as cloud majors and AI-centric portfolio firms trim the field. If your business depends on a second-tier foundation model provider, it’s time to reevaluate vendor risk.

Phase three is more stable but still requires awareness: infrastructure spending is elevated and will remain high. Yes, some buildouts will outpace short-term demand. Some facilities might stay partially idle for two to three years. But as AI usage scales and new applications gain traction, that capacity will be absorbed. Infrastructure may correct mildly in the short term, but long term, capital here retains value.

For executives, this staged correction is an advantage. It’s predictable. You can map your exposure by layer and timeline. If you’re tied to wrapper-style monetization, accelerate your shift. If you’re betting on models, double down on distribution and inference performance. If you’re in infrastructure, keep building, but tune for efficiency and forward capacity.

Overall, the market will rebalance, gradually. Most importantly, not all losses are equal, and not all gains disappear. Knowing precisely where you’re operating is what matters.

From wrappers to real companies: evolve or exit

Let’s be blunt: if you’re still just wrapping an API and pushing out responses, that’s not a company; it’s a feature. That’s fine for a classroom demo or a first pass at exploring product-market fit, but it’s not viable long term. Many startups are stuck here. Most won’t survive.

If you’re building at the application layer, you need to own significantly more than the output. You need to own the user workflow: what happens before, during, and after the AI response. That means integrating with key tools, building context-aware logic, and generating proprietary data that reinforces product value every time users come back.
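As a hedged illustration of what owning the before, during, and after looks like, here is a sketch in the same spirit as the earlier wrapper example, with the surrounding steps filled in. The helper functions (fetch_account_context, record_outcome) and the CRM scenario are hypothetical placeholders; the structural point is that the model call sits in the middle of a pipeline only this product can run, because only it holds the surrounding context and captures the resulting data.

```python
# Sketch of an application-layer product that owns the workflow around the model
# call, not just the call itself. All helpers are hypothetical placeholders for
# the integrations and proprietary data a real product would own.
from dataclasses import dataclass
from openai import OpenAI

client = OpenAI()

@dataclass
class AccountContext:
    name: str
    open_deals: list[str]
    last_interactions: list[str]

def fetch_account_context(account_id: str) -> AccountContext:
    """BEFORE: pull context a generic chatbot doesn't have (CRM, tickets, usage)."""
    # Hypothetical integration; a real product reads its own data store here.
    return AccountContext("Acme Corp", ["renewal-Q3"], ["asked about SSO pricing"])

def draft_followup(ctx: AccountContext) -> str:
    """DURING: the model call, grounded in proprietary context."""
    prompt = (
        f"Write a short follow-up email to {ctx.name}. "
        f"Open deals: {ctx.open_deals}. Recent interactions: {ctx.last_interactions}."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def record_outcome(account_id: str, draft: str, was_sent: bool) -> None:
    """AFTER: write results back and capture feedback, the proprietary data flywheel."""
    # Hypothetical: persist the draft, the user's edits, and whether it was sent,
    # so the product improves for this customer over time.
    pass

if __name__ == "__main__":
    ctx = fetch_account_context("acct_123")
    email = draft_followup(ctx)
    record_outcome("acct_123", email, was_sent=False)
    print(email)
```

The switching cost lives in the before and after steps, not in the model call itself, which is exactly why a bare wrapper has none.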

The next level up is vertical SaaS. Not broad platforms, but focused, deeply integrated products for sales, legal, logistics, finance, and so on. These work because they compress end-to-end execution inside one environment. They build switching friction. They solve real workflows comprehensively, not one prompt at a time.

If you don’t evolve, you’ll face distribution risk, because distribution is where the moat is forming: not just acquisition, but activation, retention, and upselling, all within your ecosystem. The best AI products won’t just answer questions. They’ll own the operating layer customers build their day around.

For leadership teams, this is the direction of travel. It’s not a feature arms race; it’s a battle for user workflows, integrations, and stickiness. As OpenAI, Google Cloud, and Microsoft Azure expand their model platforms, the only defensible position is to build on top of them while owning a differentiated user relationship that can’t easily be displaced.

Do not stay in the wrapper stage. Either move up the stack, fast, or move out before your valuation does.

Stop asking whether there’s an AI bubble; ask which one, and when it expires

The real question isn’t “Are we in an AI bubble?” That question’s too broad, and too late. A better question is: which AI bubble are we in, and when will that specific segment peak or unwind?

This industry isn’t moving as a single unit. Each layer (application wrappers, foundation models, infrastructure) has its own capital structure, product risk, and timeline. Treating AI investment as one uniform bet is a flawed strategy at the leadership level. Decisions should be based on the layer you’re operating in, not the overall hype trend.

Each layer’s future looks different. Wrapper companies will start collapsing as early as 2025. They’ve got short product lifespans, thin margins, and shrinking defensibility. That phase is already in motion. Next, foundation model providers will go through consolidation between 2026 and 2028; only those with low-cost inference, fast deployment, and large-scale deals will make it past that filter. Infrastructure, while not immune to short-term overbuilding, is resilient over the long term.

The timelines are all visible if you’re paying attention. Wrappers: 12 to 18 months of runway before contraction. Foundation models: two to four years before it’s clear who survives at scale. Infrastructure: at least a decade of relevance as AI workload volume expands.

So if you’re in charge of capital, partnerships, or strategic direction, what matters is situational awareness. Track your exposure across layers. Measure burn and defensibility. Don’t plan around general sentiment. Plan around stage-specific signals.

This isn’t about pessimism. The AI wave is real. But not every participant will make it to the other side. The default state of this market is volatility, and the only real protection is knowing exactly where you stand and what’s going to hit next.

Make sharper moves. Ignore noise. Follow the timelines. That’s how you stay relevant.

Final thoughts

If you’re leading a company, allocating capital, or building in this space, the biggest mistake now is treating AI like a single-market bet. It’s not. The risks and opportunities aren’t spread evenly. Some segments will scale. Others will collapse under their own weight.

This isn’t about avoiding AI. It’s about clarity on where you stand and where your risks are stacked. Wrapper companies need to evolve now, or they’re done. Foundation model providers will need to engineer for scale, not just capability. Infrastructure players should stay focused but optimize for efficiency over headline spend.

The collapse won’t come all at once, but it is coming in phases. You have enough time to shift, if you move with precision.

AI is real. It’s powerful. But it’s not immune to overbuild or bad business. Don’t assume survival. Design for it.

Keep your edge sharp. Know your layer. And act before the market decides for you.

Alexander Procter

January 20, 2026
