Agentic AI will not boost revenue for major public cloud providers

The hype around generative AI got the hyperscalers excited, and for good reason. Running large models eats up serious compute power. That mapped neatly onto centralized public cloud infrastructure. So AWS, Azure, and Google Cloud saw an easy win: more AI means more demand for their cloud platforms. Simple.

But agentic AI is different. It’s not built to be centrally hosted in massive data centers. It’s built to think for itself and operate across diverse environments: on-prem data centers, sovereign clouds, managed hosting, even edge setups. That means it doesn’t need the specialized GPU clusters that hyperscalers provide. It takes a more distributed approach. It’s lighter, smarter, and more self-reliant.

For anyone expecting the same economic returns that came with hosting generative AI, this is inconvenient. Planning around a single revenue bump from consolidation doesn’t hold when the underlying tech values autonomy and output over scale. Agentic AI doesn’t want to live in a walled garden; it prefers to travel light and integrate on the fly.

C-suite leaders in hyperscaler businesses need to digest this: the next generation of AI isn’t guaranteed to drive cloud resource consumption. If agentic AI becomes mainstream, your most lucrative product, centralized compute at scale, won’t be the default destination anymore.

Agentic AI’s distributed design reduces the need for large-scale centralized cloud infrastructure

Most people misunderstand what agentic AI is. It’s not just a smaller version of generative AI. It’s a different idea. Think of it as a system capable of choosing its tasks, managing its own resources, and integrating services only when needed. It doesn’t need all the firepower upfront. It focuses on efficiency by nature.

Because of its design, agentic AI doesn’t depend on centralization. It often runs fine on standard hardware. It can pull in specific small language models and call APIs when needed. The key advantage is modular, task-directed intelligence. It’s efficient and flexible. That makes large, centralized infrastructure optional: not useless, but not required for most use cases.
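The modular, task-directed pattern described above can be sketched in a few lines. This is a hedged illustration only: the task types, handler names, and routing rule are all hypothetical, not a real agent framework’s API. The point is that an agentic system escalates to heavyweight hosted compute only when a task demands it, and otherwise stays on standard hardware.

```python
# Minimal sketch of a task-directed agent router (illustrative only).
# All names here are hypothetical; real agent frameworks differ.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    name: str
    needs_large_model: bool  # e.g. long-context reasoning vs. simple extraction

def run_local_slm(task: Task) -> str:
    """Handle the task with a small language model on standard hardware."""
    return f"local:{task.name}"

def call_remote_api(task: Task) -> str:
    """Escalate to a hosted model API only when the task demands it."""
    return f"remote:{task.name}"

def route(task: Task) -> str:
    """Dispatch each task to the cheapest environment that can serve it."""
    handler: Callable[[Task], str] = (
        call_remote_api if task.needs_large_model else run_local_slm
    )
    return handler(task)

print(route(Task("classify-invoice", needs_large_model=False)))  # local:classify-invoice
print(route(Task("draft-contract", needs_large_model=True)))     # remote:draft-contract
```

Note the economics implied by the sketch: centralized compute is one handler among several, invoked per task, rather than the environment the whole system lives in.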

This is why the movement toward edge computing and hybrid environments fits so well. Agentic systems work where they’re needed: in a factory, at the network edge, inside a local data center. They’re autonomous, which is the point. Enterprises see this as a win because it cuts latency, saves on bandwidth, and reduces cloud bills.

The nuance here is critical. This trend empowers businesses, not just cloud providers. If you’re leading a company, agentic AI gives you executive-level control: lower operating costs, more deployment options, and no obligation to scale infrastructure through hyperscaler lock-in.

Modern IT infrastructure is often more conducive to agentic AI than traditional hyperscalers

Agentic AI isn’t locked into one specific infrastructure model. That gives it serious range. It doesn’t need a hyperscaler to function, and in many cases it performs better when deployed closer to the business, in on-prem, hosted, or hybrid environments. What we’re seeing now is the emergence of a wider infrastructure landscape, from sovereign cloud providers to regional players to efficient colocation options. Many of these offer better flexibility, improved cost structures, and direct control over performance and security.

This increased diversity lets enterprise leaders run smart AI systems where they choose, not just where hyperscalers say they should. That’s a win. There’s no longer a single path to AI scalability. You can design systems that prioritize sovereignty, compliance, cost-effectiveness, or latency reduction. Agentic AI works with all of them. It’s adaptable by default.

Most businesses have already built up complex, hybrid infrastructure over the years. Agentic AI fits into that model without disruption. You don’t need to overcommit to new infrastructure, and you don’t need to increase spending on heavy centralized resources. For a CIO or CTO, that flexibility translates directly into business advantage: more options, less risk, and clearer paths to ROI.

The hyperscaler advantage was built on scale and centralization. But agentic AI rewards precision, autonomy, and architectural control. That reduces the hyperscalers’ edge in this specific AI evolution. The decisions don’t just come from the IT department now. Executives planning long-term investment strategies need to understand this shift; it’s already happening.

The future of AI infrastructure will favor multiprovider strategies and modular, bridge-like architectures

Enterprise AI strategies are heading toward dynamic, flexible infrastructure models that pull from multiple providers. In practice, this means architecting systems that span private clouds, public clouds, edge systems, regional providers, and colocation environments, whatever combination delivers the best performance per task.

Instead of a centralized tower, you’re looking at systems where parts of the workload are distributed and managed by different elements across a network. That approach gives enterprises massive flexibility. It’s less about cloud dominance and more about intelligent orchestration. Each component, whether it’s a processor in your private data center or an API call to a third-party LLM, operates as part of a wider, orchestrated system.
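The orchestration idea above can be made concrete with a small placement sketch. Everything here is hypothetical and illustrative: the provider names, the latency and cost figures, and the constraint model are assumptions for the example, not real pricing or a real scheduler. The logic simply picks the cheapest provider that satisfies a workload’s latency and sovereignty constraints.

```python
# Hedged sketch: placing workload components across providers by fitness.
# Provider names and all figures are hypothetical, purely illustrative.

PROVIDERS = {
    "private-dc":   {"latency_ms": 5,  "cost_per_hr": 1.20, "sovereign": True},
    "edge-node":    {"latency_ms": 2,  "cost_per_hr": 2.00, "sovereign": True},
    "public-cloud": {"latency_ms": 40, "cost_per_hr": 3.50, "sovereign": False},
}

def place(workload: dict) -> str:
    """Pick the cheapest provider that satisfies the workload's constraints."""
    candidates = [
        (name, p["cost_per_hr"])
        for name, p in PROVIDERS.items()
        if p["latency_ms"] <= workload["max_latency_ms"]
        and (p["sovereign"] or not workload["requires_sovereignty"])
    ]
    if not candidates:
        raise ValueError("no provider meets the constraints")
    return min(candidates, key=lambda c: c[1])[0]

# A latency-sensitive, sovereignty-bound task lands on owned infrastructure:
print(place({"max_latency_ms": 10, "requires_sovereignty": True}))   # private-dc
# A relaxed batch job goes wherever it's cheapest under these figures:
print(place({"max_latency_ms": 100, "requires_sovereignty": False}))  # private-dc
```

In a design like this, a public cloud is one candidate in the pool, selected per component, which is exactly the repositioning the next paragraph describes.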

For public cloud providers, this shift won’t eliminate them, but it will reposition them. They won’t be the center. They’ll be one of many moving parts in a multi-cloud, multi-vendor approach. From a C-suite view, this is about risk reduction, platform optionality, and deployment agility. You buy what you need, where you need it, and tune for performance or cost as required.

The leading companies are already designing around these architectures. Not in experimental labs: this is production-level, enterprise-grade deployment. So if you’ve been investing on the assumption that hyperscalers will remain the default for AI, it’s time to rework that thinking. Agile infrastructure beats locked-in infrastructure. That’s what the market is rewarding now.

High costs of AI workloads on hyperscaler platforms are prompting a reevaluation of public cloud reliance

A decade ago, moving to the cloud looked like a straight win: cost reduction, simplification, scalability. But the economics have shifted, especially once AI entered the picture. Running generative and agentic AI workloads on public cloud platforms has introduced unexpected costs. These workloads are compute-heavy, unpredictable, bandwidth-intensive, and often inefficient when deployed at scale on centralized infrastructure.

Enterprise teams are looking at their monthly cloud bills and realizing that the economics don’t hold up, particularly when scaled across multiple AI use cases. Many companies that migrated to the cloud early are now questioning the long-term sustainability of these models. Expectations of cost efficiency are colliding with reality.

Meanwhile, the cost of owning or leasing infrastructure has dropped. You no longer need your own data center team to get on-prem-level control. Colocation providers, managed services, and edge-ready hardware give enterprises low-friction alternatives. These environments provide the performance needed to train or run AI systems, without the cloud premiums.
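The total-cost-of-ownership comparison behind this argument is simple arithmetic. The sketch below uses entirely hypothetical figures (not any vendor’s pricing) to show the structural difference: on-demand cloud costs scale with GPU-hours and egress, while colocation or leased infrastructure is closer to a flat monthly rate.

```python
# Illustrative TCO comparison. All figures are hypothetical, not vendor pricing.

def monthly_cloud_cost(gpu_hours: float, rate_per_gpu_hr: float,
                       egress_gb: float, egress_rate: float) -> float:
    """On-demand cloud: pay per GPU-hour plus bandwidth egress."""
    return gpu_hours * rate_per_gpu_hr + egress_gb * egress_rate

def monthly_colo_cost(lease: float, power_and_ops: float) -> float:
    """Colocation: flat lease plus power/operations, no egress premium."""
    return lease + power_and_ops

cloud = monthly_cloud_cost(gpu_hours=2_000, rate_per_gpu_hr=3.0,
                           egress_gb=10_000, egress_rate=0.08)
colo = monthly_colo_cost(lease=4_000, power_and_ops=1_500)
print(f"cloud ${cloud:,.0f} vs colo ${colo:,.0f}")  # cloud $6,800 vs colo $5,500
```

The crossover point depends entirely on utilization: steady, heavy AI workloads favor the flat-rate model, while bursty ones can still favor on-demand cloud.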

AI has made clear that centralized cloud isn’t always the most rational choice, especially when total cost of ownership is on the table.

IT and business leaders need to reassess cloud default strategies. Flexibility, price transparency, and operational performance matter more now than abstract scalability, particularly when AI defines your competitive edge.

Hyperscalers must adapt to a decentralized AI ecosystem to remain competitive and relevant

In the decentralized world of agentic AI, hyperscalers are no longer the mandatory starting point. Their core infrastructure will still have value, but the assumption that all AI development flows through them is no longer true.

As more companies adopt modular, hybrid strategies, the hyperscalers are going to need to shift from being full-stack platforms to more focused infrastructure and service providers. Their offerings will need to plug into wider ecosystems, support multivendor orchestration, and deliver efficiencies that justify their premium pricing. This isn’t automatic; it requires serious adaptation.

The market is now rewarding resource efficiency, fast deployment, platform neutrality, and tighter control over data location. Hyperscalers have the reach and tools to meet those needs, but the business model has to adjust. That may mean short-term revenue volatility as enterprises spend differently, but the long-term opportunity is still there for those that evolve quickly.

Boards and executives inside these companies should be clear-eyed about this pivot. The most forward-looking enterprises are already moving to diverse, AI-optimized infrastructure models. If hyperscalers don’t move with them, they’ll be left supporting legacy workloads while high-growth AI use cases develop elsewhere.

Success means becoming part of a federated ecosystem, not trying to dominate it. That shift is already underway. The sooner it’s embraced, the bigger the opportunity.

Key executive takeaways

  • Agentic AI won’t drive hyperscaler growth: Leaders at cloud providers should temper revenue expectations from agentic AI, as its distributed design reduces demand for centralized compute resources.
  • Distributed AI reduces centralized infrastructure dependence: CIOs should recognize that agentic AI systems run effectively on standard or hybrid infrastructure, decreasing reliance on large-scale public cloud services.
  • Hybrid and regional infrastructure is gaining ground: Enterprises have more cost-efficient, sovereign, and flexible deployment options that better align with agentic AI, making diversification beyond hyperscalers a strategic priority.
  • Multiprovider strategies are becoming the norm: Decision-makers should architect AI systems to operate across platforms, enabling agility, cost control, and reduced vendor lock-in.
  • Cloud costs are forcing AI strategy rethink: CFOs and CTOs should reevaluate public cloud commitments as AI workloads become expensive and unpredictable, pushing enterprises toward owned, leased, or managed infrastructure.
  • Hyperscalers must pivot toward interoperability: To remain relevant, large cloud platforms should redesign offerings to support hybrid and modular deployments rather than compete as central AI hubs.

Alexander Procter

April 30, 2025
