CIOs are transitioning from a sole reliance on the public cloud

We’re seeing a fundamental shift in how enterprises think about cloud infrastructure for AI. The early days of AI workloads were unpredictable, fast-moving, and experimental. Public cloud made sense. It offered flexibility, and you could scale compute with a few clicks. But as these AI strategies mature into long-term, stable operations, leaders are stepping back and asking a simple question: are we getting the best control, performance, and cost management here?

The answer, increasingly, is no. Enterprise CIOs are now balancing their use of public cloud with private cloud and on-prem setups. This isn’t a retreat from the cloud. It’s smart allocation of resources. AI workloads like model training and fine-tuning need consistent, sustained GPU power. If you know your usage patterns, running these on your own hardware or with a private cloud provider gives you more control and predictable costs.

Security and privacy are big triggers as well. AI projects often involve sensitive, proprietary data. Many CIOs are saying they can’t take chances storing or processing that kind of information on infrastructure they don’t control, and not just for risk mitigation, but for compliance and business assurance.

According to a 2023 Prove AI survey of 1,000 enterprise leaders across the US and Canada, 67% plan to pull some AI data out of the public cloud within the next 12 months. No hype, just the result of better strategy and clearer understanding of workload requirements.

Greg Whalen, CTO of Prove AI, summed it up well: when AI workloads are no longer just experimental, the economics and control of private infrastructure win out. That’s the kind of decision that gets CFOs and CISOs aligned in the boardroom, too.

Enterprises are increasingly favoring in-house GPU infrastructure

Once you know what your AI team needs, buying your own GPUs stops being a luxury and starts being a business decision. Renting GPU time in the cloud every time you fine-tune or retrain a model might be fast, but it’s not efficient at scale. When the workloads are constant, owning the gear pays off.

Enterprise teams that fine-tune large language models or run continuous model evaluation don’t just occasionally need GPUs. They need them on-demand, for long hours, over extended cycles. In those cases, you’re looking at sustained compute requirements. That’s where in-house gear kills the economics of public cloud rental.

There’s this idea floating around that internal GPUs sit idle. But the reality is different. If your AI roadmap is clear, there is near-zero idle time. The demand is there, whether for training, model inference tuning, or synthetic data generation. And those are processes that benefit from predictable hardware access: no queue time, no variable pricing, full control over load balancing.
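
To make that tradeoff concrete, here’s a minimal back-of-the-envelope sketch in Python. Every figure in it (purchase cost, operating cost, cloud rate) is a hypothetical placeholder rather than a vendor quote; the point is simply that break-even hinges on sustained utilization.

```python
# Illustrative break-even sketch: owned GPUs vs. cloud rental.
# All figures below are hypothetical placeholders, not vendor quotes.

def breakeven_months(
    purchase_cost: float,         # upfront cost per GPU (hardware, install)
    monthly_opex: float,          # power, cooling, ops per GPU per month
    cloud_rate_per_hour: float,   # on-demand rental rate per GPU-hour
    utilization: float,           # fraction of each month the GPU is busy
) -> float:
    """Months until owning a GPU costs less than renting equivalent hours."""
    hours_per_month = 730 * utilization
    monthly_cloud_cost = cloud_rate_per_hour * hours_per_month
    monthly_savings = monthly_cloud_cost - monthly_opex
    if monthly_savings <= 0:
        return float("inf")  # at this utilization, renting stays cheaper
    return purchase_cost / monthly_savings

# Sustained training workload: high utilization favors ownership.
print(breakeven_months(30_000, 500, 2.50, 0.85))  # ≈ 29 months to break even
# Sporadic experimentation: low utilization favors the cloud.
print(breakeven_months(30_000, 500, 2.50, 0.10))  # inf: renting stays cheaper
```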

Greg Whalen of Prove AI points to that exact issue. He says most organizations Prove AI works with don’t leave GPUs sitting unused. Quite the opposite: they end up finding more productive work to run. The hardware isn’t a cost; it’s an asset that delivers continuous value when planned and deployed right.

For C-level decision makers, this is a capital planning issue. The savings compound over time. You trade short-term convenience for long-term control and cost benefits. And with AI becoming central to product development, customer support, and revenue growth, it’s a tradeoff worth making.

Private cloud spending is outpacing public cloud investment

Private cloud isn’t new, but the way enterprises are investing in it has changed. The numbers are telling: while public cloud budgets are still growing, private cloud spending is scaling faster, and that’s not a coincidence. Enterprises are projecting more than just incremental budget increases; they’re repositioning core infrastructure strategy to meet the demands of long-term AI deployment.

According to GTT Communications, the percentage of organizations planning to spend more than $10 million on private cloud solutions jumped from 36% in 2023 to a projected 54% in 2025. In contrast, the same spending threshold for public cloud is expected to grow only 12% over that same period.

AI workloads introduce a distinct layer of risk and compliance demands that traditional SaaS or public cloud platforms can’t always meet adequately. With data privacy regulations multiplying across sectors, from financial services to healthcare to defense, executives are no longer deferring responsibility for where and how their data moves. Private and hybrid clouds offer more than flexibility; they offer clarity in accountability.

This investment trend is about setting up infrastructure that can meet AI scaling demands while staying ahead of regulatory frameworks. And for systems handling business-critical operations or sensitive data, having that extra level of control over architecture is more than strategic; it’s necessary.

Security, compliance, and workload-specific needs are driving the adoption

Enterprise infrastructure has shifted from generalized solutions to selective deployments. AI isn’t just another workload; it involves layers of confidential data, iterative logic, and regulatory oversight. Companies that once operated primarily in the public cloud are evolving. They’re designing infrastructure that determines what runs where, based on functionality, risk, and business logic.

Security and compliance are the priority here, not features, not convenience. When AI systems touch IP, financial data, personal identifiers, or any asset that falls under strict regulation, internal teams need full visibility into architecture. Enterprises are solving this by establishing multicloud and hybrid architectures, where public cloud serves high-volume or configurable components, and private or on-prem environments run core, sensitive workloads.
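
To illustrate what choosing what runs where can look like in practice, here’s a toy placement policy sketched in Python. The workload attributes and routing rules are illustrative assumptions, not GTT’s framework or any vendor’s product; a real policy would encode an organization’s own risk and compliance profile.

```python
# Illustrative workload-placement sketch. Categories and rules are
# hypothetical examples, not a prescribed framework.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    handles_regulated_data: bool   # PII, financials, IP under regulation
    sustained_compute: bool        # long-running training vs. bursty jobs
    latency_sensitive: bool

def placement(w: Workload) -> str:
    """Route each workload to the environment that fits its risk profile."""
    if w.handles_regulated_data:
        return "private/on-prem"   # full visibility and control required
    if w.sustained_compute:
        return "private/on-prem"   # predictable cost at high utilization
    if w.latency_sensitive:
        return "private/on-prem or edge"
    return "public cloud"          # elastic, bursty, non-sensitive work

for w in [
    Workload("fine-tune LLM on customer records", True, True, False),
    Workload("marketing-copy prototype", False, False, False),
]:
    print(f"{w.name} -> {placement(w)}")
```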

Bastien Aerni, VP of Strategy and Technology Adoption at GTT, highlighted this exact shift. He emphasized that as AI systems increasingly interact with business-critical and confidential information, companies are forced to rethink their entire architectural approach. The question has become a board-level concern: what belongs in the public cloud, what needs to stay private, and how do you structure your systems to reflect that?

This is about putting the right workload in the right place. Enterprises that fail to make those decisions early are the ones that end up facing operational, legal, and governance issues when complexity scales. Decision-makers looking to future-proof their infrastructure are already aligning workloads with their risk and compliance profiles.

The public cloud remains essential for rapid scaling and experimentation

The public cloud still plays a key role in enterprise AI strategy. It’s flexible, easy to deploy, and packed with on-demand services that make early-stage development and quick iteration possible without heavy upfront investment. For teams conducting research, building proofs of concept, or pressure-testing new AI models, it remains the fastest path forward.

But the landscape changes once AI moves from exploration to sustained execution. When uptime, cost control, and data sensitivity begin to dominate the equation, leaders are transitioning workloads into more controlled environments. Public cloud infrastructure is optimized for elasticity and feature variety, qualities that matter most in the testing phase, not necessarily in long-term production operations.

This shift stems from usage patterns and risk profiles. CIOs who once optimized for available cloud tools are now focused on stability and predictability. Toolsets that were once critical assets are now used less, not because they’ve lost their technical edge, but because production models and business governance call for platforms with tighter guardrails.

Bastien Aerni, VP at GTT, put this into perspective clearly. He noted that conversations with CIOs have changed. Five years ago, public cloud features were a key decision factor. Today, those same CIOs place more value on how infrastructure supports sustained, scalable production with fewer surprises.

Public cloud isn’t going away. It’s evolving into a powerful tool for specific parts of a broader AI architecture. Its value remains, but selectively, where it’s strategically justified.

The move away from the public cloud is selective and reflects a broader strategy

Enterprises aren’t abandoning the public cloud. What they’re doing is optimizing, deciding workload by workload what belongs where, why, and for how long. It’s about business logic, performance outcomes, and long-term risk modeling. Companies are reallocating their highest-value, most sensitive AI workloads to platforms that give them greater control, while continuing to use public cloud infrastructure for what it’s still best at: scalability and speed.

This is a clear sign of strategic maturity. Managing AI at scale requires deliberate infrastructure architecture. Companies are reaching the point where AI is no longer an isolated innovation arm; it’s embedded across operations. And that means placing workloads based on real-world variables: latency, security, governance, and cost.

Danilo Kirschner, Managing Director at Zoi North America, explained this transition in simple terms: we’re seeing a shift from early cloud enthusiasm to structured, workload-specific planning. It’s not about moving entirely off the public cloud. It’s about moving correctly, deploying resources where they create the most value with the lowest friction.

Zac Engler, Chief AI Officer at C4 Technology Services, reinforced that point. In his view, companies aren’t conducting a dramatic exit. They’re quietly reassigning critical workloads away from public infrastructure. Why? Because trust, cost, and control are the issues now dominating decision-making at the board level.

This marks a turning point. Executive teams are no longer defaulting to a public-first model. They’re now navigating cloud architecture with the same precision they apply to financial portfolios. That’s where the market is headed, toward strategic selectivity, not blanket adoption.

Key takeaways for leaders

  • CIOs shift from cloud-first mindset: Leaders should rebalance cloud strategies as mature AI workloads drive demand for greater cost control, data ownership, and security that aren’t always achievable in public cloud environments.
  • In-house GPUs improve cost-efficiency at scale: Decision-makers with stable and continuous AI training needs should consider investing in private GPU infrastructure to reduce long-term compute costs and improve workload performance.
  • Private cloud spending is accelerating: Allocate IT budgets with private cloud growth in mind; enterprises planning >$10M investments in private cloud are increasing faster than public cloud spenders, signaling a strategic pivot.
  • Security and compliance fuel hybrid adoption: To meet risk and regulatory demands, executives should deploy hybrid and multicloud infrastructures that separate sensitive AI workloads from public environments.
  • Public cloud still drives early innovation: Use the public cloud for AI experimentation and rapid scaling, but shift production workloads requiring stability, governance, or resource predictability to private setups.
  • Selective repatriation demands workload-level decisions: Avoid blanket strategies; assess workload characteristics (e.g., cost sensitivity, data confidentiality) to determine the optimal hosting environment for each AI asset.

Alexander Procter

September 2, 2025
