The enterprise shift to cloud computing

Companies reached a wall with on-premises data centers. Storage and computation demands kept rising, but infrastructure couldn’t scale fast enough. Cloud computing changed that. It removed those physical limits and gave enterprises access to scalable, on-demand computing resources that Chalan Aras, Senior Vice-President and General Manager of Acceleration at Riverbed, aptly described as “practically infinite.”

With the cloud, organisations gained the ability to process massive datasets, train modern AI models, and deploy data-driven services globally, all without upfront hardware investments. For executives, the message here is simple: the cloud isn’t just a technical migration; it’s a strategic decision that supports long-term agility and innovation. When computing power and storage scale effortlessly, business growth and experimentation move at the same pace.

Decision-makers should see the cloud not merely as a cost center but as a competitive platform. It gives enterprises the ability to pivot quickly, test new AI capabilities, and expand into new markets without hardware constraints. The cloud has become the baseline for enterprise innovation.

AI adoption is complicated by scattered data across multicloud environments and disparate regions

Enterprises today run on multicloud systems, often by design. They want the flexibility to choose the best services from different cloud providers, reduce dependency on a single vendor, and improve resilience. The tradeoff is dispersed data. Information ends up stored in multiple environments, sometimes across continents. For AI, this is a serious challenge. AI training requires bringing massive datasets together in the same location where GPUs (graphics processing units) perform the heavy computation. When that data sits in far-flung regions or across providers, performance and cost take a hit.

Executives should understand that while multicloud strategies offer control and choice, they complicate AI readiness. Latency increases. Data transfer costs rise. Some regions have high power costs, making it difficult to run GPU-intensive workloads efficiently. All this slows down AI projects and increases their total cost of ownership.

The solution is not to abandon multicloud but to build smarter data movement and governance systems between providers. Having data spread out isn’t inherently a problem; it’s the lack of coordination that hurts performance. Future-ready enterprises are those that unify their data strategy while keeping the operational flexibility that multicloud systems provide.


Data transfers for AI and cloud operations present major financial, speed, and governance challenges

Moving vast amounts of data remains one of the costliest and slowest parts of enterprise AI deployment. Transferring a single petabyte of information can take around nine days over a 10 Gbps connection and cost up to $80,000 in egress fees alone. These numbers demonstrate that even the most advanced cloud infrastructure can still struggle with data movement at scale. Beyond the monetary and time costs, there’s a governance factor to consider: ensuring data integrity, accuracy, and regulatory compliance throughout the entire transfer process.
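
The arithmetic behind those figures is straightforward. The short Python sketch below reproduces it, assuming 1 PB = 10^15 bytes, a fully utilised 10 Gbps link, and an illustrative egress rate of $0.08 per GB; real-world throughput and provider pricing vary.

```python
# Back-of-the-envelope check of the transfer figures quoted above.
# Assumptions: 1 PB = 10**15 bytes, a fully utilised 10 Gbps link,
# and an illustrative egress rate of $0.08 per GB (actual pricing varies).

PETABYTE_BYTES = 10**15
LINK_GBPS = 10
EGRESS_USD_PER_GB = 0.08

bytes_per_second = LINK_GBPS * 10**9 / 8            # 10 Gbps -> 1.25 GB/s
transfer_days = PETABYTE_BYTES / bytes_per_second / 86_400
egress_cost_usd = (PETABYTE_BYTES / 10**9) * EGRESS_USD_PER_GB

print(f"Transfer time: {transfer_days:.1f} days")   # ~9.3 days
print(f"Egress cost:   ${egress_cost_usd:,.0f}")    # ~$80,000
```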

For executives, this represents a strategic choke point. As companies accelerate their AI initiatives, delays in data transfers slow down progress and increase operational risk. Continuous model training means new data must feed into systems daily, not just once. If the process isn’t fast and secure, it doesn’t matter how powerful an organisation’s AI models are; the pipeline becomes the limiting factor.

Leaders should invest in data transfer systems that prioritise both speed and governance. The aim should be a scalable data movement strategy capable of supporting real-time or near-real-time updates, with transparent monitoring to confirm that every transfer is secure and validated. The financial cost of inefficient data movement is substantial, but so is the opportunity cost of missed innovation due to delays.
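
As one illustration of what “validated” can mean in practice, the minimal Python sketch below compares checksums of the source and destination copies of a transferred object. The file paths and helper names are hypothetical, not the API of any specific product or provider.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file in 1 MB chunks and return its SHA-256 digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_transfer(source: Path, destination: Path) -> bool:
    """True only if the copied object is byte-for-byte identical to the original."""
    return sha256_of(source) == sha256_of(destination)

# Hypothetical paths for illustration: alert and retry before unverified
# data ever reaches an AI training pipeline.
if not verify_transfer(Path("/data/source/dataset.parquet"),
                       Path("/data/dest/dataset.parquet")):
    raise RuntimeError("Checksum mismatch: transfer failed validation")
```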

Riverbed leverages decades of expertise to optimise large-scale data migrations for AI applications

Riverbed has turned data movement into a high-performance operation. Drawing on 25 years of experience, the company helps clients extract, optimise, and move massive datasets efficiently across cloud environments. Chalan Aras described the approach as “serving it on a plate”: delivering data that’s ready for immediate use in AI workloads.

This approach has produced measurable results. One client faced a massive transfer job with over twenty petabytes still to move; at twelve days per petabyte, the original estimate put the project at up to nine months, but Riverbed completed it in less than a month. Another client, a financial services company, completed a 30 PB migration between clouds in just over a month while maintaining full governance compliance. These results mattered not just operationally but strategically: client teams were able to use GPU processing time on schedule, with no disruption to AI training.

C-suite leaders should pay attention to this shift. The ability to move data swiftly and dependably is no longer just an operational advantage; it’s a competitive one. Organisations that manage to streamline the data transfer process can enable faster AI experimentation, quicker time-to-market, and stronger ROI on cloud investments. Riverbed demonstrates how operational excellence in data handling directly supports strategic outcomes at the enterprise level.

Historical infrastructure decisions have resulted in fragmented and heterogeneous IT environments

Many large organisations operate with a patchwork of IT environments built over years of evolving business requirements. Past decisions, ranging from legacy on-premises systems to modern cloud deployments, were made to meet immediate needs rather than long-term integration goals. Different departments or business units often adopted their own cloud or SaaS solutions independently, creating silos that make enterprise-wide data consolidation increasingly complex.

For executives, this fragmentation presents a structural challenge to AI strategy. Data scattered across many systems cannot easily be aggregated for analytics or model training. This slows down innovation and limits the ability to extract unified insights from the organisation’s data assets. In a competitive environment, where AI performance relies heavily on comprehensive datasets, these internal barriers can directly constrain progress.

The solution requires leadership to prioritise integration and visibility across all data sources, whether on-premises, cloud, or SaaS-based. It also demands governance frameworks that connect these pieces without disrupting existing operations. Modernising infrastructure is not just about adopting new technology; it’s about creating cohesion across everything that already exists. A coordinated approach to infrastructure and data management gives enterprises a more complete foundation for AI-driven growth.

Single-cloud consolidations are often insufficient for meeting comprehensive enterprise needs

Many organisations consider consolidating all operations within a single cloud provider to simplify management, contracts, and maintenance. While this may reduce complexity on paper, it rarely delivers full coverage in practice. No hyperscale provider serves every geography or meets every regulatory and performance requirement. Consequently, enterprises often rely on at least one additional provider to fill those gaps, accepting added management overhead as a tradeoff for flexibility and coverage.

For business leaders, the key is strategic balance. A single cloud simplifies operations, but over-reliance increases vulnerability to vendor-specific limitations such as regional availability, service outages, or pricing changes. Multicloud structures provide resilience and access to the best tools from multiple vendors but introduce governance and operational challenges that require strong internal discipline.

C-suite teams should view cloud architecture as a continually evolving strategic choice, not a one-time decision. The goal should be achieving the right mix: maintaining performance, compliance, and resilience while avoiding unnecessary complexity. Enterprises that design their cloud strategies with this long-term view gain agility without losing control.

The rise of agentic AI intensifies the need for continuous, large-scale data movement

Agentic AI systems, which act autonomously to perform complex reasoning tasks, require constant access to large and diverse data sources. These systems operate continuously and depend on a steady flow of updated information to maintain accuracy and relevance. This creates a sharp rise in both the frequency and volume of data transfers across cloud environments. Unlike traditional AI models that are trained periodically, agentic AI must draw on data streams in near real time, demanding infrastructure capable of sustained, governed data movement.

For executives, this shift means that one-off data migrations are no longer sufficient. Companies must now design ongoing, automated pipelines that keep data synchronized across environments without bottlenecks. The reliability, security, and efficiency of these transfers directly determine the effectiveness of AI operations. Delayed or inconsistent data movement will reduce the system’s responsiveness and limit its ability to deliver timely insights.
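
A minimal sketch of what such a pipeline might look like is shown below; the `list_changed_objects` and `copy_object` helpers are hypothetical placeholders for whatever provider SDK or transfer tooling an organisation actually uses.

```python
import time
from typing import Callable, Iterable

def continuous_sync(
    list_changed_objects: Callable[[float], Iterable[str]],  # hypothetical: keys changed since a timestamp
    copy_object: Callable[[str], None],                      # hypothetical: replicate one object across clouds
    poll_interval_s: float = 60.0,
) -> None:
    """Continuously replicate new or changed objects to the environment
    where AI workloads run, instead of relying on one-off migrations."""
    last_checkpoint = 0.0
    while True:
        cycle_start = time.time()
        for key in list_changed_objects(last_checkpoint):
            copy_object(key)  # in practice: parallelised, retried, and checksum-verified
        last_checkpoint = cycle_start
        time.sleep(poll_interval_s)
```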

To manage this new scale, enterprises need infrastructure built for throughput and predictability. Solutions must handle constant, large-volume transfers without compromising governance or compliance. Chalan Aras noted that many customers are transitioning from single-instance migrations to continuous data movement models to support their evolving AI strategies. This is a signal for leadership teams to focus on operational readiness, ensuring that their data architecture can sustain the demands of 24/7 AI processing.

Final thoughts

The shift toward AI-driven operations has revealed one of the biggest challenges in modern enterprise: data fluidity. The issue isn’t just where information lives; it’s how quickly, securely, and intelligently it moves. Fragmented systems, costly transfers, and governance demands highlight a reality that every executive needs to address: AI performance is only as strong as the infrastructure feeding it.

Business leaders who treat data movement as a strategic capability, not just an IT function, will be the ones shaping the next competitive frontier. The ability to transfer large volumes of data efficiently, while maintaining compliance and transparency, defines how fast an organisation can innovate.

The path forward lies in unifying fragmented environments through smarter, continuous data pipelines. This isn’t about choosing one cloud or one vendor; it’s about enabling a connected data ecosystem that supports persistent AI advancement. Companies that invest in this now won’t just operate faster; they’ll think faster, decide faster, and lead with confidence in a world built on intelligent data.

Alexander Procter

April 1, 2026
