Gen AI adoption is reshaping cloud strategy due to complex hybrid architectures
Right now, generative AI is forcing a rethink of cloud strategy at the top level. It’s more than just another tool. It’s fundamentally different: highly experimental, resource-hungry, and largely unpredictable. That means the cloud spend forecasts you’re used to probably won’t hold. If you’re rolling out gen AI in a typical public cloud setup, you’ll quickly see costs stack up on compute, storage, and network traffic.
Juan Orlandini, CTO North America at Insight, summed it up perfectly: “If you’re running gen AI in the public cloud, the costs add up quickly.” And he’s not exaggerating. Every AI call, data transfer, and processing task triggers pricing that scales far faster than most traditional apps. Your teams might be testing a model today and scaling it tomorrow, without a clear view of ROI.
Bastien Aerni, VP at tech infrastructure firm GTT, points out that CIOs are stuck in a bind here. They don’t always know in advance which AI projects will succeed. Overinvesting is reckless. Underinvesting kills performance and chokes user experience.
Cloud architectures are also getting tangled. Hybrid environments supporting legacy systems, multiple cloud vendors, and experimental AI stacks introduce complexity that makes cost containment and governance harder. Add “data gravity” to that, where moving large datasets becomes inefficient and expensive, and you’ve got a problem that can’t be solved by simply throwing resources at it.
Proximity-based data processing improves performance and simplifies compliance
When it comes to AI performance, location matters more than most executives realize. Putting your AI close to where the data is generated cuts latency and slashes cost. It’s the difference between real-time results and laggy, inefficient systems. You don’t want your AI models waiting on data that has to travel halfway around the world. It burns time, energy, and money.
Scott Gnau, Head of Data Platforms at InterSystems, put it clearly: “Real-time AI only works if you’re close to the data source.” This is mission-critical for applications like live monitoring, predictive maintenance, and AI-enhanced field operations, all of which are already in play for major enterprises.
Now consider regulatory pressure. If your organization handles personal data, especially across multiple jurisdictions, proximity is also a compliance matter. Boris Kolev, Global Head of Technology at JA Worldwide, leads technology for an NGO operating in 115 countries. For him, keeping data local isn’t optional. “We have to comply with everything from GDPR to local youth protection laws,” he explains. That means using local models, regional data centers, and systems built for jurisdiction-specific privacy laws.
When data movement becomes a liability, the architecture has to adapt. Many organizations are turning to edge computing and regional cloud providers. It’s less about centralization now, more about control and context. Processing locally lets you meet response-time expectations and regulatory demands, without compromising scale.
For C-suite leaders, this shift in architecture should be a key part of any AI roadmap. You’re not just building smarter systems. You’re building them legally, efficiently, and in real time.
Flexible architectures help reduce vendor lock-in and enhance long-term adaptability
Lock-in is a silent threat. It slows your team, increases long-term operating costs, and makes it painful to evolve your platform as better AI tech comes to market. If your AI system depends entirely on one vendor’s cloud stack, you’re stuck with their roadmap, pricing, and performance, even when better tools are available elsewhere.
Juan Orlandini, CTO North America at Insight, recommends building abstraction layers, software elements that make your applications independent of any specific infrastructure. This gives you the freedom to switch AI backends without rebuilding entire systems. It’s basic infrastructure hygiene that too many companies ignore until it’s too late.
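To make that concrete, here’s a minimal sketch of what such an abstraction layer can look like for a simple text-generation use case. The interface, adapter classes, and method names are illustrative assumptions, not Insight’s actual design:

```python
# Illustrative sketch (not a specific vendor's implementation): a thin
# abstraction layer that keeps application code independent of any one
# AI provider's SDK.
from abc import ABC, abstractmethod


class TextGenerationBackend(ABC):
    """Provider-agnostic interface the application codes against."""

    @abstractmethod
    def generate(self, prompt: str, max_tokens: int = 256) -> str:
        ...


class VendorABackend(TextGenerationBackend):
    # Hypothetical adapter; in practice this would wrap vendor A's client call.
    def generate(self, prompt: str, max_tokens: int = 256) -> str:
        raise NotImplementedError("wrap vendor A's SDK call here")


class VendorBBackend(TextGenerationBackend):
    # Hypothetical adapter; switching vendors means changing only this class.
    def generate(self, prompt: str, max_tokens: int = 256) -> str:
        raise NotImplementedError("wrap vendor B's SDK call here")


def summarize(document: str, backend: TextGenerationBackend) -> str:
    # Application logic depends only on the interface, never on a vendor SDK.
    return backend.generate(f"Summarize the following document:\n{document}")
```

The point of the pattern is that swapping AI backends becomes a matter of writing one new adapter, not rebuilding the application around a different vendor’s stack.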
Bastien Aerni at GTT supports the same idea. His team focuses on high-performance platforms that can adapt to new AI technologies quickly, without worrying about compatibility roadblocks. This is about staying agile. It’s about preparing to use whatever works best, whenever it’s available.
Boris Kolev, Global Head of Technology at JA Worldwide, takes a grounded approach, built from experience managing tech systems across shifting budgets. His organization leans on open-source tools first, using licensed platforms only when necessary. That decision is driven not just by cost, but by control. Open-source keeps architecture portable, and small providers offer customization that large vendors rarely match.
Scott Gnau, Head of Data Platforms at InterSystems, reinforces this point: “We stick to open standards and interchangeable architectures.” It’s a deliberate move to preserve flexibility. When your environment is built on interchangeable components, you’re not locked into outdated systems. You can move, adapt, evolve, and keep doing it without interruptions.
For C-suite executives, this means setting clear architectural standards at the leadership level. Invest in interoperability. Prioritize open frameworks. Architect for change. This keeps your AI strategy future-ready and opens the door to better performance and lower long-term costs.
Dedicated financial governance and monitoring are essential to control unpredictable AI costs
AI doesn’t follow traditional cost behavior. It scales unevenly. It spikes unpredictably. And it often runs outside the boundaries of classic IT budgets. For CFOs and CIOs, that’s a warning. If you treat gen AI like any other tech stack, you’re going to get surprised, and not in a good way.
Juan Orlandini of Insight points this out clearly. “AI projects are often siloed,” he says. “Organizations need to apply the same discipline to AI as they do to traditional workloads.” That means full visibility into usage patterns, cost structures, and scaling triggers from day one. A finance discipline like FinOps needs to evolve to meet the speed and uncertainty of AI operations.
Boris Kolev from JA Worldwide provides a direct example. His team actively monitors reads per minute and latency, indicators of how intensely a model is being used. When usage spikes beyond thresholds, they activate a manual shutdown process, a kind of kill switch, to avoid surprise overages. It’s tactical and effective. Long-term, Kolev wants the process automated, ideally with AI decision layers built in.
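A simplified version of that guardrail logic might look like the sketch below. The thresholds, metric names, and shutdown hook are hypothetical stand-ins for budget and SLA targets, not JA Worldwide’s actual tooling:

```python
# Minimal sketch of a threshold-based "kill switch" of the kind Kolev
# describes; thresholds and the shutdown hook are assumed for illustration.
from dataclasses import dataclass
from typing import Callable


@dataclass
class UsageSnapshot:
    reads_per_minute: float
    latency_ms: float


# Assumed guardrails; in practice these come from budget and SLA targets.
MAX_READS_PER_MINUTE = 5_000
MAX_LATENCY_MS = 1_500


def should_shut_down(snapshot: UsageSnapshot) -> bool:
    """Return True when usage crosses either guardrail."""
    return (
        snapshot.reads_per_minute > MAX_READS_PER_MINUTE
        or snapshot.latency_ms > MAX_LATENCY_MS
    )


def check_and_act(snapshot: UsageSnapshot, shutdown_hook: Callable[[], None]) -> None:
    # Today the response might be a manual shutdown; automating this decision
    # is what the callable hook stands in for.
    if should_shut_down(snapshot):
        shutdown_hook()


if __name__ == "__main__":
    check_and_act(
        UsageSnapshot(reads_per_minute=7_200, latency_ms=900),
        shutdown_hook=lambda: print("Threshold breached: pausing AI endpoint"),
    )
```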
Bastien Aerni adds another layer: poor data governance makes the situation worse. “Moving data from point A to B can be expensive, especially on public clouds,” he says. And often, data being moved isn’t even valuable. Low-quality data inflates storage costs without creating better AI outcomes. In simple terms: a lack of visibility compounds the financial impact.
For C-level leaders, this requires a redesign of how budgets are managed on tech projects. AI systems need clear guardrails, real-time tracking, and automated controls that can dial back unnecessary usage. Innovation can only scale if costs remain measurable and correctable. Without financial controls that move as fast as your AI, you’re not just running expensive systems, you’re running them blind.
Clean, governed data is critical for ensuring performance and cost efficiency
Generative AI needs a constant supply of data. But not all data is useful. When enterprises feed poor-quality or irrelevant data into AI workflows, the result is inefficient models and unnecessary overhead. This isn’t just an engineering issue, it’s a business one. You get higher cloud bills, slower model performance, and weaker outputs.
Bastien Aerni, VP at GTT, puts it plainly: many organizations “accumulate data that’s barely usable.” The cost of this mistake compounds fast. When teams move that data into public cloud systems for processing, the storage and transmission fees increase regardless of the data’s actual value. Without enforcing strong data governance, companies end up paying to process digital clutter.
From a C-suite perspective, there need to be clear expectations across departments: know what data exists, where it resides, and whether it’s worth retaining or processing. This requires more than inventory management, it requires a governance model tied directly to business use cases and AI objectives. Clean data leads to more useful models. Governed pipelines lead to reduced waste and better control.
Leadership should revisit existing data policies and push teams to define their data value frameworks. If data isn’t adding to the capabilities of your AI systems, it may be degrading them, or at a minimum, wasting money. Assigning ownership and accountability for data readiness can directly impact operational efficiency and overall performance.
Most enterprises are still in the early stages of gen AI integration
Despite the hype, most companies are still learning how to use generative AI. Executives are starting with controlled pilots: document summarization, transcription tools, and internal chatbots. These systems are isolated, lightweight, and allow teams to experiment without disrupting core operations. For now, that’s a strategic choice, not a technical shortcoming.
Scott Gnau, Head of Data Platforms at InterSystems, has made this observation after working with enterprises across sectors: “I haven’t seen many full-scale deployments yet. Most are still in pilot or early-stage implementation.” These aren’t failures. They’re part of a deliberate approach by experienced CIOs who want to avoid premature bets.
Companies like JA Worldwide are already showing how to make early-stage AI useful. Boris Kolev, their Global Head of Technology, is piloting a “pitch master” AI that gives students feedback on presentations, analyzing tone, posture, and content. It’s focused, goal-driven, and controlled.
Another trend is layering AI onto legacy infrastructure through APIs or injecting retrieval-augmented generation (RAG) strategies into existing workflows. This avoids tearing down systems that still work while giving new capabilities to outdated platforms. It’s efficient and allows teams to validate ROI clearly before making bigger moves.
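As a rough illustration of the RAG pattern, the sketch below grounds a model’s answer in documents pulled from an existing system. The keyword retriever and the generate callable are placeholders; a production setup would use vector search and a real model API:

```python
# Simplified retrieval-augmented generation (RAG) flow layered over an
# existing knowledge base; retriever and generator are illustrative stubs.
from typing import Callable, List


def retrieve(query: str, documents: List[str], top_k: int = 3) -> List[str]:
    """Naive keyword-overlap retriever; real systems use vector search."""
    query_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def answer_with_rag(
    query: str,
    documents: List[str],
    generate: Callable[[str], str],
) -> str:
    # Retrieved passages are injected into the prompt, so the model grounds
    # its answer in existing enterprise data rather than training data alone.
    context = "\n\n".join(retrieve(query, documents))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return generate(prompt)
```

Because the existing system only has to expose its documents through an API, the legacy platform itself stays untouched while the AI layer adds the new capability.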
For senior executives, this signals maturity in approach. The right move isn’t rushing to scale, it’s starting where the cost and risk are lowest, proving value, and expanding from there. AI adoption will accelerate, but only if the early stages are handled with precision and alignment.
Agile, evolution-ready architectures are key
Generative AI is advancing faster than most enterprise systems can keep up. New models, tools, and methods emerge constantly. Some make existing architectures inefficient. Others introduce new capabilities that demand structural adaptation. If your infrastructure is fixed to a static roadmap, you’re going to hit operational and financial limits quickly.
Juan Orlandini, CTO North America at Insight, doesn’t mince words: “A new tool may be 10 times better tomorrow.” That pace of improvement means that the system you optimized a year ago could already be outdated. The solution isn’t overhauling constantly. It’s building with flexibility from the start, allowing your teams to move fast without having to start over every time something better appears.
Bastien Aerni, VP at GTT, adds to that point. “It’s no longer enough to have a three-year IT plan,” he says. AI is pushing infrastructure decisions to become shorter-cycle, more iterative, and fully aligned with product and data roadmaps. What matters is your ability to evolve, on demand and at speed.
This requires modularity in how systems are put together. It means choosing platforms and frameworks that support updates without ripple effects. It’s not about being reckless, it’s about being ready. AI doesn’t wait for traditional release cycles or IT governance routines.
For C-suite leaders, the signal is clear. Strategic flexibility must become a core part of your infrastructure decision-making process. That means budgeting for adaptability, reviewing architectures more often, and enabling cross-functional teams to pivot without delay. The companies that build for continuous change will be the ones that create and capture value as AI reshapes every sector.
Concluding thoughts
Generative AI is a structural shift in how modern enterprises operate, spend, and scale. It challenges legacy assumptions around cloud, cost control, and architecture. For decision-makers, this isn’t optional transformation, it’s the next phase of competitiveness.
The companies winning early aren’t necessarily the biggest or fastest. They’re the most prepared to adapt. They’re building modular systems, designing for proximity, resisting vendor lock-in, and aligning financial governance with AI’s pace. They’re not locked into fixed timelines or rigid stacks, they’re architected for motion.
As an executive, your role now is to make strategic decisions that don’t force trade-offs between speed and control. Gen AI demands infrastructure that moves with the business, not after it. That means setting clear principles on flexibility, data quality, compliance, and cost oversight across your leadership teams.