Cloud isn’t broken, but if you’re starting to question its ROI, you’re not alone. A growing number of companies are pulling some workloads out of public infrastructure (AWS, Azure, Google Cloud) and bringing them in-house. Why? Because the bill came due, and it’s high.
Cloud was marketed as the simpler, more cost-effective alternative to physical hardware. That was true, ten years ago. Companies wanted speed, scale, and resilience without buying servers or managing physical infrastructure. But growth changed things. What worked for a 50-person tech team doesn’t scale for a 5,000-person enterprise without trade-offs. We’re seeing that now.
The point of repatriating workloads is not to abandon cloud, it’s to rethink where the cloud still serves you and where it clearly doesn’t. You’re not going backward. You’re moving forward with better data, clearer objectives, and more control. And that’s the goal: control over your performance, your costs, and your future architecture.
When you reevaluate infrastructure decisions, you’re not reacting emotionally. You’re optimizing for clarity, cost, and capability. That’s how tech should work.
Leaders often mistake cloud repatriation as a failure of strategy. It’s not. It’s a signal that your organization has evolved. You’re not scaling a startup anymore, you’re managing operational complexity. Freeing yourself from baked-in cost assumptions is strategic, not reactionary. Repatriation just means you’re recalibrating the system to serve your long-term goals.
Surprise charges and hidden cloud costs are major drivers behind the repatriation trend
Most finance teams don’t get hit with a massive cloud bill because of one major mistake. They get hit because of a hundred small ones. And they stack up fast.
Cloud billing isn’t as transparent as it should be. Defaults get expensive. Misconfigured machines run idle. Auto-scaling rules spin up more instances than needed. Your developers launch test environments and forget about them. You pay for every byte crossing a region or leaving the cloud, without always knowing what triggered the charge. That noise turns into real money.
Everyone starts off using the cloud in good faith. Then the monthly bill shows up… and nobody can explain it in under an hour. That’s a problem. If you can’t trace cost back to specific infrastructure decisions, you’re not running a scalable system, you’re on autopilot. And autopilot shouldn’t cost $3 million per year.
That’s what happened at 37Signals. Their CTO, David Heinemeier Hansson, said they spent $3.2 million in 2022 despite optimizing their AWS usage. Roughly half of that just went to storage. That kind of spending led them to begin moving their Basecamp and HEY platforms off the cloud. Not overnight. Strategically. They’re building for independence.
As a leader, you should expect clarity and control, not surprise. Budgets need forecasting. Repatriation isn’t about escaping the cloud, it’s about removing unpredictability. If your team doesn’t know where half your cloud costs come from, you’re not optimizing, you’re absorbing losses. And that’s not sustainable. Move aggressively toward systems you can audit, debug, and budget consistently.
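As a concrete starting point, here is a minimal audit sketch, assuming an AWS environment with boto3 credentials configured. It flags running EC2 instances with consistently low CPU and shows who owns them; the 14-day window, the 5% threshold, and the “Owner” tag convention are illustrative assumptions, not a standard.

```python
# Minimal idle-compute audit (illustrative). Assumes AWS credentials are
# configured for boto3. The lookback window, idle threshold, and "Owner" tag
# are assumptions to adapt, not defaults from any tool.
from datetime import datetime, timedelta, timezone
import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

end = datetime.now(timezone.utc)
start = end - timedelta(days=14)

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=start,
            EndTime=end,
            Period=86400,            # one datapoint per day
            Statistics=["Average"],
        )
        points = stats["Datapoints"]
        avg_cpu = sum(p["Average"] for p in points) / len(points) if points else 0.0
        tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
        if avg_cpu < 5.0:            # assumed "idle" threshold
            print(f"{instance_id}  avg CPU {avg_cpu:.1f}%  owner: {tags.get('Owner', 'untagged')}")
```

Even a report this small forces the conversation the billing dashboard avoids: who owns this, and why is it still running?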
Cloud sprawl compounds uncontrolled cloud spending
The simplicity of deploying cloud services is part of the problem. Any team, in any department, can spin up instances, databases, APIs, and workloads in minutes. But with ease comes chaos, especially if there’s no centralized oversight. What starts as agility turns into scattered costs and fragmented infrastructure. That’s cloud sprawl.
Executives often realize too late that unused or forgotten resources are still charging them. Dev teams launch test environments and don’t shut them down. Someone forks a virtual machine, and it stays idle for six months. Multiply that across large teams, and you’re spending capital on assets nobody’s using or tracking.
This unmanaged growth of cloud artifacts (compute, storage, services) compounds over time. Asset visibility drops. Your finance team sees a rising bill, but they can’t trace it back to a business-critical function. Cleaning it up takes time and alignment between engineering, IT, and finance. It’s not just about cutting waste, it’s about rebuilding discipline.
The solution isn’t to ban the cloud. It’s to create infrastructure governance that operates at scale: shared controls, automated cost audits, and accountability. If you don’t know what’s deployed where, you’re not running infrastructure, you’re funding inefficiency at scale.
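What those automated audits look like will vary, but a hedged sketch makes it concrete. The snippet below, again assuming AWS and boto3, lists unattached EBS volumes that are still billing and instances missing the tags a policy might require; the required tag names are examples, not a recommendation.

```python
# Illustrative governance sweep: orphaned storage and unattributable instances.
# Assumes AWS credentials for boto3; the required tag set is an example policy.
import boto3

REQUIRED_TAGS = {"Owner", "CostCenter", "Environment"}   # example policy, adjust to yours

ec2 = boto3.client("ec2")

# Unattached volumes keep billing every month even though nothing uses them.
for vol in ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)["Volumes"]:
    print(f"unattached volume {vol['VolumeId']}  {vol['Size']} GiB  created {vol['CreateTime']:%Y-%m-%d}")

# Instances that can't be attributed to a team or budget line.
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        tags = {t["Key"] for t in instance.get("Tags", [])}
        missing = REQUIRED_TAGS - tags
        if missing:
            print(f"instance {instance['InstanceId']} missing tags: {', '.join(sorted(missing))}")
```

Run on a schedule and routed to the owning teams, checks like these are what “shared controls” look like in practice.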
If you’re leading an enterprise with dozens of teams deploying cloud infrastructure without coordination, you’re likely hemorrhaging money. Cloud sprawl isn’t caused by incompetence. It’s the result of speed without policy. Leaders need to restore structure before deciding whether to repatriate, optimize cloud usage, or both. The hidden cost isn’t the infrastructure itself, it’s the loss of architectural clarity.
Data egress charges serve as hidden cost traps, complicating cloud exit strategies
Moving things into the cloud is easy. Moving them out is where you hit friction, especially on your balance sheet. That friction comes from egress fees, and they aren’t always obvious. Companies often don’t realize how much they’ll pay to transfer data out of a public cloud until they attempt to leave or restructure.
These fees create a lock-in effect. The larger your datasets and the more your apps rely on large-scale data movement, the more painful exit becomes. And this isn’t hypothetical. Antoine Jeol, cofounder at Holori, published public estimates in 2023 showing that the top cloud providers charged significantly for moving 50TB of data, costs that can quickly climb if you’re migrating multiple systems.
Yes, some cloud providers waived or reduced egress fees in 2024, but these policies are full of conditions. For example, Microsoft offers only the first 100GB per month for free on its Internet Egress service, and that’s over its premium global network. These aren’t enterprise-tier breaks, they’re breadcrumbs.
If your entire operation depends on large-scale data interaction (internal pipelines, external APIs, backups, live analytics), you’re probably overpaying for outbound traffic. And if you’re planning to repatriate or even shift providers, these fees can delay or derail your entire timeline.
If you’re a CTO or CFO, this is the kind of cost that undermines your long-term flexibility without you realizing it until it’s too late. Every business should model potential egress costs as part of its infrastructure evaluation, even if it isn’t planning to migrate today. You don’t want to find out you’re trapped when dealing with vendor limitations or strategic pivots.
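That model doesn’t need to be elaborate. A tiered calculator like the sketch below is enough to put a number in the planning spreadsheet; the tier boundaries and per-GB rates are illustrative placeholders rather than any provider’s published price list, so substitute the figures from your own bill.

```python
# Rough egress cost model for infrastructure evaluations. Tier sizes and
# per-GB rates are illustrative placeholders; use your provider's current rates.
ILLUSTRATIVE_TIERS = [
    (10_240, 0.09),        # first ~10 TB per month, $/GB (assumed)
    (40_960, 0.085),       # next ~40 TB (assumed)
    (float("inf"), 0.07),  # everything beyond that (assumed)
]

def egress_cost(gb_out: float, tiers=ILLUSTRATIVE_TIERS) -> float:
    """Estimate outbound-transfer cost for a given volume in GB."""
    cost, remaining = 0.0, gb_out
    for tier_size, rate in tiers:
        billed = min(remaining, tier_size)
        cost += billed * rate
        remaining -= billed
        if remaining <= 0:
            break
    return cost

# Example: a one-off 50 TB migration under these placeholder rates.
print(f"50 TB out: ~${egress_cost(50 * 1024):,.0f}")
```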
Legacy “lift and shift” migrations have led to inefficiencies that drive repatriation
Many companies moved to the cloud under pressure (timelines, growth, ambition) but did so without redesigning their systems for cloud-native architecture. Instead of rethinking how applications should run in cloud environments, they lifted their existing workloads and dropped them into the new environment untouched. That’s called lift-and-shift.
The problem? This approach imports old inefficiencies into a system that charges based on usage. Legacy applications weren’t built with cloud cost models in mind. So they become expensive fast, running continuously, over-consuming compute and storage, and missing out on cloud-native benefits like serverless computing or autoscaling that reacts to traffic, not assumptions.
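The gap is easy to see on the back of an envelope. This sketch compares the same workload priced always-on versus scaled to demand; the hourly rate and the traffic profile are assumptions purely for illustration.

```python
# Always-on versus demand-scaled cost for the same workload (illustrative).
# The hourly rate and utilization profile are assumptions, not benchmarks.
HOURS_PER_MONTH = 730
RATE_PER_HOUR = 0.40                     # assumed on-demand instance price

always_on = RATE_PER_HOUR * HOURS_PER_MONTH

# Assume full capacity is needed 8 hours a weekday and ~20% the rest of the time.
busy_hours = 8 * 22
quiet_hours = HOURS_PER_MONTH - busy_hours
scaled_to_demand = RATE_PER_HOUR * (busy_hours + 0.2 * quiet_hours)

print(f"always-on: ${always_on:,.0f}/mo   scaled to demand: ${scaled_to_demand:,.0f}/mo")
```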
Refactoring to be truly cloud-native is often sidelined because of its perceived cost and complexity. But avoiding it leads to bloated infrastructure, reduced agility, and unpredictable cost behavior. Over time, companies begin asking whether those applications, which never really fit the cloud model anyway, belong there at all.
That’s where repatriation starts to make sense: when costs get high, performance stabilizes, and you realize the application never needed that level of cloud overhead in the first place.
Executives need to stop treating cloud migration as a checkbox. Lift-and-shift may get you there quickly, but if you’re not prepared to invest in adapting the architecture, you’re effectively paying cloud rates for on-prem performance. Make the decision intentionally: rearchitect for the cloud, or move high-cost, low-change workloads back to environments you can control and predict.
Security, compliance, and performance challenges also motivate cloud repatriation
Cost isn’t the only factor behind repatriation. For some industries, operating in public cloud environments introduces real friction, not just financial, but regulatory and operational. Certain jurisdictions enforce data residency, which means some data must be physically stored in specific countries. Public cloud regions don’t always accommodate those rules.
Security, too, becomes a concern as businesses scale and the infrastructure touches more critical services. While cloud providers offer robust security controls, they rely on shared responsibility models. That’s not always sufficient under strict compliance regimes or internal audit requirements.
Performance matters as well. Some applications don’t benefit from being distributed globally. If latency, packet routing, or inter-service communication is inconsistent at scale, localizing the workloads can stabilize efficiency. Service downtime or regional disruptions, common in large, multitenant clouds, can also hurt operations in ways the CFO and CTO both notice.
Citrix research confirms this. In addition to cost, companies repatriate workloads due to unexpected compliance issues, inability to meet internal performance expectations, and outages. These factors, while less visible than a billing spike, are equally compelling once they begin impacting customers and revenue.
For boards and C-suite leaders, repatriation isn’t just a technical conversation. It’s about risk management. If infrastructure decisions prevent you from entering a regulated market or increase security exposure, you’re taking on liabilities, not just inefficiencies. Repatriation becomes a strategic safeguard when you need absolute control over where data lives, how it moves, and how it’s protected.
Workload characteristics determine the suitability of repatriation
Not every workload belongs in the cloud. That’s the simple version. The more complex truth is that some workloads scale more predictably, consume fixed resources over long periods, and don’t benefit from the flexibility of cloud infrastructure. At some point, for those applications, renting becomes more expensive than owning.
When workloads are storage-heavy (data backups, replicated archives, logging systems, large datasets), cloud storage costs don’t stay small. For example, 37Signals discovered that roughly $1.5 million of their $3.2 million annual AWS bill in 2022 came just from Amazon S3 storage. That’s not a burst in traffic, it’s steady storage cost. They’re already moving key products like Basecamp and HEY toward an on-prem model to fix that.
Compute and network activities also behave differently as companies grow. If your resource usage is predictable, and cloud elasticity isn’t providing measurable performance returns, you’re paying for flexibility you’re not using. In those scenarios, infrastructure ownership (on-prem or colocation) can turn variable costs into more manageable, linear ones.
This doesn’t mean cutting everything from the cloud. It means understanding what works better where, and aligning infrastructure placement with cost efficiency, not with trend-driven IT decisions.
As a C-suite leader, you should push for detailed workload analysis. Technical teams may favor cloud convenience over efficiency. Your job is to tie each architecture decision to financial impact, and to ROI in both performance and cost. Some workloads pay for themselves in the cloud. Others just accumulate expenses. Know the difference, and prioritize accordingly.
Repatriation becomes increasingly cost-effective as organizations scale
Cloud pricing models are optimized for startups, small teams, and low-utilization environments, for good reason. In the early stages of a product or company, agility matters more than cost breakdowns. But as organizations mature and systems become more predictable, those pricing models begin to lose their advantage.
At scale, cloud bills don’t grow linearly, they often rise exponentially, especially when expansion is tied to non-optimized architecture, cloud sprawl, and always-on resource models. Dependencies, network load, API traffic, and storage retention stack fast. At $10K per month, that might be noise. At $1M per year, it’s a problem that can’t be ignored.
As the business grows, investments in hardware, colocation, and dedicated infrastructure start paying off in capital efficiency. You spend up front, and in exchange, you gain stable ownership of a resource stack that no longer fluctuates based on usage timing or configuration mistakes. Even with personnel costs (engineers to implement, maintain, and optimize systems), you often come out ahead on long-term TCO.
This level of control can’t be replicated in a multitenant cloud setup. Cloud automation helps, but it doesn’t replace ownership and architectural customization when you’re operating at enterprise scale.
CFOs and CTOs should align early on workload forecasts and projected usage. What looks cost-friendly in the cloud today won’t be later if usage becomes large and predictable. Don’t assume that scaling in the cloud will remain cost-effective on autopilot. Plan your inflection points in advance, model when repatriation beats rental, and act before you’re locked into cost structures that stifle long-term efficiency.
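A minimal sketch of that inflection-point modeling is below. Every figure in it is an assumed planning input, not a benchmark; the point is the method of comparing cumulative cloud spend against capex plus recurring ownership costs.

```python
# Inflection-point model: cumulative cloud opex versus owning the hardware.
# All figures are assumed planning inputs; replace them with your own forecasts.
CLOUD_MONTHLY = 85_000       # current cloud run rate, $/month (assumed)
CLOUD_GROWTH = 0.02          # monthly growth of that bill (assumed)

HARDWARE_CAPEX = 600_000     # servers, storage, network gear (assumed)
COLO_AND_POWER = 12_000      # recurring colocation and power, $/month (assumed)
EXTRA_HEADCOUNT = 25_000     # added ops engineering, $/month (assumed)

cloud_total = 0.0
owned_total = float(HARDWARE_CAPEX)
cloud_bill = float(CLOUD_MONTHLY)

for month in range(1, 61):   # five-year horizon
    cloud_total += cloud_bill
    cloud_bill *= 1 + CLOUD_GROWTH
    owned_total += COLO_AND_POWER + EXTRA_HEADCOUNT
    if cloud_total >= owned_total:
        print(f"breakeven around month {month}: cloud ${cloud_total:,.0f} vs owned ${owned_total:,.0f}")
        break
else:
    print("cloud stays cheaper over this horizon under these assumptions")
```

If the breakeven lands inside your planning horizon, that is the inflection point to schedule around rather than react to.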
Regulatory and compliance pressures can necessitate a move away from public cloud services
If your industry falls under strict regulation (finance, healthcare, government, defense), you can’t afford ambiguity about where your data lives or how it’s handled. Data sovereignty laws often require that sensitive data be stored within specific national or regional borders. Public cloud doesn’t always provide that level of control.
Most cloud providers operate global networks, but their regional data centers may not cover all compliance obligations. And even when they do, verifying and enforcing controls at the physical and logical levels isn’t always straightforward. Auditors, regulators, and internal stakeholders expect proof, not just provider assurances.
When those expectations aren’t met, business risk increases. That’s where repatriation or hybrid infrastructure becomes more than an optimization exercise, it becomes a compliance necessity. You architect the environment to meet external mandates, not just internal preferences. That protects your market access and reduces the legal uncertainty tied to sensitive data.
And beyond geography, some regulated businesses also prefer more operational transparency: visibility into every step of data handling, security protocols, and user access. Cloud providers operate under shared responsibility models. But sometimes, shared isn’t enough.
A structured, multi-step evaluation is critical to a successful repatriation strategy
Ripping workloads out of the cloud without a plan causes more problems than it solves. Cloud repatriation demands a deliberate, multi-phase approach, starting with visibility, followed by detailed workload analysis, infrastructure selection, technology alignment, and execution.
The first step is performing a clean, line-by-line audit of cloud spending. Not just how much you’re paying, but where, and why. Break down storage, compute, and network layers. Investigate underused services. Identify “zombie” assets still incurring costs. Cost signals show you where inefficiencies are hiding.
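If your footprint is on AWS, one way to start that audit is to pull spend grouped by service from Cost Explorer and rank the biggest line items. The sketch below assumes Cost Explorer is enabled and the credentials carry the ce:GetCostAndUsage permission; the dates are just an example month.

```python
# Pull one month of spend grouped by service and rank the biggest line items.
# Assumes Cost Explorer is enabled and boto3 credentials with ce:GetCostAndUsage.
import boto3

ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-05-01", "End": "2024-06-01"},   # example month
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

rows = []
for group in response["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    rows.append((amount, service))

# Largest cost drivers first: these are the candidates for deeper investigation.
for amount, service in sorted(rows, reverse=True)[:15]:
    print(f"{service:<45} ${amount:,.2f}")
```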
From there, evaluate workload fit. Determine which applications and services require flexibility versus those with fixed, stable demand. Predictable systems, such as storage replication, internal APIs, or job schedulers, might deliver better ROI when moved to on-premises or colocation.
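One illustrative way to sort workloads, not a standard method, is to score demand variability from historical usage: a low coefficient of variation suggests steady demand that rarely needs cloud elasticity. The threshold below is an assumption to tune against your own data.

```python
# Demand-variability heuristic for workload placement (illustrative only).
# Low coefficient of variation = steady demand = repatriation candidate.
from statistics import mean, pstdev

def demand_variability(hourly_usage: list[float]) -> float:
    """Coefficient of variation of a usage series (std dev / mean)."""
    avg = mean(hourly_usage)
    return pstdev(hourly_usage) / avg if avg else 0.0

def placement_hint(hourly_usage: list[float], threshold: float = 0.25) -> str:
    cv = demand_variability(hourly_usage)
    return "repatriation candidate (steady demand)" if cv < threshold else "keep elastic (bursty demand)"

# Example: a steady archival job versus a spiky customer-facing API.
steady = [40, 42, 41, 39, 40, 41] * 4
bursty = [5, 5, 80, 120, 10, 5] * 4
print(placement_hint(steady))   # repatriation candidate (steady demand)
print(placement_hint(bursty))   # keep elastic (bursty demand)
```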
Next, decide between hybrid and full repatriation. Hybrid models allow critical, customer-facing apps to stay in the cloud while shifting background loads to physical infrastructure. This split environment reduces cost pressure while maintaining performance where it matters.
Then look at your stack. Replatforming onto open-source technologies like Linux, Kubernetes, or OpenStack can reduce dependency on proprietary platforms. This minimizes vendor lock-in and lets skilled teams operate in ecosystems with greater flexibility and lower licensing cost.
Finally, build a migration plan that includes realistic timelines for data movement, hardware setup, personnel training, and rollback contingencies. Egress fees and sequencing become operational factors. Repatriation isn’t just about leaving the cloud, it’s about doing it with minimum disruption and maximum control.
C-suite leaders must treat repatriation like an investment, one that requires upfront clarity, measurable ROI, and well-structured execution. Any move involving critical infrastructure has implications for performance, risk, and timelines. Shortcuts are expensive. Invest the time to align technology strategy with financial and regulatory objectives, and make every phase accountable.
Hybrid infrastructures are poised to be the long-term model for enterprise IT
As infrastructure strategies mature, most enterprises are moving toward hybrid environments, not because cloud is failing, but because operational reality is more complex than a single-platform solution. Businesses are looking for performance, control, and cost efficiency all at once. Hybrid lets them balance those priorities.
Certain workloads (customer-facing platforms, globally distributed applications, services with unpredictable demand) still run well in the cloud. You scale responsively and land closer to your users. But back-end operations, batch processing, storage-heavy systems, and internal tooling often benefit from returning to colocation or on-prem environments where performance is predictable and billing is controllable.
The hardware has caught up. Today, companies can deploy high-performance CPUs, large-core servers, and optimized networking gear in compact, powerful configurations. Colocation centers make infrastructure expansion viable without full data center ownership. You don’t have to build power and cooling, you lease capacity and bring your architecture.
This direction isn’t theoretical. According to a CIO Cloud Trends Survey by Azul, 22% of CIOs plan to repatriate some workloads, and 17% intend to delay or cancel cloud projects to manage costs. At the same time, Flexera’s State of the Cloud Report found that 21% of workloads and data have already been migrated back from public cloud environments. These aren’t isolated decisions, they reflect a growing realignment in enterprise IT strategy.
If you’re leading infrastructure strategy at scale, there’s no need to treat cloud and on-prem as opposing frameworks. They’re tools. The right combination depends on your workload patterns, cost models, and customer requirements. What matters is creating an environment where every workload runs in the most efficient, secure, and performance-aligned way possible, without vendor constraints and budget surprises.
Cloud repatriation signifies a maturing IT strategy centered on cost efficiency and operational control
A few years ago, cloud adoption was a competitive differentiator. Today, it’s just the norm. That shift means enterprises have to rethink their cloud strategies, not through hype or legacy decisions, but through current business objectives.
When companies first moved to the cloud, they were in high-growth, high-velocity phases. The need was speed. But growth changes the equation. As systems stabilize, cost becomes more visible. Flaws in architecture, over-reliance on proprietary systems, and lack of long-term cost controls start to erode the original value proposition.
Repatriation is what comes next when leadership starts asking harder questions: What are we actually paying for, and is it aligned with performance? Can we control infrastructure direction without waiting for a vendor roadmap? Do we understand and own the risk around outages, pricing changes, or service interruptions?
Many are now answering those questions by bringing workloads back under direct control. This isn’t reactive. It’s methodical. It’s what happens when infrastructure evolves to match enterprise maturity.
C-suite leaders should treat repatriation not as a trend reversal, but as a natural evolution. Strategic decisions are not about where workloads live, they’re about how well those workloads serve current business metrics. Cloud served its purpose. Now, the focus moves to precision, efficiency, and autonomy. Repatriation simply reflects that shift.
The bottom line
If the last decade was about getting into the cloud fast, the next is about stepping back and making smarter infrastructure choices. Cloud still solves real problems (scalability, deployment speed, global reach), but it’s not a one-size-fits-all solution. As businesses evolve, so do their operational priorities. Efficiency starts to matter more than convenience.
Repatriation isn’t about rejecting the cloud. It’s about demanding clearer value from it. Leaders aren’t walking away from modern architecture, they’re moving toward alignment. Operational, financial, and regulatory alignment.
This is where maturity shows up. You know what your systems need, what your teams can manage, and what your business is willing to pay for flexibility. The value isn’t in following trends, it’s in building infrastructure that supports the business without questioning every invoice. When the equation no longer makes sense, you rewrite it.
The companies that win long-term will be the ones that build on intention, not inertia. Cost, control, compliance: none of these should be afterthoughts. They’re decisions. And now is the time to make them deliberately.


