AI workloads will exceed data centre capacity by 2027
AI is not just moving fast; it’s moving at breakneck speed. Most leaders in IT and infrastructure already see the writing on the wall: by 2027, the volume of AI workloads will outstrip the capacity of many existing data centres. That’s a problem if you’re trying to operate at the edge of innovation and stay competitive while the infrastructure under you starts to hit its limits.
According to Salute’s 2026 State of the Industry report, 83% of senior professionals in data centre and IT roles believe their current infrastructure won’t keep up with AI demands over the next two years. More immediately, 74% admit they aren’t fully prepared to handle AI workloads today. These numbers aren’t fluff. They reflect structural constraints that aren’t being solved fast enough: limited compute space, slow deployment timelines, and outdated operating models.
If you’re running a company that depends on AI to deliver value, whether through predictive tools, automation, or customer personalization, you’ll be constrained by the rate at which your data systems can scale. This isn’t just a CIO problem. It’s a boardroom-level issue. Your infrastructure readiness is about to become a core driver of business agility and risk exposure.
For C-suite leaders, now is the time to come to terms with the speed of transformation. It’s not enough to have a cloud strategy on paper. The systems housing your models and workloads need to be scalable, power-resilient, and ready to run near max utilization by default. If they’re not, plan for disruption, or invest now to prevent it.
AI and high-performance computing drive data centre development
AI and high-performance computing (HPC) aren’t side projects anymore; they’re at the centre of enterprise infrastructure growth. The way businesses plan their future footprint has shifted. The focus isn’t how many square feet of racks you’ve got; it’s performance, density, and speed of execution.
Almost half the respondents (48%) in Salute’s report named AI and HPC as the top drivers of new data centre development. That’s not surprising. What’s more important is how leaders are recalibrating their approach. Instead of scaling out with endless new facilities, they’re increasing density, optimizing power usage, and getting smarter with workload placement. It’s about running harder, not just building bigger.
A full 84% of respondents said speed-to-market influences their investment decisions. That’s a clear signal: execution speed now separates winners from laggards. Gaining capacity faster, deploying with fewer constraints, and getting to production in weeks instead of months isn’t a luxury; it’s a necessity. This changes how we think about ROI, operations, and capital efficiency in infrastructure.
For executives, the playbook is changing. You need performance-led thinking embedded in your strategic infrastructure planning. Start by reviewing how well your current setup handles compute-heavy AI training, model inference, and input/output processes at scale. If your data centre capex and operating plans aren’t aligned with dense, AI-ready performance benchmarks, you’re already behind. Even if you’re not in the business of delivering cloud services, your digital backbone is core to your ability to innovate and scale. Treat it that way.
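To make that review concrete, here is a minimal back-of-envelope sketch in Python, using entirely hypothetical figures rather than anything from the Salute report, of the kind of headroom check a performance-led planning exercise might start with: how many dense AI racks an existing facility’s spare power and floor space could actually absorb.

```python
# Hypothetical back-of-envelope check of whether an existing facility can
# absorb a dense AI deployment. Every figure below is an illustrative
# assumption, not a benchmark from the Salute report.

def ai_rack_headroom(total_power_kw, committed_load_kw, kw_per_ai_rack, rack_positions_free):
    """How many dense AI racks the spare power budget and floor space can support."""
    spare_power_kw = total_power_kw - committed_load_kw
    racks_by_power = int(spare_power_kw // kw_per_ai_rack)
    return min(racks_by_power, rack_positions_free)

# Illustrative inputs: a 5 MW site with 3.2 MW already committed to existing
# workloads, dense AI racks assumed at 40 kW each, and 60 rack positions free.
supportable = ai_rack_headroom(
    total_power_kw=5_000,
    committed_load_kw=3_200,
    kw_per_ai_rack=40,
    rack_positions_free=60,
)
print(f"AI racks supportable within the current power envelope: {supportable}")
# In this example power, not floor space, is the binding constraint: 45 racks, not 60.
```

Even a toy calculation like this tends to surface the real constraint quickly, which is exactly the point of a performance-led review.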
Electricity availability is the new bottleneck for AI expansion
If you think capacity is the biggest constraint to scaling AI infrastructure, look again. Power is the real limiter, and it’s now the most urgent flashpoint in the system. As demand for AI expands, the grid is struggling to keep pace. Data centres are hitting power caps even in advanced markets. And it’s not just about raw megawatts. It’s about stability, availability, and timing.
Salute’s 2026 report names electricity access as the top barrier to AI expansion, ahead of funding, staffing, and supply chain disruption. Why? Because in many locations, power can’t be delivered where or when it’s needed. Grid interconnect times are getting longer. Some operators are stuck in months-long wait queues just to plug into the system. This fundamentally constrains how fast you can grow, no matter how much capital or ambition you have.
Forecasts in the report estimate data centre electricity demand will more than double by 2030. That changes the equation for how boards and C-level leaders need to think about infrastructure planning. Power strategy isn’t something you hand off to ops anymore; it’s a core business issue. From site acquisition to investment sequencing, energy availability is shaping which markets are viable and which ones fall behind.
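To translate that forecast into planning terms, a quick sketch (with the baseline year treated as an assumption, since the report’s framing doesn’t pin one down here) shows the compound annual growth rate a doubling by 2030 would imply.

```python
# What "more than double by 2030" implies as an annual growth rate.
# The baseline year is an assumption; the report's summary doesn't specify one.

def implied_annual_growth(multiple, years):
    """Compound annual growth rate implied by reaching `multiple` in `years`."""
    return multiple ** (1 / years) - 1

for baseline_year in (2025, 2026):
    years = 2030 - baseline_year
    rate = implied_annual_growth(2.0, years)
    print(f"Doubling between {baseline_year} and 2030 implies ~{rate:.0%} growth per year")
```

Either way, the implied compounding rate (roughly 15–19% per year) far outpaces how quickly most grids can add interconnection capacity.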
The good news: companies are starting to adapt by going deeper into power procurement strategies, things like renewable power purchase agreements, on-site generation, and battery storage. But make no mistake, large-scale AI still requires grid-level power for sustained performance. As Erich Sanchack, CEO of Salute, puts it: “Scaling AI isn’t just about technology in the rack – it includes the resilience of supply. The grid has become the new bottleneck.” If you’re not solving for that constraint now, it will slow down everything else, including innovation.
Emphasis on upgrading existing facilities over new builds
The data centre industry is shifting its priorities. Instead of defaulting to new construction, operators are now focused on upgrading what they already have. This isn’t just about cutting costs; it’s a move driven by necessity. Power constraints and land use pressures make new sites harder to launch quickly. Upgrades offer a faster, more flexible path to unlock capacity.
Salute’s report shows that around 50% of organisations are focused on improving and retrofitting their current estates. That includes upgrading cooling systems, reworking power distribution, and boosting compute density in existing facilities. These improvements raise utilisation levels without the delays linked to permitting and new site development.
This trend is telling. Infrastructure leaders are being pushed to do more with what’s on hand. Improving thermal performance and power efficiency can immediately increase capacity for AI workloads, especially in urban centres where utility limits are tight. Operators are also standardising resilience strategies like redundant power paths, smarter energy management, and real-time load balancing to stretch what’s possible in constrained environments.
For decision-makers, this shift deserves attention. If your IT footprint sits on aging infrastructure, it’s time to invest in efficiency-first upgrades before chasing expensive expansions. Retrofitting might be less glamorous than launching new campuses, but it delivers faster ROI when energy constraints force you to prioritise near-term gains. Ignore these upgrades, and you risk falling behind markets where operators are already pushing toward optimal compute-per-watt performance.
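As a rough illustration of the compute-per-watt framing, the sketch below uses hypothetical figures, not numbers from the report, to compare an aging hall before and after an efficiency-first retrofit: lower cooling overhead and denser racks deliver noticeably more usable compute from roughly the same facility power draw.

```python
# Hypothetical compute-per-watt comparison for a retrofit decision.
# All figures are illustrative assumptions, not data from the Salute report.

def compute_per_facility_kw(it_load_kw, pue, pflops_delivered):
    """Delivered petaFLOPS per kW of total facility power (IT load x PUE)."""
    facility_power_kw = it_load_kw * pue
    return pflops_delivered / facility_power_kw

# Before: an aging hall with a PUE around 1.6 and lower-density racks.
before = compute_per_facility_kw(it_load_kw=2_000, pue=1.6, pflops_delivered=40)

# After an assumed retrofit (upgraded cooling, reworked power distribution):
# PUE near 1.2, and denser racks deliver more compute from the same hall.
after = compute_per_facility_kw(it_load_kw=2_600, pue=1.2, pflops_delivered=65)

print(f"Before retrofit: {before:.4f} PFLOPS per facility kW")
print(f"After retrofit:  {after:.4f} PFLOPS per facility kW")
print(f"Improvement: {after / before:.1f}x for roughly the same site power draw")
```

The exact numbers matter less than the shape of the comparison: when power is the binding constraint, efficiency gains compound directly into usable AI capacity.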
Critical shortage of specialized operations talent hinders AI readiness
AI infrastructure doesn’t run on automation alone. People still matter, especially the people who understand how to operate, maintain, and optimize high-density systems running 24/7. Right now, that talent pool is too small. The growth in AI workloads is exposing a structural weakness in the industry’s staffing capacity.
Salute’s 2026 report highlights ongoing shortages in specialist operations talent. These are the roles that keep physical infrastructure efficient, stable, and compliant, from electrical engineers and facility managers to systems operators trained on AI-specific workloads. The technical bar is higher, and on top of that there’s fierce competition from hyperscalers that can pay more, offer bigger projects, and attract the very talent most providers need.
The situation is compounded by a lack of training pathways. Universities and technical institutes haven’t kept pace with the speed and complexity of next-gen infrastructure. High-density AI deployments require strict discipline in power quality, cooling performance, and uptime management. If teams aren’t trained to handle these systems, even the best infrastructure won’t reach full output.
Jon Healy, Managing Director EMEA at Salute, put it plainly: “You can build the most advanced facility in the world, but without specialist operations talent, it will never realise its full potential.” That’s not an overstatement. Talent is now a gating factor in execution. For C-level leaders, the action items are clear: invest aggressively in upskilling programs, form talent partnerships with universities, and create internal development tracks focused on mission-critical roles. If your long-term bets depend on AI, your organisational design needs to reflect it, now.
Strong optimism for AI investment despite cybersecurity and regulatory risks
Even with mounting scrutiny from regulators and heightened cybersecurity threats, confidence in AI infrastructure investment remains strong. For most executive teams, the risk profile is understood, and acceptable. The commercial upside is too significant to ignore, and the market is trending toward faster reward cycles and broad enterprise adoption.
According to Salute’s survey, 75% of industry leaders believe the returns on AI infrastructure outweigh the risks. Most of them expect to see ROI within five years. That level of optimism isn’t naive. It reflects a deepening conviction that the future of competitive advantage, in every sector from logistics to healthcare, depends on AI capability.
This doesn’t mean risk is being ignored. Governance models, security controls, and compliance frameworks are getting built in parallel. As regulators begin to define clearer rules around data use and model transparency, smart operators are getting ahead of those standards rather than waiting to react. The same applies to cybersecurity. Enterprise leaders know that running high-density, multi-tenant environments means every workload needs to be resilient, secure, and consistently audited.
For executives, the path forward is clear. Stay invested, but be deliberate. Understand that performance, compliance, and security need to scale together. The winners won’t be the ones that just deploy capital the fastest; they’ll be the ones that build infrastructure with enough resilience and foresight to capitalize on AI’s maturity curve over the next decade.
Key highlights
- AI workloads will surpass capacity by 2027: Most data centre leaders expect AI demand to exceed current infrastructure within two years. Executives should prioritize immediate scalability planning to avoid performance bottlenecks and missed opportunities.
- AI and HPC are driving infrastructure strategy: Nearly half of leaders cite AI and high-performance computing as the top drivers of new data centre projects. Investment should shift toward performance-centric, high-density deployments that can meet AI’s compute demands quickly.
- Power constraints are a critical expansion risk: Electricity availability is now the top blocker to AI infrastructure growth, with demand projected to more than double by 2030. Leaders must elevate energy strategy to the board level and align site planning with power availability.
- Upgrading beats building in the near term: With grid limits and siting delays, operators are retrofitting existing facilities as a faster, more efficient route to AI readiness. Funding should flow to high-impact upgrades in power, cooling, and density to maximize existing assets.
- Talent scarcity hampers AI infrastructure efficiency: A shortage of specialized operations talent is limiting uptime, optimisation, and high-density performance. Leaders should invest in upskilling, retention, and recruitment to close capability gaps that slow deployment velocity.
- Return on AI still outweighs the risks: Despite concerns around cybersecurity and regulation, 75% of leaders expect positive AI ROI within five years. Companies should continue scaling AI capabilities while embedding compliance and resilience into infrastructure from the start.


