Infrastructure complexity is blocking effective AI implementation in enterprises
We’re seeing something pretty frustrating across enterprise IT. Almost every company out there is trying to use AI to make operations faster, smarter, and more efficient. They’ve got the intention. The plans. Even the models. In fact, F5’s latest data shows 96% of organizations are already deploying AI in some form. 73% of them are aiming to use it specifically to improve application performance.
So what’s holding them back?
The actual problem is buried in the infrastructure. According to the same F5 report, 60% of IT professionals say they’re stuck in manual work: troubleshooting, managing configurations, patching systems. They’re firefighting instead of building. The AI tools exist, but teams don’t have time to implement or maintain them because they’re buried in operational chaos.
When you break it down, every layer of complexity (multiple cloud environments, incompatible tools, fragmented APIs) creates drag. It eats away time and energy. Enterprises want to deploy AI to handle that, but the workload keeps growing, and AI becomes just another thing they don’t have bandwidth to manage.
Executives need to understand this: when complexity gets out of control, future-focused investment gets choked. If your teams are stuck maintaining fragmented systems, they can’t innovate.
Fix the architecture first. Reduce unnecessary complexity. Only then can your AI tools deliver real impact.
Human resource limitations stem from a lack of available time
A lot of companies think they have a skills problem when it comes to AI. They don’t.
The data says 54% of organizations expect lack of AI expertise to be their top challenge in 2025. On paper, that sounds like a training issue or a hiring issue. But based on what’s actually happening inside IT departments, the real issue is time, not talent.
Teams know what to do. They either already have skilled people or the ability to train them. What they don’t have is the space to learn, test, and execute. Day-to-day operations are consuming all available bandwidth: firefighting outdated systems, adapting to vendor updates, keeping fragmented environments running. This eats every hour that could be used to build out AI capability.
The result is predictable. AI adoption stalls, not because people don’t understand it, but because they never get a clear runway to engage with it. This disconnect is why so many AI initiatives look good in strategy decks and fail to materialize in the field.
This matters for decision-makers. Investing in training is important, but it can’t offset a system that gives nobody time to think, test, architect, or deploy. You have to create space, by intentionally reducing complexity, to let those capabilities take root. Otherwise, you’re trying to plant seeds on pavement.
Hybrid cloud infrastructures often intensify complexity rather than eliminating it
The hybrid cloud model was sold on the promise of simplicity: one control plane, seamless workload portability, unified policy management. From an architectural standpoint, it still looks efficient. But implementation reveals a more difficult reality.
When you connect on-prem systems with multiple public cloud platforms, each with its own APIs, security frameworks, and operational standards, you don’t end up with one system. You end up with many systems stitched together. This doesn’t simplify management; it aggregates it.
Enterprises are dealing with the fallout from that model. They’re managing applications across multiple clouds, each with its own rules for load balancing, observability, and access control. The pressure this places on IT teams is significant. F5’s research shows that 94% of organizations now operate across multiple cloud platforms. The median is four. That isn’t sustainable without careful simplification and governance.
Even more telling, 79% of enterprises have pulled workloads back from public clouds to on-prem environments, not because of cost, but because they couldn’t manage the complexity. That’s what’s actually driving infrastructure fatigue.
The lesson is clear. Distributed infrastructure without clear coordination increases operational drag. If you’re committing to hybrid cloud, you need a very deliberate strategy for how each environment interacts, what stays standardized, and what should be eliminated.
Vendor fragmentation and inconsistent APIs disrupt automation efforts and stymie AI advancements
Automation is meant to reduce workload. In practice, it often becomes another source of operational stress. The issue isn’t automation itself; it’s the constant instability caused by vendor-led API changes and platform inconsistencies.
Enterprise teams are building automated systems that work well in the short term. Load balancers talk to monitoring tools; orchestration platforms manage cloud deployments. But then a vendor updates its API, introduces a new version, removes a legacy function, or changes authentication models. The integration breaks. The team scrambles to patch the mismatch. Something meant to reduce manual effort turns into another crisis.
This is why API sprawl is now being seen as a strategic problem, not just a technical one. A10 Networks reports that 58% of organizations cite API sprawl as a significant issue. Each API is a new system to learn, monitor, and update. No two cloud providers handle API behavior exactly the same way; differences among AWS, Azure, and on-prem solutions make technical interoperability harder than it should be. That inconsistency derails even strong automation strategies.
F5’s research confirms this dynamic: working with vendor APIs is now the most time-consuming task related to automation. That’s not an edge case. It’s how most enterprises are spending their operations time.
The challenge for leadership is to get ahead of this. That means fewer vendors, simpler integrations, and enforced internal standards, as in the sketch below. Without that discipline, your team spends more time fixing automation than gaining from it.
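One common way to enforce such a standard is to isolate each vendor API behind a thin internal adapter, so a breaking change touches one module instead of every automation script. Here’s a minimal Python sketch; the vendor client, endpoint path, and field names are hypothetical, not any specific product’s API:

```python
from dataclasses import dataclass


@dataclass
class PoolStatus:
    """Internal, vendor-neutral view of a load balancer pool."""
    name: str
    healthy_members: int
    total_members: int


class AdcAdapter:
    """Stable internal interface; only this class knows the vendor's API."""

    def __init__(self, vendor_client):
        self._client = vendor_client  # hypothetical vendor SDK instance

    def pool_status(self, pool_name: str) -> PoolStatus:
        # If the vendor's next API version renames fields or changes
        # authentication, the fix lives here, not in every script that
        # consumes PoolStatus.
        raw = self._client.get(f"/api/v2/pools/{pool_name}")  # hypothetical call
        members = raw["members"]
        return PoolStatus(
            name=pool_name,
            healthy_members=sum(1 for m in members if m["state"] == "up"),
            total_members=len(members),
        )
```

The design choice matters more than the code: automation depends on the internal contract, and vendor-specific translation is concentrated in one reviewable place.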
Poor vendor support and escalating licensing costs are exacerbating infrastructure instability at critical times
Enterprise infrastructure is under pressure: more distributed, more critical to revenue, and harder to manage. This is exactly when vendor partnerships should be stabilizing forces. But right now, they’re doing the opposite.
According to data from A10 Networks, 55% of U.S. executives and 47% of EMEA IT professionals say they would switch application delivery controller (ADC) providers due to poor support. That’s not a small signal; it’s a symptom of systemic dissatisfaction. Companies aren’t getting what they need from vendors: quick support, predictable pricing, and consistent service.
Licensing is another growing issue. A10’s report shows 44% of organizations are being negatively impacted by recent vendor license model changes. More concerning, 29% of executives now name licensing cost increases as their top concern, ranking it even above security. For infrastructure platforms that are supposed to be helping enterprises manage complexity, these policy shifts make things worse.
When customers are already navigating multi-cloud deployments, dynamic workloads, and security compliance, the last thing they need is ambiguity from vendors. They need clarity. They need collaboration.
This is where forward-thinking executives should focus. Instead of onboarding more tools or expanding platform scope, audit your existing vendor relationships. Don’t let poor support and unpredictable pricing quietly erode your operational reliability. Select partners who make infrastructure more manageable.
Addressing operational complexity requires disciplined simplification
There’s a consistent pattern in enterprises that succeed with automation and AI: they start by removing noise. That means fewer tools, fewer APIs, clearer standards. You can’t automate what you haven’t controlled, and you can’t control what’s spread across dozens of loosely coordinated systems.
Automation doesn’t create discipline. It amplifies what’s already there. If your environment is messy, automation will turn small inefficiencies into large-scale breakdowns. This is why reducing complexity has to come before attempting deeper automation or AI integration.
F5’s research shows a strong signal here. Today, 95% of organizations are standardizing on observability tools like OpenTelemetry. This is important. Good observability is about creating consistent, structured data that automation and AI can act on. It’s a necessary foundation.
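For teams starting down this path, the setup can be small. Here’s a minimal sketch using the OpenTelemetry Python SDK; the service name and span attributes are illustrative, and a real deployment would export to a collector rather than the console:

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# One shared resource schema means every team's spans carry the same
# metadata, which is what makes the data usable by automation and AI.
provider = TracerProvider(
    resource=Resource.create({"service.name": "orders-api"})  # illustrative name
)
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("process_order") as span:
    # Structured attributes, not free-form log strings.
    span.set_attribute("order.item_count", 3)
```

The point isn’t the tool itself; it’s that every service emits telemetry in one consistent, structured format.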
Meanwhile, the pressure to modernize isn’t going away. Two years ago, just 79% of companies said their revenue depended heavily on digital applications. Now, that’s 93%. That means reliability and speed are tied directly to business outcomes.
The gap between enterprises that simplify complexity and those that remain trapped in operational chaos
The divide is already forming. On one side, there are organizations that have invested in simplifying their infrastructure. They standardize their tools, reduce vendor overlap, and align their internal systems. These organizations aren’t just more efficient; they’re AI-ready. They create space for automation, experimentation, and agile decision-making because their systems are manageable and structurally sound.
On the other side, there are enterprises stuck in cycles of reactive maintenance. Operational drag, unmanaged API sprawl, and fragmented vendor support limit their ability to scale. These are the environments where new initiatives stall before they begin, not due to vision, but because systems are too disorganized to support execution.
AI will not slow down to wait for these organizations to catch up. The companies that streamline now will pull ahead, not just in performance optimization, but in product development, security response, customer engagement, and overall resilience. AI-driven decision systems amplify process quality; if the foundation isn’t there, competitiveness erodes quickly.
In conclusion
If you’re serious about using AI to drive performance, reduce latency, or cut operational cost, your first priority isn’t the AI itself; it’s your infrastructure. Right now, too many enterprises are building on unstable ground. Complexity is everywhere (sprawling APIs, overlapping vendors, incompatible tools), and most of it is self-created.
This isn’t a technology problem. It’s a leadership decision.
The gap between companies that simplify and companies that scale chaos gets wider every year. AI accelerates that divide. Enterprises that reduce surface area, standardize on fewer platforms, and enforce operational clarity will move fast. They’ll automate better, recover quicker, and extract real value from their data.
The rest will spend more time patching than progressing.
What’s needed isn’t more investment. It’s intention. Decide what doesn’t belong. Simplify what you already have. Build AI into systems that are already working, not ones you’re still trying to untangle.