Weak oversight and missing governance structures are hindering enterprise AI ROI

AI spending is accelerating across industries, yet many organizations still struggle to see measurable returns. The problem isn’t the technology; it’s the lack of structure. Solvd’s research shows that most AI pilot failures stem from poor project management, limited visibility, and weak coordination, not from flaws in the AI tools themselves. In simple terms, businesses are investing heavily but flying blind when it comes to oversight.

Successful AI integration demands more than enthusiasm and budget. It depends on strong governance, accountability, and transparency. Without these, even the best models can collapse under internal misalignment. When ownership is unclear, decision-making gets delayed, and progress slows. Establishing disciplined frameworks helps teams stay aligned on metrics, timelines, and expectations. That’s how organizations move from pilots that struggle to those that scale.

Executives should not underestimate the cultural side of this shift. Building oversight into AI operations is not about restricting innovation; it’s about channeling it. It ensures teams have both freedom and direction. The companies that formalize AI governance now will move faster later because their infrastructure is built for scale, not firefighting. According to Solvd, visibility and management gaps remain the top reasons AI initiatives fail. Closing them isn’t optional anymore; it’s decisive leadership.

Increased AI spending in 2026

AI spending isn’t slowing down. In fact, 71% of business leaders plan to increase their investment in 2026. But this isn’t the same spending wave we saw in earlier years. The era of open-ended experimentation is giving way to scrutiny. Boards and investors now want proof that every AI dollar delivers visible business outcomes.

This pressure will sharpen how companies approach development cycles, success metrics, and accountability. Leaders can no longer afford to fund projects with unclear goals or inconsistent data. The new benchmark is measurable, data-driven results: performance that connects directly to financial and operational returns. AI projects that can’t defend their value with evidence won’t survive another fiscal year.

Executives need to align AI investments with concrete business drivers. That means setting precise KPIs from day one, applying robust data validation, and continually auditing the real-world impact. Decision-making must shift from optimism to facts. Companies that can measure and communicate value will lead; those that can’t will lose credibility with stakeholders quickly.
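To make "precise KPIs from day one" concrete, here is a minimal sketch of a KPI audit in Python. The metric names, targets, and actuals are entirely hypothetical; the point is that each AI initiative carries explicit, machine-checkable targets rather than narrative claims of progress.

```python
from dataclasses import dataclass

@dataclass
class KpiCheck:
    """One measurable target for an AI initiative (names are illustrative)."""
    name: str
    target: float
    actual: float

    def on_track(self) -> bool:
        return self.actual >= self.target

def audit(kpis: list[KpiCheck]) -> list[str]:
    """Return the KPIs that miss their targets and need executive attention."""
    return [k.name for k in kpis if not k.on_track()]

checks = [
    KpiCheck("ticket_deflection_rate", target=0.30, actual=0.34),
    KpiCheck("forecast_accuracy", target=0.90, actual=0.82),
]
print(audit(checks))  # -> ['forecast_accuracy']
```

The useful part is not the code but the discipline it encodes: a project either meets its declared numbers or it appears on the audit list, with no room for optimism to substitute for facts.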

For forward-thinking organizations, this new scrutiny is not a threat; it’s a filter. It will help ensure that only AI programs with strategic importance and clear ROI move forward. The 71% of leaders expanding their AI budgets aren’t wrong to spend more. They’re just required to spend smarter.

Continuous experimentation with AI requires embracing repeated iterations for scalable success

AI adoption is moving from raw experimentation to focused execution. Companies are learning that first attempts often fall short, not because the ideas lack potential, but because execution needs refinement. Hulbert highlighted this as a sign of maturity, explaining that successful AI projects are rarely the result of a single effort. Instead, they evolve through multiple iterations until the right balance of architecture, performance, and scope is achieved.

This cycle of trial and adjustment demands a strategic mindset from leadership. Executives must recognize that discarding early work is not failure; it’s deliberate progress. Each iteration produces data, insights, and clarity that strengthen the next version. The best teams document what went wrong, react fast, and rebuild on better structures. That’s how organizations transform scattered pilots into scalable, reliable systems.

For leaders, the challenge is resource discipline. Experimentation cannot mean unlimited cost or endless development. Teams must define checkpoints that evaluate when to pivot, restart, or expand. Building a framework that encourages iteration while maintaining accountability ensures the company continues learning without losing direction.
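The checkpoint idea above can be sketched as a simple decision rule. The thresholds and inputs here are assumptions for illustration, not a prescribed policy; a real framework would draw on the KPI and cost data each team actually tracks.

```python
def checkpoint_decision(improvement: float, budget_used: float) -> str:
    """Toy iteration-checkpoint rule (both thresholds are assumptions).

    improvement: measured gain this iteration, e.g. 0.12 for +12%.
    budget_used: fraction of the allocated budget already spent.
    """
    if improvement >= 0.10:
        return "expand"   # clear gains: scale the pilot further
    if budget_used < 0.75:
        return "pivot"    # runway remains: adjust scope or architecture
    return "restart"      # little progress, budget nearly spent: rebuild

print(checkpoint_decision(0.12, 0.50))  # -> expand
print(checkpoint_decision(0.02, 0.90))  # -> restart
```

Even a rule this crude forces the conversation the paragraph describes: at every checkpoint, someone must name the measured improvement and the spend, and the portfolio decision follows from evidence rather than momentum.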

Enterprises that treat iteration as part of their long-term strategy will move faster when scaling future capabilities. The lesson from Hulbert’s insight is clear: progress in AI is not linear, and disciplined experimentation is the only path to getting it right.

Restarting AI initiatives, though resource-intensive, prevents sustained waste on underperforming projects

Abandoning weak AI projects is hard, but continuing to fund them is worse. According to recent findings, 52% of tech leaders admit their organizations still allocate budgets to AI projects that show little or no measurable improvement. This reveals a deeper issue: organizations hesitate to make decisive cuts because of sunk costs or internal pressure to show progress.

Restarting isn’t just about starting fresh; it’s about stopping financial leakage. The longer underperforming projects continue without results, the higher the opportunity cost. When teams acknowledge that a pilot isn’t working, leadership must act quickly to redirect resources toward initiatives with stronger potential. This requires data-driven transparency and the courage to reset priorities without delay.

Executives should establish regular review cycles that assess not only technical viability but also strategic alignment. Every quarter, AI investments should be reevaluated against objectives, ROI metrics, and market relevance. When a project consistently under-delivers, it’s no longer an innovation initiative; it’s a liability. The faster misaligned work is identified and sunset, the faster resources can flow to what truly matters.
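A quarterly review of this kind is, mechanically, a partition of the portfolio. The sketch below assumes each project reports a measured ROI and a strategic-alignment flag; the field names, ROI floor, and sample projects are all hypothetical.

```python
def quarterly_review(portfolio: list[dict], roi_floor: float = 0.0) -> tuple[list[str], list[str]]:
    """Split AI projects into keep vs. sunset buckets.

    A project survives only if its measured ROI clears the floor AND it
    remains strategically aligned (both criteria are illustrative).
    """
    keep, sunset = [], []
    for project in portfolio:
        if project["roi"] > roi_floor and project["aligned"]:
            keep.append(project["name"])
        else:
            sunset.append(project["name"])
    return keep, sunset

portfolio = [
    {"name": "support_copilot", "roi": 0.18, "aligned": True},
    {"name": "legacy_chatbot", "roi": -0.05, "aligned": False},
]
keep, sunset = quarterly_review(portfolio)
print(keep, sunset)  # -> ['support_copilot'] ['legacy_chatbot']
```

The design choice worth noting is the AND: a project that delivers ROI but no longer fits the strategy is sunset just as readily as one that loses money, which is exactly the dual test of viability and alignment the review cycle calls for.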

For decision-makers, the message is simple: commitment to results outweighs attachment to past investments. By choosing to restart over sustaining inefficiency, organizations protect capital, accelerate learning, and stay ready to adapt. The statistic cited above, 52% still funding non-performing AI initiatives, shows that too few have made this shift. Those that do will lead the next wave of high-impact AI transformation.

Rapid AI innovation challenges traditional IT development models reliant on long-term foundations

AI is advancing faster than any previous wave of technology, and it’s changing how companies must think about development and investment. Traditional IT strategies rely on stable infrastructures that evolve predictably over years. Hulbert points out that this model no longer holds for AI. The technology landscape is moving at a speed that often makes foundational systems outdated before they are fully implemented.

This acceleration forces decision-makers to rethink the balance between permanence and adaptability. Long-term architecture still matters, but flexibility now decides competitiveness. AI platforms and models are updated continuously, and frameworks that worked six months ago may already be inefficient. Executives must plan for agility: systems that can be restructured as technology advances, data scales, and regulatory demands evolve.

This mindset demands operational resilience across the entire organization. Funding cycles must anticipate short lifespans for certain tools. Teams must be structured to pivot quickly and redeploy resources when the technology or data direction changes. Success now depends on how efficiently a company can revise its own systems while still meeting business goals.

Hulbert captures the essence of this change: in AI, sometimes the right move is to discard old systems and replace them with better ones. It’s not waste; it’s progress. As AI grows more dynamic, leaders who embrace rapid innovation and refuse to be slowed by outdated processes will outpace those who remain rigid. Adaptation, not endurance, will define the winners in this next phase of technological competition.

Key takeaways for decision-makers

  • Strengthen oversight to unlock AI ROI: AI investment is rising, but weak governance and unclear ownership are limiting results. Leaders should establish strong frameworks and transparency to ensure projects deliver measurable business value.
  • Demand data-backed performance from AI initiatives: With 71% of companies increasing AI budgets in 2026, scrutiny is tighter than ever. Executives must tie funding to data-proven outcomes and cut projects that lack clear ROI metrics.
  • Embrace iteration as part of AI maturity: Success in AI depends on refining pilots through repeated cycles. Leaders should treat iteration as strategic learning, encouraging teams to rebuild based on insights rather than preserving weak pilots.
  • Cut losses on underperforming AI projects: Continuing to fund weak AI initiatives only delays progress. With 52% of tech leaders reporting wasted spend, executives should redirect resources toward higher-performing, strategically aligned projects.
  • Adopt agile models to keep pace with rapid AI change: Traditional IT approaches are too rigid for the speed of AI evolution. Leaders should design adaptable systems and processes that allow faster iteration, replacement, and scaling as technology advances.

Alexander Procter

April 9, 2026

