Many AI initiatives fall short of delivering tangible business returns

There’s been an extraordinary amount of money poured into generative AI, $30 to $40 billion globally, according to the MIT 2025 State of AI in Business report, and yet, 95% of companies have nothing to show for it. No measurable return. That should concern anyone betting large on AI.

The problem isn’t with AI itself. It’s powerful tech with broad application. The issue is execution. Too many organizations jump into AI because it’s trending, not because they have clarity on what they’re solving. Use cases get built without strong ties to business objectives. That’s how you end up with impressive demos that don’t scale or solve anything meaningful. AI doesn’t fail because it’s complex; it fails because companies try to implement it without direction, discipline, or long-term thinking.

Research from Gartner shows that 45% of leaders in high-AI-maturity organizations maintain production deployments for three years or more. That’s real impact. The difference? These organizations commit to strategies that are designed around longevity and relevance.

AI’s potential to shake up markets is real. It’s not a question of whether it can deliver value, it can. The question is whether leadership is disciplined enough to build for results, not for headlines. If you’re not doing that, you’re spending money without leverage.

A clear, outcome-oriented focus is critical for AI strategy success

If you’re putting AI into your business and can’t say, in plain terms, what business problem it’s solving, you’re wasting resources. Focus is everything. Kathy Kay, CIO at Principal Financial Group, puts it clearly: when AI projects aren’t tied to specific outcomes, what you get is “AI sprawl.” That means too many experiments, too little return, and more technical debt.

Executives need to drive AI adoption by asking: What outcome do we want? Is this about reducing costs, speeding up growth, making customer experience better? If it’s not solving a problem that meaningfully shifts a business metric, why do it?

There’s a common mistake here, using AI to automate tasks just because it’s possible. Writing emails, creating documents, generating summaries, sure, generative AI can handle those. But unless those tasks are bottlenecks or provide some differentiating value, the return will be minimal. Don’t sacrifice AI capacity for convenience. Invest it where it drives real leverage, on outcomes that move your competitiveness.

Peter Mottram from Protiviti says it well: CIOs should funnel AI use cases through filters tied to metrics. You want value on the other side. Set KPIs at the start. Track them in-flight. And don’t get attached to flashy tools, get attached to results.

When you strip away the complexity, this comes down to discipline. Successful leaders align projects to goals. They ask hard questions early. They hold teams accountable for value creation. If that mindset isn’t in place, even the best AI tools won’t save the effort.

You don’t build AI strategy around tools, you build it around impacts. That’s where the actual ROI lives.

Aligning AI risk tolerance with strategic planning is essential

Any executive using AI needs to understand this, if you’re unclear about your organization’s risk appetite, you’ve already introduced risk. AI isn’t just code deployed at scale. It’s a new system of decision-making, and that demands a new approach to governance.

Ted Paris, EVP and Head of AI at TD Bank, is on point when he says it’s not enough to list risks. You need to decide where the boundary is, what level of risk your business is prepared to accept, and under which conditions. That means asking specific questions: How much autonomy do we allow AI to have? What decisions can it make? What can’t it do under any circumstances?

The data from EY’s 2025 Responsible AI Pulse Survey is sharp: 99% of companies faced financial losses from AI risks; 64% lost more than $1 million. Even more alarming, only 12% of executives could name effective controls for core AI risks. That’s a failure in design, not in technology.

You want AI to create value sustainably? Then you need a framework that guides how it’s deployed, measured, and governed. Faruk Muratovic from Deloitte outlines this clearly, your AI strategy needs enforceable guardrails. This isn’t about limiting growth. It’s about enabling innovation with confidence that you’re not exposing your business to uncontrolled liability.

CIOs should drive this conversation with legal, risk, privacy, and security leaders in the room. Doing it right means defining what is acceptable, who makes the final decisions, and how issues will be detected and corrected. Leave that undefined, and you’re running a system you can’t defend.

AI can shift the trajectory of your business, but only if you’re clear on where your limits are. If those limits don’t exist, now’s the time to set them.

Balancing innovation with trust and transparency is vital for sustainable AI

AI innovation without trust is risk disguised as progress. Charles Thompson, who leads technology for Vanguard’s wealth division, put it clearly: you can’t separate trust from AI deployment. This technology affects how customers interact, how teams work, and how decisions get made. If people don’t trust it, adoption stalls and reputational risks grow.

Externally, AI must meet expectations for transparency, fairness, and security. Internally, it must be explainable and reliable. Not just accurate, understandable. AI that makes decisions staff can’t interpret isn’t just unhelpful, it’s a liability.

This is where governance becomes essential. Too many organizations push AI features fast, but skip over clarity. Customers and employees want to know what the technology is doing and why. Without that, even something technically brilliant won’t survive scrutiny.

Executives shouldn’t see oversight as a brake. It’s how you make sure everything you build with AI actually works and holds up when challenged. Thompson is clear, if your team is overly cautious because governance isn’t defined, that’s a signal to recalibrate. Give them the freedom to innovate, but with rules they trust.

Leaders have to own the message. Be public and precise about where AI is used and how it benefits people. That’s what builds alignment between innovation and purpose. It keeps trust moving in the same direction as technology. And when the external environment, media, regulators, competitors, asks questions, you’ll already have the answers.

A mature data strategy is a prerequisite for effective AI deployment

You can’t build serious AI without serious data. It’s simple. If you don’t know what data you have, where it lives, or whether you’re even allowed to use it, you’re already behind. AI doesn’t perform well with messy or restricted data. If your data strategy isn’t current, your AI strategy won’t matter.

Anthony Caiafa, CIO and CTO at SS&C Technologies, makes the point clearly: do you know which datasets AI needs to ingest? Do you have the rights to use them? Are they governed properly? These are foundational questions, but they’re often left unanswered until it’s too late. And with increased regulation, mishandling data doesn’t just slow projects, it opens up direct legal risk.

The issue isn’t only about access. It’s about infrastructure too. Faruk Muratovic from Deloitte highlights that most enterprises don’t have data that’s structured or curated well enough to support AI tools. Whether your data sits on-prem, in the cloud, or across multiple silos, making it usable in real-time for AI takes intentional design. Skipping this step guarantees failure at scale.

The numbers back it up. According to the 2025 State of Data Security Report from Immuta, 55% of data leaders said their strategies weren’t keeping up with AI demands. Another 64% reported real problems granting secure and timely access to authorized users.

CIOs won’t fix this on their own. They need to work closely with data teams and business leaders. Functional leads must own their department’s data, not just in terms of access, but readiness to drive outcomes. If business leaders want AI to deliver value, they need to help deliver the right data.

Don’t assume your data is ready. Validate it early. Pressure test its security, structure, quality, and compliance. Then align your infrastructure to support the workload and latency demands of your AI models. Without that, deployment remains theoretical.

Securing AI systems is imperative to prevent exploitation and breaches

As more AI gets embedded into enterprise tools, the threat surface expands rapidly. Security must be part of the original AI strategy, not added after deployment. If you ignore this, you’re gambling with vulnerabilities that can be exploited quickly and silently.

Matt Costello, VP of Commercial AI Solutions at Booz Allen Hamilton, makes it clear: enterprise AI applications are being actively targeted. This isn’t theoretical. According to a Gartner spring 2025 survey, 29% of cybersecurity leaders experienced attacks on their generative AI infrastructure. Another 32% were hit by prompt-based attacks that manipulated AI outputs.

AI platforms aren’t magically immune from risk. Malicious prompts, system hijacking, and biases in training data are real attack vectors. The risk doesn’t only come from inside the core enterprise architecture. CIOs need to analyze all AI supply chains, especially AI features being pulled in from SaaS products and third-party APIs. Each integration point is a possible breach location.

You also can’t rely purely on the cybersecurity team to fix this on their own. AI needs custom threat models, code reviews, pipeline validation, and regular testing. Costello points out that even widely adopted frameworks can have deep vulnerabilities, just like we’ve seen in traditional software stacks.

The responsibility is with leadership. If you’re deploying AI across your enterprise and haven’t audited for security, you’re exposed. Include your risk, IT, and legal teams early. Secure your pipelines. Test your models in adversarial conditions. Vet your tools from the bottom up.

If AI is becoming integral to your decisions, it must be defended just as thoroughly as your customer data and infrastructure. Anything less is negligent strategy.

Determining the degree of AI autonomy is crucial

As AI systems become more capable, the question is no longer whether they can make decisions, it’s how many decisions you are willing to let them make without human involvement. This is a governance issue, not just a technical one. If you’re planning to scale AI, you need a clear position on how much autonomy you’re comfortable delegating.

Peter Mottram, Managing Director at Protiviti, emphasizes that CIOs must lead the conversation on AI autonomy. You can’t wait for regulators or external crises to force the issue. Now is the time to define what AI is allowed to decide, and under what conditions those decisions can happen without human review. Establishing scope isn’t restrictive. It ensures systems don’t drift beyond your control.

Agentic AI, systems that can act and make decisions on their own, is not theoretical anymore. As these technologies mature, the risks of overextending autonomy become real. Without tight controls and clear workflows for escalation or override, you’re risking decisions being made in live environments that haven’t been vetted within organizational standards.

This isn’t about fear, it’s about readiness. The decisions you delegate to AI should be mapped, monitored, and tested under real-world conditions. That includes defining tolerance thresholds, establishing human checkpoints, and identifying failure modes. If something goes wrong, who is responsible? What actions can be reversed? What data informed the decision?

This level of control is essential if you expect AI systems to operate in mission-critical workflows. Without it, you compromise accountability and scalability. More importantly, you create conditions where trust in AI, internally and externally, can erode quickly.

C-suite leaders must treat this topic as a strategic discussion, not only a technical design choice. Set your boundaries. Document them. Measure how often systems operate within them. As AI agents evolve, your governance must keep pace. Anything less places long-term value and trust at risk.

In conclusion

AI isn’t complicated. Mismanaging it is. The real challenge isn’t the technology, it’s the decisions around it. If your strategy isn’t grounded in outcomes, built on clean data, aligned with your risk tolerance, and protected by real security, you’re not building something durable. You’re stacking risk.

Autonomy should be earned, not assumed. Trust should be designed into the process, not hoped for after rollout. And innovation should never come at the cost of clarity.

The leaders getting AI right aren’t just deploying tools. They’re making hard calls early, aligning across teams, and thinking several steps ahead. That’s what it takes. The rest is noise.

Alexander Procter

November 25, 2025

11 Min