Establishing AI governance structures for purposeful innovation
A lot of companies still treat AI like a science experiment: scattered pilots, disconnected POCs, and not much to show for it. That’s fine early on. But at scale, that randomness slows you down. Organizations that win with AI don’t just try things. They build systems that make trying things useful and repeatable. Walgreens is a good example of getting this right.
When Dan Jennings became CTO of Walgreens in 2023, he saw a company full of AI energy but little direction. Multiple teams were experimenting with models and tools, but each was doing its own thing. That’s not innovation. That’s noise.
So he created an AI Center of Enablement (COE). It’s not a physical room with flashing lights. It’s a virtual structure: cross-functional teams, standardized processes, and shared accountability. The COE connects technology, data, security, and business units under a common framework, and it does two things well: it enables innovation and it enforces consistency.
No project moves forward without a business case, a roadmap, clear metrics, and a defined test cycle that runs from idea to proof of concept (POC) to minimum viable product (MVP) to measurable impact. That’s how you make innovation predictable. And that’s what Walgreens now does.
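To make the idea concrete, here is a minimal sketch of what that kind of stage gating could look like if expressed as code. The stage names, required artifacts, and helper class are illustrative assumptions, not Walgreens’ actual tooling.

```python
from dataclasses import dataclass, field

# Illustrative only: stage names and required artifacts are assumptions,
# not Walgreens' actual COE process.
STAGE_REQUIREMENTS = {
    "idea": {"business_case", "sponsor"},
    "poc": {"business_case", "sponsor", "roadmap", "success_metrics"},
    "mvp": {"business_case", "sponsor", "roadmap", "success_metrics", "security_review"},
    "impact": {"business_case", "sponsor", "roadmap", "success_metrics",
               "security_review", "measured_results"},
}

@dataclass
class AIInitiative:
    name: str
    stage: str = "idea"
    artifacts: set = field(default_factory=set)

    def can_advance(self, next_stage: str) -> bool:
        """A project only moves forward when every artifact the next gate requires is present."""
        missing = STAGE_REQUIREMENTS[next_stage] - self.artifacts
        return not missing

# Example: a proposal with only a business case and sponsor cannot yet enter the POC gate.
pilot = AIInitiative("pharmacy-forecasting", artifacts={"business_case", "sponsor"})
print(pilot.can_advance("poc"))  # False until a roadmap and success metrics are attached
```

The point of a gate like this isn’t the code itself; it’s that advancing a project becomes a checkable decision rather than an ad hoc judgment call.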
More importantly, the COE helps manage the trade-off between speed and risk. Walgreens is both a retailer and a healthcare provider. Retail moves fast; healthcare requires control. With this structure, AI initiatives serving the retail business (inventory optimization, personalization, engagement) can iterate quickly. Healthcare use cases, like pharmacy forecasting or staff scheduling, are routed through a more disciplined track with the required oversight.
So what changed? Enthusiasm turned into intention. Random acts of AI became governed workflows. Tools people once treated as tech playthings are now treated as strategic assets. That’s the shift.
This approach works because it makes AI practical. You stop wasting time duplicating efforts across teams. You stop “playing” and start solving actual problems that matter to the business. And you do it safely, at scale.
Measuring AI ROI beyond traditional financial metrics
Most executives still default to one question with AI investments: “What’s the return?” That makes sense, but it’s also incomplete. AI earns its value in more ways than just saving money or boosting revenue. The companies doing this well are expanding what ROI actually means.
Take FMOL Health. Will Landry, their CIO, is clear about what success looks like in a nonprofit healthcare system. It’s not only about dollars; it’s about physician satisfaction, patient experience, and operational sustainability. They’ve rolled out ambient listening systems in hundreds of clinics. These tools reduce the time doctors spend finishing notes after hours, which means less burnout and more time actually talking to patients during visits. That shift toward fewer screen hours and more personal care matters.
But it doesn’t stop with staff morale or patient sentiment. There’s operational lift too. Landry pointed out a key signal: technology spending hasn’t ballooned even as the system scaled. Revenue and service demand went up, but tech costs remained steady. That means the AI is doing real work, handling more volume with the same resources. That’s tangible efficiency.
The same logic applies at Steelcase. Steve Miller, their CTO, doesn’t look at AI as just a cost-reduction tool. He ties AI projects to the metrics that matter to that part of the company. Sometimes that’s sustainability: cutting energy use or reducing wasted materials. Other times, it’s better user experiences through intelligent product design. At Steelcase, some AI systems optimize manufacturing flow; others track how people interact with the company’s workspace solutions. Both return value, even if they don’t hit the bottom line right away.
For leadership teams, the takeaway is clear. AI value shows up in multiple dimensions: employee output, customer trust, sustainability, or public health efficiency. The narrow lens of financial ROI won’t capture that. If you restrict your definition of success to what’s easily quantified, you’ll miss half the upside.
What FMOL Health and Steelcase show is that AI decisions need a broader accountability model, one that includes human outcomes and long-term metrics. Efficiency matters. But resilience, satisfaction, and reputation matter just as much, and AI can move them all in measurable, meaningful ways.
Building trust through embedded governance and ethical safeguards
If AI is going to run core parts of your business, especially in healthcare, finance, or infrastructure, then trust isn’t optional; it’s the entry point. You can’t skip governance and think your teams or customers will just go along for the ride. The companies doing this properly are making governance part of the build process, not something tacked on later.
Will Landry at FMOL Health runs his AI program on that principle. Their AI tools help doctors save time, especially with documentation. But the process is controlled from the start. Every system involving patient data requires explicit consent. Any AI-assisted note still passes through the physician before it ends up in a patient record. No matter how accurate the technology is, human oversight is baked in.
This mindset extends beyond clinical settings. FMOL Health also shares systems with independent clinics through “Community Connect,” which uses Epic, one of the major electronic health record platforms. That opens up a wide data flow between organizations. To manage this, Landry’s team coordinates with data security and privacy leads to ensure all AI-related data use is compliant and monitored. The point isn’t just to check boxes; it’s to signal to patients and staff that AI is being handled with care and accountability.
Steelcase approaches this differently, but with the same intent. CTO Steve Miller helped implement a cross-functional data governance council. They review every AI project from multiple angles: privacy, security, legal risk, and fairness. That includes policies around which data can be used, for what purpose, and who owns the outputs. This isn’t theoretical. It’s operational. They’ve embedded a governance specialist inside the AI development team, someone with real authority to raise flags, manage risk, and ensure bias and misuse are addressed before tools ship.
This structure doesn’t slow innovation; it enables it. When teams know the rules early, they can move fast within clear boundaries. Governance gives confidence to act, especially in high-stakes sectors where one misstep can erode trust overnight.
For C-level leaders, the message is simple: don’t separate trust from execution. Build it in. Design for it. If you don’t, someone else (regulators, customers, or your own staff) will raise the issue for you, probably after you’ve already exposed a risk. Governance done well isn’t red tape. It’s infrastructure for durable innovation.
Empowering people by using AI to scale human expertise
Too many companies treat AI as a replacement for workers. That’s short-sighted. The smarter move is to use AI to scale the capabilities of your best people, then spread that across the organization. ZoomInfo has leaned into this approach, and the results are clear.
Russell Levy, Chief Strategy Officer at ZoomInfo, explained that some of their most effective AI tools weren’t built by engineers or data scientists. They were built by frontline sales reps who understood exactly what worked in real conversations. These reps designed simple AI agents to automate routine tasks like call summaries, follow-up reminders, and outbound email suggestions. The agents don’t operate in isolation. They’re always reviewed and deployed with a human in control.
This isn’t just about building useful tools. It’s about transferring high-value knowledge quickly and at scale. When a top-performing rep shares an AI agent that mirrors their approach to customer interactions, that expertise becomes available to the entire sales team. It raises the baseline for performance across the board.
That shift, from isolated talent to shared capability, happens naturally when people see that AI tools are something they can shape, not just consume. This philosophy has taken root inside ZoomInfo’s workflow. Nontechnical employees now participate in AI development. The cultural impact is significant: people feel empowered, not sidelined. And adoption rates rise without extra pressure, because the tools are tailor-made by the people who actually use them.
What makes this model work is the clarity around roles. AI helps employees, but it doesn’t make decisions for them. Every agent has a human in the loop. That boundary is critical. It maintains accountability and ensures that AI remains an enhancer, not a shortcut that sacrifices quality, nuance, or judgment.
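For readers who want to picture that boundary, here is a minimal sketch of a draft-then-approve flow. The function names and the placeholder model call are hypothetical, not ZoomInfo’s actual agents; the only point is that nothing gets recorded without an explicit human decision.

```python
from typing import Optional

# Illustrative sketch of a human-in-the-loop agent boundary.
# generate_draft() stands in for any model call; it is a hypothetical helper,
# not ZoomInfo's actual implementation.

def generate_draft(call_transcript: str) -> str:
    """Placeholder for a model call that drafts a call summary."""
    return f"Summary (draft): {call_transcript[:80]}..."

def submit_summary(rep_approved: bool, draft: str, edits: Optional[str] = None) -> Optional[str]:
    """Nothing reaches the record unless the rep explicitly approves, and any edits win over the draft."""
    if not rep_approved:
        return None          # the agent never acts on its own
    return edits or draft    # the human-reviewed text is what gets recorded

# Usage: the agent drafts, the rep decides.
draft = generate_draft("Customer asked about renewal pricing and onboarding timelines.")
record = submit_summary(rep_approved=True, draft=draft)
```

The design choice that matters is the second function: approval is an explicit input, not a default, so accountability stays with the person.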
For executives, the key message is this: you don’t need to automate people out; you need to amplify what your best people already do well. When you put that capability into the hands of the full team, your entire organization gets smarter, faster, and more adaptive. But it only works if AI stays aligned with human expertise, not in place of it.
Key executive takeaways
- Build structure to scale AI impact: Leaders should formalize AI initiatives through shared governance structures like a Center of Enablement to move from scattered use cases to measurable business results with faster, safer execution.
- Broaden ROI metrics beyond profit: Executives should measure AI success not just by cost reduction but also by its impact on employee engagement, customer experience, and operational efficiency to capture the full value AI creates.
- Make governance a requirement: Embedding cross-functional governance into AI development is essential to ensure ethical use, protect data, and maintain trust, especially in regulated or high-sensitivity sectors.
- Use AI to amplify human skills: Empower employees to create and own AI tools that scale their workflow expertise; this drives faster adoption, preserves accountability, and increases overall team capability.


