Traditional governance frameworks are misaligned with the rapid, decentralized adoption of generative AI
We’re watching AI move faster than the systems meant to keep it under control. Most companies still use governance models built for slower, centralized decision-making. These frameworks assume approvals happen at the top. AI doesn’t wait for top-down approvals. It’s already in motion, embedded in tools, copilots, SaaS platforms, and third-party products, impacting how your teams make decisions, engage customers, analyze data, and build software.
This isn’t just a process issue. It’s a structural failure. Governance isn’t happening where the work actually happens. When oversight is stuck in paperwork and policies reviewed after the fact, teams bypass it. Not out of malice, just necessity. The tools are there. The pressure is real. So they move. That makes sensitive data exposure easy, model misuse likely, and accountability blurry.
Ericka Watson, CEO of Data Strategy Advisors and former Chief Privacy Officer at Regeneron, summed it up: “Companies still design governance as if decisions moved slowly and centrally. But that’s not how AI is being adopted.” She’s right. We’re operating with rules that assume friction, in a space that thrives on velocity.
If you want governance to work, stop treating it like a final check. It needs to be part of the workflow. Build it into the systems and tools your teams already use. Give them in-the-moment controls, not policies they’re supposed to remember after pressing send. This also means moving beyond just looking at the models. Focus on where your data flows, who’s using the outputs, and what AI features are touching your critical processes.
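To make "in-the-moment controls" concrete, here is a minimal sketch of a pre-send check that runs inside the workflow, at the moment a prompt is about to leave for an AI tool. The function names, patterns, and destination list are illustrative assumptions, not any specific product's API.

```python
# Minimal sketch of an in-workflow, in-the-moment control: a pre-send check
# that runs where the work happens, instead of a policy read after the fact.
# All names (check_before_send, BLOCKED_PATTERNS, approved_destinations) are
# illustrative assumptions.
import re
from dataclasses import dataclass

BLOCKED_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),               # US social security numbers
    "api_key": re.compile(r"\b(sk|key)-[A-Za-z0-9]{16,}\b"),   # credential-like strings
}

@dataclass
class CheckResult:
    allowed: bool
    reasons: list

def check_before_send(prompt: str, destination: str, approved_destinations: set) -> CheckResult:
    """Run at the moment a prompt leaves the workflow, not in a later review."""
    reasons = []
    if destination not in approved_destinations:
        reasons.append(f"destination '{destination}' is not on the approved list")
    for label, pattern in BLOCKED_PATTERNS.items():
        if pattern.search(prompt):
            reasons.append(f"prompt appears to contain {label}")
    return CheckResult(allowed=not reasons, reasons=reasons)

if __name__ == "__main__":
    result = check_before_send(
        prompt="Summarize the claim for SSN 123-45-6789",
        destination="public-chat-tool",
        approved_destinations={"enterprise-llm-gateway"},
    )
    print(result.allowed, result.reasons)
```

The check doesn't replace policy; it puts the policy at the point of use, so the control fires before the data leaves, not after.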
This isn’t abstract. It’s operational hygiene. Either you build in governance early, or you’re cleaning up after a problem that already happened.
Legacy data governance models fail to address the challenges posed by generative AI’s dynamic and output-driven nature
The old way of doing data governance doesn’t scale to generative AI. These tools aren’t just consuming information, they’re creating new data, in real time, based on unpredictable inputs. That breaks the rules most companies rely on: rules built for structured data, static reports, and well-defined pipelines.
Fawad Butt, CEO of Penguin Ai and former Chief Data Officer at both UnitedHealth Group and Kaiser Permanente, put it clearly: “Classic governance was built for systems of record and known analytics pipelines. That world is gone.” He’s not exaggerating. In this new environment, even secure systems can generate harm. AI models can hallucinate, spit out inaccurate results, or worse, biased and non-compliant ones. No intrusion needed. Just a bad prompt and weak guardrails.
The real problem? Most governance systems are focused on outputs. But the biggest risk sits upstream. The prompts, retrieval techniques, context inputs, and external tools AI systems access: these are the new attack surfaces. You won’t find that in a traditional audit trail. And by the time you see something go wrong in the results, the origin of the mistake could be long gone.
So here’s what needs to happen: Stop writing policy documents before you understand how your AI systems behave. Start by defining what’s off-limits. Restrict where high-risk inputs come from. Limit what AI agents can access dynamically. Observe what happens when real users interact with these tools. Use what you see to design controls that actually reflect your risk.
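As one illustration of "defining what's off-limits" and observing real use, here is a hedged sketch of a simple access policy for an AI agent plus a usage log. The policy fields, tool names, and log format are assumptions made for the example, not a standard.

```python
# Minimal sketch of input- and access-side guardrails for a hypothetical agent
# wrapper: restrict where context comes from, limit which tools the agent can
# call, and log real interactions so controls can be tuned from actual usage.
import json
import time

POLICY = {
    "allowed_retrieval_sources": ["internal_wiki", "approved_product_docs"],
    "allowed_tools": ["search_tickets", "draft_reply"],   # no writes, no payments
    "blocked_context_tags": ["customer_pii", "unreleased_financials"],
}

def authorize(action: str, name: str, context_tags: list) -> bool:
    """Check a requested retrieval or tool call against the policy before it runs."""
    if action == "retrieve" and name not in POLICY["allowed_retrieval_sources"]:
        return False
    if action == "tool_call" and name not in POLICY["allowed_tools"]:
        return False
    return not any(tag in POLICY["blocked_context_tags"] for tag in context_tags)

def observe(action: str, name: str, allowed: bool, log_path: str = "ai_usage_log.jsonl"):
    """Append every attempt, allowed or not, so real behavior informs the next control."""
    with open(log_path, "a") as f:
        f.write(json.dumps({"ts": time.time(), "action": action,
                            "name": name, "allowed": allowed}) + "\n")

if __name__ == "__main__":
    ok = authorize("tool_call", "delete_records", context_tags=[])
    observe("tool_call", "delete_records", ok)
    print("allowed" if ok else "blocked")
```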
Business leaders need to shift their mindset. Instead of hoping rules will keep up with change, create flexible guardrails that evolve with how these systems are adopted. This puts you in a stronger position to manage actual risk, not just check compliance boxes.
Governance vulnerabilities are exacerbated by reliance on vendor-embedded AI solutions
AI isn’t just being developed internally, it’s arriving via your vendors, embedded in everyday platforms your teams already use. That’s where governance usually falls apart. When AI shows up inside a SaaS product or enterprise tool, most companies default to vague vendor paperwork, checklists, or generic assurances.
Richa Kaul, CEO of Complyance, works directly with global enterprises dealing with risk. She consistently sees the warning signs: “What we’re seeing is use before governance.” That means products are hitting the workplace before any serious oversight happens. Worse, reviews are often done by large committees with no shared baseline, 10 or 20 people asking different, open-ended questions and hoping for solid answers. Instead, they get what Kaul calls “happy ears”—comforting statements that aren’t backed up by real evidence.
Risk varies by deployment method. A vendor using Azure OpenAI through a secure enterprise interface isn’t the same as one calling ChatGPT directly using a public API. But many review teams treat them as identical. They aren’t.
There’s a simple fix most companies skip: inspect your vendor’s subprocessor list. This is where the details live. Most focus on the cloud provider and miss the AI-specific layers underneath, like which LLM is used, how it’s accessed, and whether customer data is re-used in training. These details affect your exposure to privacy, IP, and compliance risks.
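To show what going one layer deeper than the cloud provider can look like in practice, here is a minimal sketch of a vendor AI review record. The field names and the example vendor are hypothetical; the point is that the LLM provider, access path, and training-data reuse get captured explicitly instead of being left to generic assurances.

```python
# Illustrative record of what a vendor AI review could capture, one layer below
# the cloud provider. Field names and the example vendor are assumptions, not a
# standard schema.
from dataclasses import dataclass, field

@dataclass
class VendorAIReview:
    vendor: str
    feature: str
    cloud_provider: str            # the layer most reviews stop at
    llm_provider: str              # the layer most reviews miss
    access_path: str               # e.g. private enterprise endpoint vs. public API
    customer_data_used_for_training: bool
    subprocessors: list = field(default_factory=list)
    evidence: list = field(default_factory=list)   # contracts, DPAs, audit reports

review = VendorAIReview(
    vendor="ExampleCRM",                     # hypothetical vendor
    feature="AI email drafting",
    cloud_provider="Azure",
    llm_provider="OpenAI via Azure OpenAI",
    access_path="private enterprise endpoint",
    customer_data_used_for_training=False,
    subprocessors=["Azure OpenAI", "ExampleVectorDB Inc."],
    evidence=["DPA v3.2", "SOC 2 Type II report"],
)
print(review.llm_provider, review.customer_data_used_for_training)
```

A structured record like this also gives a large review committee the shared baseline it currently lacks: everyone asks for the same fields, and gaps are visible.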
Executives need to stop treating vendor AI as a second-tier concern. It’s not. It’s a first-class risk domain. Without clear roles, responsibilities, and review criteria for vendor-provided AI features, bad decisions don’t just propagate, they go unnoticed until it’s too late.
Behavioral deficiencies, rather than technological shortcomings alone, lead to recurring AI incidents under pressure
The driver of most AI misuse isn’t technology, it’s behavior. Employees know the tools. They see the upside. They’re under pressure to move fast, hit targets, and deliver results. So they lean on AI wherever they can. Even in sensitive workflows. Even when they shouldn’t.
Asha Palmer, SVP of Compliance Solutions at Skillsoft and former U.S. federal prosecutor, sees this firsthand: “We knew this could happen,” she said. “The real question is: why didn’t we equip people to deal with it before it did?” The motive is rarely malicious. It’s performance-driven. Employees are improvising, usually outside formal governance, because most guidance is too abstract or doesn’t arrive in time.
Blanket bans don’t work. If you take away approved, governable ways to use AI but keep the performance pressure in place, people will use it behind the scenes. That makes oversight impossible. No amount of policy can catch what’s invisible.
You need to train people for real-world conditions, under deadlines, in workflows, when urgency overrides caution. Palmer calls it “moral muscle memory”—the ability to pause, assess risk, and make a better call under pressure. This is different from generic AI literacy. Scenario-based training targets the actual risks teams face day to day.
For regulators and auditors, this matters. They look for evidence that people closest to the risks are getting the right training. One-size-fits-all awareness materials don’t cut it. Training doesn’t need to be perfect, but it needs to be practical and constant.
Executives should focus on building behavioral reflexes in the workforce, not just issuing reminders. If your people only understand responsible AI use in theory, it won’t show up when it matters. And when the next headline surfaces about AI going wrong, it won’t be the model that gets blamed. It’ll be the governance that trained (or didn’t train) the people using it.
Superficial governance measures fail to satisfy auditors and regulators
A document is not governance. A policy is not proof. Many companies assume that because they’ve defined responsible AI principles, they’re ready for audits or regulatory questions. They aren’t. What matters is whether those principles shape decisions. At the point of use. In real business scenarios.
Danny Manimbo, ISO & AI Practice Leader at Schellman, evaluates AI governance systems for a living. What he consistently sees is overconfidence based on paperwork. As he puts it, “Organizations confuse having policies with having governance.” When auditors ask for proof, like a rejected vendor due to AI risk, a delayed deployment, or a decision changed due to governance, most companies can’t show anything. That’s a red flag.
Governance that matters leaves evidence. It impacts timelines. It limits access. It informs which tools get greenlit and which team gets resources. If you can roll out platforms and AI features with no pushback, no risk assessment, and no modifications, then governance isn’t functioning, it’s being bypassed.
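As a small illustration of governance that leaves evidence, here is a sketch of a decision log that records what was delayed, rejected, or modified, and why. The entries and field names are invented for the example; the shape of the record is what matters.

```python
# Minimal sketch of the kind of evidence working governance leaves behind:
# a decision log recording what was delayed, rejected, or modified, and why.
# The sample entries and field names are illustrative only.
import csv
import datetime

DECISIONS = [
    {"date": "2025-03-04", "subject": "Vendor X AI assistant", "decision": "rejected",
     "reason": "customer data reused for model training", "owner": "AI review board"},
    {"date": "2025-04-12", "subject": "Internal copilot rollout", "decision": "delayed",
     "reason": "no monitoring of prompt inputs in place", "owner": "CISO office"},
]

def export_evidence(path: str = "governance_decisions.csv"):
    """Produce an audit-ready export: each row is proof governance changed an outcome."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=DECISIONS[0].keys())
        writer.writeheader()
        writer.writerows(DECISIONS)

if __name__ == "__main__":
    export_evidence()
    print(f"exported {len(DECISIONS)} decisions on {datetime.date.today()}")
```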
Manimbo’s advice is clear: governance must operate as a system, not a standalone compliance task. Working policy frameworks must connect risk management, change control, monitoring, and internal audit, continuously, not occasionally. That’s the difference between checking boxes and building resilience.
For leaders, this means governance should be assessed by action, not presence. Teams should be able to show their decisions were shaped by AI risk logic, not just that a document existed somewhere outlining intentions. A governance approach that never slows something down, blocks something new, or imposes limits isn’t in control of your AI landscape. You are exposed, and you likely won’t see the fallout until it’s too late.
The core challenge in responsible AI deployment is a timing issue
Most governance failures aren’t due to a lack of intelligence or intent. They happen because action comes too late. Controls are applied after AI tools are already embedded, used, and doing work. At that point, it’s hard to trace data lineage, pin down accountability, or reverse risky decisions.
Several experts interviewed said the same thing in different ways: responsible AI isn’t a future program. It’s not something you roll out later when the tech stabilizes. It must start now, and it needs to be operational, continuous, and directly linked to where real business work gets done.
Ericka Watson, CEO of Data Strategy Advisors, was direct: “You can’t govern what you can’t see.” Her point is that companies don’t even know where AI is being used across SaaS platforms or inside teams. Without that visibility, there is no governance, only guesswork.
Fawad Butt, CEO of Penguin Ai, added depth from a data perspective. His view is that AI inventories must capture each system in its business context. The same AI feature deployed in HR versus marketing carries different risks, and treating them the same is a governance failure. The function and the data it touches must guide your level of control.
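A brief sketch of that idea: the same kind of inventory entry, recorded with its business function and the data it touches, lands in a different risk tier for HR than for marketing. The fields and the tiering rule are assumptions for illustration.

```python
# Illustrative AI inventory entries: identical feature types recorded with their
# business context and data sensitivity, so HR and marketing deployments are not
# treated alike. Field names and the tiering rule are assumptions for the sketch.
from dataclasses import dataclass

@dataclass
class AIInventoryEntry:
    feature: str
    business_function: str       # e.g. "HR" vs "marketing"
    data_classes: tuple          # what the feature actually touches
    deployment: str              # internal build, vendor-embedded, SaaS add-on
    risk_tier: str

def assign_risk_tier(data_classes: tuple) -> str:
    """Simple context-based tiering: sensitive data pushes the tier up."""
    sensitive = {"employee_records", "health_data", "customer_pii"}
    return "high" if sensitive.intersection(data_classes) else "standard"

hr_copilot = AIInventoryEntry(
    feature="resume screening assistant",
    business_function="HR",
    data_classes=("employee_records", "resumes"),
    deployment="vendor-embedded",
    risk_tier=assign_risk_tier(("employee_records", "resumes")),
)
marketing_copilot = AIInventoryEntry(
    feature="ad copy generator",
    business_function="marketing",
    data_classes=("public_product_specs",),
    deployment="SaaS add-on",
    risk_tier=assign_risk_tier(("public_product_specs",)),
)
print(hr_copilot.risk_tier, marketing_copilot.risk_tier)   # high standard
```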
Richa Kaul, CEO of Complyance, emphasized that companies often miss the additional subprocessor layers that come with vendor and embedded AI. The only way to understand risk is to force teams to trace the full data path, even if vendors say it’s fine. The surface assurances are often misleading.
Asha Palmer, from Skillsoft, reminded everyone that high-pressure environments don’t disappear. Waiting for people to pause and behave ideally isn’t a strategy. Training must be inserted early and calibrated to real-world use cases. Otherwise, noncompliant behavior becomes the norm.
Danny Manimbo provided the final test: if your AI governance hasn’t delayed, rejected, or constrained any product, then it doesn’t exist in practice. Effective governance changes behavior. It alters timing. It stops things when required, not just theoretically, but operationally.
Leaders need to move fast to embed governance into early stages of adoption. Late-stage oversight is too weak and too slow. The risk isn’t coming, it’s already embedded in your tools and your workflows. If governance isn’t already shaping decisions now, it’ll become a clean-up operation after the damage is done.
Key takeaways for leaders
- Governance is misaligned with AI deployment speed: Leaders should embed governance directly into workflows instead of relying on slow, centralized processes. Static approval mechanisms can’t match the pace or scale of decentralized, SaaS-based AI adoption.
- Legacy data governance is breaking under genAI: Traditional frameworks miss the primary risks, which now lie in AI inputs, not outputs. Executives should focus on monitoring prompts, data sourcing, and dynamic system access in real time.
- Vendor AI is exposing unseen risks: Treat AI features within vendor tools as high-risk components requiring structured, consistent scrutiny. C-suite leaders should demand visibility into subprocessors and tighten standards around third-party AI deployments.
- Employee behavior drives AI misuse under pressure: Compliance failures stem from performance pressure, not ignorance. Blanket bans don’t work; leaders must invest in scenario-based training that reinforces responsible action under real-world stress.
- Policy alone doesn’t satisfy auditors or regulators: Governance only counts if it visibly shapes business decisions, by delaying rollouts, rejecting tools, or redesigning workflows. Executives need to implement systems that leave audit-ready proof of impact.
- Governance delayed is governance failed: Most AI risks stem from oversight arriving too late. Decision-makers should shift governance upstream, ensuring early visibility into AI usage, contextual risk mapping, and proactive control at deployment.


