Securing AI projects demands the same rigor as other critical IT assets
Artificial Intelligence is powerful, no question. But if you’re deploying AI without thinking deeply about security from day one, you’re doing it wrong. Most companies are rushing to integrate AI into operations, products, and workflows, but speed without guardrails is risk.
AI systems don’t behave like traditional software. Once trained, they can adapt, and in a live environment, they can be attacked in ways legacy systems aren’t designed to handle. Think model theft, manipulation of learned behavior, and data poisoning, where attackers corrupt your training data to push the AI toward false conclusions. If you’re not continuously monitoring these systems and securing the full pipeline, from raw data through to the deployed model, you’re leaving the gate wide open.
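To make the pipeline point concrete, here's a minimal sketch, in Python, of one guardrail: an integrity gate that refuses to train if any approved data file has changed since sign-off. The manifest format and file paths are hypothetical; a real pipeline would source trusted digests from its data governance tooling.

```python
# Minimal sketch of a pre-training integrity gate, assuming a manifest of
# approved SHA-256 digests ({"relative/path": "hex digest"}) captured when
# the dataset was signed off. Paths and the manifest format are hypothetical.
import hashlib
import json
from pathlib import Path

def sha256(path: Path) -> str:
    """Stream the file through SHA-256 so large datasets never load fully into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

def changed_files(data_root: Path, manifest_path: Path) -> list[str]:
    """Return every file whose current digest no longer matches the approved manifest."""
    manifest = json.loads(manifest_path.read_text())
    return [rel for rel, expected in manifest.items()
            if sha256(data_root / rel) != expected]

if __name__ == "__main__":
    tampered = changed_files(Path("data"), Path("data/manifest.json"))
    if tampered:
        raise SystemExit(f"Refusing to train; files changed since approval: {tampered}")
```

Hash checks won't catch poisoning that happens upstream of sign-off, but they close the simplest gap: silent modification between approval and training.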
Mick McCluney, ANZ Field CTO at Trend Micro, said it best: AI systems demand the same discipline and control as any high-priority digital asset. This means locking down access to sensitive data sets, enforcing role-based control over models, and making sure data flowing into your AI stays clean and trustworthy. It also means aligning governance with emerging regulations. The legal landscape is shifting fast, and if your AI lacks explainability or accountability, that gap becomes a legal and reputational liability.
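As a rough illustration of what role-based control over models can mean in practice, here's a deliberately minimal Python sketch. The role names, actions, and model identifiers are all hypothetical; in production this logic belongs in your IAM layer or API gateway, not in application code.

```python
# Minimal sketch of deny-by-default, role-based control over model actions.
# Roles, actions, and model identifiers are hypothetical illustrations.
from dataclasses import dataclass

ROLE_PERMISSIONS = {
    "ml-engineer":  {"train", "evaluate", "deploy"},
    "data-analyst": {"evaluate"},
    "service":      {"invoke"},
}

@dataclass(frozen=True)
class Principal:
    name: str
    role: str

def authorize(principal: Principal, action: str, model_id: str) -> None:
    """Raise unless the principal's role explicitly grants the action."""
    allowed = ROLE_PERMISSIONS.get(principal.role, set())  # unknown role gets nothing
    if action not in allowed:
        raise PermissionError(
            f"{principal.name} ({principal.role}) may not '{action}' {model_id}"
        )

authorize(Principal("etl-bot", "service"), "invoke", "fraud-model-v3")  # permitted
try:
    authorize(Principal("etl-bot", "service"), "deploy", "fraud-model-v3")
except PermissionError as err:
    print(err)  # denied, and the refusal itself is worth logging
```

The design choice that matters is the default: an identity with no matching role gets nothing, so new bots and services start with zero model access until someone grants it.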
Security is what ensures innovation doesn’t create brand-new problems. When you embed cybersecurity into your AI projects from the first line of code, you gain a more robust, resilient system. And more importantly, you build trust, with customers, regulators, and internal stakeholders.
C-suite execs need to ask a basic question: if AI becomes a mission-critical capability, are we treating it that way from an operational risk standpoint? Security should be built in, automated, continuous, and measurable. Otherwise, you scale risk with every new AI feature you deploy.
AI can expand what’s possible in your company. But without security embedded into the process, you’re solving problems on one side and creating bigger ones on the other.
Non-human identities now dominate IT environments, creating governance and security gaps
Right now, most companies are running digital ecosystems with more machine identities than human ones. That shift isn't small, it's structural. Service accounts, APIs, bots, autonomous AI agents: these operate 24/7 across cloud and hybrid platforms. They authenticate just like people do, make decisions, pull data, trigger actions, and in most cases, do it faster and at higher scale than any human can.
According to Paul Walker, Field Strategist at Omada, the ratio of non-human to human identities in enterprise environments is now around 82:1. That's not just a matter of scale, it challenges how identity gets defined, managed, and secured. These digital actors are generated rapidly, often automatically, and they don't follow typical operational lifecycles. They don't go through onboarding. No HR system tracks them. And they persist until someone actively decommissions them, which often never happens.
The tech stack most companies are using, especially older identity and access management (IAM) systems, was never built for this type of environment. These tools were designed for predictable, human-centered identity structures. They aren’t equipped to monitor, govern, or audit autonomous, rapidly multiplying digital identities. That creates blind spots and expands the attack surface.
For C-suite leaders, this isn’t just a problem for IT teams. It directly impacts your ability to control internal risk, ensure data integrity, and comply with regulations like the NIS2 Directive or the Digital Operational Resilience Act (DORA). Attackers know machine identities are often overlooked. If those identities connect to high-value systems or hold elevated access privileges, the risk multiplies.
Securing the organization means understanding everyone, and everything, that touches your data. That includes bots, containers, automated workflows, and AI models. Governance frameworks need to account for these identities in real time. You need lifecycle controls in place, preferably automated, that track creation, access, behavior, and removal of all machine entities.
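A lifecycle check of this kind doesn't have to be elaborate. The sketch below, with a hypothetical inventory format, identity names, and thresholds, flags machine identities that lack an accountable owner or haven't been used within a retention window; in a real deployment the data would come from your IAM system or cloud credential reports, and the check would run on a schedule.

```python
# Minimal sketch of an automated lifecycle check over a machine-identity
# inventory. The inventory shape, thresholds, and identity names are
# hypothetical; real data would come from IAM or cloud credential reports.
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(days=90)

inventory = [
    {"id": "svc-billing-export", "owner": "finance-platform",
     "last_used": datetime(2025, 1, 4, tzinfo=timezone.utc)},
    {"id": "ci-deploy-bot", "owner": None,
     "last_used": datetime(2023, 6, 30, tzinfo=timezone.utc)},
]

now = datetime.now(timezone.utc)
for identity in inventory:
    findings = []
    if identity["owner"] is None:
        findings.append("no accountable owner")          # governance gap
    if now - identity["last_used"] > STALE_AFTER:
        findings.append("unused past retention window")  # decommission candidate
    if findings:
        print(f"{identity['id']}: {'; '.join(findings)}")
```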
Current regulatory frameworks are intensifying cybersecurity demands for AI and machine identities
Governments are stepping in, fast. Frameworks like DORA and the NIS2 Directive aren't vague suggestions. They're legal requirements with hard expectations. If your company runs digital infrastructure, and especially if it uses AI or automated systems, these regulations apply to you. And if your systems interact with European data or services, you're already in scope, whether you've prepared or not.
These mandates demand clear accountability, not just for human users, but for every identity with access to key systems and data. That includes machine identities, APIs, bots, AI agents, every digital actor that moves or processes information. The burden of proof is shifting. Leaders must be able to show how these identities are created, governed, and audited. If you can’t demonstrate that control, regulators will consider that a failure in compliance.
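One way to make that control demonstrable is tamper-evident audit records for identity lifecycle events. The Python sketch below is a minimal, hypothetical illustration, not a prescribed format: each record carries the hash of the previous one, so the chain itself shows whether the evidence has been quietly altered.

```python
# Minimal sketch of tamper-evident audit records for identity lifecycle
# events. Field names and the chaining scheme are hypothetical illustrations
# of the kind of evidence a DORA or NIS2 review asks for.
import hashlib
import json
from datetime import datetime, timezone

def audit_event(identity: str, event: str, actor: str, prev_hash: str) -> dict:
    """Build one audit record whose hash covers its content plus the previous hash."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "identity": identity,
        "event": event,      # e.g. "created", "rotated", "decommissioned"
        "actor": actor,      # the person or pipeline that triggered it
        "prev": prev_hash,   # chaining makes silent edits detectable
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    return record

e1 = audit_event("svc-invoice-sync", "created", "terraform-pipeline", "genesis")
e2 = audit_event("svc-invoice-sync", "rotated", "secrets-manager", e1["hash"])
print(e2["prev"] == e1["hash"])  # True: the chain links creation to rotation
```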
This is driving companies to reevaluate their risk management strategies. The key question decision-makers need to ask is: are we managing risk in line with how our systems actually work today? AI systems and automation change how data flows, how systems behave, and how threats can evolve. Traditional governance structures often ignore this complexity. That’s no longer acceptable.
The gap between innovation and regulation is closing. Compliance teams and security teams need access to the same information. Visibility, traceability, and accountability need to be built into every identity, human and non-human. When those elements are missing, you’re not just risking a breach. You’re risking operational disruption, reputational damage, and regulatory penalties.
This isn’t about slowing down AI adoption. It’s about controlling the environment you build it in. The technology is already moving. Compliance is now the baseline, and security leadership has to be fully aligned with regulatory context. That’s how you keep your systems moving forward without creating friction downstream.
Identity governance must evolve beyond the scope of human users
The idea that identity management is just about people is outdated. Today, digital ecosystems are run by both humans and non-human entities: bots, APIs, containerized workloads, AI agents. These entities authenticate into systems, execute processes, access data, and make decisions. If your governance model only tracks human identities, you're only seeing a fraction of what's happening inside your environment.
Paul Walker, Field Strategist at Omada, highlighted that this shift isn't temporary. The volume and importance of machine identities are growing rapidly, and the way organizations secure and manage them needs to match that scale. Traditional identity governance systems weren't built for this. They rely on predictable access patterns, manual lifecycle processes, and human oversight. That approach fails when you're dealing with thousands, or even millions, of automated entities acting in real time.
Visibility is the first step forward. Leaders need operational dashboards that surface all identity types, human and non-human, with clear relationships to the systems and data they touch. That means integrating identity into security architecture at the design level, not as an afterthought. You can’t secure what you can’t see, and without a unified framework, environments stay fragmented and exposed.
This shift also changes how we think about accountability. When AI-driven systems act semi-autonomously, they need traceable activity logs, credential lifecycle controls, and policy-based access management that adapts to actual behavior. These are core to reducing risk, stopping lateral movement inside networks, and satisfying both internal governance and external compliance standards.
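A minimal sketch of what "policy-based access management that adapts to actual behavior" can mean: each request from a machine identity is compared against a baseline of that identity's normal activity. The baseline structure, identity names, and thresholds below are hypothetical; real baselines would be learned from observed behavior rather than hand-written.

```python
# Minimal sketch of behavior-aware access decisions for machine identities:
# each request is checked against a baseline of that identity's normal
# activity. Baseline structure, names, and thresholds are hypothetical.
BASELINE = {
    "report-generator": {
        "systems": {"warehouse", "reporting-api"},  # where it normally operates
        "max_rows_per_call": 50_000,                # its normal data volume
    },
}

def evaluate(identity: str, system: str, rows: int) -> str:
    profile = BASELINE.get(identity)
    if profile is None:
        return "deny: unknown identity"        # no baseline, no access
    if system not in profile["systems"]:
        return "deny: outside normal reach"    # possible lateral movement
    if rows > profile["max_rows_per_call"]:
        return "flag: volume anomaly, hold for review"
    return "allow"

print(evaluate("report-generator", "warehouse", 1_200))  # allow
print(evaluate("report-generator", "hr-database", 10))   # deny: outside normal reach
```

The point of the "deny: outside normal reach" branch is exactly the lateral-movement risk described above: an identity behaving outside its baseline is stopped before it can pivot.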
For C-suite leaders, this is a strategic issue. Security, compliance, productivity, they all share the same bottleneck if identity governance doesn’t scale. The focus now should be on modernizing IAM systems to bring machine identities into full scope, automating lifecycle processes, and closing the visibility gap across environments.
This is where resilience starts: by ensuring all identities are known, monitored, and governed. That's what builds trust across the company, with customers, and with regulators. And that's what sets the foundation for secure growth in an increasingly automated world.
Key takeaways for decision-makers
- Treat AI like a core business asset: AI systems introduce new risks, like model theft and data poisoning, that standard controls miss. Leaders should apply the same security discipline to AI that they use for other critical infrastructure.
- Prioritize governance of non-human identities: With machine identities outnumbering humans 82 to 1, outdated IAM systems create blind spots. Executives must invest in lifecycle management and monitoring for digital actors like bots, APIs, and AI agents.
- Align cybersecurity with regulatory pressure: Laws like DORA and NIS2 demand accountability for all identities, not just people. Companies need traceable, auditable security frameworks that meet current compliance expectations.
- Modernize identity strategy to match system scale: Identity governance must expand to cover autonomous and automated agents operating across cloud environments. Leaders should drive the shift toward unified, scalable identity control systems to reduce risk and enhance resilience.