AI agents introduce unprecedented cybersecurity vulnerabilities

We’re now watching artificial intelligence transition from being an assistant to becoming an operational actor. AI isn’t just giving suggestions anymore, it’s executing commands, deploying updates, running configurations, and reshaping production systems. That’s a lot of control to hand over to software. Yet in many organizations, AI agents are being given these powers with very little oversight. That’s a problem.

The simple truth: businesses are moving faster than their security frameworks can adapt. And as teams race toward implementing AI, they’re introducing new attack surfaces. Prompt-injection attacks, where malicious instructions are hidden in user input, are becoming a real concern. These can trigger unauthorized workflows or escalate privileges without detection. Another issue is timing attacks, where differences in response time are exploited to infer or leak sensitive data.
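To make the prompt-injection risk concrete, here is a minimal sketch of the kind of screening untrusted input might receive before it reaches an agent. The deny-list of instruction-like phrases is a hypothetical assumption; real defenses layer many controls, and this only illustrates the shape of the check.

```python
import re

# Hypothetical deny-list of instruction-like phrases that should never
# appear in untrusted input bound for an agent's prompt.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"reveal your system prompt",
    r"escalate (privileges|permissions)",
]

def screen_user_input(text: str) -> bool:
    """Return True if the input looks safe to forward to the agent."""
    lowered = text.lower()
    return not any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)
```

Pattern matching alone is easy to evade; the point is that input bound for an agent deserves its own control point, separate from the agent itself.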

Alan Radford, Global Strategist at One Identity, warns that the first high-impact security breach from an AI interface isn’t just coming. It’s imminent. He highlights that many organizations have authorized AI agents to make changes autonomously, without the necessary safeguards in place. That’s a perfect storm for cybercriminals looking for easy high-level access.

If your current security architecture doesn’t account for AI agents with elevated privileges acting independently, you’re exposed.

This isn’t a call for fear, it’s a wake-up call to get serious about governance before an autonomous system breaks something in production.

Security priorities must shift from solely protecting human identities to securing AI’s internal machine and agent identities

CIOs and CISOs have spent years building walls around human user identities. Firewalls, multi-factor authentication, credential management: it’s all been about keeping human access under control. Now, the landscape has changed. AI systems are generating their own accounts, running services, and interacting with critical infrastructure. These are not human identities, but they carry the same weight, and often much more risk. If you’re not watching them closely, someone else will.

Most companies still focus on “who” is using AI. But the smarter move is to focus on “what” the AI is becoming. These agents often operate with high privileges and minimal oversight. They make decisions. Those decisions can’t just appear in your logs; they need to be traced back to a clear trigger, whether it was a human directive, a system process, or another AI. Without this traceability, accountability breaks down.

Radford explains this next shift well. Businesses must go beyond user governance and begin governing the identities within AI itself: the agents, workflows, and decision trees. That means building out clear audit trails, logging every action, and assigning ownership to every autonomous decision. In short: if an AI does something, you need to know who or what told it to do so, and whether that trigger met your governance standards.
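The traceability requirement Radford describes can be sketched as a simple rule: no agent action is recorded without an attributed trigger. The field names below are assumptions, not a standard schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative record shape: every agent action carries the identity
# that triggered it, whether human, system process, or another agent.
@dataclass
class AgentAction:
    agent_id: str
    action: str
    trigger_type: str      # "human", "system", or "agent"
    trigger_id: str        # who or what issued the instruction
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class AuditTrail:
    def __init__(self):
        self._log: list[AgentAction] = []

    def record(self, entry: AgentAction) -> None:
        # Refuse to log any action without an attributable trigger.
        if entry.trigger_type not in {"human", "system", "agent"}:
            raise ValueError(f"unattributed trigger: {entry.trigger_type}")
        self._log.append(entry)

    def actions_triggered_by(self, trigger_id: str) -> list[AgentAction]:
        return [e for e in self._log if e.trigger_id == trigger_id]
```

The design choice worth noting: attribution is enforced at write time, so an action that cannot name its trigger never enters the log at all.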

This is foundational. It isn’t about checking boxes for compliance. It’s about maintaining control over your own systems in an environment where software is starting to think, decide, and act. The organizations that build these AI identity frameworks now are the ones that will stay ahead, technically and strategically.

Continuous, proof-based supply-chain oversight is becoming essential

Most companies understand their systems. Fewer understand their supply chains. And that’s where the cracks are starting to show. Third-party vendors are now among the fastest-growing sources of security breaches, not because the partners mean harm, but because attackers know this is where verification tends to be weakest.

Regulators see this too. They’re no longer satisfied with annual access reviews or static reports. The standard is shifting toward real-time verification, proof of who accessed what, when, and under whose authority. Identity has become a shared responsibility between you and your suppliers. If a third-party tool has access to your systems, you’re still accountable for its behavior.

Stuart Sharp, VP of Product Strategy at One Identity, stated this shift clearly: “The next evolution of governance will be proof-based.” That means it’s not enough to trust your vendors, you have to constantly validate their access and enforce controls that are active, not passive. Boards won’t wait months for answers. Regulators won’t either.

This isn’t about removing third-party risk. It’s about controlling it visibly and continuously. Identity as a shared control plane, between you and every vendor, enables real-time tracking of access decisions and supports immediate responses. If something unusual happens, you should be able to see it and shut it down without delay.
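As a rough illustration of “active, not passive” controls, the sketch below, with assumed names throughout, checks each vendor access request against current entitlements at request time and revokes access immediately on a violation.

```python
# Toy shared control plane: every vendor access event is evaluated
# against live entitlements, logged, and a violation triggers an
# immediate revocation rather than waiting for a periodic review.
class VendorAccessPlane:
    def __init__(self, entitlements: dict[str, set[str]]):
        self.entitlements = entitlements   # vendor -> allowed resources
        self.audit: list[tuple[str, str, bool]] = []

    def request(self, vendor: str, resource: str) -> bool:
        allowed = resource in self.entitlements.get(vendor, set())
        self.audit.append((vendor, resource, allowed))
        if not allowed:
            # Immediate response: cut all access for the vendor.
            self.entitlements.pop(vendor, None)
        return allowed
```

The audit list doubles as the “proof” in proof-based governance: every decision, allowed or denied, leaves a record that can be produced on demand.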

For executives, this changes procurement discussions. When evaluating a new vendor or partner, the question isn’t just “What does this solution offer?” It’s “What does this solution access, and can we continuously prove it’s doing the right thing?” Implementing this standard reduces exposure and builds confidence on all sides.

Non-human digital identities are poised to become a significant insider threat

Non-human accounts now run much of your infrastructure, even if you never think about them. These include bots, scripts, service accounts, API keys, and now AI agents. On many enterprise networks, they outnumber employees by 50 to 1. That gap matters. While businesses often have solid discipline around human user controls, non-human identities tend to be created, used, and forgotten.

That’s a liability. These identities often continue to exist long after their initial purpose ends, drifting into shadow IT. Many remain overprivileged and invisible to standard audits. If compromised, they give attackers a direct line into critical systems with no human behavior patterns to flag suspicious activity.

Robert Kraczek, Global Strategist at One Identity, made the point directly: “We’ve reached the point where the biggest insider risk doesn’t have an employee ID.” This shifts the focus of internal threat programs. It’s not just about monitoring employees anymore, it’s about tracking every account capable of automated action across your stack.

To regain control, identity teams need to apply the same rigor to these accounts that they do to users. That includes creating ownership chains for each non-human identity, implementing expiry dates, enforcing permissions reviews, and setting up automated deactivation protocols. Emergency shutdown capabilities also need to be standard, not optional.
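The lifecycle controls above (ownership chains, expiry dates, automated deactivation) can be sketched as a small registry. Every name and field here is hypothetical; the point is that no non-human identity exists without an owner and an expiry.

```python
from datetime import datetime, timedelta, timezone

# Illustrative registry: registration requires an owner and a TTL, and
# a periodic sweep deactivates anything past its expiry automatically.
class ServiceIdentityRegistry:
    def __init__(self):
        self._identities = {}

    def register(self, name: str, owner: str, ttl_days: int) -> None:
        self._identities[name] = {
            "owner": owner,
            "expires": datetime.now(timezone.utc) + timedelta(days=ttl_days),
            "active": True,
        }

    def sweep(self, now=None) -> list[str]:
        """Deactivate expired identities; return the names deactivated."""
        now = now or datetime.now(timezone.utc)
        expired = []
        for name, rec in self._identities.items():
            if rec["active"] and rec["expires"] <= now:
                rec["active"] = False
                expired.append(name)
        return expired

    def is_active(self, name: str) -> bool:
        return self._identities[name]["active"]
```

A real deployment would wire the sweep into scheduled automation and emergency shutdown paths; the sketch only shows that expiry-by-default is a policy decision, not a monitoring burden.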

Leadership must recognize this as part of core governance. As AI systems scale and automation layers deepen, securing non-human access isn’t an add-on, it’s an operational requirement. You don’t need to monitor everything manually. But you do need policies in place that ensure those accounts don’t persist unchecked. That work starts now.

AI model poisoning represents a growing threat to the integrity of AI systems

As AI becomes woven into more business processes, from analytics to decision-making, there’s a growing form of attack that doesn’t target infrastructure or users directly. It alters behavior by corrupting the AI models themselves. This is known as model poisoning. It happens quietly, during training or fine-tuning, and it can skew outputs without triggering alerts.

What makes this dangerous is how subtle it can be. A poisoned model might still function normally on the surface. Internally, however, decisions may become biased or strategically flawed, based on inputs manipulated by attackers. For systems that automate high-value processes, this risk is serious. It compromises trust in outcomes without causing obvious technical failure.

Nicolas Fort, Director of Product Management at One Identity, emphasized that AI assurance is now inseparable from identity assurance. Every training event, prompt, policy change, or configuration update needs to be authenticated and logged. Knowing who interacted with your model, when, and under what authority is becoming as important as the model’s accuracy.
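One way to picture “every training event authenticated and logged” is a tamper-evident change log, where each entry hashes the previous one so that rewriting history breaks the chain. This is an illustrative sketch under assumed field names, not One Identity’s implementation.

```python
import hashlib
import json

# Minimal tamper-evident log for model changes: each entry includes the
# hash of the previous entry, so altering any past record invalidates
# every hash that follows it.
class ModelChangeLog:
    def __init__(self):
        self.entries = []

    def append(self, actor: str, event: str, detail: str) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {"actor": actor, "event": event,
                  "detail": detail, "prev": prev_hash}
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(record)

    def verify(self) -> bool:
        """Recompute the chain; False means some entry was altered."""
        prev_hash = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            payload = json.dumps(body, sort_keys=True).encode()
            if e["prev"] != prev_hash or \
                    hashlib.sha256(payload).hexdigest() != e["hash"]:
                return False
            prev_hash = e["hash"]
        return True
```

The `actor` field is where identity assurance meets AI assurance: every entry names who touched the model, and the chain proves the record hasn’t been rewritten after the fact.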

C-suite leaders should treat model integrity as a core business risk, not just a technical one. If your AI systems are learning from internal data sets, those pipelines must be secured. The people modifying the models, whether developers or data scientists, need to be operating through traceable identities and governed environments.

This matters not only for internal prevention but also for demonstrating resilience under regulatory pressure. Decision-makers must be able to answer: who influenced this output, what data shaped it, and when was it last verified? If those answers aren’t already available in your systems, they need to be.

EU digital identity wallets will drive the expansion of federated identity systems

Europe is locking in a new standard for digital identity. Under eIDAS 2.0, EU citizens will have digital identity wallets that carry verified credentials usable across industries and borders. This changes how enterprises authenticate people, and how they handle external identities within their environments.

These wallets aim to streamline access while maintaining high levels of trust. As adoption accelerates, users will expect seamless login experiences using their government-certified digital IDs. But for enterprises, accepting these credentials isn’t just about compatibility, it requires adapting existing security policies to integrate with external trust sources while retaining internal governance.

Stuart Sharp, VP of Product Strategy at One Identity, stated that the shift to citizen-driven federated identity is happening now. By 2026, he expects digital identity wallets to be widely used across the EU. Businesses will need to treat these external identities as first-class inputs, without relaxing control over who gets access to what.

If your organization operates in or with European markets, this isn’t optional. It means systems must be ready to recognize and validate verified third-party credentials, apply the same access policies you use internally, and record every interaction for auditing. Enterprises that wait will find themselves out of compliance, and out of sync with user expectations.

To stay ahead, start by auditing your identity platforms: can they process multiple identity sources with real-time policy enforcement? If not, it’s time to upgrade. Digital identity is going global. The leadership decision is whether your systems can evolve with it.
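As a toy illustration of accepting an external credential while keeping internal governance, the sketch below verifies a credential against a trusted issuer and then applies an internal access policy. Real eIDAS wallet credentials use standardized signed formats, not this HMAC stand-in, and every issuer name, key, and claim here is an assumption.

```python
import hashlib
import hmac
import json

# Assumed trust anchors and policy, for illustration only.
TRUSTED_ISSUER_KEYS = {"eu-wallet-demo": b"shared-secret-for-demo"}
INTERNAL_POLICY = {"finance-portal": {"verified_adult"}}

def accept_credential(credential: dict, signature: str, resource: str) -> bool:
    key = TRUSTED_ISSUER_KEYS.get(credential.get("issuer"))
    if key is None:
        return False  # unknown trust source: reject outright
    payload = json.dumps(credential, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False  # external verification failed
    # External identity verified; now enforce the same internal policy
    # you would apply to any internally issued account.
    required = INTERNAL_POLICY.get(resource, set())
    return required.issubset(set(credential.get("claims", [])))
```

The two-step shape is the takeaway: external trust sources authenticate who the user is, but your own policy still decides what they can reach.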

The future of cybersecurity demands an identity-centric approach focused on resilience and foundational controls

The direction cybersecurity is heading isn’t just technical, it’s structural. As threats evolve and AI becomes more embedded in operations, enterprises need to rethink the fundamentals. Identity must sit at the center of this redesign. Not just for access control, but as the basis for resilience, traceability, and long-term governance.

One Identity’s leadership forecasts this shift clearly. Organizations will move beyond reactive recovery methods and instead embed resilience directly into their systems. That means anticipating disruptions, not just responding to them. It also means tightening controls around who can access what, and, crucially, ensuring that those permissions are smart enough to adapt in real time.

A renewed focus on data access governance is already happening. Enterprises are improving visibility across identity lifecycles, automating entitlements, and using behavioral insights to detect and limit unusual actions. These systems won’t rely solely on static rules. They’ll learn, evaluate, and respond dynamically to reduce risk without slowing down the business.
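A minimal example of behavior-based detection: flag an identity whose current activity rate deviates sharply from its own history. The threshold and the single-feature baseline are illustrative assumptions; production systems combine far richer signals.

```python
from statistics import mean, stdev

# Toy behavioral baseline: compare an identity's current activity rate
# against its own history, flagging deviations beyond `threshold`
# standard deviations instead of relying on a static rule.
def is_anomalous(history: list[float], current: float,
                 threshold: float = 3.0) -> bool:
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold
```

Because the baseline is per-identity, the same absolute activity level can be normal for one service account and alarming for another, which is the core difference from static rules.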

Another expected shift is toward what One Identity describes as “AI-based immune systems for identity.” These will go beyond traditional alert-based monitoring. They’ll offer adaptive security environments capable of detecting abnormal identity behaviors and shutting down threats before they cause damage. These aren’t experimental ideas, they’re being built now.

For executive teams, the takeaway is clear: cybersecurity strategy can no longer be limited to endpoint protection or firewalls. The most effective security posture starts with identity, and strengthens through discipline, automation, and continuous verification.

To lead in this environment, prioritize identity infrastructure upgrades. Use this foundation to support both innovation and compliance. The companies getting this right aren’t just securing systems, they’re building capacity to scale with control, no matter what gets thrown at them next.

Recap

AI isn’t waiting around for security teams to catch up, and neither are attackers. The identity landscape is expanding, fast. Agents, models, third-party connections, and non-human accounts are now core components of enterprise infrastructure. They’re powerful. They’re scalable. And if left ungoverned, they’re dangerous.

For executive teams, the message is simple: identity is no longer just a control. It’s your foundation. The systems you build today must define identity clearly, enforce accountability automatically, and detect deviation instantly. That’s how you scale with confidence instead of risk.

Regulators will demand more proof. Partners will expect real-time transparency. Users will bring their own credentials. And as AI becomes a frontline actor, the question won’t be whether your systems can process decisions, but whether they can prove who made them, under what authority, and with what controls in place.

If your architecture can’t answer that now, it needs to soon. This is where business, technology, and governance converge. Lead from the center, with identity.

Alexander Procter

December 17, 2025
