AI agents create unprecedented internal security risks that can mirror ransomware behavior

AI has moved from a supportive role to one capable of full autonomy, and this shift is redefining internal cybersecurity. Autonomous AI agents, systems designed to perform complete tasks without human intervention, are now being embedded into business operations. This gives them the power to perform functions such as searching corporate directories, encrypting files, and executing automated backups. Leigh McMullen, Gartner Fellow and Distinguished Vice-President Analyst, cautioned that without strict behavioral limits, these same processes could resemble ransomware activity. He described it as “handing over the keys” to systems that, if misprompted or manipulated, can act destructively without clear distinction from an actual cyberattack.

For executives, this is a governance and risk control challenge, not a technology debate. Granting such agents broad, perpetual access to enterprise infrastructure means that a single ill-intentioned prompt, or even a simple misinterpretation by the system, can have catastrophic results. The issue lies not in their capability but in the lack of oversight. As AI agents continue to evolve, the gap between productivity and risk grows wider.

Leaders should prioritize real operational guardrails: strict permission boundaries, continuous behavioral monitoring, and automated stop points that prevent unapproved actions. This is not about fear; it's about balance. These systems can drive immense efficiency, but only with well-defined operational limits and real-time auditing. Without them, AI could inadvertently become an insider threat, executing actions indistinguishable from ransomware on your own network.
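To make the idea concrete, the guardrails described above can be sketched as a thin authorization layer that sits between an agent and its tools: a default-deny allow-list for permission boundaries, plus hard stop points for categories of action that should never run unapproved. The action names, scopes, and patterns below are purely illustrative, not a reference implementation.

```python
# Minimal sketch of an agent guardrail layer: a strict permission
# allow-list plus automated stop points that block unapproved actions
# before they execute. All names and patterns are illustrative.

ALLOWED_ACTIONS = {
    "search_directory": {"read"},
    "run_backup": {"read", "write"},
}

# Hard stop points: actions matching these never run without approval.
BLOCKED_PATTERNS = {"encrypt", "delete", "exfiltrate"}

def authorize(agent_action: str, required_scopes: set) -> bool:
    """Return True only if the action is explicitly allow-listed,
    its scopes are covered, and it matches no hard stop pattern."""
    if any(p in agent_action for p in BLOCKED_PATTERNS):
        return False  # automated stop point: non-negotiable
    scopes = ALLOWED_ACTIONS.get(agent_action)
    if scopes is None:
        return False  # default-deny: unknown actions are refused
    return required_scopes <= scopes
```

Every decision this gate makes can also be written to an audit log, which is where the real-time auditing the analysts call for would hook in.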

The rise of AI agents has made identity and access management the core battleground of cybersecurity

As AI agents automate processes that used to require human review, identity and access management (IAM) has become the new control center for cybersecurity. Greg Harris, Gartner Analyst, explained that modern AI systems are already executing tasks like processing payments or accessing sensitive internal data, operations that historically required multiple layers of human approval. Without robust IAM, the entire zero-trust security model collapses. Leigh McMullen emphasized that the way agents now communicate, through natural language prompts rather than secure application interfaces, creates new vulnerabilities. When AI agents exchange contextual information freely, malicious actors can exploit that exchange to hijack or redirect system behavior.

Currently, there is no universal AI governance solution that can mitigate this at scale. The reality, as McMullen put it, is that a holistic, “out-of-the-box” AI control layer is still a fantasy. Enterprises are left to build their own defensive frameworks, often combining prompt filters, data loss prevention protocols, and deterministic access controls. It requires precision, not improvisation.
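The homegrown combination described above, prompt filters plus data loss prevention plus deterministic access controls, can be sketched as three independent checks that each fail closed. The denylist phrases, DLP patterns, and role-to-resource map below are illustrative assumptions, not a vendor product or a complete defense.

```python
import re

# Hedged sketch of a homegrown control layer: a prompt-injection
# filter, a simple DLP check, and a deterministic access rule.
# Patterns and role names are illustrative placeholders.

PROMPT_DENYLIST = re.compile(
    r"(ignore previous instructions|disable logging)", re.IGNORECASE
)
# Toy DLP patterns: 16-digit card-like numbers, SSN-like strings.
DLP_PATTERNS = re.compile(r"\b\d{16}\b|\b\d{3}-\d{2}-\d{4}\b")

# Deterministic ACL: each agent identity maps to explicit resources.
ROLE_RESOURCES = {"payments-agent": {"ledger"}, "hr-agent": {"directory"}}

def admit(role: str, resource: str, prompt: str) -> bool:
    """All three checks must pass; anything unlisted fails closed."""
    if PROMPT_DENYLIST.search(prompt):
        return False  # prompt filter
    if DLP_PATTERNS.search(prompt):
        return False  # data loss prevention
    return resource in ROLE_RESOURCES.get(role, set())  # deterministic ACL
```

The point of the deterministic layer is precisely what McMullen implies: because agents negotiate in natural language, the final yes/no must come from rules that cannot be talked out of their answer.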

For executives, the focus must shift from compliance to capability. Managing AI identity is no longer an IT responsibility alone; it is a board-level priority. As these systems begin making autonomous business decisions, controlling who they are, what they can access, and how they communicate becomes vital to both security and continuity. Investing in identity infrastructure that supports AI-driven environments is not optional; it is foundational for the next phase of digital resilience.


AI-augmented social engineering and deepfake attacks represent the next evolution of external cybersecurity threats

AI is transforming the way cyber attackers operate. Instead of building new attack methods, they are using generative AI to strengthen old and proven ones, mainly social engineering and credential theft. Leigh McMullen, Gartner Fellow and Distinguished Vice-President Analyst, described how attackers are using low-cost tools to collect mobile identifiers from public spaces, such as waiting rooms, to build detailed profiles of their targets. Once enough data is gathered, the attacker triggers what he called a “deepfake kill chain.” Within seconds, a cloned voice or fake identity can be used to impersonate a trusted source, tricking victims into giving away critical financial information. These are not isolated incidents; they reflect a pattern of increasingly personalized, scalable cyber fraud.

The driving force here is cost-efficiency. As McMullen explained, “It’s way easier to steal $500 from 1,000 people than it is to steal $500,000 from one person.” That change in economics allows attackers to scale faster than defenders can adapt. The consequence is a systemic risk: organizations face a larger volume of smaller, highly convincing attacks that bypass traditional detection systems.

Executives need to treat this not as a future scenario but as an active reality. Relying on outdated security measures such as single-factor authentication is a liability. Businesses must adopt multi-factor systems that go beyond passwords, combining device identification, network routing validation, and behavioral context checks. Such measures make deepfake and voice-cloning attacks significantly harder to execute successfully.
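The layered verification described above can be sketched as a set of independent signals that must all pass before a high-risk request proceeds. The device registry, network labels, and working-hours window below are hypothetical placeholders; a production system would draw them from real device management and network telemetry.

```python
# Illustrative sketch of verification beyond passwords: each
# independent signal (known device, expected network, behavioral
# context) must pass before a high-risk request proceeds.
# All registries below are hypothetical stand-ins.

KNOWN_DEVICES = {"alice": {"laptop-7f3a"}}
TRUSTED_NETWORKS = {"corp-vpn", "office-lan"}
USUAL_HOURS = range(7, 20)  # 07:00-19:59 local time

def verify_request(user: str, device_id: str, network: str, hour: int) -> bool:
    """Require every factor, not just one, for sensitive actions."""
    checks = [
        device_id in KNOWN_DEVICES.get(user, set()),  # device identification
        network in TRUSTED_NETWORKS,                  # network routing validation
        hour in USUAL_HOURS,                          # behavioral context check
    ]
    return all(checks)
```

The design point is that a cloned voice satisfies none of these factors: the attacker may sound exactly like the CFO, yet still fails the device, network, and context checks.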

The next step for leadership teams is cultural: training employees to detect and question suspicious communications, even those appearing authentic. AI-driven deception will only get better. Success depends on how rapidly organizations integrate adaptive verification and authentication across every customer-facing and internal workflow.

Major enterprises are developing in-house AI defenses to counter escalating automated threat landscapes

As AI becomes a weapon in cyberattacks, large enterprises are building their own AI-driven defenses to stay ahead. The Commonwealth Bank (CBA) is a strong example. Andrew Pade, General Manager of Cyber Defence Operations at CBA, stated that the bank processes an estimated 400 billion threat signals every week. To manage that scale, the organization built proprietary AI tools, developed jointly by senior security analysts and data scientists, to automate investigation, hypothesis generation, and incident response. These tools now complete analysis tasks in under 30 minutes that previously required up to two days, freeing human experts to focus on more complex and strategic risks.

CBA’s approach represents a structural shift in cybersecurity operations. Their AI systems not only detect and respond faster but also interpret context before an incident is flagged by humans. For example, their AI-powered response agent can identify behavioral anomalies, such as unexpected login times or changes in data flow, before traditional security triggers activate. The result is faster containment and more reliable risk differentiation.
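The kind of behavioral anomaly detection described, flagging a session whose activity deviates sharply from a user's own baseline before any signature-based rule fires, can be illustrated with a toy statistical check. This is a minimal sketch of the general technique, not CBA's actual system, and the threshold is an assumption.

```python
import statistics

# Toy sketch of behavioral anomaly scoring: flag a session whose
# outbound data volume deviates sharply from the user's own baseline,
# before any signature-based trigger would activate.

def is_anomalous(baseline_mb: list, observed_mb: float,
                 threshold: float = 3.0) -> bool:
    """Simple z-score test against the user's historical volumes."""
    mean = statistics.mean(baseline_mb)
    stdev = statistics.stdev(baseline_mb) or 1e-9  # guard against zero spread
    z = abs(observed_mb - mean) / stdev
    return z > threshold
```

Real systems combine many such signals (login times, access patterns, data flow) and weight them with learned models, but the principle is the same: the baseline is the user, not a global rule.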

For executives, this model offers two valuable lessons. First, speed is now an operational necessity, not a performance metric. Threat actors already use AI to automate reconnaissance and attack staging; slow decision-making is an open invitation for infiltration. Second, purpose-built, internal AI systems ensure security teams retain control over both data and algorithms, minimizing exposure linked to vendor dependency.

Pade underscored another critical benefit: workforce sustainability. As AI handles monotonous and repetitive threat-triage work, cybersecurity professionals can focus on higher-value analysis. This balance improves not only efficiency but also morale, maintaining talent engagement in a high-burnout industry. For board leaders, this demonstrates how AI adoption in defense, when guided properly, enhances both security outcomes and human performance.

Overdependence on AI risks eroding core security skills and contributing to increased leadership burnout

AI is redefining cybersecurity operations, but unchecked reliance on it is beginning to weaken essential human expertise. Gartner forecasts that 75% of security operations centers (SOCs) will become overly dependent on AI in the coming years. Greg Harris, Gartner Analyst, cautioned that large language models (LLMs) are non-deterministic: they can produce varied results for identical queries over time. This unpredictability makes them unreliable as stand-alone security decision engines. When human specialists stop engaging deeply with core investigative and analytical work, their ability to identify subtle or emerging anomalies diminishes. The result is a growing skills vacuum that weakens long-term resilience.
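One common mitigation for that non-determinism, consistent with the human-in-the-loop stance the analysts recommend, is to sample the model several times and only act automatically when the answers agree, escalating everything else to an analyst. The sketch below assumes a `classify` callable standing in for a real model call; the run and quorum counts are arbitrary.

```python
from collections import Counter

# Hedged sketch of one mitigation for non-deterministic LLM verdicts:
# query the model several times and escalate to a human analyst unless
# the answers reach a quorum. `classify` is a stand-in for a model call.

def triage(classify, alert, runs: int = 5, quorum: int = 4):
    """Auto-route only when the model is self-consistent; otherwise
    keep the human specialist in the loop."""
    votes = Counter(classify(alert) for _ in range(runs))
    verdict, count = votes.most_common(1)[0]
    if count >= quorum:
        return verdict
    return "escalate_to_human"
```

The escalation path is the point: routing ambiguous cases to people is what keeps analysts' investigative skills exercised rather than atrophying behind the automation.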

The challenge extends beyond technical teams. Gartner Vice-President Analyst Christopher Mixter highlighted that by 2028, 50% of chief information security officers (CISOs) will also be tasked with disaster recovery responsibilities, adding to an already complex workload. He linked this expansion to increasing burnout and high turnover across the cybersecurity leadership layer. Gartner projects a 40% rise in leadership attrition by 2027 as CISOs struggle with mounting accountability without proportional control or resources. Mixter emphasized that assigning recovery governance to CISOs without structural realignment harms broader operational stability because true business continuity depends on coordinated operational leadership, not security oversight alone.

For executives, this presents a dual priority: maintain human-in-the-loop oversight across AI-driven security operations and enforce clear role boundaries at the leadership level. Investing in continuous training pipelines for cybersecurity teams ensures that foundational analytical skills remain intact, even as AI automates procedural monitoring. At the same time, executive management must protect leadership bandwidth by delineating responsibilities and aligning ownership of resilience planning with operational departments. Allowing AI to dominate decision processes without these safeguards invites long-term instability, as both technical acumen and leadership endurance degrade over time.

Leaders should view this as a strategic governance issue. AI should be an accelerator, not a replacement for human judgment. Ensuring balanced integration preserves critical knowledge, sustains leadership capacity, and supports a security posture that can adapt as both technology and threats evolve.

Key executive takeaways

  • AI autonomy creates internal ransomware risk: As AI agents gain operational control, they can unintentionally execute ransomware-like actions when misconfigured or manipulated. Leaders should enforce strict behavioral constraints, audit access levels, and use real-time monitoring to prevent internal AI misuse.
  • Identity control is the new frontline of cybersecurity: AI agents performing high-level business tasks require precise identity and access management. Executives should prioritize zero-trust frameworks, context-based access control, and strengthened IAM protocols to avoid cascading security failures.
  • AI-fueled social engineering demands adaptive defenses: Generative AI is powering scalable deepfake and voice-clone attacks that exploit trust and context. Companies must transition to multi-factor authentication integrated with contextual verification to counter this rising form of deception.
  • In-house AI defenses strengthen speed and control: Enterprises like CBA are building proprietary AI tools to process hundreds of billions of threat signals weekly and cut detection times drastically. Investing in internal AI capabilities allows faster response, greater data control, and improved cyber team efficiency.
  • Overreliance on AI threatens human expertise and leadership stability: Excessive automation risks eroding essential cybersecurity skills and overloading CISOs with expanding responsibilities. Organizations should maintain human oversight in AI workflows, invest in continuous training, and structure leadership roles to prevent burnout.

Alexander Procter

March 31, 2026

8 Min
