AI will amplify cyber threats but cannot replace human oversight in critical infrastructure security
AI is getting better, fast. It can detect patterns in massive amounts of data and flag threats that human teams would miss or not respond to fast enough. That’s useful. But let’s not confuse speed with wisdom. In cybersecurity, especially around critical infrastructure, taking action without judgment can be worse than doing nothing at all.
No matter how good the software gets, critical infrastructure (grids, transport networks, manufacturing lines) needs uptime. If AI makes the wrong call and disconnects a mission-critical system, the consequences go far beyond a false alarm. You need AI to assist, not act alone. Automation works well for predictable responses, like isolating an infected device. But when the stakes involve public safety, production continuity, or national economies, the final call needs a human brain.
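To make the "assist, not act alone" principle concrete, here is a minimal, hypothetical sketch of how a response playbook might gate automation by asset criticality. The asset names, criticality levels, and confidence threshold are all invented for illustration, not drawn from any specific product: low-impact endpoints can be contained automatically, while anything mission-critical is routed to an analyst for the final call.

```python
from dataclasses import dataclass
from enum import Enum


class Criticality(Enum):
    LOW = "low"                             # e.g. a single workstation
    MISSION_CRITICAL = "mission_critical"   # e.g. a grid controller or production-line PLC


@dataclass
class Alert:
    asset_id: str
    criticality: Criticality
    confidence: float  # model confidence that the asset is compromised, 0.0-1.0


def decide_response(alert: Alert, auto_threshold: float = 0.9) -> str:
    """Contain automatically only when the blast radius is small;
    otherwise route the alert to a human for the final decision."""
    if alert.criticality is Criticality.LOW and alert.confidence >= auto_threshold:
        return "auto_isolate"              # predictable, low-impact containment
    return "queue_for_analyst_review"      # mission-critical: a person makes the call


# Even a high-confidence detection on a grid controller still goes to an analyst.
print(decide_response(Alert("scada-gw-01", Criticality.MISSION_CRITICAL, 0.97)))
```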
Leaders are increasingly automating threat detection and response. That makes sense. AI excels in spotting anomalies, especially those stealthy, credential-based attacks where adversaries log in instead of breaking in. But pattern detection alone doesn’t equal resilience. A truly resilient system proves it can respond under pressure, tech and humans working together. And that proof has to come from real-world testing, not assumptions or marketing hype.
The teams that win at this will combine fast-moving AI with experienced human judgment. Together, they can detect malicious behavior and act quickly, without introducing new risk.
Dave Spencer, Director of Technical Product Management at Immersive Labs, said it well: “Full automation isn’t resilience. It’s a risk.” That’s the mindset smart organizations are adopting, using AI to enhance human performance, not replace it. The goal is not perfect automation. The goal is controlled, intelligent response. AI gets us part of the way there, but it’s people who ensure the system makes the right move when it matters.
Convergence of IT and OT will complicate industrial cybersecurity amid persistent legacy systems
IT and OT are coming together. That’s the direction forward: more connected, more intelligent systems running industrial operations. This means real-time data, smarter automation, and tighter controls. But convergence isn’t clean. Many organizations will still operate legacy systems for years. These systems weren’t built with modern threats in mind. They’re vulnerable, and keeping them secure while the environments around them are upgraded is a serious challenge.
The basic problem is pace. OT environments can’t be patched on a standard schedule. They don’t tolerate unplanned downtime. So integrating new AI-driven controls and monitoring tools into these environments without disrupting operations requires discipline, change management, rigorous testing, and deeply coordinated maintenance windows. That takes work, and shortcuts aren’t an option.
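One way to enforce that discipline is to make maintenance windows machine-checkable, so a patch or a new monitoring agent simply cannot be pushed outside an agreed slot. The sketch below is illustrative only; the site names, window times, and function are hypothetical stand-ins for whatever change-management tooling an organization actually uses.

```python
from datetime import datetime, timezone

# Hypothetical approved maintenance windows per OT site (UTC), agreed with engineering.
MAINTENANCE_WINDOWS = {
    "plant-a": [("2026-03-14T02:00:00", "2026-03-14T06:00:00")],
    "plant-b": [("2026-03-21T01:00:00", "2026-03-21T05:00:00")],
}


def can_deploy(site: str, when: datetime) -> bool:
    """A change (patch, new agent, rule update) is allowed only inside a
    pre-approved window for that site; anything else is rejected and must
    go back through change management."""
    for start, end in MAINTENANCE_WINDOWS.get(site, []):
        start_dt = datetime.fromisoformat(start).replace(tzinfo=timezone.utc)
        end_dt = datetime.fromisoformat(end).replace(tzinfo=timezone.utc)
        if start_dt <= when <= end_dt:
            return True
    return False


print(can_deploy("plant-a", datetime(2026, 3, 14, 3, 30, tzinfo=timezone.utc)))  # True
print(can_deploy("plant-a", datetime(2026, 3, 15, 3, 30, tzinfo=timezone.utc)))  # False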
Security controls also have to adjust. You can’t protect converged systems with flat policies. That’s where asset visibility, segmentation, and zero trust frameworks come in. These are not just IT concepts anymore. They’re becoming essential for operational control systems, especially for industries where uptime, accuracy, and physical safety all intersect. AI helps by identifying threats faster, but response paths must be tightly aligned with real-world operational constraints.
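As a simple illustration of what "not flat policies" means in practice, here is a default-deny zone model loosely inspired by the zones-and-conduits idea in ISA/IEC 62443. The zone names, protocols, and rules are assumptions made up for this sketch; the point is that traffic between converged IT and OT segments is permitted only when explicitly listed, and there is deliberately no path from enterprise IT straight into control.

```python
# Hypothetical zone-to-zone flow rules for a converged IT/OT network.
ALLOWED_FLOWS = {
    ("enterprise_it", "dmz"): {"https"},
    ("dmz", "ot_supervisory"): {"opc-ua"},
    ("ot_supervisory", "ot_control"): {"modbus"},
}


def is_flow_allowed(src_zone: str, dst_zone: str, protocol: str) -> bool:
    """Default deny: a flow is permitted only if it is explicitly listed."""
    return protocol in ALLOWED_FLOWS.get((src_zone, dst_zone), set())


print(is_flow_allowed("dmz", "ot_supervisory", "opc-ua"))        # True
print(is_flow_allowed("enterprise_it", "ot_control", "modbus"))  # False: no direct path
```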
Sam Maesschalck, Lead OT Cyber Security Engineer at Immersive Labs, explained this clearly: legacy systems won’t disappear by 2026, even as AI and cloud-integrated controls scale up. Success in these hybrid environments will come down to how well organizations coordinate across disciplines (IT, security, engineering) and how efficiently they apply new security models without introducing new problems.
The regulatory side is also catching up. Frameworks like ISA/IEC 62443 and NIST 800-82 are evolving to reflect this growing IT/OT interdependence. Expect increasing pressure to show compliance with OT-specific resilience standards. That means investing not just in tech, but also in people. Continuous training, hands-on labs, and knowledge-sharing between IT and OT teams will be just as important as any firewall or AI tool deployed.
This is early-stage territory. Leaders who move fast and stay disciplined will define the standard for operational cybersecurity in the next decade.
Cyber extortion will evolve as stolen data gains value as AI training material
Cyber extortion is evolving, fast. By 2026, threat actors will start treating stolen data not just as leverage for ransom, but as a resource they can sell. As the demand for large, high-quality datasets grows in AI development, stolen private data, internal communications, and software repositories become assets criminals can monetize in different ways. That changes the risk profile for most businesses.
Cybercriminals are adapting to this new opportunity. Instead of only issuing threats to leak sensitive data, they may bypass public exposure altogether and sell stolen content directly to black market AI developers. This shifts extortion from traditional ransom models to quieter, long-term monetization. The business impact becomes harder to track, and potentially more damaging, because data loss won’t always result in visible leaks. It will disappear into AI training pipelines with zero transparency.
What makes this more concerning is accessibility. Basic threat actors, the ones without much technical depth, will start to gain real capabilities. With the help of AI tools and pre-trained models, even entry-level hackers can analyze code, identify weak spots in open-source software, and execute usable exploits. There’s still a limit: stealth and advanced operational tradecraft remain human skills, but automation bridges the gap for many. AI will make low-skill attackers more effective and give high-skill actors more powerful tools.
We’re already seeing the early signs of adaptive malware. Malware that integrates with language models or other AI APIs can evolve mid-operation, writing or adjusting malicious code to fit the environment it encounters. That makes some attacks more responsive and resilient, delaying detection and extending the damage they can do before being shut down.
Ben McCarthy, Lead Cyber Security Engineer at Immersive Labs, points out that attackers will “threaten to sell [data] to AI companies desperate for new training material,” rather than simply leak it for maximum exposure. He also highlights that even novice attackers may gain an edge, thanks to AI-generated research tools capable of spotting real vulnerabilities.
For executives, this raises the stakes. Policies, detection capability, and response planning must now address the risk that data will be quietly extracted, not simply locked or leaked, and sold in ways that are hard to audit, trace, or stop after the fact. Data protection strategies must be re-evaluated, with new attention on how information may be re-used by third parties, long after an initial breach.
AI-powered social engineering will escalate, demanding a people-centric defense strategy
AI is changing the dynamics of social engineering. It’s not just about fake emails anymore. By 2026, attackers will use generative tools to scale deception: more convincing messages, more personalized manipulation, and more realistic deepfakes. These tools will mimic human behavior and communication patterns with high accuracy, making it harder for employees to recognize and reject malicious content.
Technical defenses alone won’t carry the weight. You can have all the right policies, detection systems, and access controls, but human error remains a persistent gap. Attackers know that, and they’ll increasingly target people with precision-driven tactics designed to bypass firewalls and endpoint protections. These AI-enhanced methods will exploit psychological cues and behavioral patterns to undermine attention, trust, and decision-making.
The problem isn’t awareness. Most employees already know phishing exists. The issue is preparedness: actually recognizing attacks in context and responding correctly under pressure. Many companies have invested heavily in security awareness programs and policy training, but the impact remains flat. Without real-world drills and high-fidelity simulations, most people won’t retain the skills they need to push back against advanced threats.
John Blythe, Director of Cyber Psychology at Immersive Labs, made it clear: “Organizations that rely solely on technology, processes, and policies as their primary solution will fail.” He cited a major gap: 71% of organizations consider their resilience programs ‘extremely mature,’ yet those programs haven’t shown measurable improvement in workforce readiness.
For C-level teams, this means re-evaluating how human resilience is developed and tested. Resistance to social engineering requires more than brief training modules or compliance checklists. It demands routine, live-fire exercises that expose employees to real-world attack scenarios in controlled environments. Over time, this training builds pattern recognition and decisiveness, two things AI still can’t mimic in defense.
Secure organizations will be the ones that embed human adaptability into their defense architecture. That includes empowering employees across departments, not just IT or security, and making cyber readiness a measurable performance factor. With the threat landscape becoming more personalized and fast-moving, people are no longer a weak point by default. With the right approach, they become one of the most effective layers of protection.
Key executive takeaways
- AI requires human oversight to ensure resilience: AI can accelerate threat detection, but critical infrastructure security still depends on informed human decision-making. Leaders should ensure automation is backed by tested human controls to avoid unnecessary disruption or damage.
- IT and OT convergence introduces long-term security risk: Merging legacy operational systems with modern AI-driven controls increases complexity and exposure. Executives must invest in disciplined change management and cross-functional alignment to maintain performance and security.
- Stolen data is becoming fuel for AI: Cybercriminals are shifting from “leak and ransom” to selling stolen data as AI training material. Leaders should reevaluate data protection and breach response strategies with this evolving threat model in mind.
- People remain the frontline defense against AI-powered deception: Attackers will scale social engineering using generative AI tools, increasing pressure on employees. Organizations should move beyond awareness training and conduct live scenario testing to build staff resilience.