Human-linked cybersecurity incidents are surging alongside AI integration

We’re seeing something important here. As companies embrace AI for productivity, automation, and competitive advantage, a new risk vector is opening up: human behavior. KnowBe4 reports a 90% increase in cyber incidents involving human elements in just the past year. That jump isn’t random. It reflects how AI systems are changing workflows, tools, and the way people make daily decisions.

Most security problems still originate from what people do: clicking malicious links, trusting fraudulent email messages, or misconfiguring systems. Layer AI onto that, and the risk compounds. Employees increasingly work alongside AI systems, but many don’t understand how those systems operate or what their limitations are. The result: misjudgments, misplaced trust, and overreliance on algorithms.

For leadership, this transformation signals a new mandate. Traditional cybersecurity focused on hardware, firewalls, and software patches. That still matters, but it’s no longer enough. You now need to manage how your employees interact with AI, because those interactions can create new vulnerabilities. Behavioral risk is no longer a soft issue; it’s core security infrastructure.

There’s upside in solving this. The organizations that respond effectively (educating teams, updating policies, and securing the human-AI interface) stand to minimize downtime, avoid reputational damage, and prevent massive financial losses. Done right, managing human cybersecurity risk creates trust across your systems and teams.

Email is still the weakest link in security architecture

Email isn’t new. It’s probably the oldest digital tool most of your workforce uses every day. Still, it remains the top vector for attacks. KnowBe4’s latest data shows a 57% rise in email-related incidents, with 64% of organizations hit by external email-based threats.

The numbers make the situation clear. Attackers aren’t ignoring complexity; they’re exploiting familiarity. Email gives them direct access to your people in a format those people trust. Attackers don’t always need advanced malware when social engineering through a simple, well-worded email still defeats most defenses.

Phishing has become a human issue. And with AI-generated content now capable of producing near-perfect imitations of trusted messaging, it’s becoming even harder for most people to tell what’s real from what’s fake.

C-suite leaders need to internalize this: even if you’ve invested in high-end infrastructure and endpoint protection, if your people aren’t trained to question what lands in their inbox, you’ve left a door wide open. Email defense today is less about spam filters and more about awareness, alertness, and ongoing education.

There’s no magic solution here; it’s about consistency. Reinforce the basics. Make threat recognition part of your operational routine. The companies that win in this space don’t get comfortable; they stay sharp. Email is simple, but the cost of overlooking it is not.

Human error and insider threats remain persistent risks

Despite advances in tooling and monitoring, human mistakes still account for much of what goes wrong in cybersecurity. KnowBe4 found that 90% of organizations experienced breaches caused by employee error. Not flaws in code; just simple, preventable mistakes made by people who weren’t trying to cause harm.

Add to that the insider threat, where someone inside your organization deliberately causes damage or leaks sensitive data, and you’re dealing with risks that don’t depend on hackers finding technical exploits. KnowBe4 reported that 36% of organizations faced incidents involving malicious insiders. These are real people with credentials and access, acting against your business.

This isn’t about blaming employees. It’s about recognizing a functional reality: people create risk, whether by accident or intent. Organizations need a framework that treats employee behavior as something measurable and manageable, just as we treat system availability or performance.

The solution starts with visibility: you need to know where errors typically occur and why. Then you build a culture that doesn’t ignore mistakes. People need to be trained, not once, but continuously. Policies need to prevent excessive access while still letting people do their jobs. And insider threat monitoring must evolve without becoming invasive or purely reactive.
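
One way to make “excessive access” measurable is a periodic entitlement review: compare what each account is allowed to do against what it actually did. Below is a minimal Python sketch of that idea; the AccessRecord fields, the 90-day window, and the permission names are illustrative assumptions, not any particular IAM product’s API.

```python
from dataclasses import dataclass

@dataclass
class AccessRecord:
    user: str
    granted: set     # permissions the account currently holds
    used_90d: set    # permissions actually exercised in the review window

def unused_privileges(records):
    """Map each user to permissions granted but never used in the window.

    Large gaps are revocation candidates under a least-privilege policy.
    """
    return {
        r.user: r.granted - r.used_90d
        for r in records
        if r.granted - r.used_90d
    }

# Toy review: alice holds billing-admin rights she never uses.
records = [
    AccessRecord("alice", {"read_crm", "export_crm", "admin_billing"}, {"read_crm"}),
    AccessRecord("bob", {"read_crm"}, {"read_crm"}),
]
for user, extra in unused_privileges(records).items():
    print(f"{user}: candidate to revoke {sorted(extra)}")
```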

For business leaders, this is about controlling long-term cost and exposure. Data breaches hurt more than brand reputation; they cut into your operating margin. Managing human error and insider threats effectively means you retain control over your risk profile without slowing down execution.

Companies need to invest more in the human layer of cybersecurity

High-performing organizations invest where the impact is, and right now the gap between human risk and budget allocation is too wide. In KnowBe4’s survey, 97% of cybersecurity leaders confirmed they need more funding specifically to manage behavioral vulnerabilities.

This isn’t a case where more tech equals better outcomes. You can have the best detection systems, but if the people interacting with them aren’t equipped to recognize threats, you’re exposed. Most organizations still spend disproportionately on technical controls while underinvesting in training, change management, and behavior-focused risk analysis.

The equation is changing because the risk landscape is changing. AI is accelerating the pace: threats arrive faster, they’re more convincing, and they evolve in real time. That requires organizations to be just as agile in how they train, communicate, and reinforce secure behavior across departments.

The value in investing here is clear. It doesn’t just reduce breach frequency; it increases resilience. When employees know what to look for and how to respond, your recovery speed improves, operational disruptions stay limited, and risk mitigation becomes part of daily business practice.

If you’re serious about competitive advantage and long-term trust with customers and partners, this is a non-negotiable priority. Budget decisions in cybersecurity should reflect where the real vulnerabilities are, and right now, that’s the human layer.

AI-powered threats are growing, and they’re smarter, faster, and harder to detect

AI isn’t just driving business transformation. It’s also scaling cyber threats in ways most companies haven’t faced before. The KnowBe4 report shows a 43% rise in AI-related incidents over the past year. These aren’t isolated events; they’re industry-wide, and most leadership teams aren’t fully prepared.

Deepfakes, in particular, are gaining traction in attack strategies. Nearly one in three organizations (32%) reported an increase in deepfake-related threats. These manipulations aren’t obvious. They’re crafted to deceive even trained employees by mimicking voices, faces, and documents with unsettling accuracy.

At the same time, AI is being used to build phishing content that adapts in real time. These attacks aren’t limited by volume or language. They’re scalable and hyper-targeted, making them more likely to bypass traditional detection filters.

KnowBe4’s data confirms that 45% of cybersecurity leaders now consider AI-powered threats their number one risk. That shift in perception matters. The challenge isn’t just about defending against these tools; it’s about understanding how your security model must evolve when the tools attacking your systems are learning and iterating faster than ever before.

For you as an executive, this means your security protocols must move from static to adaptive. Your teams need the budget and flexibility to test and deploy defenses that evolve with the threat landscape. That includes investing in detection systems capable of identifying imitation, manipulation, or anomalous machine activity before reputational and financial damage occurs.

Employee dissatisfaction with AI policy is fueling shadow AI usage

There’s a growing gap between how companies roll out AI policies and how employees actually use AI. According to KnowBe4, 56% of employees are unhappy with their organization’s approach to AI tools. That’s not a minor issue; it opens a direct path to shadow AI activity.

Shadow AI is what happens when employees use external AI tools without approval or oversight. It’s a security problem hiding inside a productivity boost. People turn to these tools because they’re fast and useful, but when they adopt them informally, security teams lose visibility and control.

Security measures are only effective when they map to how people actually work. If they create friction or feel disconnected from day-to-day goals, employees will bypass them. Right now, a significant portion of the workforce is doing exactly that, building informal workflows that include high-risk AI platforms.

Companies can’t stop employees from pursuing efficiency, but they can keep them on secure platforms by meeting them halfway. AI governance needs to be clear, fast-moving, and aligned with how teams operate. You don’t prevent shadow AI by banning tools; you prevent it by giving your teams better options that also meet your security standards.
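
What “better options” can look like in practice is an allowlist that redirects rather than just blocks. The sketch below is a hypothetical Python check; the hostnames, the APPROVED_AI_TOOLS table, and the guidance strings are invented placeholders you would replace with your own governance policy.

```python
# Hypothetical allowlist; entries would come from your AI governance policy.
APPROVED_AI_TOOLS = {
    "copilot.internal.example.com": "approved for code assistance",
    "chat.internal.example.com": "approved for drafting and research",
}
SANCTIONED_ALTERNATIVE = "chat.internal.example.com"

def check_ai_destination(host: str) -> tuple[bool, str]:
    """Return (allowed, guidance) for an outbound AI-tool request.

    Pointing the employee at an approved option, instead of silently
    blocking, is what keeps shadow usage from going further underground.
    """
    if host in APPROVED_AI_TOOLS:
        return True, APPROVED_AI_TOOLS[host]
    return False, f"unapproved AI tool; use {SANCTIONED_ALTERNATIVE} instead"

print(check_ai_destination("randomllm.example.org"))
# (False, 'unapproved AI tool; use chat.internal.example.com instead')
```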

This is a strategic issue. Left unchecked, shadow AI expands your risk surface and weakens your incident response. Addressed early, it becomes an opportunity for employees, leadership, and innovation to operate securely, without disruption or regulatory exposure. Policy needs to match reality; anything else slows you down.

Threat actors are adopting multi-channel, AI-enhanced attack tactics

Cybercriminals are evolving their tactics fast: not just using AI to create smarter attacks, but combining multiple communication channels to increase effectiveness. KnowBe4’s report shows an uptick in coordinated incidents that integrate email, messaging platforms, and voice phishing (also called vishing) into a single attack strategy.

The classic email-only phishing approach is no longer the full picture. Attackers now build layered campaigns that might begin with an email, escalate through a messaging app, and end with a convincing AI-generated voice call. The result is higher pressure, faster manipulation, and more successful outcomes for attackers, especially when employees aren’t trained to connect signals across platforms.

These threats are being supercharged by AI. Automation tools help attackers create persuasive content at scale, with personalized targeting that mimics business norms. The precision of these efforts makes detection harder and shortens the window to respond.

Executives need to make sure that security teams are not siloed by channel: email security, chat monitoring, and phone threat detection all need to operate as part of one unified strategy. Employees should be trained to view communications holistically, not in isolation. The focus should be on recognizing behavioral cues rather than relying solely on technical filters or automated alerts.
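
To make “not siloed by channel” concrete, here is a minimal correlation sketch: group suspicious events by targeted user and flag anyone hit on two or more channels inside a short window. The event tuples and the two-hour window are assumptions for illustration; a real deployment would pull these from your SIEM rather than a hard-coded list.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative events: (target_user, channel, timestamp)
events = [
    ("dana", "email", datetime(2025, 6, 2, 9, 0)),
    ("dana", "chat",  datetime(2025, 6, 2, 9, 20)),
    ("dana", "voice", datetime(2025, 6, 2, 9, 45)),
    ("eli",  "email", datetime(2025, 6, 2, 11, 0)),
]

def multi_channel_targets(events, window=timedelta(hours=2)):
    """Flag users hit on two or more channels within one time window."""
    by_user = defaultdict(list)
    for user, channel, ts in events:
        by_user[user].append((ts, channel))
    flagged = {}
    for user, hits in by_user.items():
        hits.sort()
        for i, (t0, _) in enumerate(hits):
            channels = {c for t, c in hits[i:] if t - t0 <= window}
            if len(channels) >= 2:
                flagged[user] = channels
                break
    return flagged

print(multi_channel_targets(events))  # {'dana': {'email', 'chat', 'voice'}}
```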

From a leadership perspective, the takeaway is structural. Your communication ecosystem is connected terrain, and every surface is a potential entry point. Investing in cross-channel threat detection and real-time user awareness is no longer optional; it’s necessary infrastructure.

Security programs must adapt to oversee both human and AI behaviors

The business environment is shifting. AI agents are now part of daily operations across industries, working alongside human employees. With this shift comes new risks. AI-based decisions, automated actions, and unsupervised tool usage are introducing behaviors that don’t always align with organizational policies or cybersecurity frameworks.

Javvad Malik, Lead CISO Advisor at KnowBe4, put this clearly: “The productivity gains from AI are too great to ignore, so the future of work requires seamless collaboration between humans and AI. Employees and AI agents will need to work in harmony, supported by a security program that proactively manages the risk of both.”

Right now, most security architectures are built to manage human activity. That’s only half the equation. AI agents have their own logic, their own access levels, and their own limits. Without visibility into what these AI systems are doing, or how they’re being used, organizations are essentially blind to an entire part of their operational landscape.

Security programs need to scale with this evolution. That means adding monitoring for AI output, tailoring access controls to AI usage, and ensuring logging covers both human and machine interactions. It also means updating incident response models to account for machine-generated errors, misjudgments, and misuse.
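
A small but concrete piece of that is a shared audit-log schema in which every action carries an actor type, so human and AI-agent activity land in one reviewable stream. The Python sketch below is an assumption-level illustration; the field names and actor IDs are hypothetical, and in practice the records would ship to your SIEM rather than print.

```python
import json
from datetime import datetime, timezone

def log_action(actor_id: str, actor_type: str, action: str, resource: str):
    """Emit one audit record; actor_type separates 'human' from 'ai_agent'.

    One schema for both actor classes lets incident responders replay a
    mixed human/AI sequence of events from a single log stream.
    """
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "actor_id": actor_id,
        "actor_type": actor_type,  # "human" or "ai_agent"
        "action": action,
        "resource": resource,
    }
    print(json.dumps(record))      # in practice: forward to your SIEM

log_action("j.doe", "human", "read", "crm/accounts/1142")
log_action("invoice-bot-3", "ai_agent", "write", "erp/invoices/88210")
```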

For C-level decision-makers, the message is straightforward: if your security efforts aren’t expanding to cover how AI behaves across your infrastructure, you’re missing a major component of risk management. Make sure your teams are addressing the full picture, not just human error but also the unintended consequences of autonomous, AI-driven operations, and the blend of the two working together.

Concluding thoughts

AI adoption is moving fast, and most businesses are leaning into it for the right reasons: efficiency, scale, and growth. But as this shift accelerates, so does the exposure to risk. The most critical vulnerabilities aren’t in the code or the systems; they’re in people, behaviors, and overlooked decisions at the edge.

Executives need to lead with clarity here. Human error, shadow AI, and multi-channel threats aren’t temporary problems. They’re structural challenges that come with the future of work. You can’t isolate security from strategy anymore. How you secure your workforce, both human and AI, directly impacts brand trust, operational stability, and competitive resilience.

Security strategies must modernize to reflect how work actually happens. That means increased budget, better tools, continuous education, and clear governance around AI usage. Not once a year, but continuously.

The opportunity is real: stronger teams, smarter systems, and fewer disruptions. But it won’t happen by default. It takes direct involvement, sharp prioritization, and a willingness to treat cybersecurity as a leadership function, not just an IT concern.

Alexander Procter

December 23, 2025
