AI is revolutionizing cybersecurity by empowering both defenders and attackers

Artificial intelligence is the present reality of cybersecurity. It’s already transforming how companies defend their infrastructure by making attack detection faster and smarter. AI helps security teams identify threats they would otherwise miss. It responds to incidents automatically, without waiting for human intervention. That’s good. But here’s the flip side: cybercriminals now have access to the same AI firepower.

We’re seeing more precise, automated attacks. Groups are using AI to manipulate training data, inject hostile prompts, and compromise security workflows, all with increasing efficiency and adaptability. Without the right protections, your AI models can be hijacked from the inside out. If you rely on sensitive datasets, whether for customer behavior prediction or operational automation, those data sources need to be protected like core infrastructure.
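
To make that concrete, here is a minimal integrity gate for training data, a sketch assuming you pin a known-good hash for each dataset before it is used; the pinned digest and file path are placeholders, not any particular pipeline’s API.

```python
import hashlib
from pathlib import Path

# A minimal integrity gate: refuse to use a dataset whose hash does not
# match a pinned, known-good value. The pinned digest is a placeholder.

PINNED_SHA256 = "0" * 64  # hypothetical known-good digest

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def load_dataset(path: Path) -> bytes:
    digest = sha256_of(path)
    if digest != PINNED_SHA256:  # possible poisoning or tampering
        raise RuntimeError(f"dataset hash mismatch: {digest}")
    return path.read_bytes()

# Usage: load_dataset(Path("customer_behavior.csv")) raises unless the
# file matches the digest recorded when the dataset was last vetted.
```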

What separates companies that fall behind from those that move ahead is how they treat security within their AI stack. Takanori Nishiyama of Keeper Security said it best: “Success will belong to those who treat AI security not as an afterthought but as a prerequisite for innovation.” He’s right. You need to implement continuous session monitoring, enforce least-privilege access, and limit exposure of data to both humans and machines.

If you’re innovating with AI, and nearly all of you are, the question isn’t whether you should secure it. The question is how fast you can embed those controls deep into your architecture. Because in a landscape where machines write attacks in milliseconds, defense that reacts late is no defense at all.

The zero-trust security model is becoming essential to combat escalating threat complexity

Zero trust is now a matter of operational survival. In an environment where attackers move faster and systems talk to each other more than ever, assuming anything is safe by default is reckless. Whether it’s a connected device, an employee, or a piece of software, none of it gets a free pass.

Here’s the idea, simplified: verify every access request, limit privileges to the absolute minimum, and revoke access when it’s no longer needed. Do it across humans and machines alike. This is what we mean when we talk about ‘identity-first’ security. Get this wrong, and you leave a lot of entry points vulnerable, especially when systems are automated, distributed, or working at scale.
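
As a concrete illustration, the sketch below shows what an identity-first, deny-by-default access check can look like; the grant structure, field names, and time-boxed expiry are assumptions for the example, not a specific vendor’s implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical zero-trust access decision: every request is verified,
# privileges are scoped to the minimum, and grants expire so access is
# revoked automatically when no longer needed.

@dataclass
class AccessGrant:
    identity: str          # human user or machine identity
    resource: str          # the specific asset being requested
    scope: str             # narrowest permission that satisfies the task
    expires_at: datetime   # grants are time-boxed, never permanent

def authorize(grant: AccessGrant, requested_resource: str,
              requested_scope: str, credential_valid: bool) -> bool:
    """Deny by default; allow only when every check passes."""
    if not credential_valid:                       # verify every request
        return False
    if grant.resource != requested_resource:       # no implicit reach
        return False
    if grant.scope != requested_scope:             # least privilege
        return False
    if datetime.now(timezone.utc) >= grant.expires_at:  # auto-revocation
        return False
    return True

# Example: a service account gets read-only access for 15 minutes.
grant = AccessGrant(
    identity="svc-report-builder",
    resource="billing-db",
    scope="read",
    expires_at=datetime.now(timezone.utc) + timedelta(minutes=15),
)
assert authorize(grant, "billing-db", "read", credential_valid=True)
assert not authorize(grant, "billing-db", "write", credential_valid=True)
```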

Combine zero trust with Privileged Access Management (PAM), and now you have a layered and controlled structure, even for your most sensitive accounts. Nishiyama from Keeper Security put it clearly: “In a world of autonomous systems and machine-to-machine communication, zero trust ensures that no identity, device or process is trusted by default.” That’s the rule. And when you build this into your core security model, you reduce lateral movement after a breach and slow attackers down before they even start.

The point is, traditional perimeter defenses are no longer enough. The attack surface now includes your code repos, your AI models, your service accounts, and the smart sensors connected to your offices. Threats evolve. Your security has to match or move faster. Zero trust gives you that leverage. It’s not overly complex; it’s discipline, applied consistently.

Oversight of non-human identities (NHIs) is critical in the age of automation

The number of non-human identities (bots, AI agents, service accounts) has exploded. These aren’t just background processes. They actively access APIs, move data, make decisions, and sometimes escalate privileges, all without a human in the loop. That creates risk. And most organizations aren’t tracking these entities with the same rigor as human users.

This is a gap in visibility and governance that needs urgent attention. Every non-human identity must be treated as a user, with unique credentials, strict permissions, and full auditability. Role-based access can’t just be for employees. It must extend to every automated process that touches your systems. If these identities are left unmonitored or operate with default settings, they become internal threats, easy targets for compromise or misuse.
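
Here is one way that could look in code, a minimal sketch assuming a simple role-to-permission map; the roles, fields, and audit format are illustrative, not a real IAM product’s schema.

```python
import secrets
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative model: a non-human identity gets a unique credential, an
# explicit role, and every action it takes lands in an audit log.

ROLE_PERMISSIONS = {                 # hypothetical role definitions
    "etl-reader": {"read:warehouse"},
    "deploy-bot": {"read:repo", "write:staging"},
}

@dataclass
class MachineIdentity:
    name: str
    role: str
    credential: str = field(default_factory=lambda: secrets.token_hex(32))
    audit_log: list = field(default_factory=list)

    def act(self, permission: str) -> bool:
        allowed = permission in ROLE_PERMISSIONS.get(self.role, set())
        self.audit_log.append({                    # full auditability
            "at": datetime.now(timezone.utc).isoformat(),
            "permission": permission,
            "allowed": allowed,
        })
        return allowed

bot = MachineIdentity(name="nightly-etl", role="etl-reader")
assert bot.act("read:warehouse")       # within its role
assert not bot.act("write:staging")    # denied, and the denial is logged
```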

Prakash Mana, CEO of Cloudbrink, pointed out that by 2026 we’ll see at least one AI agent per connected person. In three years, that number could rise to ten. That’s exponential growth in machine activity, and most of it is invisible to current security tools built around human workflows. Security leaders need to put formal AI usage policies in place, track these agents, apply guardrails, and enforce identity coverage.

As automation accelerates, companies that embed security into their machine identity lifecycle, from creation to deprovisioning, will gain clarity and control. Those that ignore it will face data exposure they didn’t even know existed. The choice is clear. Extend your oversight now or be forced to clean up a breach that came from a bot you didn’t configure.

Embedding secure-by-design principles is vital for robust software development

Reactive security doesn’t scale. Baking security features into your systems and applications from the start is proven to cut risks, reduce costs, and keep your teams focused on advancement instead of recovery. Secure-by-design is just how modern, responsible engineering is done.

This also goes for AI systems. The models you train, the data they consume, the logic they execute, these all need protection from day one. If you wait to address integrity threats, you’ll be exposed to issues like data poisoning, model tampering, bias, and unauthorized access that’s hard to trace and even harder to clean up. You’re not just securing lines of code, you’re securing behavior that could influence business operations directly.

You need input validation built in, multifactor authentication mandatory, real-time logging always on. Those aren’t add-ons. They are ground-level requirements. And if your code isn’t developed with role-based access in mind, it’s already vulnerable. As Nishiyama from Keeper Security highlighted, AI must be protected from bias and manipulation, which means security architecture has to start at the design stage, not after an incident occurs.
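
Below is a minimal sketch of those ground-level requirements enforced in one place, before any business logic runs; the role set, username rule, and handler shape are assumptions for illustration, not a prescribed framework.

```python
import logging
import re

# Hypothetical request handler showing the design-stage controls named
# above: input validation, mandatory MFA, role-based access, and
# real-time logging, all checked before any business logic executes.

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("secure_by_design")

ALLOWED_ROLES = {"analyst", "admin"}          # illustrative role set
USERNAME_RE = re.compile(r"^[a-z0-9_-]{3,32}$")

def handle_request(username: str, role: str, mfa_verified: bool) -> str:
    if not USERNAME_RE.fullmatch(username):          # input validation
        log.warning("rejected malformed username")
        raise ValueError("invalid username")
    if not mfa_verified:                             # MFA is mandatory
        log.warning("rejected request without MFA: %s", username)
        raise PermissionError("MFA required")
    if role not in ALLOWED_ROLES:                    # role-based access
        log.warning("rejected role %r for %s", role, username)
        raise PermissionError("role not authorized")
    log.info("authorized %s as %s", username, role)  # real-time logging
    return "ok"

handle_request("etl_bot-01", "analyst", mfa_verified=True)
```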

To make this real, leaders should incentivize development teams to adopt secure coding standards. Make it part of performance metrics. This isn’t about slowing innovation, it’s about ensuring what you build today doesn’t become what you fix tomorrow.

The shift toward quantum-resistant security solutions is underway

The impact of quantum computing on cybersecurity isn’t speculative anymore. It’s practical, and it’s on a defined timeline. The biggest risk? Data that’s encrypted today is already being harvested by attackers who plan to decrypt it later, once quantum machines are strong enough to break current algorithms. This “collect now, crack later” strategy means long-term privacy and data integrity are at stake right now.

Companies that rely on encrypted communications, secure transactions, or sensitive archives must begin integrating quantum-resistant encryption. Cryptographic agility, being able to switch encryption protocols as needed, will soon be a baseline capability. If you store sensitive data for ten years or more, it’s already at risk. Encryption that’s considered safe in 2024 may not last another five years.
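
Cryptographic agility is largely an architectural pattern: name the algorithm in one place so it can be swapped without touching callers. The sketch below illustrates the idea; the registry and algorithm names are assumptions, and the stand-in functions perform no real encryption.

```python
from typing import Callable, Dict

# A minimal sketch of cryptographic agility: callers reference an
# algorithm by name instead of hard-coding one, so a post-quantum scheme
# can be swapped in by registering it, without changing application code.
# The registered "ciphers" here are placeholders, not implementations.

EncryptFn = Callable[[bytes, bytes], bytes]
REGISTRY: Dict[str, EncryptFn] = {}

def register(name: str, fn: EncryptFn) -> None:
    REGISTRY[name] = fn

def encrypt(algorithm: str, key: bytes, plaintext: bytes) -> bytes:
    if algorithm not in REGISTRY:
        raise KeyError(f"unknown algorithm: {algorithm}")
    return REGISTRY[algorithm](key, plaintext)

# Placeholder stand-ins for a classical cipher and a PQC replacement.
register("aes-256-gcm", lambda key, pt: b"classical:" + pt)
register("ml-kem-hybrid", lambda key, pt: b"postquantum:" + pt)

# Today the config names "aes-256-gcm"; migration is a one-line change.
ciphertext = encrypt("ml-kem-hybrid", b"\x00" * 32, b"customer record")
```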

Regulations are beginning to follow that trajectory as well. Across APAC, policies are tightening around data residency, AI accountability, and privacy. If your architecture isn’t prepared for that level of compliance, you’re going to find yourself scaling back innovation just to meet the bare minimum required by law.

Takanori Nishiyama made the point clearly: to be ready for the post-quantum era, enterprises need to embed compliance and encryption readiness directly into system architecture. This shift isn’t just about avoiding risk. It’s about protecting agility. Organizations that take these steps now will be the ones still moving fast in five years, when others are stuck retrofitting outdated security.

Evolving work patterns are intensifying security challenges and demanding adaptive IT strategies

Work habits are becoming increasingly flexible. Employees aren’t confined to a single device, location, or schedule anymore. That’s not the exception, it’s the new baseline. When employees log in from remote locations at 7 a.m. or transmit large volumes of data late Friday afternoon, it’s still part of the enterprise activity footprint. Security has to follow that footprint in real time, without assumption.

This shift brings two challenges. First, more connected devices, like wearables, earbuds, and voice assistants, introduce casual but significant entry points into networks. Second, off-hours productivity often goes unmonitored by default, leaving gaps in visibility and control. None of this is inherently bad, but it makes traditional perimeter security models obsolete.

Cloudbrink CEO Prakash Mana pointed out that “work from anywhere” is fast becoming “work anytime.” According to their data, tech workers are showing usage peaks between 7:00 a.m. and 7:00 p.m., especially on Fridays. The behavior reveals a trend: remote employees tend to put in longer, more varied hours. That can signal increased productivity in the short term, but without balance, it can also push people toward burnout.

From a leadership perspective, productivity at the expense of sustainability is a poor tradeoff. Security controls must be adaptable, capable of protecting data regardless of when or where work happens. But that’s only half the equation. The other half is employee well-being. You can’t secure your infrastructure at the cost of disengaging your top talent. Smart policies do both: they monitor effectively without micromanaging and empower workers without compromising control.

AI-driven social engineering, including deepfakes, is redefining the threat landscape

We’re past the point where deepfakes are experimental or rare. Attackers are now using real-time voice and video cloning to impersonate executives, manipulate conversations, and deceive employees. These AI-driven tactics are pushing social engineering to a level where traditional checks no longer apply. The email isn’t just suspicious, it sounds legitimate. The video isn’t just fake, it looks live.

This changes how we think about verification. It’s not enough to lock down networks or block known phishing domains. When AI can generate convincing business email compromise (BEC) messages that evolve mid-conversation, the risk moves closer to the individual user. The attack vector is not the system. It’s the human who trusts what they see and hear.

Prakash Mana, CEO of Cloudbrink, stated clearly that attacks using deepfakes and AI impersonation will become standard. Criminals are no longer focused on bypassing firewalls, they’re focused on exploiting human trust. And with remote work still dominant, the inconsistencies in when and where people communicate reduce the likelihood of second-guessing an unexpected request.

From a strategic standpoint, the solution lies in moving past location-based or perimeter-oriented security. Every login, every communication, every transaction needs to be verified in real time. Behavioral analysis, biometric confirmation, and continuous identity verification give organizations a way to respond to dynamic threats. Trust has to be earned with every interaction, automatically, consistently, and without exception.
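
As a rough sketch of continuous verification, the example below scores each interaction against expected behavior and forces step-up authentication when risk is high; the signals, weights, and threshold are invented for illustration, not a vendor’s model.

```python
from dataclasses import dataclass

# Illustrative continuous-verification check: each interaction is scored
# against the user's known behavior, and high-risk requests trigger
# re-authentication instead of being trusted by default.

@dataclass
class InteractionSignals:
    known_device: bool
    typical_location: bool
    typical_hours: bool
    biometric_match: float   # 0.0 to 1.0 confidence from a biometric check

def risk_score(s: InteractionSignals) -> float:
    score = 0.0
    score += 0.0 if s.known_device else 0.3
    score += 0.0 if s.typical_location else 0.2
    score += 0.0 if s.typical_hours else 0.1
    score += (1.0 - s.biometric_match) * 0.4
    return score             # 0.0 = low risk, 1.0 = maximum risk

def verify(signals: InteractionSignals, threshold: float = 0.5) -> str:
    return "allow" if risk_score(signals) < threshold else "step-up-auth"

# A live-looking video call from a new device at an odd hour is not
# trusted on appearance alone; it gets challenged.
odd = InteractionSignals(known_device=False, typical_location=True,
                         typical_hours=False, biometric_match=0.6)
print(verify(odd))           # "step-up-auth"
```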

Increasing AI adoption is driving upgrades in IT infrastructure requirements

More AI means more data movement. Training large models and running inference tasks push both bandwidth and compute power well beyond standard enterprise needs. Companies are already feeling the strain. It’s not just about having the hardware. It’s about having infrastructure that can adapt to high-throughput, low-latency workloads, on demand.

That’s where planning becomes essential. Shared GPU systems, edge processing, user-side acceleration: these are now requirements, not extras. The shift to distributed workloads is happening fast, particularly within tech teams deploying AI-powered apps that need to ingest and process massive datasets in real time. Centralized infrastructure isn’t built for that level of access or agility.

Prakash Mana explained that training AI models is placing new pressure on corporate networks, requiring reconfiguration and smarter resource allocation. Instead of ballooning data centers, companies should enable GPU time-sharing and push tasks closer to where the data originates, whether that’s on a device, at the edge, or in a hybrid cloud environment.
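
A toy sketch of that routing logic follows: latency-sensitive work runs near the data, small jobs take a time-shared GPU slice, and large jobs queue for batch capacity; the thresholds and task fields are assumptions, not a real scheduler.

```python
from dataclasses import dataclass

# Hypothetical placement policy: push work toward where the data
# originates when latency matters, otherwise share pooled GPU time.

@dataclass
class Task:
    name: str
    data_origin: str        # "device", "edge", or "datacenter"
    latency_sensitive: bool
    gpu_minutes: float      # requested slice of shared GPU time

def route(task: Task) -> str:
    if task.latency_sensitive and task.data_origin in ("device", "edge"):
        return "edge"                      # process near the data
    if task.gpu_minutes <= 30:
        return "shared-gpu-pool"           # small time-shared slice
    return "batch-queue"                   # large jobs wait their turn

print(route(Task("fraud-inference", "edge", True, 0.5)))       # edge
print(route(Task("model-retrain", "datacenter", False, 480)))  # batch-queue
```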

For leadership, the implications are clear. Performance bottlenecks aren’t just an IT issue, they slow down innovation. Users won’t wait for lagging systems to catch up. If your infrastructure can’t handle the load, your products will stall. Investing in scalable, distributed computing today ensures your teams can work at full pace tomorrow, without compromise. If you’re serious about deploying AI, you need to be just as serious about supporting it technically.

Concluding thoughts

Security in 2026 isn’t about reacting faster, it’s about designing smarter. The complexity of today’s threats (AI-generated scams, machine-to-machine risk, quantum-era vulnerabilities) demands proactive, structural change. This isn’t theoretical. The systems you build now dictate how resilient and adaptable your organization will be over the next five years.

Executives who treat security as a product of design, not a feature added later, will be the ones positioned to move quickly, scale confidently, and defend effectively. That means zero trust isn’t optional. Oversight for non-human identities isn’t a luxury. And building quantum resilience and visibility into AI tools isn’t forward-looking, it’s present tense.

The edge belongs to companies that fuse innovation with discipline. If AI drives your growth, let security guide your control. Because in a space defined by autonomous action and constant connection, you don’t get second chances. Better to be deliberate now than disrupted later.

Alexander Procter

December 23, 2025

11 Min