Enterprise AI adoption surge and emerging security concerns
AI is a fundamental shift in how enterprises operate. We’re talking about a 3,000% increase in enterprise use of AI and machine learning tools within just one year, based on real traffic data analyzed by Zscaler’s ThreatLabz.
AI is being integrated across business units: customer service, supply chain, financial forecasting. But this sudden acceleration brings significant risks. More systems are exposed, more data leaves the building, and more people interact with models they don’t fully understand. Most businesses weren’t designed for this level of digital exposure. The same AI tools that simplify workflows are also creating massive new attack surfaces. That means traditional perimeter-based security isn’t going to cut it anymore.
If your company is investing in AI and you don’t yet have a comprehensive AI risk strategy, you’re already behind. Data is moving faster and further than many organizations can track: Zscaler reported more than 3,624 terabytes of enterprise data sent to AI tools across just ten months in 2024. You need visibility into who’s using what, for what purpose, and with what data.
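Visibility at that level starts with inspecting what leaves the building. As a minimal sketch (the domain list, patterns, and function names below are illustrative, not any vendor’s API), an outbound prompt bound for an AI tool can be checked for sensitive data before it leaves:

```python
import re

# Hypothetical detection patterns; a real deployment would rely on a
# DLP engine's classifiers, not hand-rolled regexes.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

# Illustrative list of known AI tool endpoints.
AI_DOMAINS = {"chat.openai.com", "api.openai.com", "claude.ai"}

def audit_prompt(user: str, destination: str, prompt: str) -> dict:
    """Log who is sending what, where, and flag prompts that appear
    to contain sensitive data."""
    findings = [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(prompt)]
    return {
        "user": user,
        "destination": destination,
        "is_ai_tool": destination in AI_DOMAINS,
        "findings": findings,
        "action": "block" if findings else "allow",
    }

event = audit_prompt(
    "jdoe", "chat.openai.com",
    "Summarize this: contact alice@example.com, card 4111 1111 1111 1111")
print(event["action"], event["findings"])
```

Even a sketch like this answers the three questions above: who is using what, for what purpose, and with what data, which is the minimum a risk strategy needs.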
For board-level leadership and CIOs alike, the takeaway is clear: AI unlocks opportunity only if your security framework evolves just as fast. Anything else is reckless.
Dominance of ChatGPT in enterprise AI usage and blocking
ChatGPT is dominant. It accounted for 45.2% of all AI and machine learning transactions in Zscaler’s network. That’s almost half the traffic. If you think your teams aren’t interacting with it, you’re probably wrong.
And here’s the twist: it’s also the most blocked tool across corporate environments. Why? Because it creates serious blind spots. Employees are using it to summarize documents, generate code, even write client emails, but they might also be pasting sensitive data without realizing the implications. That’s how you end up with intellectual property in places it doesn’t belong.
At Eaton Corporation, Jason Koler, their Chief Information Security Officer, put it directly: “We had no visibility into [ChatGPT]. Zscaler was our key solution initially to help us understand who was going to it and what they were uploading.” That’s the real issue: if executives don’t know where their data is going, they can’t protect it.
The broader signal here is that productivity tools are evolving faster than corporate policies. You need guardrails that don’t block innovation but do catch risks before they go global. Control comes from knowing what’s happening inside those tools, and having the insight to act when needed. If you’re not actively managing this balance, you’re vulnerable.
Open-Source and agentic AI models as new cyber risk vectors
We’re watching a new class of AI gain traction: agentic models and open-source systems. These are systems that can build other tools, automate tasks, make decisions, and run processes end to end without oversight. That capability raises the stakes significantly.
Open-source platforms like China-developed DeepSeek are a prime example. It’s gaining traction because it performs well, is affordable, and openly accessible. It’s disrupting the dominance of U.S.-based platforms like OpenAI, Anthropic, and Meta. But while broad access to powerful tools drives innovation, it also lowers the bar for threat actors. Bad actors now have faster, cheaper, and easier ways to scale attacks.
What’s different now is the level of autonomy and scalability these models provide. They’re capable of moving without human involvement, which removes traditional friction points that helped contain risk. This is where security protocols break down. Organizations using these solutions can’t rely on legacy safeguards; they need real-time oversight, isolation strategies, and AI-level countermeasures.
For executives, the conversation must shift from “Can this model help us innovate?” to “Can it do damage when left unchecked?” Because the cost of inaction here is operational and reputational.
Finance and manufacturing sectors drive highest AI usage amid security demands
If you want to understand where AI is hitting hardest, look at finance and manufacturing. These two sectors alone account for nearly half of all enterprise AI traffic: 28.4% and 21.6% respectively, according to Zscaler’s 2025 AI Security Report. And that makes sense. These industries have a lot to gain from speed, automation, and cost efficiency.
In finance, AI is running fraud models, managing risk portfolios, and powering customer advisory platforms. In manufacturing, it’s optimizing supply chains, directing robotics systems, and predicting downtime in real time. These aren’t experiments; they’re core operations. So naturally, the security exposure climbs just as fast.
Regulatory scrutiny is climbing. Attack surfaces are more complex. And the data that fuels AI (customer information, financial records, supplier relationships) is often highly sensitive. Using AI without locking down that data is a breach waiting to happen.
This is where strategy matters. Deploying AI at scale without customizing your security architecture by sector leaves gaps. Executives in these verticals should already be aligning IT and compliance leads, embedding practices like data segmentation and breach simulation directly into their operating stack. If that isn’t happening, risk is accumulating, and the board needs to flag it.
Rapid AI adoption in Asia-Pacific reflects global shifts
AI adoption is shifting geographically. The Asia-Pacific region is becoming a major driver of enterprise AI activity, with India, Japan, and Australia leading regional volumes. According to Zscaler’s ThreatLabz 2025 AI Security Report, India accounted for 36.4% of all AI transactions within the Asia-Pacific region, followed by Japan at 15.2% and Australia at 13.6%. This shows the momentum is real and building fast.
Increased access to AI tools, digital-first government policies, and skilled technical workforces are accelerating adoption across industries in this region. But scalability brings complexity. Cross-border operations mean dealing with data sovereignty laws, different regulatory environments, and inconsistent enforcement. Executives can’t treat cybersecurity as a one-size-fits-all program, especially not in countries where regulatory maturity varies.
Globally, the United States still leads with 46.2% of AI/ML traffic, but the distribution is flattening. As regions like Asia-Pacific grow, threat actors also increase focus on these markets. The pressure is dual: capitalize on AI-driven productivity while maintaining control of data governance, vendor compliance, and workforce security standards across jurisdictions.
Eric Swift, Vice President and Managing Director at Zscaler Australia and New Zealand, framed it clearly: “The rapid rise of AI adoption across Australia and New Zealand is reshaping the way employees and organisations work. This surge in AI usage also shines a spotlight on the urgent need for robust security measures to protect sensitive data and sustain innovation.” That’s the executive lens needed going forward: scale with precision, or risk disruption.
Zero trust frameworks as the cornerstone for secure AI integration
The more AI becomes embedded into enterprise processes, the more urgent it is to rethink traditional security models. Trust-based infrastructures simply don’t work when interactions span cloud services, distributed endpoints, and autonomous model behavior. That’s why zero trust is becoming the core design principle for modern security.
Zero trust operates on a simple assumption: never trust, always verify. That verification is applied continuously to devices, users, and data. With AI tools moving sensitive information across systems in real time, enterprises need visibility that operates at the speed of those transactions. Anything slower creates exposure.
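The principle can be sketched in code. Assuming a simplified policy model (the fields and rules below are hypothetical, not Zscaler’s actual engine), every request is evaluated on its own merits, with no standing trust carried over from earlier sessions:

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    device_managed: bool        # device posture check
    mfa_verified: bool          # identity verification
    destination: str
    data_classification: str    # "public" | "internal" | "confidential"

def authorize(req: Request) -> str:
    """Evaluate each transaction independently: identity first,
    then device posture, then data sensitivity vs. destination."""
    if not req.mfa_verified:
        return "deny: identity not verified"
    if not req.device_managed:
        return "deny: unmanaged device"
    # Confidential data headed to an external AI tool is routed
    # through inspection rather than allowed or flatly denied.
    if (req.data_classification == "confidential"
            and req.destination.endswith(".ai-tool.example")):
        return "isolate: route through inspection"
    return "allow"

print(authorize(Request("jdoe", True, True,
                        "chat.ai-tool.example", "confidential")))
```

The design point is that the decision is made per transaction, at request time, which is what lets enforcement keep pace with AI tools moving data in real time.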
Zscaler’s Zero Trust Exchange is built for that. It processes over 500 trillion signals per day across users, applications, and networks. That scale allows for high-fidelity threat detection, data classification, and access control, without interrupting workflows. In fast-moving environments, that level of real-time enforcement becomes non-negotiable.
Necessity for AI workforce upskilling to sustain safe adoption
AI adoption is expanding, but sustainable progress depends on people. Most companies are adopting AI faster than their teams can keep up. That’s a real challenge. You can’t scale secure and responsible AI deployment without upskilling the workforce that interacts with, manages, and governs these systems every day.
According to Zscaler’s 2025 AI Security Report, 83% of Australian business leaders are prioritizing AI integration by 2025. But 40% of them also flagged workforce training as a critical gap. This signals growing recognition that AI implementation isn’t just a technical or strategic issue; it’s an operational one that touches roles across the organization.
If your teams don’t understand how data flows through AI models, which risks those interactions create, and how to manage access, you’re operating with a knowledge deficit. That includes marketing teams using generative tools, HR teams automating recruitment workflows, and engineering teams deploying custom ML models. They’re all involved, and all exposed, if untrained.
C-suite leaders should stop framing AI enablement as a technology-first conversation. It’s a people-first problem with major implications for compliance, security, and brand reputation. You need clear standards, practical training frameworks, and ongoing exposure management that scales with tool adoption. This isn’t a box to check. It’s infrastructure you build, like cloud, like apps, except here, the endpoint is your workforce.
The organizations that succeed with AI long-term are the ones treating human capability as core infrastructure. If you’re not investing in the right people processes today, you’re limiting the value of the smartest tools tomorrow.
In conclusion
AI is changing the architecture of business. That’s no longer a prediction; it’s happening now, and fast. But scale without security is a gamble. The data movement, the open models, the rise of agentic systems: all of it is redefining the boundaries of control. If you’re leading a company pushing into AI, that means your exposure surfaces are changing daily. And so should your strategy.
Zero trust isn’t optional anymore. Workforce training isn’t a nice-to-have. Real-time visibility into AI usage is now foundational. These aren’t checklists; they’re structural decisions that determine whether AI becomes a competitive edge or a liability.
Leaders should act with clarity. Build the policies. Level up the teams. Rethink the perimeter. Productivity is part of the value, but only if trust scales with it. The ones who move now, with purpose and precision, won’t just keep up. They’ll define what secure AI adoption looks like across their industries.