Google deploys on-device AI via Gemini Nano

Scams today aren’t always loud or obvious. They’re subtle. Fast-evolving. They rely on social engineering and look just legitimate enough to bypass traditional filters. Google’s move to deploy Gemini Nano, the company’s lightweight, on-device large language model, is a significant leap in fighting these threats proactively.

With Chrome 137 on desktops, Google is embedding Gemini Nano directly into the Safe Browsing system. That means it evaluates individual pages in real time, right when you’re viewing them. Because it’s running locally, it can respond instantly to unfamiliar or never-before-seen scams. That matters. Especially when attackers evolve faster than your traditional blacklist can update.

This model isn’t just scanning for keywords or looking at URLs. It’s analyzing the structure and behavior of a webpage. For example, it flags technical tricks used by scam sites, like when a webpage hijacks your keyboard input using specific APIs. Inside your browser, Gemini Nano spots these signs, extracts security signals, and works with Safe Browsing to flag the risk instantly, before the user takes any action. That level of security, combined with local execution, provides both speed and privacy.
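To make the idea concrete, here is a minimal sketch of the kind of signal extraction and scoring described above. Every name, signal, and weight is invented for illustration; Chrome's actual feature set and model are not public in this form.

```typescript
// Hypothetical page-level scam signals of the kind an on-device model
// might weigh. Names, weights, and thresholds are illustrative only.
interface PageSignals {
  capturesKeyboard: boolean; // e.g. a keydown listener that suppresses input
  fullscreenForced: boolean; // fullscreen requested without a user gesture
  urgencyScore: number;      // 0..1 density of pressure language ("act now")
}

function riskScore(s: PageSignals): number {
  let score = 0;
  if (s.capturesKeyboard) score += 0.4; // input hijacking is a strong signal
  if (s.fullscreenForced) score += 0.3;
  score += Math.min(Math.max(s.urgencyScore, 0), 1) * 0.3;
  return score; // 0..1; a high score would be surfaced to Safe Browsing
}
```

The point of the sketch is the shape of the pipeline: structural and behavioral signals are reduced to a compact score locally, and only that verdict needs to reach the blocking layer.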

Now, from a technical leadership standpoint, why does on-device matter? Simple. Running AI models locally preserves user privacy, reduces dependency on server infrastructure, and, critically, eliminates delays. It’s not polling a server. It’s acting now. That’s a smarter, more responsible AI strategy. And it’s efficient, too. Jasika Bawa, Andy Lim, and Xinghui Lu of Google’s Chrome Security team highlighted how the model is throttled and optimized: it uses minimal GPU resources, runs asynchronously so it doesn’t slow down your system, and launches only when it’s needed, not every time a site loads.
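The invocation pattern described here (lazy loading, rate limiting, and keeping inference off the critical path) can be sketched generically. This is a pattern illustration with invented names, not Chrome's implementation:

```typescript
// Sketch: run an expensive local classifier lazily, asynchronously,
// and no more often than a minimum interval. All names are illustrative.
type Classifier = (page: string) => Promise<number>;

function makeGatedClassifier(
  load: () => Promise<Classifier>, // model is loaded only on first use
  minIntervalMs: number            // throttle: skip pages arriving too fast
) {
  let model: Classifier | null = null;
  let lastRun = -Infinity;
  return async (page: string): Promise<number | null> => {
    const now = Date.now();
    if (now - lastRun < minIntervalMs) return null; // throttled, no work done
    lastRun = now;
    model ??= await load();   // lazy initialization
    return model(page);       // async: never blocks rendering
  };
}
```

Because the returned function is async and self-throttling, page loads that don't trigger it pay essentially nothing, which is the efficiency property the Chrome team emphasizes.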

This isn’t just about reducing scam exposure. It’s the framework for more scalable, predictive security without bloated software or centralized monitoring. And that’s the direction things are heading: more private, more capable, more aware systems that anticipate risks and act before users can be harmed. If you’re thinking about how you design security at scale, this opens up possibilities.

Advanced AI-driven safe browsing enhancements

Google has upgraded its AI-driven Safe Browsing to operate not just faster, but smarter, across trillions of search queries. These enhancements don’t just catch scams. They systematically reduce exposure to webpages designed to impersonate legitimate entities. What makes this interesting from an executive standpoint is how measurable the impact has already been.

With AI now deeply integrated into how Chrome and Search assess pages, Google’s updated systems are identifying 20 times more deceptive or malicious pages than previous iterations. These aren’t theoretical numbers. In 2024 alone, scams mimicking airline customer service portals dropped by over 80%. Fraudulent pages posing as official government resources (visa processing, policy documentation, and the like) fell by more than 70%. These improvements reflect something critical: when machine learning evolves from classification to contextual interpretation, it delivers tangible defense at scale.

What’s happening behind the scenes? Google’s latest models are not just filtering URLs or domain reputations. They’re interpreting content and intent. The AI can now analyze whether a site is trying to deceive by pretending to be an airline hotline, a government form, or a toll payment service. That differentiation, and the ability to act on it at query time, makes their protection proactive instead of reactive.

For C-suite leaders, this speaks to operational trust. When your customers or employees search for official support, pay a toll, or check visa information, they rely on the search engine to only offer legitimate results. By embedding enhanced detection directly into that layer, Google aligns product experience with digital trust. The security now lives in the infrastructure. And it doesn’t require user awareness or action.

There’s another takeaway here. These AI systems are increasingly adaptable. They’re not just tuned to certain types of scams; we’re seeing a model that can expand to new verticals, fast. That’s critical for any sector facing persistent impersonation tactics. The engine running this detection isn’t static; it evolves. And that gives it long-term enterprise value.

Introduction of an AI-powered notification warning system in Chrome on Android

Not every threat starts on a webpage. Increasingly, scam tactics are showing up in low-effort formats: push notifications, pop-ups, and system-level alerts that aim to trigger an immediate user reaction. Google’s latest move tackles this directly within Chrome on Android with an on-device machine learning model designed to flag deceptive or spammy site notifications in real time.

Here’s how it works: when a user receives a push notification via Chrome, the local model assesses it before the user interacts. If the content appears suspicious, such as clickbait urging a download or an alert disguised as a security warning, Chrome flags it. It clearly displays the name of the sending site and a cautionary message, and gives the user two choices: review it or unsubscribe on the spot. No guesswork. No delay.
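The decision flow described above can be sketched as a small gate in front of the notification UI. The verdict shape, action labels, and threshold here are all invented for illustration:

```typescript
// Hypothetical gate: a local model scores a notification before display,
// and flagged notifications surface the sender plus review/unsubscribe
// actions. Threshold and field names are illustrative, not Chrome's.
interface NotificationVerdict {
  flagged: boolean;
  origin: string;    // the sending site is always named to the user
  actions: string[]; // choices surfaced in the warning UI
}

function gateNotification(origin: string, spamProb: number): NotificationVerdict {
  const flagged = spamProb >= 0.8; // local model's score vs. illustrative cutoff
  return {
    flagged,
    origin,
    actions: flagged ? ["review", "unsubscribe"] : ["show"],
  };
}
```

The key design property is that the gate sits between classification and display: the user sees either a normal notification or an actionable warning, never raw model output.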

From a functionality standpoint, this happens locally, which means no cloud latency, no external scanning, and better privacy. That’s key. The machine learning model is trained to recognize behaviors and linguistic patterns commonly used in notification scams. The same deceptive tactics seen globally are reflected in its training, making the protection relevant across languages and regions.

For C-suite leaders, the value is clear. This eliminates an overlooked point of entry for attackers. Site-based notification spam has become a favored vector because many users unknowingly enable it during casual browsing. Stopping malicious intent right at the notification level significantly reduces downstream risks: malware, phishing, and unwanted downloads. More importantly, it maintains brand quality and the integrity of the app ecosystem on devices employees and customers use daily.

Hannah Buonomo and Sarah Krakowiak Criel, from Chrome Security, explained that the initiative gives users more control over the content reaching them from the web. It’s not just about filtering out spam; it’s about restoring transparency and allowing users to make informed choices, in context.

This isn’t just useful for individual security. It reinforces the broader strategic direction: push AI down to the edge, keep it local, and empower users automatically. That’s a scalable approach to defense, the kind that doesn’t need user training to be effective.

Continued expansion of AI-based scam protections across Google’s ecosystem

Google isn’t limiting AI-based threat detection to the browser. What we’re seeing is a coordinated, cross-platform rollout of security features that run across messaging, calling, search, and the operating system itself. This is a systemic shift, moving from reactive intelligence to preemptive defense embedded at every user interaction point.

Start with Android Messages. Earlier this year, Google deployed AI to identify scam messages, anything from fake delivery updates to fraudulent verification requests. These alerts happen on-device, with machine learning models analyzing content before a user even taps the message. The same principle was carried into scam call detection. Unknown numbers now come with call-screening intelligence that interprets call intent, tagging potential fraud in real time.

Now, that’s being taken further. With Android 16, Google is preparing Advanced Protection features that raise the security baseline across all devices. That includes disabling 2G mobile connections (often exploited in spoofing attacks), blocking JavaScript by default in unsecured contexts, and enabling theft-resilient locking mechanisms like Offline Device Lock and Theft Detection Lock. These protections are expected to be on by default, which matters when your users or teams aren’t always aware of specific threats.

For business leaders, this ecosystem-wide integration does something essential: it offloads security complexity from users. You no longer have to rely on training or compliance prompts to reduce exposure. If fraud protection, scam detection, and device-level locks are built directly into the OS, risk declines, and deployment overhead is minimized.

Android Authority also reported that Google is working on functionality that would identify when a user is being socially engineered during a call; specifically, tricks used to get someone to open their banking app mid-conversation. That level of contextual risk awareness expands protection into real-world interactions, something conventional antivirus or endpoint security never covered.

Internally at Google, Jasika Bawa, Andy Lim, and Xinghui Lu from the Chrome Security team are among the leaders executing this direction. Their focus on device-level enforcement (running AI only when necessary, managing resource consumption, and scaling native protections) is a direct answer to today’s fragmented threat environment.

This signals a long-term shift. Scalable security won’t come from after-the-fact alerts. It will come from the design layer: tools that anticipate risk earlier and operate before a compromise is even possible. That’s where Google is positioning itself, and that’s the direction enterprise mobile security is heading.

Key takeaways for leaders

  • On-device AI redefines proactive threat detection: Google’s Gemini Nano runs locally in Chrome, analyzing webpages in real time to flag scams without sending data to the cloud. Leaders should explore on-device AI to improve responsiveness and minimize privacy risks.
  • AI upgrades sharply reduce impersonation scams: Google’s enhanced scanning detects and blocks 20x more scam pages in Search, cutting airline impersonation scams by over 80% and government impersonation scams by over 70%. Businesses should assess how AI-driven content filtering can boost consumer trust and prevent phishing exposure.
  • Local ML models now filter malicious notifications: Chrome on Android uses on-device machine learning to flag deceptive push notifications before user interaction. Product and security teams should consider local notification controls to reduce social engineering risk on mobile platforms.
  • Ecosystem-wide security shifts responsibility from users: Google is embedding AI-powered protections into Android’s core, including scam detection in calls, Messages, and upcoming OS-level safeguards like Theft Detection Lock. Executives should align security strategies with system-level automation to reduce human error and improve baseline device safety.

Alexander Procter

June 6, 2025

8 Min