A Vietnam-linked cyber group is exploiting global interest in AI tools
Cyberattacks are no longer just an insider threat or a backdoor vulnerability tucked away in your server logs. They’re going mainstream. A Vietnam-based cyber group identified as UNC6032 has found a way to turn the growing excitement around artificial intelligence into a weapon. Their campaign isn’t complex in execution, but it’s incredibly effective in reach. They use paid ads on familiar platforms: Facebook and LinkedIn. These aren’t lab experiments or theoretical threats. These are ads people across the world are clicking on every day.
They mimic real AI brands you know, like Canva’s Dream Lab, Luma AI, and Kling AI, and the ads look convincingly authentic. Once clicked, the ads redirect users to spoofed websites that look nearly identical to the legitimate ones. But instead of accessing cutting-edge AI tools, users unknowingly open their systems to malware. This malware quietly collects sensitive data: login details, credit card numbers, and personal cookies, effectively bypassing usual security measures.
We’re seeing this because interest in AI has exploded globally. The demand curve is steep, and that creates opportunity, not just for businesses, but also for attackers. The wider the interest, the broader the attack surface.
Yash Gupta, Senior Manager at Mandiant Threat Defense, puts it plainly: this group has weaponized “the explosive interest in AI tools” using “realistic branding and trusted platforms.” The attack relies on trust. It hijacks familiarity, something most users don’t question. That’s why it’s working on such a massive scale.
The malicious advertising campaign has reached millions
The scale of this operation isn’t guesswork; it’s measured. Mandiant’s team tracked over 120 malicious Facebook ads targeting users inside the European Union. Estimated reach? More than 2.3 million people. That’s a signal, not noise. On LinkedIn, roughly 10 malicious ads were also detected. That number might sound smaller, but the targeting is often more specific: enterprise users, professionals, decision-makers.
This isn’t just about how many ads were pushed. It’s also about how. UNC6032 didn’t rely solely on fake identities. They used compromised, legitimate accounts to activate their campaigns. That made them harder to detect and faster to activate. Some of these campaigns went live for just a few hours, enough time to capture user data and disappear before security protocols caught up.
Now, you might assume platforms like Meta (Facebook’s parent company) and LinkedIn would have safeguards in place, and they do. But when attackers rotate domains and use real accounts, even advanced detectors need time to catch them. These threats evolve quickly because the ecosystem does. AI is moving fast. So are the people trying to exploit it.
For business leaders, the message is clear: advertising isn’t just a marketing function anymore. It’s a potential security gateway. You have to monitor not only what your teams post, but also what others might mimic under your brand’s name, or under anything close enough to it to confuse your users.
The attackers work quickly. To match that speed, platform security, enterprise vigilance, and real-time threat intel need to be faster.
The Python-based malware known as STARKVEIL
Here’s the core of the attack. Users lured by fake AI tools don’t just visit a spoofed site, they download a payload. It’s a Python-based malware strain Mandiant calls STARKVEIL. This isn’t off-the-shelf software. It’s custom-built and designed to stay flexible, allowing attackers to install various information-stealing tools and persistent backdoors into the user’s system.
Once deployed, STARKVEIL quietly gets to work. It collects login credentials, credit card numbers, saved browser cookies, and background system data. Then it transmits that data over encrypted communication lines to infrastructure controlled by the attackers, Telegram being one of the channels they use. The method is lightweight, fast, and effective. It bypasses most traditional antivirus tools because it doesn’t behave like older malware strains.
This malware doesn’t just pose an isolated threat. It can initiate secondary exploitations through the backdoors it installs. That means even after removal, the threat may persist if deeper parts of the network or system were accessed.
For security-conscious executives, this underscores the need for endpoint monitoring and behavioral anomaly detection. If your systems are running without continuous visibility, STARKVEIL or similar malware could operate in the background unnoticed. That’s a risk you can’t afford in any regulated or reputation-sensitive industry.
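To make the behavioral-anomaly idea concrete, here is a minimal, simplified sketch of baseline-based detection: build a frequency baseline of which processes talk to which destinations, then flag pairs never seen before. The process names, destinations, and event format are all hypothetical, and a real endpoint product would use far richer telemetry than this.

```python
from collections import Counter

def build_baseline(events):
    """Count how often each (process, destination) pair appeared
    in a window of historical network events."""
    return Counter((e["process"], e["dest"]) for e in events)

def flag_anomalies(baseline, new_events, min_seen=1):
    """Flag events whose (process, destination) pair was seen
    fewer than min_seen times in the baseline window."""
    return [e for e in new_events
            if baseline[(e["process"], e["dest"])] < min_seen]

# Hypothetical history: the browser and a chat client behaving normally.
history = [
    {"process": "chrome.exe", "dest": "mail.example.com"},
    {"process": "chrome.exe", "dest": "mail.example.com"},
    {"process": "slack.exe", "dest": "slack.com"},
]
baseline = build_baseline(history)

incoming = [
    {"process": "chrome.exe", "dest": "mail.example.com"},
    # A Python process suddenly reaching a messaging API is the kind
    # of never-before-seen pair this heuristic surfaces.
    {"process": "python.exe", "dest": "api.telegram.org"},
]
alerts = flag_anomalies(baseline, incoming)
# Only the python.exe -> api.telegram.org event is flagged.
```

The point is not this specific heuristic but the posture: malware that “doesn’t behave like older strains” still has to create connections your environment has never made before, and that deviation is detectable.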
Credential theft via these fraudulent campaigns
Let’s look at impact. Stolen credentials are more than a minor inconvenience. They’re one of the top entry points for large-scale cyber intrusions across industries. Mandiant’s M-Trends 2025 report confirms that compromised credentials are the second most common initial access method used by threat actors.
That means once those logins are exposed, whether for personal accounts, cloud-based tools, or internal admin consoles, your entire operation is potentially accessible. And it often happens without immediate detection. A user logs into a spoofed interface believing it’s a trusted AI platform. Behind the scenes, their information is harvested and weaponized.
From there, attackers use the stolen credentials to escalate access privileges, steal proprietary information, or move laterally through a network. The initial breach rarely reveals their full intent. That comes later, sometimes weeks or months down the line.
For organizations, basic protections like strong password policies and two-factor authentication are necessary. But relying only on those won’t cut it anymore. You need identity threat detection tools that go beyond login attempt monitoring, integrating context-aware access models. Executives must also push for regular employee training, because the fastest adoption of AI tools is happening through individual users, not centralized teams.
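A context-aware access model can be sketched in a few lines. The idea, illustrated below with invented signals and thresholds, is that a valid password alone never decides the outcome; device familiarity and geography feed a risk score that triggers step-up authentication or a block. Real identity platforms use far more signals and tuned models.

```python
def login_risk(attempt, known_devices, usual_countries):
    """Score a login attempt on simple contextual signals:
    unfamiliar device, unusual geography, impossible travel."""
    score = 0
    if attempt["device_id"] not in known_devices:
        score += 2
    if attempt["country"] not in usual_countries:
        score += 2
    if attempt.get("impossible_travel"):
        score += 3
    return score

def access_decision(attempt, known_devices, usual_countries):
    score = login_risk(attempt, known_devices, usual_countries)
    if score >= 4:
        return "block"          # several risk signals at once
    if score >= 2:
        return "step_up_mfa"    # require a second factor
    return "allow"

# A correct password from an unfamiliar device is not enough on its own:
attempt = {"device_id": "dev-99", "country": "US"}
decision = access_decision(attempt, {"dev-1"}, {"US"})
# -> "step_up_mfa": new device adds risk, so a second factor is demanded
```

The design choice worth noting: stolen credentials harvested from a spoofed AI site fail at the step-up gate, because the attacker cannot reproduce the contextual signals.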
If your workforce is eager to experiment with AI solutions, great. Just make sure they’re doing so within verified, secured environments. Anything less is a risk decision you may never get to revisit with the same degree of control.
Social media platforms and cybersecurity teams are actively responding to these threats
Social platforms aren’t standing still. Meta and LinkedIn began removing harmful ads from their networks in 2024, even before external alerts came in from investigators like Mandiant. That’s a good start. But these attackers adapt quickly. They launch new domains daily, rotate ad content, and swap in new tactics faster than platforms can respond. The result is a daily arms race between prevention and infiltration.
What makes this situation more complex is the use of legitimate user accounts, often compromised, not fake, to distribute malware. Campaigns are kept short, sometimes live for only a few hours, surfacing and disappearing fast enough to slow platform intervention. Detection engines are improving, but they’re not fully closing the loop.
Mandiant’s Yash Gupta made the broader point: stopping campaigns like this “requires ongoing cross-industry collaboration.” It’s not a one-team fix. Platforms, security vendors, policy makers, and enterprise IT departments all have to act together. The response must be shared and constantly updated.
From a leadership perspective, relying solely on platform safeguards is inadequate. Security teams need to monitor ad networks, scan external brand references, and collaborate with their industry peers. Executives should prioritize partnerships, not just with tech providers, but also with ecosystem players who face similar threats. The risk is distributed, so the defense must be too.
Proactive security measures and user vigilance are key
Attackers are targeting rapid AI adoption. That’s the pattern. People are experimenting with AI tools across design, engineering, marketing, and logistics. But that flexibility opens doors. If employees download from unverified sources or click an ad without checking the URL, you’re exposed.
Mandiant recommends a few basic but essential defenses. First, users should avoid downloading AI tools or software from social media ads, especially if the source isn’t verified. Second, URLs should always be inspected before clicking. Domains that are misspelled or newly registered are red flags. Third, antivirus and endpoint protections need to be up-to-date across the organization. And fourth, suspicious ads should be reported to the platforms, not ignored.
Enterprise IT and CISOs should institutionalize these actions. Set clear security policies around third-party tech installations, develop internal approval layers for AI tool adoption, and automate URL vetting where possible. These aren’t costly moves, and they reduce exposure significantly.
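Automated URL vetting doesn’t have to be elaborate to catch the obvious cases. Below is a minimal sketch: exact matches against a brand allowlist pass, near-miss lookalikes (the misspelled domains the article warns about) get flagged, and everything else goes to review. The brand list, threshold, and domain extraction here are illustrative assumptions; production code should use a public-suffix-aware library and a maintained allowlist.

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

# Illustrative allowlist only; maintain your real one from verified sources.
KNOWN_BRANDS = {"canva.com", "lumalabs.ai", "klingai.com"}

def registrable_domain(url):
    """Crude last-two-labels extraction. A real implementation should
    use a public-suffix list to handle domains like example.co.uk."""
    host = urlparse(url).hostname or ""
    return ".".join(host.split(".")[-2:])

def vet_url(url, threshold=0.8):
    """Allow exact brand matches, flag near-miss lookalikes,
    and send everything unknown to human review."""
    domain = registrable_domain(url)
    if domain in KNOWN_BRANDS:
        return "allow"
    for brand in KNOWN_BRANDS:
        # High string similarity to a known brand suggests typosquatting.
        if SequenceMatcher(None, domain, brand).ratio() >= threshold:
            return "flag_lookalike"
    return "review"

vet_url("https://www.canva.com/dream-lab")   # exact brand -> "allow"
vet_url("https://canvva.com/dreamlab")       # one extra letter -> "flag_lookalike"
```

Wired into an email gateway or proxy, even this rough filter forces the misspelled and freshly minted domains these campaigns rotate through into a queue a human actually looks at.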
C-suite teams play a role beyond budgeting here. Strong policies come from the top. If speed and innovation are priorities, secure infrastructure and digital awareness must grow at the same pace. It’s not about slowing down adoption, it’s about enabling it safely. That’s the mindset that needs to be driving tech leadership teams today.
Key highlights
- Cyber attackers exploit AI hype to spread malware: A Vietnam-based group is using fake AI ads on Facebook and LinkedIn to distribute malware via spoofed sites mimicking real AI brands. Leaders should ensure marketing, security, and brand teams monitor impersonations across digital channels.
- Ad infrastructure is being weaponized at scale: The campaign used over 30 fake domains and reached millions of users through thousands of deceptive ads. Executives should invest in real-time monitoring of brand usage and ad fraud detection beyond internal campaigns.
- STARKVEIL malware enables deep data theft: Victims unknowingly download malware that steals credentials, payment data, and more, then sends it via encrypted channels to attacker-controlled servers. CISOs must adopt endpoint protection capable of detecting script-based and behavioral anomalies.
- Credential theft poses long-term enterprise risks: Stolen credentials fuel wider intrusions and privilege escalations, now the second leading entry point for cybercriminals. Decision-makers should enforce multi-factor authentication and deploy identity threat detection across the organization.
- Ongoing platform response requires cross-industry support: Although Meta and LinkedIn began removing malicious ads early, attacker tactics evolve too quickly for platform-only containment. Executive teams must prioritize threat intelligence sharing and cybersecurity partnership networks.
- User vigilance and system hygiene are critical last lines of defense: Mandiant advises strict scrutiny of AI tool ads, URL verification, and updated endpoint protections. Leaders should mandate internal security education and harden policies around third-party tool adoption.