Generative AI is used by job seekers to deceive employers
It’s not an overstatement: AI-driven hiring fraud is becoming more frequent and more convincing. Job seekers are using tools like ChatGPT to fabricate skills, rewrite their work histories, and even script their interview answers in real time. In some cases, it’s not even the applicant showing up on camera, but someone else who is better qualified. You might be hiring a fraudulent profile without knowing it, because AI-polished presentation makes the red flags harder to spot.
This isn’t a fringe case. Companies across sectors are seeing a global uptick in these tactics, especially in engineering roles. Joel Wolfe, President of HiredSupport, says he sees AI-enhanced resumes across all roles, but they’re particularly noticeable in tech, where complex-sounding jargon gets thrown around to mask weak fundamentals. When sentence structures feel unnatural and buzzwords are excessive, it’s often a sign that AI did the heavy lifting, not the candidate.
Cliff Jurkiewicz, VP of Global Strategy at Phenom, notes that 10% to 30% of interviews now involve some form of deception. These aren’t just padded resumes; some applicants are outsourcing the interviews themselves or relaying AI-generated answers in real time through a second screen. Gartner’s projection that 25% of candidates could be fake by 2028 isn’t a warning; it’s a timeline to act.
This isn’t just a problem for your HR team. It’s a strategic threat. If fraudulent candidates make it through the system, you’re introducing risk at the core: access to intellectual property, customer data, and operational trust.
Job seekers admit to exaggerating qualifications with AI
The numbers speak for themselves. A 2023 StandOut CV survey found that 73% of U.S. workers would consider using AI to improve, or manipulate, their resumes. More than 64% admitted they’ve already lied on their CV at least once. This was before GenAI tools truly scaled.
Another survey from Resume Builder backs this up: 45% of respondents exaggerated their skills using GenAI. Specifically, 32% lied on their resumes, while 30% did so during interviews. These aren’t isolated white lies; this is widespread use of accessible tech to game recruiting systems. And most hiring systems aren’t built to handle it.
As AI becomes more embedded in society, its use during job searches will continue to normalize. But C-level leaders need to understand the nuance. Not all AI use is harmful. A candidate using ChatGPT to polish grammar or format a resume isn’t the issue. The problem is when it’s used to invent capabilities they don’t have, bypassing basic screening filters designed to protect the organization.
This is where leadership matters. Define standards. Be clear about where you draw the line, because once AI tools make lying effortless and low-risk, more people will do it unless there are systems in place to catch and prevent it.
The influx of fake applicants adversely affects genuine candidates
Fake candidates distort the talent pipeline and degrade overall hiring quality. Every time a company brings on someone who faked their way through with AI, it pushes capable, honest candidates to the sidelines. Over time, this reduces confidence in the recruitment process and discourages high-quality applicants from spending their time on lengthy application cycles that feel unfair or rigged.
Operationally, the financial impact is measurable. The U.S. Department of Labor estimates that a bad hire can cost up to 30% of that employee’s first-year salary. For senior or technical roles, that quickly adds up. Some HR consulting firms put per-fraud losses as high as $850,000 once you factor in onboarding, payroll, system access, and lost productivity.
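To make that math concrete, here is a minimal back-of-envelope sketch. The 30% multiplier is the Department of Labor figure cited above; every salary and cost line in the model is an illustrative assumption, not data from any specific case.

```python
# Illustrative cost model for a single fraudulent hire.
# The 30% figure is the U.S. Department of Labor rule of thumb cited above;
# every other number is a hypothetical assumption for demonstration only.

first_year_salary = 180_000                      # assumed senior engineering salary
bad_hire_baseline = 0.30 * first_year_salary     # DOL estimate: up to 30% of first-year pay

# Indirect costs HR consultancies fold into higher per-fraud estimates (all assumed values)
onboarding_and_recruiting = 25_000
payroll_paid_before_detection = 90_000           # e.g. six months of salary before the fraud surfaces
incident_response_and_access_audit = 40_000
lost_team_productivity = 60_000

total_exposure = (bad_hire_baseline + onboarding_and_recruiting +
                  payroll_paid_before_detection +
                  incident_response_and_access_audit + lost_team_productivity)

print(f"Baseline bad-hire cost (30% rule): ${bad_hire_baseline:,.0f}")
print(f"Total modeled exposure:            ${total_exposure:,.0f}")
```

Even with these conservative assumptions, the modeled exposure lands well into six figures, which is why per-fraud estimates approaching $850,000 are plausible for senior roles with broad system access.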
Cliff Jurkiewicz from Phenom shared a real example: one hire, based in Texas, secretly outsourced her work overseas and held down four jobs simultaneously, earning between $300,000 and $500,000 annually while doing little of the work herself. The fraud wasn’t caught quickly because there are few deterrents built into hiring systems today, and even fewer enforcement paths once deception is uncovered. Executives need to consider the compounding risks: not just wasted funds, but exposure to sensitive systems and eroding team morale as others pick up the slack.
Over time, every fraudulent hire multiplies these effects. And if the company doesn’t publicly signal deterrence or refine its screening methods, the problem doesn’t go away; it scales.
Many employers remain open to the ethical use of AI
GenAI isn’t just a threat; it’s also a tool. Used properly, it saves time, improves language quality, and ensures better alignment between candidate messaging and job descriptions. Many companies recognize this. According to ZipRecruiter’s Q4 2024 Employer Report, 67% of the 800 employers surveyed said they’re fine with candidates using AI to help write resumes, cover letters, or applications, so long as the underlying information remains accurate.
Executives should distinguish between augmentation and deception. A candidate polishing language or clarifying their experience with AI doesn’t degrade the process; it improves communication. But when AI is used to fabricate job histories, inflate accomplishments, or simulate experience, that crosses a hard line.
As AI becomes a normal part of job search behavior, it’s critical for companies to embed transparency expectations into job postings and interviews. Make it clear that AI-assisted formatting is fine; falsifying qualifications is not. This clarity will support fair hiring at scale without penalizing applicants for using efficient tools.
The nuance here is important. Tech shouldn’t be the scapegoat; poor verification systems should be. If you’re using outdated interviewing techniques or manual review processes, AI-driven fraud will keep slipping through. It’s not just about intent; it’s about building systems that can distinguish honest augmentation from manipulation.
Generative AI represents both a challenge and an opportunity
We’re dealing with a double-edged tool. While generative AI makes it easier for candidates to manipulate hiring processes, it also gives companies a way to fight back with greater precision and automation. If AI can be trained to fabricate convincing content, it can also be trained to detect it.
Cliff Jurkiewicz emphasized this shift, noting that Phenom is developing an AI solution to identify deepfakes and catch fraudulent digital behavior during the hiring process. These technologies can be embedded directly into applicant tracking systems to flag voice inconsistencies, facial mismatches, or even language patterns that suggest AI output rather than natural speech.
This is where companies need to rethink their approach. Human intuition alone is no longer a reliable or scalable way to identify deception. Automated detection systems powered by AI can learn to spot what recruiters miss, especially as fraud techniques evolve quickly. This shift is already underway in forward-thinking HR tech environments.
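As a minimal illustration of what automated text screening can involve, the sketch below flags two of the weak signals recruiters describe above: excessive buzzword density and unnaturally uniform sentence structure. It is not Phenom’s product or any vendor’s API; the word list and thresholds are assumptions for demonstration only, and a production system would rely on far richer models.

```python
import re
import statistics

# Hypothetical heuristic screen for resume or cover-letter text. The word list and
# thresholds below are illustrative assumptions, not any vendor's implementation.
BUZZWORDS = {
    "synergize", "leverage", "leveraged", "paradigm", "paradigms", "cutting-edge",
    "scalable", "innovative", "holistic", "robust", "seamless", "transformative",
}

def screen_text(text: str) -> dict:
    words = re.findall(r"[a-zA-Z'-]+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]

    buzzword_rate = sum(w in BUZZWORDS for w in words) / max(len(words), 1)
    # Very uniform sentence lengths can hint at templated, machine-written prose.
    length_spread = statistics.pstdev(lengths) if len(lengths) > 1 else 0.0

    flags = []
    if buzzword_rate > 0.03:                       # assumed threshold: >3% buzzwords
        flags.append("high buzzword density")
    if len(lengths) >= 3 and length_spread < 2.0:  # assumed threshold
        flags.append("unusually uniform sentence structure")

    return {
        "buzzword_rate": round(buzzword_rate, 3),
        "sentence_length_spread": round(length_spread, 1),
        "flags": flags,
    }

if __name__ == "__main__":
    sample = (
        "Leveraged cutting-edge, scalable paradigms to synergize robust solutions. "
        "Delivered seamless, transformative outcomes across holistic initiatives. "
        "Drove innovative, scalable impact for cross-functional stakeholders."
    )
    print(screen_text(sample))
```

A real pipeline would combine signals like these with identity verification, liveness checks for video interviews, and model-based detection of AI-generated text; the point is that the screening logic can be automated and embedded in the same systems that already parse applications.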
From a leadership perspective, investing in this type of infrastructure secures far more than your next hire. It protects your internal systems, intellectual property, and teams from the cost and disruption caused by embedded bad actors. The longer organizations rely on outdated screening methods, the more ground they lose to those actively exploiting the gaps.
Organized exploitation of generative AI poses national security risks
This isn’t just about individual applicants looking for shortcuts. We’re seeing coordinated campaigns, including actions by state-backed groups, aimed at infiltrating companies through AI-enhanced, falsified identities. In January, the U.S. Department of Justice indicted five individuals connected to a fraud scheme involving North Korean IT workers. Such operatives often steal real American identities, use VPNs to spoof their locations, and hide behind polished resumes that mask malicious intent.
These schemes go beyond simple deception. Once inside an organization, these impostors can move files, exfiltrate data, trigger ransomware, or monitor systems undetected. In some cases, session logs are manipulated and malware is embedded during routine operations. That’s not theoretical; it’s happening now, particularly in critical infrastructure and tech-heavy industries.
For C-suite leaders, the takeaway is clear: hiring is now part of your threat surface. Any system that grants access, from cloud environments to workflows, assumes the person behind the login is who they claim to be. When that assumption fails, the attacker is already inside.
The DOJ’s seizure of $1.5 million and 17 linked domain names demonstrates the scope of the issue. But it’s not just a law enforcement problem. It’s a corporate security issue that starts at the interview stage. Companies need to integrate stronger ID verification, behavioral screening, and AI-driven fraud analysis directly into recruitment. Waiting for this to be solved externally won’t cut it when the attack vector is inside your own applicant queue.
Key highlights
- Fake applicant risk is scaling fast: GenAI is making it easier for unqualified or fictitious candidates to bypass hiring filters, particularly in tech roles. Leaders should accelerate investment in fraud detection systems before these risks embed deeper into operational workflows.
- Misrepresentation is becoming normalized: With up to 73% of candidates open to using AI to manipulate their resumes and nearly half admitting they exaggerated qualifications, hiring deception is no longer rare. Execs should treat verification as a strategic priority, not just an HR task.
- Fake hires carry real financial damage: Fraudulent employees can cost firms up to $850K each and displace genuine talent. Leaders must adopt stronger ID checks, behavioral vetting, and post-hire audits to reduce exposure.
- AI use is acceptable when transparent: 67% of employers support ethical GenAI use in applications, such as formatting and grammar support. Execs should create clear policies that enable productive AI use while drawing a hard line on misrepresentation.
- GenAI can also be the detection engine: Emerging tools like AI fraud detectors and deepfake spotters are already in development, offering a powerful defense. Leaders should invest in AI-driven vetting tools to protect recruitment from modern impersonation threats.
- The threat includes state-sponsored fraud: Groups using AI and stolen identities are infiltrating companies to access systems and assets, with active DOJ cases confirming this. Decision-makers must treat hiring pipelines as attack surfaces and apply security protocols accordingly.