AI-enabled cheating in virtual interviews drives a return to in-person hiring
Remote interviews became standard for obvious reasons: speed, accessibility, and scale. Those gains came with a trade-off: a higher risk of candidate fraud. The easy availability of tools like ChatGPT makes it simple for unqualified candidates to present themselves as experts. Some even go a step further, using video manipulation or deepfake tech to appear as someone else entirely.
Companies are reacting. A recent Gartner survey revealed that 72.4% of recruiting leaders now lean toward in-person interviews to combat this rise in what they call “candidate fraud.” For C-level leaders, this hits retention, productivity, and brand trust. The more AI advances, the more sophisticated the fraud becomes. And the only way to stay ahead is to adapt the process faster than the threat evolves.
Google, Cisco, and McKinsey have made in-person interviews part of their verification arsenal. There’s no nostalgia here, just performance risk mitigation. The goal is straightforward: ensure the person you meet is capable, accountable, and who they say they are.
Expect this trend to grow. It won’t replace virtual entirely, but it will become a standard part of intelligent hiring systems, especially for technical or client-facing roles. If you’re not already doing this, start now.
In-person interviews better reveal human qualities
There’s no software today, AI or not, that can fully measure judgment, empathy, or integrity. These qualities decide how well someone leads, resolves conflicts, makes decisions, or collaborates across teams. They don’t show up in a résumé or an AI-assisted Q&A. You need to interact with the real person to see them.
McKinsey said it best: face-to-face interviews help reveal human strengths you can’t replicate with any tech. This isn’t about rejecting technology; it’s about pairing it with what people still do best: connecting, understanding, and thinking on their feet.
In-person interviews create a setting where these traits surface naturally. You see how a candidate reacts under pressure, how they reason out loud, how they engage in real time. These signals aren’t available in canned, AI-aided responses.
Executives who care about building resilient, human-centered organizations should prioritize this. As roles shift toward creativity, adaptability, and emotional intelligence, these hard-to-measure skills drive performance. Use AI for what it’s good at: processing data and speeding up workflows. Trust humans to recognize other humans.
In short: machines can’t do this part for us. Nor should they.
Multi-layered verification strategies combat interview fraud
As AI-enabled deception becomes more common, companies are moving from single-layer defenses, like asking the right questions, to full-stack verification systems. You need more than one line of defense. In-person interviews help, but they’re not enough on their own. Candidates are getting smarter about gaming the process. That means businesses need to get smarter too.
Leading organizations now combine multiple methods to reduce fraud. These include technical screening, background verification, real-time ID checks, and geolocation monitoring. Some platforms track the consistency of login devices and locations. Others cross-check activity timelines and credential submissions. It’s not about monitoring everything a candidate does; it’s about verifying what matters before you make a hire.
Gartner’s Emi Chiba stressed that layering these systems provides better fraud detection without slowing hiring velocity. Identity verification tools have matured. A hiring manager can now ask a candidate to snap a selfie with their phone, then authenticate it against a government-issued ID while verifying location data. If the details don’t align, it flags the process before the interview even starts.
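To make the layering concrete, here’s a minimal sketch in Python of how a platform might combine these pre-interview signals. Every name, threshold, and check below is an illustrative assumption for this article, not any vendor’s actual API.

```python
from dataclasses import dataclass

# Illustrative threshold; real platforms tune this per role and risk level.
FACE_MATCH_THRESHOLD = 0.90

@dataclass
class VerificationSignals:
    """Signals collected before the interview (all names are hypothetical)."""
    selfie_matches_id: float   # face-match score: selfie vs. government ID photo
    id_document_valid: bool    # document authenticity check passed
    declared_location: str     # location the candidate claims to be in
    observed_location: str     # coarse location inferred at check-in
    known_device: bool         # device seen in earlier stages of this application

def verify_candidate(signals: VerificationSignals) -> list[str]:
    """Return a list of fraud flags; an empty list means all layered checks passed."""
    flags = []
    if signals.selfie_matches_id < FACE_MATCH_THRESHOLD:
        flags.append("selfie does not match government-issued ID")
    if not signals.id_document_valid:
        flags.append("ID document failed authenticity check")
    if signals.declared_location != signals.observed_location:
        flags.append("location mismatch between application and check-in")
    if not signals.known_device:
        flags.append("unrecognized device for this application")
    return flags

# Example: a mismatch surfaces before the interview is ever scheduled.
flags = verify_candidate(VerificationSignals(
    selfie_matches_id=0.97,
    id_document_valid=True,
    declared_location="Austin, TX",
    observed_location="remote VPN exit node",
    known_device=False,
))
if flags:
    print("Escalate for manual review:", flags)
```

The design point is that no single check decides the outcome: each mismatch adds a flag, and any flag routes the application to manual review before the interview rather than triggering an automatic rejection.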
This level of vetting is no longer optional for roles that carry risk: tech, finance, healthcare, and client-facing positions. It’s an operational priority. If your current system doesn’t support it, you’re putting long-term performance at risk for short-term convenience.
Selective integration of AI in interviews for realistic job simulation
Some companies are not just defending against AI; they’re embracing it where it makes sense. In specific industries, especially software engineering, AI is now central to the work itself. Hiring practices are starting to reflect that. This isn’t a step backward; it’s progress. If the role demands AI fluency, then let candidates demonstrate how well they use it during the interview.
TEKsystems and Meta are moving in this direction. They understand that engineers today build with AI systems, debug with AI tools, and optimize workflows using machine-generated insights. Assessing those skills in isolation from AI doesn’t give you useful data. It tests for a skill set that’s no longer current.
Armando Franco, Director of Technology Modernization at TEKsystems Global Services, said allowing AI in coding interviews “isn’t just a novelty, it’s an inevitability.” That’s the reality companies are now designing hiring systems around. The interview becomes a testing ground not for memory recall, but for capability within complex, tech-integrated environments.
But it’s not a free-for-all. AI needs to be turned on at the right moments, under the right conditions, with clear guidelines. It’s the responsibility of recruiters and hiring managers to set those boundaries beforehand. The goal is to filter out unqualified candidates, not those who know how to work with the machines they’ll use daily on the job.
If you’re thinking long term, it makes no sense to build hiring systems for a workplace that no longer exists.
Rising AI utilization by job seekers requires ethical boundaries
Job seekers are using AI more than ever, and the trend is accelerating. They’re using it to write resumes, generate cover letters, and prep for interviews. It’s efficient and gives them an edge, up to a point. The tipping point is when AI stops being a support tool and starts becoming a substitute for actual competence. That’s where hiring integrity breaks down.
ZipRecruiter reports show where candidate behavior is headed. The use of AI to help with resumes rose by 39% last year, AI-generated cover letters increased 41%, and interview training through AI climbed 44%. When used properly, this is a productivity gain. But when candidates rely on AI to fabricate skills or present false understanding, the process becomes flawed. You’re hiring output that belongs to a machine.
Sam DeMase, Career Expert at ZipRecruiter, made it clear: candidates who used AI in honest, strategic ways received twice as many offers while applying to only 40% more roles, roughly 1.4 times as many offers per application. The message is simple: using AI to enhance authenticity works. Using it to fake competence doesn’t.
As an executive, you should encourage transparency. Establish clear policies that define what responsible AI use looks like during the hiring process. Make sure candidates know where AI support ends and personal accountability begins. If you don’t control the standard, someone else will, and it’s usually not who you want driving your talent acquisition strategy.
The fine line between preparation and deception in AI usage
One of the biggest challenges with AI-assisted hiring is an ethical one: there’s often no obvious line between preparation and deception. A candidate might believe they’re preparing by using AI to structure answers, but when those responses misrepresent true knowledge or capability, the line’s been crossed.
That’s why definition and communication matter. Lindsey Zuloaga, Chief Data Scientist at HireVue, stated it clearly: “We define cheating as deceptive or dishonest actions taken by a candidate to misrepresent or embellish their knowledge, skills, abilities, or potential.” It’s not enough to know candidates are using AI. You need to know how, when, and why.
HireVue’s own data confirms that genAI tools like ChatGPT tend to underperform in real-time job tryouts, scoring only at an average level in AI-scored assessments. That reinforces a useful insight: qualified candidates still outperform AI-generated input. So the solution isn’t a complete ban; it’s clarity. Define what counts as ethical use. Share expectations before any assessments begin. Build systems that reward real skill, not automation fluency.
For C-suite leaders, this is a long-term governance issue. If you want accurate evaluations, you need transparent hiring systems. That means making candidates aware of what the rules are and enforcing them consistently. This isn’t about eliminating AI from the process. It’s about making sure the process leads to honest, qualified hires.
Defining boundaries in the dual role of AI in hiring
AI now plays a dual role in hiring. It helps streamline evaluations, identify qualified candidates, and automate early-stage decision-making. At the same time, it opens the door to misuse when candidates apply it without transparency. This duality means organizations can no longer afford vague policies. Executives need to define where, how, and when AI use is allowed, on both sides of the hiring process.
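As a thought-starter, here’s a hypothetical sketch in Python of what an explicit per-stage policy could look like. The stage names and rules are invented for illustration, not a recommended standard.

```python
# Hypothetical AI-use policy per hiring stage. The stages and rules below are
# invented for illustration; tailor them to your roles and risk profile.
AI_USE_POLICY = {
    "application": {
        "candidate_ai_allowed": True,   # e.g., AI-assisted resume drafting
        "disclosure_required": False,
    },
    "skills_assessment": {
        "candidate_ai_allowed": True,   # e.g., AI coding tools, where the job uses them
        "disclosure_required": True,    # candidate must name the tools they used
    },
    "live_interview": {
        "candidate_ai_allowed": False,  # real-time coaching or answer generation is out
        "disclosure_required": True,
    },
}

def is_permitted(stage: str, used_ai: bool, disclosed: bool) -> bool:
    """Check a candidate's AI use against the published rules for one stage."""
    rules = AI_USE_POLICY[stage]
    if used_ai and not rules["candidate_ai_allowed"]:
        return False
    if used_ai and rules["disclosure_required"] and not disclosed:
        return False
    return True

# Example: undisclosed AI use in a skills assessment fails the policy check.
print(is_permitted("skills_assessment", used_ai=True, disclosed=False))  # False
```

However it’s expressed, a policy this explicit leaves candidates and recruiters nothing to guess about.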
The inconsistency in expectations is a growing issue. Some teams allow AI-powered tools during application stages. Others don’t clarify their position at all. That gap creates confusion, which increases the risk of reputational harm, poor hiring decisions, and friction with qualified candidates who followed unclear rules. Clear boundaries prevent that friction and accelerate decision-making at scale.
Emi Chiba, HR tech analyst at Gartner, emphasized this point: “The best strategy for organizations is to have transparent and consistent communications about the expectations of AI use throughout the recruiting process and at the organization itself.” Without that clarity, candidates will continue to test limits, and recruiters will waste time flagging behavior that should have been preemptively addressed.
For executive teams, this isn’t just a tactical question; it’s a strategic priority. It affects talent acquisition performance, compliance, brand integrity, and operational efficiency. Precision in policy leads directly to performance in execution. Define what your organization allows in AI-assisted hiring now. If you wait, the standards will be set for you by candidates, competitors, or the tools themselves. And those outcomes usually result in reactive fixes, not proactive solutions.
Concluding thoughts
AI is no longer a future consideration; it’s already shaping how people apply, how companies assess, and who gets hired. That shift brings opportunity, but it also brings risk. Smart companies aren’t rejecting AI. They’re setting clear rules, reinforcing human oversight, and tightening verification where it counts.
Executives need to treat this as a strategic priority, not an operational detail. Hiring fraud impacts team quality, security, and long-term performance. Relying solely on virtual processes without updated safeguards will leave cracks in your hiring pipeline, and competitors will move faster to close theirs.
The companies staying ahead are doing two things well: building layered defense systems that prevent fraud without slowing hiring, and using AI ethically to speed up the right parts of the process. They’re also reintroducing human contact where it adds the most value.
In short, don’t let the tools dictate your hiring standards. Define how you’ll lead in this new landscape. That clarity is what keeps the quality high, the risk low, and the edge sharp.