AI Candidates Are Getting Better. Your Interview Process Isn't.
In 2022, convincing a hiring manager you were someone else over video required expensive equipment, technical expertise, and a fair amount of luck. In 2026, it requires a laptop, a £20 subscription, and about fifteen minutes of setup.
The tools have changed. The interview process hasn't.
The Asymmetry Nobody Talks About
Every quarter, AI companies publish benchmarks showing their models getting sharper, faster, more convincing. Deepfake video quality doubles roughly every eighteen months. Voice cloning has crossed the threshold where most listeners cannot reliably distinguish a synthetic voice from a real one in a short call.
Meanwhile, the standard video interview process at most companies looks like this: send a calendar invite, paste a Zoom link, ask the candidate to join. No verification. No confirmation the person on the call is the person on the CV. Just a link and a hope.
That asymmetry — rapidly improving fraud tools against a static interview process — is where the risk lives.
What "Getting Better" Actually Means
It is worth being specific about what AI-assisted interview fraud looks like today, because the threat model has evolved significantly.
Face swapping in real time. Tools like DeepFaceLive allow a person to overlay a different face onto their video feed with low latency. Early versions were visibly glitchy. Current versions, running on a mid-range GPU, produce output that passes a casual visual check on a compressed video call.
Voice cloning from public audio. A few minutes of publicly available audio — a conference talk, a LinkedIn video, a YouTube interview — is sufficient to train a voice model that mimics a specific person. In a hiring context, this means a fraudulent candidate can sound like their CV claims they should.
AI-assisted live responses. Screen-sharing tools and discreet earpieces allow a candidate to receive coached answers in real time. The person on the call may be genuine but is effectively being puppeteered by someone with the technical skills the role requires.
Coordinated candidate fraud. In some documented cases — particularly in technology and financial services — organised groups have created entire fake candidate identities, complete with fabricated LinkedIn profiles, GitHub repositories, and reference networks. One person handles the application and screening. Another attends the technical interview.
Each of these vectors has a different detection signature. Most interview processes are equipped to catch none of them.
The Cost Is Not Hypothetical
The assumption that interview fraud is rare or low-impact is increasingly hard to defend.
The FBI issued a public service announcement in 2022 specifically warning employers about deepfake candidates applying for remote technology roles. Since then, the number of reported incidents has grown consistently year over year. Several large technology companies have publicly acknowledged discovering employees who misrepresented their identity during the hiring process.
The cost of a bad hire is well-documented — typically estimated at one to three times annual salary when you account for recruitment, onboarding, lost productivity, and the cost of re-hiring. When the bad hire was deliberate fraud, there are additional risks: data access, IP exposure, and in regulated industries, potential compliance breaches.
For roles with security clearance requirements, remote access to sensitive systems, or financial controls, the risk profile is materially higher still.
Why The Interview Is The Vulnerable Moment
Background checks happen after an offer. Reference checks happen after a shortlist. But neither catches the person who was never who they claimed to be in the first place.
The video interview is the moment when identity should be confirmed — but it is also the moment where almost no verification takes place. A candidate with a convincing fake identity can sail through an ATS, pass a phone screen, perform well in a structured interview, and clear standard background checks, because standard background checks verify the paper identity, not whether the person who showed up matches the person the paper describes.
This is the gap. It is structural, it is well-understood by fraudsters, and it is largely unaddressed by the tools most hiring teams use.
What Forward-Thinking Teams Are Doing
The companies taking this seriously are adding a verification layer at the interview stage specifically — not before, not after.
The key elements of an effective approach:
Liveness detection before the call begins. Confirming the candidate is a live human — not a pre-recorded video or a photograph — before they join the interview. AWS Rekognition and similar tools can perform this check in under sixty seconds.
Device and environment analysis. Virtual cameras, VPNs, and headless browsers are all signals worth examining. A candidate connecting through a data centre IP with a virtual camera active is a different risk profile from one connecting from a residential address on a standard browser.
Identity cross-referencing. Checking that the face on the call is consistent with the LinkedIn profile attached to the application. Not a forensic match — a sanity check. Enough to catch the obvious cases.
An audit trail. Knowing that verification took place, when, and with what result — so that if a problem emerges later, you have a record.
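To make the device-and-environment idea concrete, here is a minimal sketch of signal-based triage. Everything here is hypothetical: the signal names, weights, and thresholds are illustrative assumptions, not Confiri's actual model — real systems combine many more indicators and tune them against observed fraud.

```python
from dataclasses import dataclass

@dataclass
class ConnectionSignals:
    """Hypothetical pre-interview signals (illustrative only)."""
    virtual_camera: bool    # e.g. an OBS/ManyCam device exposed as the webcam
    datacentre_ip: bool     # IP belongs to a hosting provider, not a consumer ISP
    vpn_detected: bool      # VPN exit node rather than a residential address
    headless_browser: bool  # automation fingerprints in the browser environment

def risk_score(s: ConnectionSignals) -> int:
    """Toy additive score: higher means more scrutiny is warranted.

    Weights are assumptions chosen for the example, not calibrated values.
    """
    score = 0
    score += 3 if s.virtual_camera else 0
    score += 2 if s.datacentre_ip else 0
    score += 1 if s.vpn_detected else 0
    score += 3 if s.headless_browser else 0
    return score

def triage(s: ConnectionSignals) -> str:
    """Map the score to an action; thresholds are illustrative."""
    score = risk_score(s)
    if score >= 5:
        return "flag-for-review"
    if score >= 2:
        return "secondary-check"
    return "proceed"
```

The point of the example is the shape of the approach, not the numbers: no single signal is damning (plenty of legitimate candidates use VPNs), but a virtual camera on a data-centre IP crosses a line that a residential connection on a standard browser does not.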
None of this is invasive. It takes less time than the average pre-interview small talk. And it closes the gap between what fraud tools can do and what your interview process was built to handle.
The Trajectory Only Goes One Way
AI capability curves do not plateau. The tools available to fraudulent candidates in 2027 will be more capable than the ones available today, just as today's tools are more capable than those from three years ago.
The interview process that worked in 2019 was not designed for this environment. The question is not whether to update it — it is how quickly.
Confiri verifies candidate identity before they join your video interview. Liveness detection, device analysis, and identity cross-referencing — in under 60 seconds, without replacing your existing video platform. Request access →
Sources
- FBI Public Service Announcement IC3-22-060: Deepfakes and Stolen PII Used to Apply for Remote Work Positions (June 2022)
- SHRM: The Real Costs of Recruitment (2023)
- Gartner: Predicts 2027 — Cybersecurity (2024 report, synthetic identity projection)