The $500,000 Interview: What Deepfake Fraud Really Costs
In early 2024, an employee at Arup — one of the world's largest engineering consultancies — joined a video call with what appeared to be several senior colleagues. The faces were familiar. The voices matched. The conversation was routine.
It was entirely fabricated. Every participant on the call except the employee was a deepfake. The result: $25 million transferred to fraudulent accounts.
Arup's case was unusually large, but it wasn't unusual in kind. Deepfake-enabled fraud is growing at a rate that should concern every organisation that conducts business over video — including every company that interviews candidates remotely.
The numbers behind the threat
Deloitte's Centre for Financial Services projects that AI-enabled fraud losses will reach $40 billion by 2027, growing at a compound annual rate of 32% from $12.3 billion in 2023. The growth isn't linear, it compounds: each generation of synthetic media tools makes fraud cheaper to execute and harder to detect.
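The projection's arithmetic can be sanity-checked with a quick compound-growth calculation. This is a sketch, not Deloitte's methodology; the dollar figures are the estimates quoted above.

```python
# Sanity-check the projection: $12.3B (2023) growing to $40B (2027).
base_2023 = 12.3    # billions USD, Deloitte 2023 estimate
target_2027 = 40.0  # billions USD, Deloitte 2027 projection
years = 2027 - 2023

# Implied compound annual growth rate over the four-year span.
implied_cagr = (target_2027 / base_2023) ** (1 / years) - 1
print(f"Implied CAGR: {implied_cagr:.1%}")  # ~34%, close to the cited 32%

# Forward projection at the cited 32% rate, for comparison.
projected = base_2023 * 1.32 ** years
print(f"$12.3B at 32% for {years} years: ${projected:.1f}B")  # ~$37.3B
```

The two numbers bracket each other, which is what you'd expect from rounded public figures: the cited 32% rate lands a little under $40 billion, and hitting $40 billion exactly implies a rate closer to 34%.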
Research from DeepStrike found that the average business loss per deepfake incident in 2024 was approximately $500,000. That figure includes direct financial losses, investigation costs, remediation, and reputational damage. For a hiring fraud incident specifically, the costs compound: salary paid to a fraudulent employee, security remediation after credentials are revoked, potential regulatory exposure if sensitive data was accessed, and the opportunity cost of the position remaining effectively unfilled.
The FTC's data on job-related scams shows a trajectory that mirrors the broader fraud explosion. Reported losses from employment-related fraud rose from $90 million in 2020 to $501 million in 2024. While this captures the consumer side — people defrauded by fake job postings — the employer side is harder to quantify because many incidents go unreported or undetected entirely.
The detection gap
If humans could reliably spot deepfakes, the threat would be manageable. They cannot.
iProov's 2025 study presented participants with real and synthetic images, videos, and audio. The average human accuracy rate was 24.5% — substantially worse than random chance. Only 0.1% of participants could reliably identify fakes across all modalities.
This isn't a training problem. The study included participants who had been briefed on deepfake indicators. Awareness didn't meaningfully improve detection rates. The synthesis technology has simply surpassed human perceptual capability.
A Gartner survey of 3,000 job seekers found that 6% admitted to committing interview fraud — and that's the self-reported figure. The actual rate is almost certainly higher.
The protocol gap
The detection gap would be less concerning if companies had systematic defences in place. Most don't.
Research compiled by Keepnet and reported by Business.com found that 87% of companies have no anti-deepfake protocols whatsoever. No identity verification step in the interview process. No device analysis. No liveness detection. The assumption remains that if someone shows up on a video call and answers questions competently, they are who their CV says they are.
Only 13% of organisations have any form of deepfake detection or prevention measure. For most companies, the entire defence against synthetic candidate fraud is the interviewer's ability to notice something "off" — a capability that, as the iProov data shows, is fundamentally unreliable.
The economics of prevention vs. reaction
The cost asymmetry between prevention and reaction is stark.
A verification check — liveness detection, device analysis, geolocation — costs a fraction of a single day's salary. It takes under 30 seconds, requires no additional software or government ID from the candidate, and stores no biometric data.
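Structurally, a check like this is just a handful of independent signals combined into a pass/review/fail decision. The sketch below is purely illustrative — every name, field, and threshold in it is hypothetical, not a description of any specific product:

```python
from dataclasses import dataclass

@dataclass
class VerificationSignals:
    """Hypothetical signals a pre-interview check might collect."""
    liveness_passed: bool     # camera feed shows a live person, not a replay or face-swap
    device_risk_score: float  # 0.0 (clean) to 1.0 (virtual camera, emulator, etc.)
    geo_matches_claim: bool   # network geolocation consistent with the stated location

def screen_candidate(s: VerificationSignals, device_risk_threshold: float = 0.7) -> str:
    """Combine signals into a decision. Thresholds are illustrative only."""
    if not s.liveness_passed:
        return "fail"    # hard stop: synthetic or replayed video
    if s.device_risk_score >= device_risk_threshold or not s.geo_matches_claim:
        return "review"  # soft signals route to a human, not an auto-reject
    return "pass"

print(screen_candidate(VerificationSignals(True, 0.1, True)))   # pass
print(screen_candidate(VerificationSignals(True, 0.9, True)))   # review
print(screen_candidate(VerificationSignals(False, 0.0, True)))  # fail
```

The design point worth noting: only the liveness failure is a hard stop. Ambiguous signals go to human review, which keeps the false-rejection rate low enough that legitimate candidates aren't turned away by a noisy device fingerprint.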
A single fraudulent hire that reaches onboarding can cost tens or hundreds of thousands in direct losses, plus unquantifiable reputational damage. For regulated industries, add potential compliance penalties. For agencies, add the lost client relationship.
The calculus isn't close. Spending £1-2 per candidate on identity verification to avoid a potential six-figure loss isn't a security investment that requires justification. It's a baseline operational control — the same category as verifying a bank transfer before releasing funds.
What changes from here
Three converging forces will make pre-interview identity verification standard within the next 24 months.
First, the technology to create convincing deepfakes is becoming commoditised. Real-time face-swapping tools that once required significant technical skill now run as browser extensions. The barrier to entry for fraud is dropping to near zero.
Second, high-profile incidents — Arup, the North Korean IT worker schemes documented by the FBI, the Pindrop case study — are creating board-level awareness. Security teams are being asked questions they don't yet have answers for.
Third, the regulatory landscape is tightening. The EU AI Act, GDPR requirements around data integrity, and sector-specific regulations in financial services and government contracting are all moving toward requiring identity assurance as a documented process.
Companies that adopt verification now build operational muscle and detection data before it becomes mandatory. Companies that wait will be implementing under pressure, without the benefit of that baseline.
The cost of inaction isn't theoretical. It's $500,000, or $25 million, or a compliance violation — and it arrives with a deepfake smile on a video call you thought was routine.
Sources: Deloitte Centre for Financial Services (AI fraud projections), DeepStrike (average loss per incident), Arup deepfake incident (2024), FTC employment fraud data, iProov Deepfake Detection Study 2025, Gartner job seeker survey, Keepnet/Business.com (protocol gap data).