Why 1 in 4 Job Candidates Could Be Fake by 2028
The idea that a quarter of job candidates could be fraudulent sounds alarmist. But that's exactly what Gartner projects: by 2028, one in four candidate profiles worldwide will be fake.
It's not a prediction plucked from thin air. The data trail leading to that number is already visible.
The scale of the problem today
In 2024, voice authentication company Pindrop opened a single senior engineering role. Of the applicants they screened, 12.5% used fake identities — fabricated names, synthetic profile photos, and AI-generated résumés. One candidate who made it to the final interview round was later identified as using a real-time deepfake overlay during the video call.
That's one role at one company. Scale that across the thousands of positions open at any given moment in the tech industry alone, and the numbers become difficult to ignore.
A survey of hiring professionals by Checkr found that 23% of companies reported encountering identity fraud among new hires. Not applicants who exaggerated their experience — people who were fundamentally not who they claimed to be.
The North Korea connection
This isn't just opportunistic fraud. The FBI and Department of Justice have documented cases of over 300 US companies unknowingly hiring North Korean IT operatives. These individuals used stolen American identities, AI-generated profile photos, and remote desktop setups to pass interviews and gain employment at technology firms, defence contractors, and Fortune 500 companies.
The scheme was sophisticated: laptop farms based in the US received company-issued hardware, while the actual work was performed remotely from overseas. Salaries were funnelled back to fund state operations. The FBI estimates these schemes have generated hundreds of millions of dollars.
These weren't edge cases. They were systematic operations that exploited the fundamental assumption underlying every video interview: that the person on screen is the person who applied.
Why it's getting worse, not better
Deepfake fraud grew 3,000% in 2023, according to identity verification platform Sumsub. The tools that make this possible — real-time face-swapping software, voice cloning, AI-generated headshots — are becoming cheaper, easier to access, and harder to detect.
A 2025 survey by Greenhouse found that 91% of US hiring managers had encountered or suspected AI-generated interview answers. Meanwhile, an iProov study found that humans correctly identify deepfakes only 24.5% of the time — worse than a coin flip. Just 0.1% of participants could reliably spot fakes across all formats.
The detection gap is widening. The generation technology improves faster than human perception adapts.
The cost compounds
When a fraudulent hire slips through, the costs extend far beyond the salary paid. There's the intellectual property exposure, the security credentials granted, the client relationships damaged, and the institutional trust eroded. For regulated industries — financial services, healthcare, government contracting — a single fraudulent hire can trigger compliance violations with seven-figure consequences.
And for recruitment agencies, the stakes are existential. An agency that submits a deepfake candidate to a client doesn't just lose that placement. They lose the client relationship entirely.
What companies can do now
The uncomfortable truth is that traditional hiring processes — résumé screening, phone calls, video interviews — were designed for a world where identity was assumed. That assumption no longer holds.
Companies hiring for roles with access to sensitive data, financial systems, or client information need to verify identity as a standard step in the process. Not as a background check after an offer is made, but before the interview begins.
The technology exists to do this without creating friction for legitimate candidates. Liveness detection, device analysis, and geolocation checks can be completed in under 30 seconds. No government ID required. No biometric data stored.
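To make that concrete, here is a minimal sketch of what a pre-interview verification gate could look like, assuming three independent signals (liveness, device, geolocation) that each return a pass/fail plus a confidence score. The function names, fields, and threshold are illustrative assumptions, not any particular vendor's API.

```python
from dataclasses import dataclass

@dataclass
class CheckResult:
    passed: bool
    score: float  # 0.0 (certain fraud) .. 1.0 (certain genuine)

def liveness_check(selfie_frames: list[bytes]) -> CheckResult:
    """Placeholder: a real check would look for presentation attacks
    such as replayed video or real-time face swaps in the captured frames."""
    return CheckResult(passed=True, score=0.97)

def device_check(user_agent: str, ip_address: str) -> CheckResult:
    """Placeholder: a real check would flag virtual cameras, emulators,
    and remote-desktop sessions like those used in laptop-farm schemes."""
    return CheckResult(passed=True, score=0.91)

def geolocation_check(ip_address: str, claimed_country: str) -> CheckResult:
    """Placeholder: compares network location against the candidate's
    stated location, with no government ID or biometric data retained."""
    return CheckResult(passed=True, score=0.88)

def verify_candidate(selfie_frames, user_agent, ip_address, claimed_country,
                     threshold: float = 0.85) -> bool:
    """Invite to interview only if every signal passes and the weakest
    score clears the minimum threshold; otherwise escalate for review."""
    results = [
        liveness_check(selfie_frames),
        device_check(user_agent, ip_address),
        geolocation_check(ip_address, claimed_country),
    ]
    return all(r.passed for r in results) and min(r.score for r in results) >= threshold

if __name__ == "__main__":
    ok = verify_candidate([b"frame"], "Mozilla/5.0", "203.0.113.7", "US")
    print("Invite to interview" if ok else "Escalate for manual review")
```

The point of the sketch is the placement, not the implementation: the gate runs before any interview is scheduled, so a failed check costs minutes rather than a compromised hire.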
The question isn't whether this becomes standard practice. It's whether your company adopts it before or after the first incident.
Sources: Gartner (2024 prediction), Fortune/Pindrop (2024), Checkr hiring survey, FBI/DOJ North Korea investigation, Sumsub Identity Fraud Report 2023, Greenhouse Hiring Manager Survey 2025, iProov Deepfake Detection Study 2025.