AI has transformed hiring, but it has also created rapidly growing risks: fake candidates, AI-enhanced resumes, and deepfake interviews. Hiring managers now face a new reality: determining who is truly qualified is no longer as simple as reviewing a resume or conducting a standard interview.

The data is sobering:

  • Nearly 65% of job candidates admit to using AI tools in the application process (CNBC, 2025).
  • In 2025, Amazon blocked approximately 1,800 suspected fraudulent job applicants tied to deepfake identities and scam networks.
  • Gartner predicts that by 2028, 1 in 4 candidate profiles worldwide could be fake. Gartner also reports that 6% of candidates admit to interview fraud, including impersonation.
  • The Citi Institute projects that up to 8 million deepfakes will be shared online by the end of 2025.
  • Deloitte estimates generative AI–enabled fraud losses could reach $11.5 billion by 2027, calling it the “fraud economy.”

With these risks accelerating, hiring teams must adopt stronger safeguards to protect their organizations.

4 Important Tips to Reduce Risk
Here are four proven ways to reduce the risk of AI and deepfake candidate fraud:


1. Prioritize live interviews and skills testing.

Whenever possible, conduct in-person or live video interviews along with real-time skills assessments. This helps validate identity and reduces the chance of candidates “gaming” the system. During assessments, ask candidates to set phones and other mobile devices aside to ensure fairness.

2. Implement biometric identity verification tools.

Work with a reliable background screening partner to use ID verification and biometric technology—especially for remote roles. These tools compare a candidate’s government-issued ID with live video and facial biometrics to help prevent deepfakes and identity fraud.

Note: Always consult legal counsel to ensure compliance with state and federal requirements.
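For readers who want a sense of what these verification tools do under the hood, here is a minimal conceptual sketch of the core face-matching step, written in Python with the open-source face_recognition library (our choice purely for illustration; the file names and match threshold below are hypothetical, and commercial vendors layer liveness detection, document authentication, and audit trails on top of this step):

    # Conceptual sketch: compare the photo on a government-issued ID with a still
    # frame captured from the live video interview.
    # Assumes the open-source face_recognition library (pip install face_recognition).
    import face_recognition

    ID_PHOTO_PATH = "candidate_id_photo.jpg"       # hypothetical scan of the ID document
    LIVE_FRAME_PATH = "live_interview_frame.jpg"   # hypothetical frame from the live video

    # Load both images and compute 128-dimensional face encodings.
    id_image = face_recognition.load_image_file(ID_PHOTO_PATH)
    live_image = face_recognition.load_image_file(LIVE_FRAME_PATH)
    id_faces = face_recognition.face_encodings(id_image)
    live_faces = face_recognition.face_encodings(live_image)

    if not id_faces or not live_faces:
        # No face found in one of the images: never auto-reject, route to a human.
        print("No face detected; flag for manual review.")
    else:
        # Lower distance = more similar. 0.6 is the library's default tolerance,
        # used here only as an illustrative cutoff, not a production setting.
        distance = face_recognition.face_distance([id_faces[0]], live_faces[0])[0]
        if distance <= 0.6:
            print(f"Likely match (distance {distance:.2f}).")
        else:
            print(f"Possible mismatch (distance {distance:.2f}); escalate to manual review.")

Hiring teams should not build this themselves; the point is simply that the underlying check is a comparison between the ID photo and the candidate’s live face, which is exactly why pairing it with a live video interview (Tip 1) is so effective.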

3. Create clear AI usage policies.

Rather than banning AI outright, define what is acceptable and what is not. Develop disclosure guidelines and ethical standards so candidates understand how AI may (and may not) be used during the hiring and screening process.

4. Train hiring teams to spot red flags.

Teach staff to watch for inconsistencies in resumes or answers, long pauses, unnatural eye movement, video glitches, or signs that a candidate is reading from another screen. When interviewing remotely, always use video to better detect potential misrepresentation.


Deepfake and AI-driven hiring fraud will continue to grow as technology advances. By applying the tools and practices above, hiring managers can significantly reduce risk while improving confidence that they are hiring authentic, qualified talent.

Posted by: Rudy Troisi, L.P.I., Founder and CEO, Reliable Background Screening, and Dr. Alan Lasky, SVP, Client Success & Partnerships.