We almost hired an AI candidate. Here’s what saved us

Fatima Zaidi shares the red flags she spotted while dealing with an AI-generated applicant.

Fatima Zaidi is the founder and CEO at Quill Inc., an award-winning production agency specializing in corporate audio, and CoHost, a podcast growth and analytics tool.



We thought we’d found our next hire.

After two months and seven interviews, our team at CoHost had narrowed our search down to one candidate. He was sharp, personable, and technically impressive. Conversations flowed naturally. Our team genuinely liked him. We were days away from making an offer.

Then something started to feel off.

It wasn’t one thing; it was a slow accumulation of details that individually seemed explainable, but together started painting a picture we couldn’t ignore. His technical answers were almost too polished: not just confident, but frictionless in a way that felt rehearsed. When we moved into reference checks, the first contacts responded instantly by email. That’s unusual. Strong candidates coordinate their references in advance, but there’s always a little back-and-forth. These arrived as if they’d been waiting.

The references were all Gmail addresses. Each one explained this away by noting they were currently between jobs. Plausible, so we kept going.

But when we tried to independently verify references by searching for their LinkedIn profiles and cross-referencing professional histories, almost nothing came up. The profiles that did exist were thin, newly created, or showed almost no activity. One reference never responded. When we flagged it to the prospective hire, that contact was quickly swapped out.

Then came the moment that stopped us cold. During a video call with a reference, the person mirrored our candidate’s speech patterns, his mannerisms—the way he moved was almost identical. What appeared to be a voice and video filter was subtle enough that it could be dismissed in isolation. But by that point, we were seeing a pattern. 

When we pressed for verifiable references, corporate email addresses, and HR contacts from previous employers, the candidate refused. Instead, he pushed to accelerate the hiring process. 

Instead, we sent the rejection email.

Within half an hour of sending that rejection email, the candidate’s LinkedIn profile was gone. His references vanished from the internet. Phone numbers were disconnected. Every digital trace was erased almost simultaneously. The speed of it was its own confirmation: we had spent two months interviewing a candidate who wasn’t real.

The persona, the resume, the work history, the references: all of it appears to have been fabricated using AI. After doing some digging, we learned this has a name, AI-assisted candidate fraud, a new scam in which a deepfake, AI-powered candidate impersonates a real person in order to get a job at your organization.

Our team is very experienced. We run background checks through third-party companies, and this was going to be the next step after reference checks. Those checks follow strict security protocols and ensure that verifiable references are provided and contacted.

This isn’t a story of a green team with unsophisticated hiring practices. This is a story about how good these scams have become, and how completely unprepared many hiring teams are to catch them.

AI-assisted candidate fraud is not a future problem. It is happening right now, to founders and hiring managers across the economy. 

“The tools to fabricate a convincing video persona, generate technically fluent interview answers in real time, and manufacture a digital paper trail have never been more accessible. And startups with lean HR resources, fast-moving hiring timelines, and a bias toward trusting their gut are the most exposed,” Quill CTO Abhinav Mathur told me, adding that because we’re an AI-first company, we’re harder to fool. “Getting ahead of AI-assisted fraud has been a top priority at Quill and CoHost.” 

So, here is what we’ve learned, and what we recommend all companies build into their processes.

Red flags to watch

Perfect is suspicious. A candidate who never hesitates, never says, “I’d have to think about that,” and answers every technical edge case with precision is worth a second glance. Competence has texture. Fluency without texture is a warning sign.

The opposite pattern should draw attention, too. A candidate who always pauses before answering and fills that pause with the same stock phrases, like “that’s a really good question” or “interesting, let me think about that,” is buying time. Genuine hesitation is irregular. Scripted hesitation has a pattern.

References that arrive too fast, from personal email addresses, with no independent footprint, are also part of a pattern. Hiring teams can run a check on the age of the email addresses (tools like IPQualityScore make this easy) and look for LinkedIn profiles with real history and, ideally, mutual connections. When verifying a candidate’s background, go directly to the companies listed on their resume to confirm employment history, and don’t rely solely on the references the candidate provides. Reach out independently to HR departments or known contacts at those organizations to validate job titles, tenure, and responsibilities. The references a candidate hands you are curated; the ones you find yourself are not.

When you push for verification and a candidate deflects, that’s your answer. Legitimate candidates want to be verified. They prepare their references. They don’t pressure you to skip steps.

What we’ve changed

We now run reference checks earlier in the process, rather than at the end. We require HR contacts with corporate email addresses from a candidate’s last two to three positions, effectively piggybacking on background checks other companies have already done. We now also run our background checks through Certn, a platform that provides fast, accessible online background screening and criminal record checks for employment.

We prepare our interview questions in advance and run them through AI chatbots ourselves first. We want to know what a generated answer looks like. In our case, the interviewee’s responses matched the structure, framing, and phrasing the chatbot produced.

And for final rounds, we bring candidates in for an in-person interview, even though we’re a fully remote company. Not every startup can afford that, so we also recommend incorporating small in-video verification rituals to rule out AI face-swapping, such as asking candidates to hold three fingers in front of their face. We’ve also added language such as “bots don’t apply” to our job postings, quietly signaling to automated applicants, and the people deploying them, that we’re paying attention.

The fraudsters are getting better. That means we have to get better first.

At Quill, staying ahead of how AI is reshaping communication and the risks that come with it is core to what we do. The talent landscape has changed. Hiring processes need to change with it.

The opinions and analysis expressed in the above article are those of its author, and do not necessarily reflect the position of BetaKit or its editorial staff. It has been edited for clarity, length, and style.

Feature image courtesy Unsplash. Photo by Compagnons.
