Why you shouldn't use AI for recruitment

Published updated

recruitment

Using AI to assist in the selection process is the hot trend of recruitment in 2024. Since 2023, all major ATSs have been adding AI to their workflows, from writing job ads to filtering and assessing candidates.

It’s easy to believe AI does better than humans and it’s tempting to think AI understands technology better than an experienced tech recruiter.

However, it’s a narrative not supported by evidence. Research shows AI performs significantly worse than humans at assessing other humans.

If you care about building your tech recruitment agency’s reputation, want to become known for providing good candidates and equal opportunities to everyone, you should avoid using AI to screen candidates.

I know, you have hundreds of CVs to assess and AI is fast. Fast doesn’t mean good, and AI is no different.

Bias

You may have heard AI is biased. People are biased too, and many believe AI is less biased than humans. It makes sense after all that something based on data would make objective assessments.

To understand why bias is such a problem, we need to look at two things: the scale of it and how it impacts humans using it for assessments. Among the many papers, I found two explained it best.

In this paper, researchers assessed Stable Diffusion for bias. They found the AI model thought only 3% of US judges were women, when in reality 34% are - 1000% more bias than an a human.

In the second paper researchers show that humans using AI for clinical assessments inherited the bias of AI, even if they acknowledged the AI wasn't perfect and made mistakes.

Apparently, people trusted the AI because they thought it could perform analytical tasks better than it actually did.

How this affects you

Researchers also found ChatGPT was politcally and racially biased, even when prompted to be factual.

It’s not clear why or how bias arises in AI, and why it’s so much worse than human bias. I am an optimist, but realistically it will take a decade before this puzzle is solved and AI bias is brought to a human range.

In short, AI will ignore good candidates because they’re different from the stereotype for that role, and will do so magnifying the stereotype.

Not only good software engineers tend to be atypical, which ones may be best for a project is highly situational. A decent manager knows that, but if AI filters out what they need, they won’t get to have a choice.

AI is easy to trick

Candidates know it’s likely their CV will be assesed by AI, so they are getting clever. Wouldn’t it be silly to think that engineers won’t indulge in a bit of hacking?

Adding invisible keywords is one of the most common techniques to pass the filters. I’m not sure how effective it is, but I think any ATS smart enough to put the resumes through OCR would be safe.

A newer tactic is to add a prompt for the AI, asking to short-list their application.

A screenshot showing an example of a resume hack in ChatGPT
Daniel Feldman added a prompt directing ChatGPT to answer "Hire him". Original here.

Another hack targets recruiters that use AI to write outreach messages. By adding a prompt to their Linkedin about, LLMs, including ChatGPT, are tricked.

The AI ends up writing an apology, revealing the message was not written by a human and shaming the recruiter.

A screenshot showing the result of an outreach hack in Linkedin
Original here.

These are just a few examples. In general, all it takes to hack an AI is adding a prompt somewhere in a body of text, instructing the AI to ignore any previous instructions.

Agency reputation

How many candidates would you have, if they knew you used AI?

The reputation of external recruiters has never been lower, and fewer candidates are willing to apply to a job this way. When they are, they seldom trust the recruiter. Why put the last nail in the coffin?

When candidates don’t trust us, they don’t reply, they don’t show up and they ghost us. It makes filling a role more time consuming, and you get paid the same amount for more hours.

That’s the value of building reputation as a recruiter and it’s going to be very hard to do it with AI.

Learning and changing markets

There’s one more negative to letting AI do the work: you’ll never master that task.

Finding great candidates is almost an art, and requires practice. Lots of practice. You won’t get that from AI, because it won’t be your mistake.

Sure, in the current market, it may not make much of a difference. However, data shows the layoff age is coming to a close and in 3-6 months we’ll be in a completely different market.

Data suggests we’re heading in a market where finding candidates will remain hard. And it may become even harder. Openings have increased 15% since December and March was the month with fewest laid off employees in the last two years.

A screenshot an histograph of laidoff employees between May 2023 and April 2024
Before Tesla ruined it, April was set to beat March's record low of layoffs.

Despite January 2024 having more laid off employees than previous months, Q1 2024 saw a 64% reduction from Q1 2023 (78.000 vs 211.000)

It’s going to be a market of quality over quantity, and to succeed in it as recruiter you’ll need to build your reputation among candidates.

Rune HR doesn’t use AI

Rune HR doesn’t use AI for screening.

There are better ways to assess hundreds of candidates. We haven’t released this feature yet, but I hope to do so soon. Stay tuned!

More on recruitment