The Downsides of Using AI Recruitment Tools
Explore the potential pitfalls of leveraging AI in recruitment. Understand how it may inadvertently lead to biased hiring, lack of human touch, and privacy concerns in the hiring process.
Artificial intelligence is used in recruitment automation to relieve human recruiters of routine, time-consuming screening work, freeing them to concentrate on building relationships with potential hires.
AI in recruitment allows humans to quickly and efficiently screen resumes and hire people in less time and at a lower cost.
Just a few examples:
- Microsoft uses augmented writing software powered by artificial intelligence to create attractive job ads for different geographic areas. AI makes sure the correct professional and regional vernaculars are used wherever necessary.
- A provider of AI resume screening software called Pymetrics has job applicants play a specially designed video game that measures risk aversion and other personality traits based on the moves they make in the game. Another application called HireVue studies body language cues on self-shot videos uploaded by candidates.
- AI can screen out applicants who are grossly unqualified for positions a company may be interested in filling.
- AI is used to reduce the possibility of bias, conscious or otherwise, that may be applied by human recruiters to the choices they make. When AI is programmed to select candidates based on their qualifications, it doesn’t discriminate on the basis of race or gender.
As useful as AI may seem in the recruitment process, however, there are downsides to trusting the software. If you’re considering bringing AI recruitment software on board at your company or signing up with an AI recruitment services provider, it’s important to consider the following areas of potential complication.
How AI is Deciding Who Gets Hired
This video was published by Bloomberg.
“The job hunt has changed as artificial intelligence scores resumes, runs interviews and decides who gets access to opportunity. Lawmakers and activists are now pushing back on the threat of computerized bias while others work to outsmart the machine.”
While AI avoids some bias problems, it can have others
A computer running artificial intelligence doesn’t make biased choices based on emotion. It can, however, make biased choices based on the data it is given: AI depends entirely on human operators to supply the facts on which it bases its decisions.
Amazon at one point used hiring data from the past to create a new AI recruitment system. Unfortunately, since human hiring in the past had included bias against female applicants, the new AI system reflected this bias and therefore avoided choosing female applicants. This bias wasn’t deliberately programmed into it; the system simply taught itself that female hires were probably risky based on the data it was given.
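A minimal sketch can make this failure mode concrete. The data and feature names below are entirely hypothetical, and the "model" is just a frequency count rather than a real learning algorithm, but it shows how a screener trained on historically biased outcomes reproduces that bias without ever being told to discriminate:

```python
# Hypothetical illustration: a screener "trained" on biased hiring
# records carries the old bias forward through a proxy feature.

# Historical records as (attended_womens_college, was_hired) pairs.
# Past human hiring disfavored candidates with the flag set.
history = [
    (False, True), (False, True), (False, True), (False, False),
    (True, False), (True, False), (True, True), (True, False),
]

def hire_rate(flag):
    """Fraction of past candidates with this flag who were hired."""
    outcomes = [hired for f, hired in history if f == flag]
    return sum(outcomes) / len(outcomes)

def screen(candidate_flag, threshold=0.5):
    """Naive 'model': pass a candidate only if similar past
    candidates were hired more than half the time."""
    return hire_rate(candidate_flag) >= threshold

# The model never sees gender directly, yet the proxy feature
# encodes the historical bias.
print(screen(False))  # True  -- passes the screen
print(screen(True))   # False -- screened out
```

Nothing here was "deliberately programmed" to reject anyone; the bias lives entirely in the historical labels, which is exactly why it is hard to spot from the outside.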
AI screeners can have a hard time with unoptimized resumes
Resumes with unconventional formats can confuse AI software. AI can also fail to recognize that a candidate who doesn’t check every required box may still be a good hire if they are particularly strong in other important areas.
In some cases, AI has been seen rejecting candidates out of hand if they neglect to list a specific skill. For example, a candidate for a cleaning position who fails to list their floor buffing skills would be rejected by AI that was programmed to look for the skill. A human recruiter would be able to see through such an omission.
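The floor-buffing example above boils down to rigid keyword matching. This hypothetical sketch (the skill names and function are made up for illustration) shows how such a screen rejects a clearly qualified candidate over one missing phrase:

```python
# Hypothetical sketch of rigid keyword screening: a candidate who
# omits one required phrase is rejected outright, even when their
# other experience plainly implies the skill.
REQUIRED_SKILLS = {"mopping", "waxing", "floor buffing"}

def rigid_screen(resume_skills):
    """Pass only if every required keyword appears verbatim."""
    return REQUIRED_SKILLS.issubset(resume_skills)

candidate = {"mopping", "waxing", "10 years commercial cleaning"}
print(rigid_screen(candidate))  # False: "floor buffing" is never
                                # listed, so the resume is rejected
```

A human recruiter reading "10 years commercial cleaning" would infer the missing skill immediately; a verbatim set test cannot.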
AI can be inept at evaluating soft skills
AI can be efficient when it comes to evaluating quantifiable data: the qualifications of a candidate, their years of experience, and so on. When it comes to evaluating personality, a good cultural fit, or soft skills, however, AI can only perform when you provide it with data from personality tests, which such screening systems often don’t include. When personality analysis is offered, it can be unreliable.
AI job posting systems need to be constantly updated
AI tools such as Textio do a good job of writing job advertisements, but they need to be updated with the latest language used in a given industry, or they’re likely to sound outdated.
For instance, not long ago, tech companies posted job ads calling out to tech gurus and tech ninjas. These terms are no longer in current use, however. Before generating a job posting with AI, you need to make sure the software has been updated so it no longer uses outdated vernacular.
AI tools are often overhyped
Companies marketing AI tools often make claims about their abilities that aren’t supported by actual evidence. For example, there are AI recruitment tools that claim to be able to accurately analyze the facial expressions of applicants, analyze their voices, and so on. In reality, however, all these tools do is screen certain types of candidates out. Those with facial deformities or speech impediments, for instance, are often found by AI to be unsuitable.
There is another way in which voice analysis AI tools can discriminate: against minority candidates who adopt a practice known as code-switching. A minority candidate who code-switches may, for example, adopt “white” speech styles and patterns out of fear of being seen as threatening, and discriminated against, for speaking in their own style. An AI voice analysis tool, however, may flag such behavior as inauthentic.
Rights organizations can sue the makers of AI tools
HireVue, a vendor of AI hiring tools, has been accused of using facial expression analysis to assess candidate suitability through a completely opaque algorithmic process that accepts or rejects candidates with no accountability. A complaint filed with the FTC alleges that such opaque hiring processes are inherently unfair.
AI tools can reject candidates for unexpected, unfathomable reasons
AI tools have been caught rejecting candidates using different kinds of strange reasoning.
- Candidates have been rejected for not living close to the office, because AI tools believed that given their long commutes, such candidates would be unwilling to work late hours. This type of rejection largely discriminates against those who are poor and cannot afford city-center residences.
- Candidates are rejected when they don’t make eye contact. This type of rejection can discriminate against those who are shy, or against women from cultures that discourage eye contact for modesty reasons.
- Candidates are rejected because the company asks for previous experience in a highly specific job role found almost nowhere else, experience that candidates rarely have.
- Candidates with spouses in the Armed Forces are rejected because past records the AI software analyzes may reveal that such candidates tend to quit their jobs to join their spouses if they are stationed elsewhere.
At this point, AI recruitment tools are a work in progress, but the technology companies that make them, and employers that use them, tend to have overambitious expectations of them.
AI hiring tools can be useful at this stage in their evolution, however, if employers use them within their capabilities. For example, recruiters who use AI tools strictly to look at qualifications in resumes often find that they end up considering and hiring people they wouldn’t otherwise have looked at. It’s only in cases of AI overreach that difficulties tend to arise.