Landing a new job takes more than skill, experience and a strong CV that catches a recruiter’s eye. Increasingly, AI-driven hiring platforms decide who gets a job and who doesn’t, and if you don’t meet their exact requirements, you may find yourself falling short.
According to a new survey by Resume Builder, 70% of companies will use AI in the hiring process by 2025, even though most of the companies surveyed say the technology is biased.
Hiring new workers has traditionally been a long-winded process, so it’s easy to see why organisations are keen for an easier solution. There are job adverts to write, CVs and cover letters to sift through, tests to be carried out and checked, as well as dozens of candidates to be interviewed at least twice.
But experts warn that recruiters’ growing use of AI can cause a number of problems: not only can these systems discriminate against candidates, they can also fail to find the best person for the job.
One of the key problems with using AI to hire workers is that many of these systems rely on pattern recognition, which can be limiting.
“It makes them adept at keyword matching or scoring resumes but less effective at evaluating candidates’ soft skills, their potential, or unique career paths,” says Dr Erin Ling, assistant professor in AI and the future of work at the University of Surrey.
“Moreover, AI systems can misinterpret context, and hence fail to understand diverse work experiences or career transitions that fall outside a predefined framework,” she says. “This limited scope means that AI can overlook or undervalue key qualities in job candidates, leading to inaccuracies in assessing suitability for roles holistically, which is often a complex and nuanced task.”
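To make that limitation concrete, here is a minimal sketch of a keyword-matching screener of the kind Ling describes. The required keywords, CVs and pass threshold are all invented for illustration; real platforms are more sophisticated, but the failure mode is the same: a candidate who describes equivalent experience in different words scores as a non-match.

```python
# A deliberately naive keyword screener, of the kind Ling describes.
# The keywords, CVs and cutoff below are invented for illustration.

REQUIRED_KEYWORDS = {"python", "stakeholder management", "agile", "sql"}

def keyword_score(cv_text: str) -> float:
    """Fraction of required keywords found verbatim in the CV."""
    text = cv_text.lower()
    return sum(kw in text for kw in REQUIRED_KEYWORDS) / len(REQUIRED_KEYWORDS)

cvs = {
    # Uses the "right" words, so it sails through.
    "candidate_a": "Agile delivery lead: Python, SQL, stakeholder management.",
    # Equivalent experience, described differently, so it reads as a non-match.
    "candidate_b": "Led sprint teams, built data pipelines in pandas, "
                   "reported to clients and executives.",
}

for name, cv in cvs.items():
    score = keyword_score(cv)
    verdict = "advance" if score >= 0.5 else "reject"
    print(f"{name}: score={score:.2f} -> {verdict}")
```

Candidate B never wrote the exact phrases the screener is looking for, so they are rejected before anyone evaluates what they actually did.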
Because of this, AI can automatically leave excellent candidates on the cutting-room floor: women who have taken time off to raise families, for example, or people who have been away from work because of illness.
“Another key issue is the lack of transparency and explanation in the AI-enabled hiring process and the algorithmic decision-making, which obscures discrimination towards candidates,” says Ling.
A further issue is that AI is trained on historical data. And with bias already a problem for many recruiters, this means the mistakes that lead to discrimination are repeated time and time again.
“If the data disproportionately reflects a specific demographic or set of experiences, skills, and backgrounds, AI systems may unintentionally prioritise similar profiles and filter out otherwise qualified candidates,” says Ling. For example, she adds, some systems screen for particular schools or universities, which can sideline people from non-traditional backgrounds.
“Algorithms based on current employee data will risk excluding underrepresented groups if the existing workforce is not diverse,” she says. “In addition, online job platforms with biased system design may make superficial predictions based on who might be more likely to click on job ads, rather than who might be most successful in the role.”
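A toy illustration of the feedback loop Ling is describing: if a screening rule is fitted to past hiring decisions, and past hires skewed heavily towards one background, the learned rule simply replays that skew. The data below is fabricated purely to show the mechanism.

```python
# Fabricated historical hiring data, purely to illustrate the feedback loop.
# Each record: (university, was_hired). Past hires skew heavily to "Univ A".
history = [("Univ A", True)] * 45 + [("Univ A", False)] * 5 \
        + [("Univ B", True)] * 5 + [("Univ B", False)] * 45

def hire_rate(records, school):
    outcomes = [hired for uni, hired in records if uni == school]
    return sum(outcomes) / len(outcomes)

# A naive "model": score each school by its historical hire rate.
learned_score = {s: hire_rate(history, s) for s in ("Univ A", "Univ B")}
print(learned_score)  # {'Univ A': 0.9, 'Univ B': 0.1}

# Applied to new applicants, the rule replays the old skew: equally
# qualified "Univ B" graduates are filtered out before anyone looks
# at what they can actually do.
for school, score in learned_score.items():
    verdict = "shortlist" if score >= 0.5 else "filter out"
    print(f"{school} applicant -> {verdict}")
```

Nothing in this rule measures ability; it only measures resemblance to past hires, which is exactly how a non-diverse workforce reproduces itself.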
In other words, AI hiring tools are only as good as the data fed into them. “The primary challenge with AI in recruitment isn’t just its potential inaccuracy; given sufficient data, it can perform well. Rather, it is its tendency to amplify existing societal biases on a large scale,” says Carl Benedikt Frey, associate professor of AI and work at the University of Oxford. “That said, there are also many instances where firms don’t have enough quality data, leading to spotty performance.”
The tools we use to find new employees can also be based on problematic pseudoscience. For example, AI interviewing tools analyse tone of voice, facial expressions and body language, as well as what is said during an interview, to try to understand a candidate’s personality.
These factors can’t predict how successful we will be in a job, but screening people based on them can lead to job seekers being discriminated against. For example, neurodivergent candidates may struggle to maintain eye contact or may appear less confident in a traditional job interview — but this doesn’t mean they won’t excel in a role.
Research suggests AI hiring tools are far from foolproof. Hilke Schellmann, a journalism professor at New York University, tested video-interviewing software as part of the research for her book, The Algorithm. She received a high rating in one interview despite speaking nonsense in German when she was supposed to be speaking English, yet was rated poorly on the relevant credentials listed on her LinkedIn profile. She was also found to be a close match for a job after repeatedly using the phrase “I love teamwork”.
“We tend to think of this technology as neutral and of maths as fair; neither is true,” says Sandra Wachter, professor of technology and regulation at the University of Oxford. “Automating these hiring processes can give past problematic decision-making more legitimacy.
“What we should be doing is using AI as a diagnostic tool that shows us where the biases lie, and using it as a starting point to rectify the inequalities in our society.”
“But if we do not do that, those who have already been disfavoured by the job market will experience even more inequality going forward, but in a less transparent, less obvious, and less tangible way.”