By Charlotte Leighton, Features Correspondent
As firms increasingly rely on artificial intelligence-powered hiring platforms, many highly qualified candidates are finding themselves on the cutting room floor.
Body-language analysis. Voice evaluation. Gamified tests. CV scanners. These are some of the tools companies use to screen candidates with artificial intelligence recruiting software. Job applicants face these machine prompts, and AI decides whether or not they are a good match.
Businesses are increasingly relying on them. An IBM survey of 8,500 global IT professionals at the end of 2023 found that 42% of companies were using AI screening "to improve recruiting and human resources". Another 40% of respondents were considering integrating the technology.
Many corporate leaders hoped AI recruiting technology would eliminate bias from the hiring process. Yet in some cases, the opposite is happening. Some experts say these tools are improperly screening out some of the most qualified applicants, and there are growing concerns that the software may be overlooking the best candidates altogether.
"We haven't seen a lot of evidence that there isn't a bias… or that the tool picks the most qualified candidates," says Light Shellman, US-based author of Algorithm: How AI Can Hijack Your Career and Steal Your Future, and Assistant Professor of Journalism at New York University. She believes the biggest threat such software poses is not machines taking workers' positions, as is often feared, but rather preventing workers from landing a role at all.
Unspeakable loss
Some qualified job candidates have already found themselves at odds with these hiring platforms.
In a high-profile case in 2020, UK-based make-up artist Anthea Mairoudhiou said her company asked her to reapply for her role after she was furloughed during the pandemic. She was evaluated both on her past performance and via an AI-screening programme, HireVue. She says she ranked well in the skills assessment, but when the AI tool scored her body language poorly, she lost out on the job for good. (HireVue, the firm in question, removed its facial analysis function in 2021.) Shellman says other workers have filed complaints against similar platforms.
She adds that job candidates rarely know whether these tools are the sole reason companies reject them; by and large, the software doesn't tell users how they've been evaluated. Yet she says there are many clear examples of systemic flaws.
In one case, a user who had been screened out submitted the same application but changed their date of birth to make themselves younger. With this change, they landed an interview. At another company, an AI resume screener had been trained on the CVs of employees already at the firm, giving people extra points if they listed "baseball" or "basketball" – hobbies associated with more successful staff, often men. Applicants who mentioned "softball" – typically women – were marked down.
Shellman says that marginalized groups often “fall through the cracks, because they have different hobbies, they go to different schools”.
In some cases, the criteria behind biased selection are clear – such as ageism or sexism – but in others, they are opaque. In her research, Shellman applied for a call-centre job to be screened by AI, then logged in on the employer's side. She received a high rating in the interview despite speaking in German rather than English, yet got a poor rating on her LinkedIn profile for her actual relevant credentials.
She fears these negative effects will spread as the technology does. "A biased hiring manager can hurt a lot of people in a year, and that's not good," she says. "But an algorithm that could be used on all the applications coming in to a large company… that could harm millions of applicants."
‘No one knows where the damage lies’
"The issue [is] nobody knows where the harm lies," she explains. And, given that companies save money by replacing human HR staff with AI – which can process piles of resumes in a fraction of the time – she believes firms may have little incentive to interrogate the machine's kinks.
From her research, Shellman also worries that screening-software companies are "rushing" underdeveloped, even flawed, products to market to meet demand. "Vendors won't come out publicly and say that our tool didn't work, or that it was harmful to people," she says, and companies that have used them are afraid a large class action lawsuit will be filed against them.
It's important to get this tech right, says Sandra Wachter, Professor of Technology and Regulation at the University of Oxford's Internet Institute.
“Having AI that is unbiased and fair is not only an ethical and legal imperative, but it’s something that makes a company more profitable,” she says. “There’s a very clear opportunity to allow AI to be applied in a way so that it makes fairer, more equitable decisions that are based on merit and that also increases the company’s bottom line.”
Wachter is working to help companies identify bias as co-creator of Conditional Demographic Disparity, a publicly available tool that "acts as an alarm system that notifies you if your algorithm is biased", she says. The test shows how a firm's [hiring] decision criteria account for any disparity, "and allows you to make adjustments to make your system fairer and more accurate". Amazon is among the companies that have implemented it.
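For readers curious how such an "alarm system" can work under the hood, here is a minimal illustrative sketch – not Wachter's published tool, and the toy data and function names are this article's own assumptions. It computes demographic disparity (a protected group's share of rejections minus its share of acceptances), then averages that figure within strata such as job level, which is the conditioning step that gives Conditional Demographic Disparity its name:

```python
from collections import defaultdict

def demographic_disparity(rows):
    """DD = protected group's share of rejections minus its share of
    acceptances. A positive value flags over-representation among
    rejected candidates."""
    rejected = [group for group, accepted in rows if not accepted]
    accepted = [group for group, accepted in rows if accepted]
    p_rej = rejected.count("protected") / len(rejected)
    p_acc = accepted.count("protected") / len(accepted)
    return p_rej - p_acc

def conditional_demographic_disparity(rows):
    """CDD: average the disparity within each stratum (e.g. job level),
    weighted by stratum size, to control for a confounding attribute."""
    strata = defaultdict(list)
    for group, accepted, stratum in rows:
        strata[stratum].append((group, accepted))
    total = len(rows)
    return sum(len(sub) / total * demographic_disparity(sub)
               for sub in strata.values())

# Toy applicant data: (group, accepted?, stratum)
applicants = (
    [("protected", False, "junior")] * 30 +
    [("protected", True, "junior")] * 10 +
    [("other", False, "junior")] * 20 +
    [("other", True, "junior")] * 40
)
print(round(conditional_demographic_disparity(applicants), 2))  # 0.4
```

In this toy data the protected group makes up 60% of rejections but only 20% of acceptances, so the alarm value is 0.4; a result near zero would suggest the decision criteria treat the groups similarly within each stratum.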
Shellman, meanwhile, is calling for "deeper regulation" across the industry from governments or nonprofits to ensure the current problems don't persist. Without intervention now, she fears, AI could make the workplace of the future more unequal than it already is.