Job recruitment tools that claim to use artificial intelligence to avoid gender and racial biases may not improve diversity in hiring, and could actually perpetuate those prejudices, researchers with the University of Cambridge argued Sunday, casting the programs—which have drawn criticism in the past—as a way of using technology to offer a quick fix for a deeper problem.
“By claiming that racism, sexism and other forms of discrimination can be stripped away from the hiring process using artificial intelligence, these companies reduce race and gender down to insignificant data points, rather than systems of power that shape how we move through the world,” Drage, one of the researchers, said in a statement.
Amazon announced in 2018 it would stop using an AI recruiting tool to review job applicants’ resumes after finding the system strongly discriminated against women. The computer models it relied on had been trained on resumes submitted to the company over the previous 10 years, most of which came from male applicants.
Organizations have increasingly turned to AI to help manage job recruitment. In one 2020 poll of more than 300 human resources leaders cited by the authors of Sunday’s paper, the consulting firm Gartner found 86% of employers use virtual technology in their hiring practices, a trend that has accelerated since the Covid-19 pandemic forced many to shift work online. While some companies have argued AI offers a more cost- and time-effective hiring process, experts have found the systems tend to promote, rather than eliminate, racially and gender-biased hiring by replicating existing prejudices from the real world.
Several U.S. lawmakers have aimed to tackle biases in artificial intelligence systems, as the technology continues to evolve quickly and few laws exist to regulate it. The White House this week released a “Blueprint for an AI Bill of Rights,” which argues algorithms used in hiring have been found to “reflect and reproduce existing unwanted inequities” or embed new “bias and discrimination.”
The blueprint—which isn’t legally binding or an official government policy—calls on companies to ensure AI does not discriminate or violate data privacy, and to make users aware of when the technology is being used.
In a list of recommendations, the authors of Sunday’s Philosophy and Technology paper suggested companies that develop AI technologies focus on broader, systemic inequalities instead of “individualized instances of bias.” For instance, they suggest software developers examine the categories used to sort, process and classify candidates, and how those categories may promote discrimination by relying on certain assumptions about gender and race. The researchers also contend HR professionals should try to understand how AI recruitment tools work and what their potential limitations are.
The European Union has classified AI-powered hiring software and performance evaluation tools as “high risk” in its new draft legal framework on AI, meaning the tools would be subject to more scrutiny and would need to meet certain compliance requirements.