Natalie Cramp, CEO of data science consultancy Profusion, warns that AI should not be seen as “infallible” in the hiring process.
Last week, the British data watchdog unveiled plans to investigate whether the use of AI in recruitment leads to discrimination.
The Information Commissioner’s Office (ICO) said it would conduct the investigation following allegations that automated recruitment software discriminates against minority groups by excluding them from the hiring process.
UK Information Commissioner John Edwards said his office would look at the impact AI tools for screening job applicants could have “on groups of people who may not have been part of the tool’s development, such as neurodiverse people or people from ethnic minorities”.
AI can be used by companies and recruiters to take some of the hassle out of the hiring process. However, there have long been concerns that some people may be overlooked due to built-in biases in this technology.
In 2018, it was revealed that an AI recruiting tool developed by Amazon discriminated against candidates based on their gender.
The recruiting tool scored candidates on a scale of one to five stars, much like shoppers rate products on Amazon. But because the tech industry is male-dominated, the applications the tool was trained on came mostly from men. As a result, it taught itself to prefer male candidates and penalized applications containing words such as “woman” and “women”.
Amazon said the tool had never been used to make hiring decisions, though it admitted that recruiters had looked at its recommendations. The company eventually scrapped the tool.
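The failure mode is straightforward to reproduce in miniature. The sketch below is a hypothetical toy example in Python using scikit-learn, not Amazon’s actual system or data: a text classifier is trained on CVs whose historical outcomes skew against women, and the learned coefficient for a gendered token comes out negative, so any new CV containing that token is scored lower.

```python
# Toy illustration of bias absorbed from skewed training data.
# This is a hypothetical example, not Amazon's real tool or data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Made-up historical CVs and whether the past (male-dominated) process
# advanced them. The skew in these labels is the source of the bias.
cvs = [
    "java developer, chess club captain",          # advanced
    "python engineer, hackathon winner",           # advanced
    "java developer, rugby team",                  # advanced
    "java developer, womens chess club captain",   # rejected
    "python engineer, womens coding society",      # rejected
    "python engineer, community volunteer",        # rejected
]
advanced = [1, 1, 1, 0, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(cvs)
model = LogisticRegression().fit(X, advanced)

# The model has learned a negative weight for the token 'womens',
# so it now penalizes any CV that mentions it.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print(f"learned weight for 'womens': {weights['womens']:.2f}")
```

Nothing in this code mentions gender as a feature; the preference is inferred entirely from the historical labels, which is why critics argue the training data itself has to be scrutinized.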
According to Natalie Cramp, CEO of data science consultancy Profusion, the way to prevent this kind of failure is to improve public understanding of the root causes of bias.
“What is needed is a better understanding of how the data used in algorithms can itself be biased, and of the danger that poorly designed algorithms amplify those biases. Ultimately, an algorithm is a subjective view expressed in code, not an objective one.”
Cramp described the ICO’s decision to investigate potentially discriminatory algorithms as both welcome and overdue.
She said organizations need more training and education to verify the data they use and to challenge the results of algorithms. “There should be industry-wide best-practice guidelines to ensure that human oversight remains an important part of AI. There has to be absolute transparency in how algorithms are used.”
She also recommended that companies keep their teams diverse and not “rely on one team or individual to create and manage these algorithms.”
“If the data scientists who create these algorithms and control the data that are used have more diverse backgrounds and experiences, they are more likely to identify biases at the design stage.”
Researchers are also investigating how AI can be used to tackle bias in recruitment. Kolawole Adebayo, a researcher at Science Foundation Ireland’s Adapt Center for Digital Content, explores eliminating bias in various HR workflows using natural language processing techniques.
His project aims to develop AI models that can understand the content of HR documents and detect and remove information that could lead to unconscious bias and discrimination during the recruitment and selection phases of hiring.
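As a rough illustration of that idea, the sketch below is hypothetical and is not Adebayo’s actual models: the term lists, categories and sample CV are invented for this example. It redacts common bias-triggering signals from a CV so that a reviewer sees skills rather than demographic cues.

```python
# Minimal sketch of bias-aware redaction for HR documents.
# Hypothetical term lists; a real system would use trained NLP models.
import re

# Illustrative categories of information that can trigger unconscious bias.
REDACT_PATTERNS = {
    "GENDER": r"\b(he|she|him|her|his|hers|mr|mrs|ms|male|female|man|mens?|woman|womens?)\b",
    "AGE": r"\b(19|20)\d{2}\b",  # four-digit years that can reveal age
}

def redact(text: str) -> str:
    """Replace potentially bias-inducing tokens with a neutral placeholder."""
    for label, pattern in REDACT_PATTERNS.items():
        text = re.sub(pattern, f"[{label}]", text, flags=re.IGNORECASE)
    return text

cv = "Mrs Jane Doe, born 1985. She led the womens engineering society."
print(redact(cv))
# -> [GENDER] Jane Doe, born [AGE]. [GENDER] led the [GENDER] engineering society.
```

A production system along the lines Adebayo describes would rely on trained language models rather than fixed word lists, since demographic cues are often contextual and would slip past a simple pattern match.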
Speaking earlier this year, Adebayo said the project would use AI to assess a candidate’s suitability based on their skills. “Bias in hiring can lead to undue discrimination against quality candidates from disadvantaged or minority groups such as women, people of color and people in the LGBTIQ community,” he warned.
According to Cramp, the ICO investigation alone won’t address the societal issues that lead to unequal recruiting practices, nor can the technology itself take all the blame.
“We need people who have a more fundamental understanding of AI. First of all, it’s not foolproof — the results are only as good as the data it uses and the people who make it,” she said.
“Mandatory safeguards, design standards, human oversight and the right to challenge and question results are all essential to the future of AI. Without this safety net, people will quickly lose faith in AI and, with it, its enormous potential to revolutionize and improve all of our lives.”