Recent research by the University of Cambridge's Centre for Gender Studies suggests that using artificial intelligence (AI) to reduce bias in recruitment is counterproductive, because the technology's outcomes are affected by a number of irrelevant variables, including lighting, background, clothing and facial expressions.
This is particularly significant: a 2020 study revealed that 24% of businesses had already implemented AI for recruitment purposes and that 56% of hiring managers planned to adopt it within the next year (i.e. by 2022). These organisations often purport to include AI in their recruitment processes in order to "make recruitment fairer", by ensuring candidates are assessed "objectively" and without reference to characteristics such as gender and race. This is against the backdrop of growing pressure on businesses to achieve their diversity and inclusion goals and establish meritocratic cultures.
However, recruitment decisions made by AI are increasingly likely to be unfair if the technology draws "spurious correlations between personality and apparently unrelated properties of the image, like brightness". The researchers also suggest that these 'irrelevant variables' could be learned and manipulated by candidates to increase their success rate in progressing through AI-assessed recruitment stages, which undermines the very purpose of implementing such technology: to increase fairness.
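To make the concern concrete, the following is a minimal Python sketch, using entirely hypothetical names (a stand-in `score_personality` function in place of a vendor's opaque model, and a `candidate_photo.jpg` input), of how one might probe whether an image-based personality scorer is reacting to brightness rather than to anything about the candidate:

```python
# Illustrative sketch only: all model and file names are hypothetical.
from PIL import Image, ImageEnhance

def score_personality(image: Image.Image) -> float:
    """Stand-in for a vendor's opaque scoring model (hypothetical).

    A real tool would run a neural network here; for illustration this
    returns mean pixel brightness, mimicking a model that has latched
    onto brightness as a spurious correlate of 'personality'.
    """
    grey = image.convert("L")            # convert to greyscale
    pixels = list(grey.getdata())
    return sum(pixels) / len(pixels) / 255.0

candidate = Image.open("candidate_photo.jpg")  # hypothetical input file

# Re-score the same candidate under different brightness levels only.
for factor in (0.6, 0.8, 1.0, 1.2, 1.4):
    adjusted = ImageEnhance.Brightness(candidate).enhance(factor)
    print(f"brightness x{factor}: score = {score_personality(adjusted):.3f}")
```

If the printed scores shift materially while only the brightness factor changes, the model is responding to an irrelevant variable that a candidate could deliberately adjust, which is precisely the manipulation risk the researchers describe.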
Informed by their findings, the research team have recommended that:
- Developers shift their focus from individual instances of bias to the broader inequalities impacting recruitment processes
- HR professionals understand the limitations of AI and require suppliers to explain where and how AI is being used in their systems to evaluate candidates
- AI ethicists, regulators and policymakers subject HR AI tools to greater scrutiny