Thomson Reuters

New study finds AI-enabled anti-Black bias in recruiting

Image Credit: REUTERS/Fabrizio Bensch

Too often, the biases that professionals from minority groups experience in the real world are replicated in the artificial intelligence (AI)-enabled algorithms used in training and recruiting.

Without human intervention, algorithms used in the recruiting process can easily reproduce bias from the real world, according to a 2019 study published in Harvard Business Review (HBR) by researchers from Northeastern University and the University of Southern California.

It is questionable whether the situation has since improved, despite the emergence of artificial intelligence as a powerful tool in the evolving 21st-century business landscape and its ability to learn and identify trends. A new report entitled The Elephant in AI, produced by Professor Rangita de Silva de Alwis, founder of the AI & Implicit Bias Lab at the University of Pennsylvania (UPenn) Carey Law School, examines employment platforms through the perceptions of 87 Black students and professionals, coupled with an analysis of 360 online professional profiles, with the goal of understanding how AI-powered platforms “reflect, recreate, and reinforce anti-Black bias”.

Key findings from the research

The new report explored a range of AI-related employment processes from job searches, online networking opportunities, and electronic resume submission platforms. More specifically, key findings include:

  • In an analysis of the job board recommendations received by those surveyed, 40 percent of respondents said they had received recommendations based on their identities rather than their qualifications. Moreover, 30 percent noted that the job alerts they received were below their current skill level.
  • Almost two-thirds (63 percent) of respondents noted that the academic recommendations made by the platforms were lower than their actual academic achievements. This finding is particularly disappointing given that, as the survey highlights, Black women are the most educated group in the United States (US).

For the most part, Silicon Valley, in California, is still predominantly populated by white people, with men holding the majority of leadership positions. This raises the question of how the technology industry can create fair and balanced AI for the masses while diversity challenges persist within the very teams designing and implementing the algorithms upon which that AI relies. Indeed, Amazon scrapped a recruiting tool in 2018 because of such bias.

Further, a 2019 study from the U.S. National Institute of Standards and Technology that examined 189 facial recognition algorithms from 99 developers found that a majority falsely identified non-white faces at higher rates. Although commonly used by both federal and state governments, facial recognition has raised concerns over AI-enabled bias and has led cities such as Boston and San Francisco to ban its use by police departments.

AI-enabled biases in recruiting and testing

As with facial recognition, long-standing patterns of hiring discrimination are increasingly AI-enabled. The UPenn report notes that Black professionals in today’s employment marketplace continue to receive 30 to 50 percent fewer job call-backs when their resumes contain information tied to their racial or ethnic identity.

With AI being developed as an employment tool meant to help provide equality of opportunity, the survey asked whether candidates feared not being considered for employment by employers using AI-based recruiting technologies. Fewer than 10 percent said this would cause them little worry, while more than 20 percent said it would worry them considerably. The report expands on hiring discrimination by exploring potential biases incorporated within pre-programmed ‘expected responses’, with researchers pointing out that these responses point to potential data inequity.

Other inequities centred on skills-based test questions programmed into hiring platforms that are known to be biased. Such questions, built upon US exams such as the Law School Admission Test, create unfair screening assessments. And given the volume of research documenting bias in standardised testing, continuing to rely on legacy assessment models inhibits the advancement of Black and other minority candidates in employment hiring pools.

In considering the use of AI platforms by employers, the report points both to the technical complexity of the AI behind the platforms as well as the limited knowledge of those in human resources or other hiring roles in understanding such complexities.

Actions for employers and developers

Until there are industry-wide best practices, the responsibility for ensuring that AI algorithms promote equity falls upon the vendors that build the tools and the employers that use them. According to the 2019 HBR study, employers using AI-enabled recruiting tools should analyse their entire recruiting pipeline—from attraction to on-boarding—in order to “detect places where latent bias lurks or emerges anew”.

Professor de Silva de Alwis calls for diverse teams to develop less biased models and algorithms, and advises employers and software developers to leverage tools that minimise bias, such as Microsoft’s Fairlearn, an open-source toolkit that empowers data scientists and developers to assess and improve the fairness of their AI systems. InterpretML, also a Microsoft creation, is another tool that helps AI-model developers assess their model’s behaviour and de-bias their data.
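To illustrate the kind of group-fairness check that toolkits such as Fairlearn automate, here is a minimal, dependency-free sketch of one common metric: the demographic-parity difference, i.e. the gap in selection rates between candidate groups. The screening decisions and group labels below are entirely hypothetical.

```python
# Sketch of a demographic-parity check, the kind of group-fairness
# metric that toolkits such as Fairlearn compute for AI models.
# All candidate data here is hypothetical, for illustration only.

def selection_rate(decisions):
    """Fraction of candidates who received a positive decision."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(decisions, groups):
    """Largest gap in selection rate between any two groups.
    A value near 0 means the screen selects all groups at similar rates."""
    by_group = {}
    for decision, group in zip(decisions, groups):
        by_group.setdefault(group, []).append(decision)
    rates = [selection_rate(ds) for ds in by_group.values()]
    return max(rates) - min(rates)

# Hypothetical screening outcomes (1 = advanced to interview stage).
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Group A is selected at 0.75, group B at 0.25: a 0.5 gap.
print(demographic_parity_difference(decisions, groups))  # 0.5
```

A result far from zero, as here, would flag the screening model for review; Fairlearn pairs such metrics with mitigation algorithms that retrain or post-process the model to shrink the gap.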

Employers should also take a ‘second look’ at the resumes and curricula vitae of underrepresented minorities to mitigate the biases that risk being reproduced on a vast scale by AI-led recruitment platforms, says Eric Rosenblum, Managing Partner at Tsingyuan Ventures, the largest Silicon Valley venture capital fund for Chinese diaspora innovators.


Dawn Zapata is a Senior Content Producer at Thomson Reuters.

In this role, Dawn develops high-quality, industry-relevant content, specifically in the area of diversity and inclusion. She also helps oversee the execution of the Transforming Women’s Leadership in the Law programmes in the US and UK, an initiative that strives to encourage further dialogue around women’s opportunities in the legal industry.

Dawn holds a B.A. from the University of Illinois, an M.A. from Harvard University, and a Master’s Certificate in Project Leadership from Cornell University.
