
AI bias and data transparency for lawyers—part one

Nayeem Syed

22 Mar 2021


Syed notes that as regulators and policymakers debate AI bias, it is essential to recognise that machines will, unless consciously addressed, mirror the unconscious bias in the human thinking they try to learn from or replicate. With AI, however, operators also have the opportunity to recognise and address those human assumptions and so improve their model’s accuracy. Further, sufficient upfront investment in high-quality training data, trainers and validation, together with robust governance of the underlying processes to comply with broader (analogue) legislative obligations, can mitigate the risk of failure in AI. That will help inspire higher confidence from wider stakeholders, ultimately leading to the sector’s growth.

Artificial intelligence (AI) in software is now mainstream as organisations increasingly look to third-party providers to incorporate AI capabilities within their existing services and processes. Providers seek to help their operator customers improve human decision-making and avoid human error, but as operators look to automate regulated processes, there are many calls for greater scrutiny and, potentially, further regulation. Among the issues being actively discussed by policymakers are bias and algorithmic transparency. As Vivienne Artz, Chief Privacy Officer at London Stock Exchange Group, explains, “There are a myriad of AI guidelines and standards evolving globally, and the EU is set to establish a regulatory framework focussing on high-risk uses of AI. This means that businesses need to be aware of rules, when they are relevant, and be prepared to demonstrate how they have addressed these issues.”

There is no consensus yet on whether lawmakers should extend sector-specific rules to address AI usage or introduce broader overarching guidelines. Therefore, as organisations apply AI to retail use cases in particular, they will very likely need to review and enhance their data governance processes so they are ready to show how they have mitigated the risk of embedded bias.

As the legal and regulatory landscape evolves, lawyers will increasingly need to weigh in on bias questions to help their clients detect, prevent and mitigate errors, and to allocate responsibility for bad results, whether stemming from negligent design or negligent use.

This two-part series focuses on some of the discussion around bias and transparency.

Black box versus transparency

In early 2021, the New York City Council debated a bill to regulate automated employment-decision tools used in recruitment. The bill would require firms to inform candidates that AI was used to assess their applications, and would require the underlying provider to conduct an annual audit for bias.
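The bill leaves open exactly what such an audit must compute, but a common first check is to compare selection rates across candidate groups. Below is a minimal, hypothetical sketch in Python of that idea; the group labels, data and any threshold for concern are illustrative assumptions, not anything the bill prescribes.

```python
from collections import defaultdict

# Hypothetical audit records: (candidate group, whether the tool
# advanced the candidate). Group labels and data are illustrative only.
records = [
    ("group_a", True), ("group_a", True), ("group_a", True),
    ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
    ("group_b", False),
]

# Selection rate per group: share of candidates the tool advanced.
counts = defaultdict(lambda: [0, 0])  # group -> [advanced, total]
for group, advanced in records:
    counts[group][0] += int(advanced)
    counts[group][1] += 1

rates = {g: adv / total for g, (adv, total) in counts.items()}
highest = max(rates.values())

# Impact ratio: each group's selection rate relative to the
# highest-rate group; a ratio well below 1.0 flags the tool for review.
for group, rate in sorted(rates.items()):
    print(f"{group}: rate {rate:.2f}, impact ratio {rate / highest:.2f}")
```

An auditor would run this kind of comparison over real outcomes for each protected characteristic; the point of the sketch is simply that the audit is a measurable, repeatable computation rather than a subjective judgment.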

Analogue anti-discrimination legislation has long sought to prevent specific biases in hiring, so it is helpful that legislators are looking to ensure such underlying policy goals are applied to these new tools. However, it is worth noting that such recruitment software is often marketed as optimising hiring and making the process potentially fairer by helping to mitigate cognitive biases. These include affinity bias (selecting a candidate with a similar background) and the halo effect (assuming a strong communicator will be good at everything else in the job description). Questions of fairness in hiring have been, and will likely continue to be, challenging to define, so we must be cautious about what we expect from AI in this respect. However, firms will need to ensure they remain compliant with legislative obligations.

Therefore, lawyers advising Human Resources departments adopting such selection tools must understand how the software will be incorporated into the various stages of the end-to-end process (curriculum vitae sourcing, screening by selection criteria, testing and interviewing, negotiation and appointment, and background checking) and which suitability assessment steps will be modified or replaced. They must advise how best to design the hybrid processes to ensure the firm can still follow any official guidance; in the United Kingdom, this means compliance with the Equality Act 2010 through the Employment Statutory Code of Practice. For example, they can advise on how to approach record-keeping so that it is more helpful in an audit or a subject access request under the Data Protection Act 2018, and on managing the sharing of data with the different parties involved in the end-to-end process.
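One practical way to make record-keeping audit-ready is to log each automated screening decision with enough context to answer a later audit or subject access request. The following is a minimal sketch; the field names form an illustrative schema of my own, not a statutory one.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class ScreeningDecision:
    """One automated screening decision, logged so it can be retrieved
    for a bias audit or a subject access request (fields illustrative)."""
    candidate_id: str     # pseudonymous key, not the raw CV
    model_version: str    # which model/configuration produced the score
    stage: str            # e.g. "cv_screening", "testing"
    score: float          # the tool's output for this candidate
    outcome: str          # "advanced" or "rejected"
    human_reviewed: bool  # whether an operator checked the result
    timestamp: str        # when the decision was made (UTC)

record = ScreeningDecision(
    candidate_id="cand-0041",
    model_version="screener-2.3",
    stage="cv_screening",
    score=0.62,
    outcome="advanced",
    human_reviewed=True,
    timestamp=datetime.now(timezone.utc).isoformat(),
)

# Append-only JSON lines make it straightforward to extract one
# candidate's records for a subject access request, or to sample
# decisions across groups for an audit.
print(json.dumps(asdict(record)))
```

Capturing the model version and whether a human reviewed the result also helps later when allocating responsibility between flawed design and flawed use, a distinction returned to below.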

Beyond recruitment, for other regulated activities that are being augmented or displaced by AI, legal advisors will need to become more tech-fluent and work closely with their clients to ensure compliance with existing rules designed for the pre-AI context. They will now need to understand and approve, before software is purchased or developed, how the features work, how much training is necessary to ensure competent operation, and how records are maintained to respond to transparency requirements.

For example, with credit scoring or loan assessment, a financial regulator will require that the operator explain the AI deployment and the controls that prevent certain groups from being incorrectly excluded. When assessing groups with less available data (because they have historically struggled to obtain credit), the operator should be able to show whether and how their models have overcome that data scarcity problem.
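One way an operator might evidence such a control is to compare, per group, how often applicants who were in fact creditworthy were declined by the model. The sketch below uses invented group labels and data purely for illustration.

```python
from collections import defaultdict

# Hypothetical outcomes: (group, model_declined, actually_creditworthy).
# "thin_file" stands in for applicants with little credit history.
outcomes = [
    ("thin_file", True, True), ("thin_file", False, True),
    ("thin_file", True, True), ("thin_file", True, False),
    ("thick_file", False, True), ("thick_file", False, True),
    ("thick_file", False, True), ("thick_file", True, False),
]

# Per group, count creditworthy applicants the model wrongly declined.
stats = defaultdict(lambda: [0, 0])  # group -> [wrongly_declined, creditworthy]
for group, declined, creditworthy in outcomes:
    if creditworthy:
        stats[group][1] += 1
        stats[group][0] += int(declined)

for group, (wrong, total) in sorted(stats.items()):
    print(f"{group}: {wrong}/{total} creditworthy applicants declined "
          f"({wrong / total:.0%})")
```

A markedly higher wrong-decline rate for the thin-file group would suggest the model has not overcome the data scarcity problem; comparable rates across groups are the kind of evidence a regulator might expect to see.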

Flawed operator versus flawed design

Humans make many errors. They get tired, cut corners and often exercise poor judgment. AI software seeks to avoid such issues and so achieve superior decision-making. However, the human operator could, of course, misdirect the AI agent or misapply the results. Equally, the resulting predictions could be wrong or dangerous because of programming errors arising from flawed design logic or flawed training data.

As we train algorithms to improve analysis, we must differentiate between errors in failing to follow the algorithmic results and intrinsic errors within the algorithm. The former stems from human susceptibility to using the new technology incorrectly. The latter refers to the risk that a predictive model already contains errors in its designers’ underlying logic, which the model is only repeating, and possibly magnifying.

Therefore, there is a broader challenge of detecting errors or unintended consequences. How will we even know whether the human operator entered imperfect instructions or the software imperfectly interpreted them?

We know that training out bias is complex. In 2018, a global online retailer scrapped its use of AI in recruitment when it found the results were prejudiced against women. It tried to refine the input criteria to correct this but, in the end, even one of the world’s AI leaders decided to return to the challenge later, as the AI still found ways to favour men.

Unless consciously addressed, AI will mirror unconscious bias

This highlights a key AI challenge: what precisely is the operator looking for, and how much weight will the AI apply when self-learning from the training data and the trainer and user feedback loop? In the recruitment example, how will the AI score career gaps when doing an initial trawl through thousands of CVs? Will it exclude results the operators wish to include, or will it have to ignore gaps? The example shows how various innocent or logical criteria could result in highly qualified candidates being pre-screened out.
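To make the weighting question concrete, an operator can inspect how much influence a scoring model places on a career-gap feature. The sketch below is a hypothetical illustration using scikit-learn on synthetic data; the feature names, coefficients and “historical” labels are all invented, and it fits a simple model to past decisions that penalise gaps, then reads the learned weights back out.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic candidates: [years_experience, career_gap_years, test_score].
X = np.column_stack([
    rng.normal(8, 3, 500),    # years of experience
    rng.exponential(1, 500),  # career-gap length in years
    rng.normal(70, 10, 500),  # assessment score
])

# Synthetic past hiring labels that, by construction, penalise gaps,
# standing in for the historical decisions a real model would learn from.
y = (X[:, 0] * 0.3 - X[:, 1] * 1.5 + X[:, 2] * 0.05
     + rng.normal(0, 1, 500)) > 5

model = LogisticRegression(max_iter=1000).fit(X, y)

# Inspect the learned weights: a large negative coefficient on the
# career-gap feature shows the model reproducing the historical penalty.
for name, coef in zip(["experience", "career_gap", "test_score"],
                      model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```

A strongly negative coefficient on the gap feature is exactly the kind of innocent-looking criterion that can pre-screen out highly qualified candidates, and surfacing it is the first step in deciding whether the feature should be reweighted or ignored.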

We have discussed how lawyers will increasingly need to advise on bias in AI deployment. The next part discusses how lawyers can help clients with AI development and with training out human error, so that AI systems can live up to their full potential and be more ready for a bias audit.
