
AI bias and data transparency for lawyers—part two

Nayeem Syed

01 Apr 2021

Image credit: REUTERS/Kacper Pempel

In a two-part series, Syed notes that as regulators and policymakers debate AI bias, it is essential to recognise that machines will, unless the risk is consciously addressed, mirror the unconscious bias in the human thinking they try to learn from or replicate. With AI, however, operators also have the opportunity to review and improve human assumptions, and their model’s accuracy, by recognising and addressing this. Further, sufficient upfront investment in high-quality training data, trainers and validation, together with robust governance of the underlying processes to comply with broader (analogue) legislative obligations, can mitigate the risk of failure in AI. That will help inspire higher confidence from wider stakeholders, ultimately leading to the sector’s growth.

In the previous article, we discussed how, as uses of artificial intelligence (AI) increase, regulators and user groups are asking questions about avoiding bias and maintaining compliance and fairness. We now discuss how lawyers can help their clients develop AI that restrains human error, lives up to its full potential and is better prepared for a bias audit.

Where an AI use case augments or displaces a regulated activity, technology lawyers will increasingly need to work with other specialist lawyers to advise clients. These legal teams must fully understand the analogue processes and work with developers to design functionality that gives operator clients sufficient control (to set criteria), so that operators remain primarily responsible for compliance and can demonstrate their approach in an audit.

Here are some practical issues for lawyers and developers to consider during development so that the work can withstand audit scrutiny:

Start with the bias concerns of applicable regulators and impacted groups

At the outset, providers must fully recognise the legitimate external concern regarding the dangers of unchallenged bias. With that recognition, providers can, when defining an AI process, work to identify and confront incorrect underlying assumptions and logic, and redesign to avoid repeating the human mistakes of asking the wrong questions or using the wrong data. The better operators understand these pitfalls, the better placed they are to take appropriate steps to prevent algorithmic bias.

Further, robust testing to uncover biases present in the initial models, together with a method to ‘de-bias’ the embedded assumptions, will be well rewarded. Sincere and sustained efforts to examine what is driving any bias, and to address it, will help with an independent audit.
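To make this concrete, below is a minimal sketch of the kind of pre-audit check an operator might run, comparing approval rates across groups in a model’s output. The data, group labels and the ‘four-fifths’ threshold are illustrative assumptions only, not a prescribed standard.

# Illustrative sketch: comparing approval rates across groups in model output.
# The data, group labels and the 0.8 threshold are hypothetical assumptions.

from collections import defaultdict

# (group, model_decision) pairs; 1 = approved, 0 = declined
predictions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

totals, approved = defaultdict(int), defaultdict(int)
for group, decision in predictions:
    totals[group] += 1
    approved[group] += decision

rates = {g: approved[g] / totals[g] for g in totals}
print("approval rates:", rates)

# Disparate-impact style comparison: flag any group whose approval rate
# falls below 80% of the highest group's rate (the "four-fifths" heuristic).
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"potential bias flag: {group} at {rate:.2f} vs best {best:.2f}")

A real test suite would run checks like this across many metrics and slices of the data; the point is simply that the testing is systematic, repeatable and documented for an auditor.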

Separate what is required of the algorithm from what is required of the operator

We must recognise that humans and machines still approach thinking differently. Humans creatively apply their ability to think flexibly and make rapid connections to new situations. They apply less rigour but can discover valuable and novel insights that way. Narrow AI, applying a single proven framework to large data sets, can reveal unexpected patterns and observations that are impossible to see except at enormous scale (and with sufficient diversity). Whilst strong AI attempts to replicate human intelligence, providers must be candid about what is achievable, providing appropriate disclaimers and warnings.

Ensure trainers have the necessary domain insights

Providers must explain what is required so operators can obtain and apply the specific domain knowledge necessary to create a useful model. Failing to do so will leave operators believing they can produce results that they cannot. We know that greater upfront investment means more reliable and scalable processes, and this holds for people and machines alike. Operators must also understand what they remain responsible for when training the AI against their broader compliance obligations.

Seek out the most trusted and complete data

Different training data produce different conclusions, and this is true of both large and small data. Algorithms can only perform their operations within the boundaries in which they are set. For example, when assessing a market index, we may fail to include the stocks that failed and count only those that survived to the end of the sample period. The result is that our conclusions over-represent the survivors, and we may observe inaccurate attributes or form an incomplete understanding. With AI, where the training data fails to fully reflect the real population (convenient, inconvenient and incomplete alike), the effect is potentially more pernicious, as the model permanently hard-codes the biases of the humans it is modelled on and then applies them at speed.
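As a toy illustration of the survivorship effect described above, the sketch below computes an average return first over surviving stocks only and then over the full population, including delisted ones. All tickers and figures are invented for illustration.

# Toy illustration of survivorship bias: an average return computed only on
# stocks that survived the sample period overstates the market's performance.
# All tickers and returns here are invented for illustration.

stocks = [
    {"ticker": "AAA", "total_return": 0.40, "survived": True},
    {"ticker": "BBB", "total_return": 0.25, "survived": True},
    {"ticker": "CCC", "total_return": 0.10, "survived": True},
    {"ticker": "DDD", "total_return": -0.90, "survived": False},  # delisted
    {"ticker": "EEE", "total_return": -0.60, "survived": False},  # delisted
]

def mean_return(universe):
    return sum(s["total_return"] for s in universe) / len(universe)

survivors_only = [s for s in stocks if s["survived"]]

print(f"survivors only:  {mean_return(survivors_only):+.2%}")  # looks rosy
print(f"full population: {mean_return(stocks):+.2%}")          # the reality

# A model trained on the survivors-only view hard-codes this optimism
# into every downstream prediction.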

Strive to self-challenge at all stages and assume AI cannot automatically correct trainer or operator bias

Regulators and auditors do, and increasingly will, ask how the AI works within regulated activities, including the underlying decision logic and connected processes. Ideally, providers must enable operators to demonstrate that outputs were produced without reference to immaterial factors and that bias was proactively tested for and remediated. Providers must also build tools, and train operators, to be able to explain how they sought to prevent undetected errors in the instructions from amplifying misunderstandings and accelerating travel in the wrong direction.
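One way operators might evidence that outputs do not turn on immaterial factors is a counterfactual test: change only the attribute in question and confirm the decision is unchanged. The sketch below assumes a hypothetical score_applicant model and attribute names; it illustrates the idea rather than a complete audit.

# Illustrative counterfactual test: varying an immaterial attribute should
# not change the model's decision. score_applicant is a hypothetical
# stand-in for the model under audit.

def score_applicant(applicant):
    # Hypothetical model: the decision should rest only on material factors.
    return 1 if applicant["income"] > 30000 and applicant["defaults"] == 0 else 0

def counterfactual_check(applicant, attribute, alternatives):
    """Return attribute values that flip the decision (ideally none)."""
    baseline = score_applicant(applicant)
    flips = []
    for value in alternatives:
        variant = {**applicant, attribute: value}  # change only this attribute
        if score_applicant(variant) != baseline:
            flips.append(value)
    return flips

applicant = {"income": 45000, "defaults": 0, "postcode": "AB1"}
flips = counterfactual_check(applicant, "postcode", ["AB1", "CD2", "EF3"])
print("decision-changing values:", flips or "none - attribute is immaterial")

Recording the results of such checks alongside the decision logic gives operators a concrete artefact to show an auditor.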

Build or adapt ethics frameworks into existing governance

Most companies have policies regarding technology design, including required standards, necessary approvals, and governance processes for detecting and remediating non-compliance. These can be leveraged but must be updated for AI usage.

They should set out a transparent process for identifying ethics concerns and for notifying and resolving issues.

Lawyers can help colleagues consider how the organisation’s services are regulated or supervised and ensure they can demonstrate compliance. They can also help align AI usage with the organisation’s personal and non-personal data governance systems.

Repeating a mistake is not a mistake. It’s a choice.

The critical insight in all of this is that machines will, unless the risk is consciously addressed, mirror the unconscious bias in the human thinking they try to learn from or replicate. However, operators have the opportunity to test for and improve the underlying human assumptions, and therefore their model’s accuracy, by recognising and addressing this. Further, sufficient upfront investment in high-quality training data, trainers and validation, together with robust governance of the underlying processes to comply with broader (analogue) legislative obligations, can mitigate the risk of failure in AI. That will help inspire higher confidence from wider stakeholders, ultimately leading to the sector’s growth.

Therefore, there is now an ideal opportunity for providers, operators and policymakers to use AI to uncover undetected bias in existing human processes through review and challenge. Providers can then redesign those processes in their AI form rather than perpetuating the same mistakes. Lawyers can help providers implement controls that improve the inputs and generate better outputs. Lawyers can further help providers build robustness into their development process to uncover and manage potential embedded human bias, ensure continuing compliance with existing legislation and be ready to demonstrate that in an audit.

The next article in this series will discuss how lawyers can help providers and operators think about and develop their internal governance frameworks so they are ready to comply with legislative, regulatory and contractual obligations.
