
Examining the role of AI in the courts

Imagine a courtroom of the future in which a lawyer has all the capabilities, research depth and legal knowledge that artificial intelligence (AI) can muster. That would be an impressive use of innovation and technology.

Now imagine that lawyer turning that AI-honed legal prowess on an indigent defendant, represented by a single legal aid solicitor who has none of those things.

Imagine an inmate being refused parole on the basis of an algorithm which determines that the inmate is more likely to re-offend, but what the defence team don’t know is that the algorithm draws its conclusions from a data set that is riddled with bias.

As the legal industry increasingly engages with efficiency-driven solutions, in which AI and machine learning can hold tremendous sway in the power balance of how criminal and civil cases are heard and decided, where do the ethics of the law and the concept of justice come in? And if new legal technology can actually increase access to justice for citizens in countries like the UK and the US, how can we ensure that such access results in equal justice for all? How do we ensure that the technology itself does not become a proxy for fairness, adhered to without question or, because of its design, not open to question?

It’s these types of questions, and many others concerning the impact that legal technology, AI and algorithms will have on access to justice, legal ethics, human rights and fair legal representation, that the Law Society is seeking to address with its new Public Policy Commission on Algorithms in the Justice System.

“There are really two broad buckets of interest in this area,” explains Sophia Adams-Bhatti, Director of Legal and Regulatory Policy at the Law Society. “One is around the practical implications of technology and the law, and how the legal practice will change. But, you also have to deal with the impact on the law itself as a result of developments in technology and AI. This is where the legal ethics questions become really very interesting”.

The Commission will focus its examination on the use of algorithms in the justice system in England and Wales, although it will “take appropriate account of international developments” as well, according to the Law Society website. One question at the heart of the matter will be what controls, if any, are needed to ensure that trust and basic human rights are protected in the justice system.

The three-person Commission will be chaired by the Law Society President Christina Blacklaws and includes Sofia Olhede, Professor of Statistics at University College London, and Sylvie Delacroix, Professor of Law and Ethics at the University of Birmingham.

Looking for evidence

Commissioners will be holding three Evidence Sessions at which a wide range of multidisciplinary experts will present evidence on the question of how algorithms and their use within the justice system impact on human rights, and what measures are needed as a consequence. “This was deliberately designed to be a conversation that we, as a Law Society, could curate, but through which we bring together the voices of the various stakeholders”, says Adams-Bhatti. “We’ve designed it so that we bring together the technology, the academic research community, the ethicists, the lawyers, the voice of the citizens, political science and the voice of the law enforcement agencies, all to the table to bring their perspectives”. The Commission decided to focus on criminal justice at the outset because of the vast amounts of research being done, the numerous practical and human rights applications available, and the implications that have yet to be really worked through, she adds.

Adams-Bhatti said once the Law Society began looking at the impact of technology on the law and on how people are interacting with the law and the courts, it became clear to the group that the focus of this exercise should be on the power of machine learning. “Machine learning and AI empowered processing is increasingly where the real potential benefits of technology and the law starts to bite,” she explains. “The ability to turn huge amounts of data into knowledge and understanding that we couldn’t do any other way—certainly, manually we couldn’t do this—it’s an opportunity there for the taking.”

For a government law enforcement agency, for example, the ability to harness what was previously a massive amount of unstructured data and mine it so as to deliver on its public duty faster, more efficiently and at nominal cost, compared with hiring 100 or 1,000 individual workers to transfer and sift through that data, is a tremendous benefit, Adams-Bhatti says. “You can see the vast appeal of that”.

However, Adams-Bhatti argues that the benefit has to be balanced with the need to ensure that tech-enhanced prosecutorial power doesn’t come at the cost of the rule of law. It is vital that our justice system is transparent, understandable, allows individuals to defend themselves in court and upholds the core tenets of equality in the eyes of the law, she explains. “We have to say, ‘Okay, if these tools are available, how do we ensure they don’t corrupt the very purpose for which these agencies are trying to deploy them?’” Adams-Bhatti says, giving the example of advances in facial recognition technology and surveillance. She notes that while some make the point that facial recognition might be a great way of combating crime in certain areas, and in the right hands might be quite useful, it raises the question of why the state should be collecting data about where its citizens are travelling, for no reason other than to collect data. “Surely, as a libertarian society, people should be free to walk around without being surveilled”.

Indeed, it’s these sorts of questions and ethical dilemmas that legal experts have not had the opportunity to explore in a structured way—which underscores the importance of the Commission’s core mission. “It struck me that we had not only the opportunity to look at this from a human rights perspective, but to really focus in on the great opportunity, or at least the potential promise of AI in society as a whole and whether or not we can help settle some of the principal questions that sit at the heart of this debate,” Adams-Bhatti offers.

Those interested in making contributions, submitting written evidence, or applying to appear at one of the planned public evidence sessions taking place over the course of the next year can contact the Commission via its dedicated email address, commission@lawsociety.org.uk.
