The use of artificial intelligence (AI) and machine learning is already driving changes in market practice and service delivery in some parts of the legal sector. However, the role of efficiency-driven solutions powered by AI—and the legal tech space more broadly—is still evolving within the legal industry.
In the justice system specifically, AI has the potential to radically influence the way criminal and civil proceedings are heard and decided—though many questions remain around its eventual application, and around the ethical implications of using such technology. Sylvie Delacroix, Professor in Law and Ethics at the University of Birmingham, spoke to Thomson Reuters Legal Insights Europe about her views on the subject, and the work she has been doing in this area of the legal industry.
You are part of The Law Society’s new Public Policy Commission set up to look at ‘Algorithms in the Justice System’. What has that work involved, and what have the outcomes been so far?
The Commission was set up to examine the use of algorithms in the justice system in England and Wales, and what controls, if any, are needed to protect human rights and trust in the justice system. Christina Blacklaws is chairing, and Sofia Olhede and I have been taking evidence from a range of experts (tech, government, commercial and human rights) on whether algorithms and their use within the justice system should be regulated, and if so, how. There are two more upcoming evidence sessions (7 and 14 February 2019). We are keen to hear from a wide range of stakeholders, and there is still time to submit your evidence, which will be taken into account when drafting the Commission's report (due this summer).
As the legal industry increasingly engages with efficiency-driven solutions, including AI and machine learning, what controls, if any, are needed to ensure that trust and basic human rights are protected in the justice system?
I think it’s helpful to distinguish between two kinds of issues.
On the customer-facing side, we are going to see an explosion of ‘legal apps’. There will be cases (think parking fines) where there is little downside to the vital increase in affordability and accessibility that automation brings, provided transparency, accountability and privacy are safeguarded. Yet such clear-cut cases of unproblematic automation are not that common. Laudable as it may be, the drive to democratise legal expertise by distilling it into mass-market, problem-solver apps can conceal issues that demand human input. As an example, an app that allows those who have recently been dismissed from their job to avail themselves of their right to severance pay (a right that may be opaque due to complex legislation) is commendable. Yet without a proactive referral system, such an app would fail its users. The vulnerability that is concomitant with finding oneself jobless cannot be addressed by algorithms, no matter how much empathy such apps may be able to display.
At a larger scale, there is a risk that a focus on efficiency, for instance through increasingly performant prediction tools, will make us blind to the fact that increased automation is changing the very nature of the legal system. Given their impressive accuracy, it is highly likely that lawyers will increasingly rely on prediction tools to advise clients on whether their claim is worth pursuing. This may seem like a welcome innovation, except that it will insidiously contribute to a growing degree of conservatism, since cases with a low predicted chance of success are unlikely to be heard in court. This in turn makes organic change within case law less likely. Shifts in case law often depend upon an accumulation of previous, unsuccessful cases that trigger a growing number of dissenting voices (both within and without the judiciary). There may be ways of developing tools that not only predict the chances of success in court, but also the likelihood that a particular case will eventually contribute to some organic evolution within case law; however, commercial incentives for both the development and use of such tools will be low.
How do you envisage AI impacting the legal profession and the role of lawyers in the next five years?
There is little doubt that advances in computer systems will play an essential role within the legal profession, and that this could transform it for the better. Automated document management (and discovery) is already becoming commonplace, saving lawyers many dull work hours, but we are still a long way from harnessing the full potential of the data now available. Everything hangs on exactly how we harness that potential: whether we allow an instrumentalist logic to take over, or whether the aims that preside over such data mining reflect what we want law for.
In terms of future roles for lawyers, again, there is no doubt that the nature of that role will change. In many areas we probably won’t need quite as many lawyers. Nobody will be surprised to hear that. What few people realise, however, is just how urgently we need lawyers trained in data governance. I believe that in the next five years we will see an increasing need for lawyers acting as intermediaries between data subjects and data controllers (both in GDPR countries and elsewhere). Law schools need to get their act together and urgently train future lawyers in data governance. This would ideally be within the context of inter-disciplinary degrees. We do need lawyers with some minimal training in statistics and computer science.
Tell us about the paper that you published earlier this year, ‘Computer Systems Fit for the Legal Profession?’, and what inspired you to undertake this research?
I was struck by how easy it is to adopt a bluntly consequentialist outlook, according to which automation within the professions is both legitimate and desirable provided it improves the quality, accountability and accessibility of professional services. That this line of argument is so successful is partly our fault as legal theorists and philosophers. I think we’ve failed to explain in a credible way what grounds the particular responsibility of professionals, and what distinguishes it from that of expert service providers in general. I tried to remedy that in an earlier paper, ‘A Vulnerability-based Account of Professional Responsibility’, explaining how, in many lay-professional encounters, it is our very commitment to moral equality that is at stake.
I believe this turns the case for wholesale automation on its head. One can no longer assume that, as a rule, wholesale automation is legitimate, provided it improves the quality and accessibility of legal services. The assumption, instead, is firmly in favour of designing systems that better enable legal professionals to live up to their specific responsibility.
Professor Delacroix focuses on the intersection between law and ethics, with a particular interest in Machine Ethics and Agency. Her research seeks to bridge the gap between ongoing work into the non-cognitive roots of ethical agency (including habits) and the assumptions currently presiding over the design of both decision-support and ‘autonomous’ systems meant for professional or morally-loaded contexts. Delacroix is a Professor in Law and Ethics at the University of Birmingham, and is also a Fellow of the Alan Turing Institute. She also researches the effect of personalised profiling and ambient computing on our ability to trigger change in our social practices. Professor Delacroix’s work has notably been funded by the Wellcome Trust, the NHS and the Leverhulme Trust, from whom she received the Leverhulme Prize in 2010.
Professor Delacroix was one of three appointed commissioners on the Public Policy Commission on the use of algorithms in the justice system (Law Society of England and Wales), which released its report on 4 June 2019.