
Automated legal advice: rules, responsibility and risk allocation

Nayeem Syed

03 Jun 2019

Nayeem Syed is a senior technology lawyer, currently working at Refinitiv, a global provider of financial markets information and infrastructure. Each month he writes about innovation, technology and regulatory issues in the legal industry for Thomson Reuters Legal Insights Europe.

Whilst fully automated legal analysis is still some distance away, a number of notable applications are already available, and impressive commercially available functionality within a few years is highly realistic. As law firms begin to incorporate machine learning and AI tools into their practices, discussions between providers and clients will, and should, include questions of both reliability and liability. At the heart of this are key concerns around detecting, preventing and mitigating errors, and around allocating responsibility for bad results, whether they stem from negligent design or negligent use. Clients, firms and regulators alike will want to know how responsibility to end clients is addressed. Here, we start to discuss who is responsible for errors.

Algorithmic legal advice

Machine learning is a narrow form of artificial intelligence in which a computer program uses an algorithm to search and analyse large data-sets against defined criteria, recognise relevant connections, and identify areas of specific interest. The algorithm 'learns' from the data and adaptively improves with greater amounts of training data.
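
To make this concrete, below is a minimal, illustrative sketch of that pattern in Python, using the open-source scikit-learn library. The clauses, labels and parameters are hypothetical placeholders, not a real legal data-set.

```python
# A minimal sketch of 'learning from labelled data', using scikit-learn.
# All documents and labels below are hypothetical examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: clauses labelled by a human reviewer as
# relevant (1) or not relevant (0) to the defined review criteria.
documents = [
    "The supplier may terminate this agreement on 30 days' notice.",
    "Invoices are payable within 60 days of receipt.",
    "Either party may assign this agreement without consent.",
    "This agreement is governed by the laws of England and Wales.",
]
labels = [1, 0, 1, 0]

# The 'algorithm' here is a text classifier: it converts each clause
# into numerical features and fits a model against the labelled criteria.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(documents, labels)

# The model 'learns' statistical connections from the training data;
# with more labelled examples, its predictions generally improve.
print(model.predict(["The customer may not assign this agreement."]))
```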

Machine learning can help lawyers review a vast data-room, ensuring it is examined evenly and that all applicable references are flagged for further assessment according to statistically validated methods. It can also help lawyers prepare transaction documentation rapidly and consistently across a global organisation. Algorithmic results can support better predictions: how will a particular judge react to an argument, based on their past decisions; how likely are we to secure a patent; and for how much should we settle? However, machine learning today is still a computer program that has to be operated, and therefore it cannot entirely eliminate errors from the legal analysis process.
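
Continuing the hypothetical sketch above, the fragment below shows the review workflow in miniature: each document in a data-room is scored by the trained classifier, and anything above a chosen confidence threshold is flagged for further human assessment. The threshold and documents are again illustrative assumptions, not recommended values.

```python
# Continuing the sketch above: score each document in a hypothetical
# data-room and flag anything above a confidence threshold for human review.
data_room = [
    "The licensor may terminate for convenience at any time.",
    "All notices shall be sent to the registered office.",
]

THRESHOLD = 0.5  # arbitrary cut-off; in practice this would be validated

for doc in data_room:
    relevance = model.predict_proba([doc])[0][1]  # probability of label 1
    if relevance >= THRESHOLD:
        print(f"FLAG for review ({relevance:.2f}): {doc}")
    else:
        print(f"Skip ({relevance:.2f}): {doc}")
```

Note that the human reviewer still chooses the criteria, the labels and the threshold, which is precisely why, as discussed below, operation of the tool remains a source of error alongside its design.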

Bad design versus bad operator

Lawyers and machines are both prone to errors. Errors may arise from incorrect use by the human lawyer (bad operator), such as asking the wrong question or drawing the wrong inference. Alternatively, mistakes may stem from errors embedded within the computer model itself (bad design).

As we train algorithms to help us improve legal analysis, we must differentiate between errors in how the algorithmic results are followed and intrinsic errors within the algorithm. The former stems from humans' susceptibility to using the new technology incorrectly, while the latter refers to the risk that predictive models already contain errors in the underlying logic of their designers, which the models only repeat and possibly magnify.

In the US, a lawyer's duty of competence requires them to be aware of the benefits and risks of emerging technologies in delivering legal services, and given the speed of change, that duty must evolve alongside those technologies. Lawyers must exercise reasonable care through an ongoing assessment of the risks of using a given technology. In England and Wales, the Solicitors Regulation Authority does not distinguish between the use of complex algorithmic models and generic IT services. For now, broadly, the rules that apply to human lawyers and their delivery of services apply equally whether the work is supported by basic or advanced technology. A human provider should therefore expect to take the same responsibility for the results of their advice, howsoever arrived at or supported, unless they can expressly (and reasonably) exclude that liability.

Professional rules and risk allocation

Revised professional practice rules will give lawyers and their clients a clearer starting point for determining whether a standard has been breached, and in this way fault or liability can be established under other legal and regulatory frameworks. Regulators should arguably take a pragmatic approach to machine learning and AI, allowing the field to evolve and adopting measures as functionality advances; even so, greater international alignment is helpful and probably necessary. For now, firms and clients must recognise that existing professional liability frameworks will continue to apply if these tools and new operating models are to gain client acceptance.

In practice, as lawyers increasingly rely on advanced legal technologies, they will need to conduct thorough due diligence and understand the inherent limitations of any technology in order to rely on it appropriately. For example, while their liability to their clients would seem to remain the same, they may have limited ability to pass that liability on to technology providers. Providers will likely insist that lawyers accept limitations of liability and, in a business-to-business context, will seek a disclaimer of responsibility for errors or inaccuracies. They will further seek to make the lawyer assume sole responsibility for acting on the technology's output and developing advice based on it, and will require the lawyer to use the service at their own risk.

The development and mass adoption of machine learning will likely raise questions about the applicability and cost of professional indemnity insurance. Which parties must hold which types of insurance, and in what amounts, will drive the allocation of risk along the legal services value chain. Indeed, insurers might explore introducing new insurance products for machine learning providers and law firms.

Conclusion

Machine learning is a compelling proposition because of its potential to automate certain repeatable processes and to address large data-sets cost-effectively. These capabilities will allow lawyers to work faster and more efficiently by building on computer-generated work product, but lawyers will need to evaluate carefully their reliance on such emerging technology, as they retain primary responsibility for resulting errors in their legal advice. They must be confident that they fully understand, and have effectively addressed, the risks of relying on algorithmic analysis.
