Generative artificial intelligence (AI) is the topic on everybody’s lips – and no wonder. Earlier this year ChatGPT became the fastest-growing consumer application ever launched, highlighting the strength of public interest in this potentially transformative technology, which can create sophisticated new content such as text, images, videos, and audio.
Whether that interest is tinged with excitement or apprehension is a point for discussion: the debate is well and truly underway as to how to harness the power of this technology, while addressing the risks that it presents. Those questions are as pertinent in the legal sector as elsewhere – perhaps even more so in a profession which is so rooted in ethics, trust, and client confidentiality; and relies on professional integrity and human intellect.
We’re still at an early stage of the evolution and adoption of generative AI for legal professionals, but the likelihood is that things will move very quickly.
It would therefore seem sensible for lawyers to keep abreast of developments, in order to monitor the opportunities it could offer and the threats it could pose.
What is the regulatory position of generative AI?
Policy-makers are scrambling to get ahead of the game, and in March the UK Government set out its stall by publishing a white paper entitled “A pro-innovation approach to AI regulation.” The paper proposes a relatively flexible, iterative, principles-based approach that aims to regulate the use of AI, rather than the technology itself, relying on existing regulators to do so. The regime is intended to be “pro-innovation, proportionate, trustworthy, adaptable, clear and collaborative.”
The EU has chosen a more prescriptive approach. Under the EU AI Act, “AI systems that can be used in different applications are analysed and classified according to the risk they pose to users.” The Act thus provides a more formal, risk-based framework of obligations and requirements, including quite stringent standards for high-risk uses and outright bans on some uses.
Despite the disparate approaches, regulation is needed – and fast. “We need standards,” says Kriti Sharma, Chief Product Officer for Legal Technology at Thomson Reuters. “Bringing in regulation on the fundamental principles of how AI interacts with humans will drive more innovation and adoption.”
What does generative AI mean for legal tech?
Whether these regulatory regimes put sufficient guardrails around this “brave new world” remains to be seen, but demand from businesses for lawyers to help them navigate this challenging and fast-changing landscape will doubtless be high. Just as importantly, how will the advent of generative AI affect lawyers themselves?
Many legal professionals are already using AI-powered technologies to automate processes or aid in data analytics to increase efficiency and effectiveness, and/or deliver deeper insights. But there is clear potential for generative AI to take this further and change (and improve) how lawyers operate in even more ground-breaking ways. What’s important is that, as this happens, it has a positive impact on the practice of law and on legal careers, adding value to businesses and lawyers alike.
What can generative AI be used for in the legal industry?
As legal professionals weigh up the pros and cons of deploying generative AI, they need to have a sufficient understanding of what it is, how it works, how to leverage it in the right way to enhance quality and speed of work, and how to use it ethically to benefit clients and foster talent.
“Generative AI could be one of the most revolutionary technologies to impact the legal industry for decades – I think it could be as revolutionary as the spreadsheet was for accounting or web browsers were for information retrieval,” says David Wong, Chief Product Officer at Thomson Reuters. “So much of the conversation around generative AI is about how will it disrupt the profession, but there’s a huge opportunity to use it to solve two basic problems. It’s great at finding and synthesizing information and delivering automated work products.”
Lawyers may want to think about the use cases they could explore, and whether generative AI could be used for legal work such as document drafting, research, and knowledge management, as well as non-legal work – for instance, customer service chatbots or internal reporting. It will be important to consider carefully whether it is appropriate to train AI models on firm data or on publicly available data; how to ensure suitable parameters are in place so that the right prompts generate the right responses; and how to check for “hallucinations” – instances where an AI model produces false information.
What should generative AI legal tech tools look like?
Research by the Thomson Reuters Institute suggests that the biggest concerns around generative AI centre on issues such as accuracy, privacy, confidentiality and security. Therefore, any generative AI technologies used should be designed specifically with the requirements of the legal sector in mind. That means marrying the appropriate data (which is essentially the input that informs the output), with subject matter expertise provided by human professionals (to train the AI, validate its performance and outputs, and spot errors), and robust technological architecture designed by AI scientists with a solid understanding of the relevant legal domains.
While there’s a healthy amount of scepticism around the legitimacy and practicalities of employing generative AI in the law, there’s also a genuine fascination with what it can do. Some pioneering law firms and in-house legal teams are already starting to experiment with and even adopt it, and inevitably more will follow, once they feel comfortable about the risks and rewards of doing so. It may take time to get to that stage. However, as with any new innovation that enters the market, there’s potential to get left behind as competitors find ways to use it to their advantage.
Learn about the Thomson Reuters Institute’s work on AI and future technologies.