Whatever you think about generative AI, the signs are that it is here to stay and is likely to have a major impact on the legal profession.
There are many valid concerns over its use, not least how it will be regulated, how privacy and accuracy will be safeguarded and what it will mean for legal careers. However, there is also a huge amount of curiosity about how this technology might enhance the provision of legal services in ever more powerful ways and genuinely help lawyers do their jobs better.
This conundrum is summed up in the recent finding from the Thomson Reuters Institute in its report “ChatGPT and Generative AI within Law Firms” that, while 82% of law firm respondents said that generative AI can be used for legal work, only 51% said that it should. There’s much at stake, and the best approach may be an open-minded yet cautious one. After all, there is a difference between using AI to automate processes and analyse data as many are already doing, and using generative AI in a more creative way, for example to provide answers and create text.
Lawyers will understandably want to fully assess all the risks, but also evaluate what the opportunities might be and how to go about taking advantage of them in a measured way.
So, how can legal professionals use generative AI intelligently? Here are some key factors to consider:
Explore and experiment
The best place to start is simply to try it: without using it on any client-facing work, become familiar with the technology and see what it can do and where its limitations lie. This will give you a feel for how it works, how effective it could be (bearing in mind that its capabilities are likely to advance at a rapid pace), what use-cases it might have and how comfortable you feel using it. Give it a go: ask it a question or put in some prompts and see what happens.
Research potential use-cases and if you do decide it’s appropriate and helpful to use it in your work, start off with a small test project. This will help highlight the benefits and identify any potential problems in a controlled environment. As the findings above suggest, many people will feel more confident about deploying it for non-legal work initially, for example, for reporting or developing customer service chatbots, before moving on to things like research or document creation.
Build in human oversight
The technology may be clever, but there are limits to what even the most sophisticated machine-learning models can do on their own. It is therefore vital that human oversight is built into everything generative AI produces, with a legal expert sense-checking the outputs: looking out for obvious “hallucinations” (where something is simply wrong) and spotting instances where legal nuances have been overlooked. When deploying generative AI tools for real (rather than just experimenting with them), it’s important to choose solutions that have been developed specifically for lawyers and that have human insight from real-life legal experts built in, so that the results are as accurate as possible. This human-centric approach is core to the Thomson Reuters ethos.
Put parameters around it
Some people are already dabbling with generative AI in a work context. It’s important to have a policy setting out whether employees can use it and, if so, what for and in what circumstances. According to the Thomson Reuters Institute report, 15% of respondents said that employees had been warned against the unauthorised use of generative AI at work, two-thirds (66%) said they had received no such warnings, and the remainder (19%) did not know. Communicating guidelines at an early stage – even before you formally start investigating generative AI – will help stop staff from using it without your knowledge, in untested ways, for inappropriate purposes or with unsuitable tools.
Consider the data
As with any technology, the outputs are only as good as the inputs which inform them. Generative AI synthesises data, trawling large troves of information to create text, images and so on. So, consider where that data comes from and whether it is right to use it. For instance, for reasons of confidentiality, many lawyers may feel more relaxed about using solutions that harness publicly available third-party data rather than their own proprietary data. In time, however, it may be beneficial to train AI models using your own data for more accurate, targeted results, as long as privacy and confidentiality safeguards can be assured.
Keep talent in mind
People often fear what they don’t understand, so bear this in mind as you approach generative AI. Lawyers may be anxious about what it will mean for their jobs, so broach the topic sensitively: acknowledge their concerns while showing how the technology could improve career prospects by enabling them to focus on higher-value work. Getting ahead of the game can put you and your team in a stronger position professionally. As Shawn Malhotra, Head of Engineering at Thomson Reuters, puts it: “Generative AI won’t replace lawyers, but those who don’t use generative AI may be replaced by those who do.”
However this new technology develops, legal professionals will need to use their intelligence to work out how generative AI can be used safely and effectively to complement human brainpower and deliver even better service in the years ahead.
Find out more about what the advent of generative AI means for law firms in our post “Generative AI in legal tech is here: what now?”