Make sure you’re using professional-grade AI
It’s now a common scenario. A university professor quickly determines a student essay is the work of ChatGPT. Giveaways include stilted language, obvious factual errors, a lack of verifiable references, and jumps to unsupported conclusions. The student rewrites the essay or even fails the course.
Some lawyers fear that using AI for a legal task will produce work of a similar calibre to this hypothetical student's paper; that they'll suffer the same fate as the many "ChatGPT lawyers" whose output was inferior and riddled with errors: hallucinated case summaries, inaccurate data analyses, and poorly argued briefs that judges found baffling.
Based in part on the prevalence of these stories in the media, some managing partners and heads of law practices still think AI could turn out to be a fad. They believe it will never reach the stage where a legal professional will feel confident employing it in litigation, or to fulfil a client's objectives. So why should they invest in it?
It’s not an ungrounded argument. It’s quite likely that public AI will never be up to the task of generating sophisticated legal work. That’s why it’s essential for a law firm to distinguish between publicly available, general-use solutions such as ChatGPT and a professional-grade AI legal solution, specifically designed for use by lawyers.
These two types of solutions already differ a lot in reliability and accuracy, and the gap is growing. It’s like the difference between searching Google to see what a spot on your arm might be and having that spot examined by a dermatologist.
Making an AI commitment
A legal professional risks making a serious mistake by dismissing AI as a fad. It’s now a matter of when, not if, your organisation is going to adopt it in some form. That’s why choosing the right AI tool for your firm is crucial.
The savings potential for a law firm using AI is vast. The Thomson Reuters 2025 Future of Professionals Report estimates that AI will save professionals about five hours per week, or 240 hours a year, up from 200 hours in 2024.
It’s important to stress that not all AI is created equal, and that these potential savings and efficiency improvements will greatly depend on the type of AI system your organisation chooses. Public AI, such as ChatGPT, is often enough for general consumers — it’s good at tasks such as creating a packing list for an overseas trip or compiling a holiday dinner menu. But there’s a good reason Google is free while law firms pay to access a proprietary resource like Westlaw.
When a lawyer needs to use the correct case citation, precisely define terms in a contract, or confidently rebut an opposing counsel’s claim, the lawyer requires a resource built on secure, up-to-date, and fully verified data. ChatGPT isn’t designed for these tasks, but the best legal AI solutions are. It’s time to find the optimal system for your organisation.
The AI paradigm shift
Timing is everything when it comes to legal practices. Firms that adopt AI-powered solutions quickly stand to reap substantial benefits in efficiency and productivity. In the years to come, they could pull so far ahead of their rivals that the latecomers will have a hard time catching up, losing clients and market share to their more tech-savvy competitors.
For one thing, AI tools automate and accelerate a wide array of tasks for lawyers, particularly time-consuming research and drafting work. By outsourcing these to AI, lawyers can devote more time to higher-value tasks, such as strategic planning and business development. Very quickly, AI becomes a force multiplier, giving smaller teams and law firms the power to punch above their weight. For example, a 40-lawyer firm armed with a professional AI system could do as much work as, if not more than, a firm twice its size that was late to adopt new technologies.
As for top-level law firms, AI will enable them to undertake ever more complex work that, in the past, would have required additional hires and substantial lawyer hours dedicated to a high-level project.
Reducing time-consuming, repetitive grunt work also gives lawyers a better work-life balance and reduces the risk of burnout, particularly for junior lawyers, who often must sacrifice a weekend to pore over case histories or hunt for specific clauses in a great pile of documents. AI solutions can now do much of that work in minutes. In the most recent Generative AI in Professional Services Report, 55% of respondents said they were excited or hopeful about AI technology, a combined increase of 11% compared with 2024. Respondents cited time savings, streamlined work processes, increased productivity and efficiency, and new opportunities for innovation and growth as the top reasons for their optimism.
There’s another thing for reluctant lawyers to consider — whatever your feelings are on the matter, your firm’s biggest corporate clients may expect you to adopt AI. Their logic is sound — if they’re using AI to enhance their operations, why isn’t their law firm? At some point, and possibly quite soon, these clients may no longer accept traditional hourly billing from their law firm for tasks the client believes could be done in a fraction of the time with fewer personnel.
“AI is an inevitable solution in my opinion, and resources that are not fit to be used will find themselves at the back of the line,” said one legal advisor interviewed for a previous Thomson Reuters Institute report. “There is great risk of loss of performance compared to the economy if we do not use it. It is the same issues as when Google released its famous white search box. And, if you don’t use a search engine, you’re out.”
Professional AI versus public AI: A tale of two different worlds
Legal professionals can gain a major advantage by using professional-grade AI tools. Legal experts design and constantly update these systems to address lawyers’ specific needs. They run on trusted, secure databases and provide transparent, well-sourced, and verified reasoning.
Within the category of professional-grade AI solutions, measurable differences exist among platforms. Recently, AI benchmarking company Vals AI evaluated leading professional AI services, comparing their performance in areas such as data extraction, document Q&A and summarisation, redlining, transcript analysis, chronology generation, and Electronic Data Gathering, Analysis, and Retrieval (EDGAR) research.
1. Accuracy: On target with each shot, or firing wildly
Public AI. Say that a lawyer is searching for “most favoured nation” clauses within a set of trade agreements. Should they use a public AI solution, odds are the system will return an array of outputs of varying quality, some of which aren’t applicable in a legal context. Further, the system could have difficulty when attempting multi-clause extractions.
Professional AI. Professional-grade AI solutions offer greater clarity and reliability in terms of data extraction. In the Vals AI study, a set of professional AI systems were assessed to see how each identified and extracted information within the same document. Vals AI says, “All AI tools were given the additional instruction to include only the extracted text in their responses, not any additional explanation. In all cases, a good response required both the verbatim extraction of specified information from the provided document(s) and the source location for that information.”
The study found that professional-grade AI tools performed this task “reasonably well,” particularly with short documents and a single specific clause to extract. There were notable differences, however, when the systems were asked to extract multiple clauses for a single question — some systems stopped at one relevant clause without providing a substantial answer.
The most challenging question asked for 40 fields to be extracted across three credit agreements. CoCounsel from Thomson Reuters was the top performer in that task. CoCounsel and one competitor also performed above the "lawyer baseline" — the measured performance of an experienced human lawyer doing the same task.
2. Authoritative content: You get what you pay for
Public. An AI platform like the public model ChatGPT is built on a foundation of opaque sources, ranging from public domain books to material scraped from the internet, some of which may be copyright-protected or inaccurate. A lawyer who asks a public AI system for a list of cases relevant to their current task will have to go over the information they receive with a fine-tooth comb — is it derived from Wikipedia? Out-of-date legal references? News articles from generalist sources?
Professional. A professional-grade AI platform draws its content from industry-respected and thoroughly vetted sources, including statutory legislation and case histories. When integrated with legal research tools such as Westlaw, a professional-grade AI solution gives users access to a verifiable and consistently updated database of case law, statutes, and legal opinions. This allows professionals to work with confidence, knowing their queries and research are based on authoritative content.
Further, a system like CoCounsel has a dedicated method of benchmarking any new large language models (LLMs). Each new LLM is run through a series of public and private legal tests to assess its aptitude for legal review and analysis. The CoCounsel Trust Team then creates additional test cases to evaluate output from the LLM's early integration with existing systems and, if the results are promising, a team of lawyers conducts further manual reviews.
3. Expert insights: Thousands of credentialed experts, or a faceless crowd
Public. In public AI solutions, it can be difficult to determine how the system prioritises which sources to use to answer a query. There’s a valid concern that the AI will assign a trusted information source, such as a news organisation like Reuters, the same weight as it does a financial blog written by a layperson, not updated since 2015.
Professional. Developers of a professional-grade AI model will explain to users how their system works, the processes the model uses to answer queries and conduct data searches, and all steps the provider takes to refine its AI's capabilities. A professional-grade AI provider will also take regular corrective measures to guard against data drift: changes in the statistical characteristics of input data over time that degrade the model's performance.
Hence the importance of having a wide range of experts on hand. These experts keep the AI solution abreast of any new regulations, laws, or case interpretations, and help ensure that its output is consistently accurate. An AI system is only as strong as the people whose knowledge it draws upon.
4. Data security and privacy: Fort Knox or open access?
In the 2025 Thomson Reuters Institute report on generative AI in professional services, about 68% of all respondents said data security was a top barrier to adopting AI. This is another critical distinction between public- and professional-grade AI products.
Public. Using a public product like ChatGPT to analyse proprietary information means you're often taking risks with your data. It can be unclear how these AI programmes use prompts and other user data: whether the system deletes the information after a query or retains it for training purposes. If this information is client-sensitive or copyright-protected, public AI products usually don't possess adequate safeguards to protect it.
Also, some public AI providers make vague, even evasive claims about data usage, the ownership of client data, how often the data gets deleted, and which third parties will have access to it. Users may not have the option to exclude their input from being used in training the AI system.
Professional. A professional-grade AI provider must reassure users that it is taking the necessary measures to safeguard customer data. For example, CoCounsel bases its cybersecurity programme on several industry standards, including the National Institute of Standards and Technology (NIST) Cybersecurity Framework and the Capability Maturity Model Integration (CMMI).
It’s essential that users of professional AI systems retain ownership of all input and output data. The provider should describe how its model uses data for systemic improvements, what the options are for data anonymisation, and how clients can opt out of the training process.
Further, AI providers must establish a well-defined data deletion process with specific timelines to ensure user privacy and compliance — for example, deleting all data after 30 days unless otherwise specified. They should also offer transparent disclosure of any third-party data sharing. In the event of a data breach, the platform should have a well-honed incident response plan, including communication protocols and guarantees of timely notifications to all relevant parties.
5. Training: Professional classroom, or DIY
Public. Learning to use a public AI system tends to be a do-it-yourself affair. While these systems are generally user-friendly and intuitive, there’s a learning curve in mastering query refinement to receive the best results.
However, keeping pace with AI's rapid adoption and the constant changes in AI systems is an increasing challenge. The public version of ChatGPT, for example, was unveiled in November 2022 and has already gone through multiple revisions.
More than half of respondents in the recent Generative AI in Professional Services Report said while they’ve had some AI training, it’s only on an annual basis, or even less frequently.
Professional. A professional-grade AI provider should prioritise regular, intensive user training efforts for their system. Ultimately, an AI platform must be intuitive and supportive, so lawyers feel confident in their ability to use it effectively. This includes ensuring their queries are accurately articulated and tasks are handled efficiently.
CoCounsel, for example, offers pre-recorded training modules meant to address typical user concerns and real-time connections with experts to clarify any aspect of the system.
Sidebar: Getting AI right in the courtroom
You’ve decided to use an AI programme to assist in trial preparation. However, as one judge said in a previous generative AI in professional services study, “I think AI has the capacity to increase the efficiency of some processes and thereby possibly enhance access to justice, but it has just as much potential to undermine the justice system if it is misused.”
To ensure compliance and avoid being chastised by the courts, it's vital to verify AI outputs: confirm that citations are accurate, check that quoted language comes from a legitimate source, and double-check any statutes cited. Using AI doesn't relieve a lawyer of their professional and ethical obligations. Lawyers must verify any AI-generated information before submitting it to the court, just as they must review work done for them by a paralegal or junior associate.
As guidance published by the Courts and Tribunals Judiciary states, “All legal representatives are responsible for the material they put before the court/tribunal and have a professional obligation to ensure it is accurate and appropriate.”
This is why using only professional-grade AI systems matters. A solution like CoCounsel only uses trusted sources and databases, overseen by legal experts, to ensure reliability. It automatically generates citations and links for all findings, enabling you to quickly verify the output.
The value of going professional
Your organisation will need to draw upon AI as a resource, and sooner than you might think. More important, you need a professional-grade AI solution, one that places your lawyers on a different tier from the so-called "ChatGPT lawyers" of the world.
It’s all about finding the right tools to do a complex job. CoCounsel leverages the expertise of Westlaw and Practical Law to provide a robust solution you can rely on.
Request a free demo of CoCounsel today.