WHITE PAPER

How to avoid misinformation with legal AI

Navigate AI risks and maximise benefits with professional-grade legal technology solutions

In summer 2025, lawyers who used AI tools to prepare their legal arguments were criticised by the High Court for citing cases that don't exist. The judge said, “Artificial intelligence is a powerful technology. It can be a useful tool in litigation, both civil and criminal.” But the judge also warned, “freely available generative artificial intelligence tools, trained on a large language model such as ChatGPT are not capable of conducting reliable legal research.” The news followed several recent cases in the U.S. in which law firms used fictitious citations generated by consumer-grade artificial intelligence (AI) tools in court.

What lessons can be learned? AI is indeed a powerful tool — one that can enhance processes at law firms and in-house legal departments. When it has been developed specifically for the legal industry, AI can streamline legal workflows, trim time-consuming tasks, and free lawyers to focus on higher-level, more valuable, and fulfilling work.

However, it’s essential to recognise the real risks of using public AI tools, especially when results aren’t verified for accuracy. These tools need to be used responsibly, because the consequences of misuse can be serious.

In this white paper, we will outline how to make the most effective use of these tools, whether you work in a law firm or in-house.

How should legal teams view AI?

Before we explore how to use AI in your legal workflow, it’s worth considering how this technology should be viewed. Having the right mindset will help any legal professional develop the proper perspective and approach.

Since its arrival, AI has consistently impressed with its ability to generate content in seconds. This technology is so powerful that you might think it can handle anything. But it can’t, and recognising its boundaries is essential for professional success. Here are three questions to keep in mind when evaluating any AI tool and its output:

  • Where does the tool get its information? Free, public AI tools pull information from across the internet, mixing fiction with fact. This is partly why popular, public AI tools can generate false information. In contrast, professional-grade legal AI tools are tailored for the legal industry, crafted by experts who understand the field, and rely on curated, high-quality datasets when generating answers. With stricter guardrails and higher standards, these tools deliver more reliable, trustworthy results. Understanding where your AI tool sources its information is vital, as it determines whether you can act on its output with confidence.
  • What is the request? Not every AI use case carries the same risk. When drafting a fun invitation to your child’s birthday party, no harm is done if the output is off the mark. But in a legal context, the stakes are much higher. Using AI for legal work should be done thoughtfully and its output reviewed carefully.
  • How much review is needed? It’s easy to assume an AI output is client-ready the second it’s produced. It is not. Think of it this way — even the brightest associate must have their work reviewed before presenting it to a client. AI is no different. Even the output of a legal-specific AI tool needs careful review from a trained and experienced legal professional. While AI tools can produce high-quality output, they can’t replace your thoughtful judgment, understanding of the client, and experience.

What are the potential risks of using AI?

As we alluded to above, using AI tools carries some risk. Professional-grade legal AI tools present significantly less risk than free public tools, but no AI tool is completely risk-free. Here are three common risks of using AI tools that legal professionals should be aware of:

  • Prompting errors. AI tools can only work with what you give them. They interpret questions or commands literally. A vague or non-specific prompt can produce output that doesn’t meet expectations, often without the user realising why; a targeted, specific prompt is far more likely to produce the desired result. When using AI tools, legal professionals should always consider whether the results match the prompt. Prompting is a skill developed over time by trying new techniques and assessing what went well and what could be improved.
  • Bias. AI tools exhibit what is sometimes called “helpfulness bias”: they prioritise answering the question posed, even when the available information is insufficient or inaccurate. Because of this tendency, output can be incomplete or non-specific. If a response makes you sceptical, that is a good indication you should scrutinise the answer further.
  • Hallucination. AI accuracy keeps improving, but it can still generate fabricated responses. Researchers differ in their perspective, but some believe that AI will always have the potential to hallucinate. When using AI, legal professionals should always check for hallucinations — for example, checking citations to ensure they are real and accurate.

How can AI be used most effectively?

With a clear perspective on AI and its inherent risks in place, here are some pointers on using AI to achieve the desired outcomes.

  • Prompt thoughtfully. Prompting, or entering a command to instruct your AI tool, is an often-overlooked aspect of maximising the tool’s effectiveness. After years of using search engines, it’s common practice to enter incomplete or general information. AI tools can work with limited information, but their output will not be optimised. The clearer and more intentional the prompt, the more streamlined and efficient the process becomes — saving time and effort, which is a key benefit of using AI.
  • View the output as a first draft. It’s rare for a writer’s first draft to be spot on. Many pieces of writing move through stages on their way to completion. Working with AI is similar. Even if the initial output appears to be accurate, it’s not a ready-made solution. You are the legal expert, not your AI tool; clients rely on your judgment, experience, and training. By applying these skills to refine AI’s initial output, you can enhance the results for a better outcome.
  • Recognise AI’s limits. Even the most formidable AI tool is not suitable for every situation. While AI tools developed for the legal industry offer enhanced security compared to public AI, exercise caution before entering highly confidential or sensitive information. In some circumstances, limiting access to such data is crucial, and organisations should establish policies for its use. There are also sensitive or personal situations in which AI-generated content lacks the necessary human touch. Imagine expressing condolences to the surviving spouse of a medical malpractice victim, for example. Such a delicate situation calls for the kind of personal and human element AI cannot provide.

What are best practices for using AI output in legal work?

Understanding the correct perspective on AI and its outputs, as well as recognising common risks, sets the stage for effectively using AI in a legal context.

  • Check citations. Do you remember the lawyers mentioned at the beginning of this white paper? They could probably have avoided trouble by checking citations — something every lawyer has a responsibility to do with work they didn’t personally prepare. Had they done so, they would likely have spotted that those cases didn’t exist. Any legal professional should know that case citations are of the utmost importance, so care should always be taken to check them rigorously.

    This recommendation is not unique to AI tools. Again, it is comparable to the work product of a bright but inexperienced associate; investing time in evaluating that work is always wise.
  • Evaluate tone. As a legal professional, you know your intended audience better than any AI tool does. AI tools developed for the legal industry produce more contextually appropriate material than public options. Even so, the nuanced nature of tone often requires calibration and customisation.
  • Ask “What’s missing?” AI tools aren’t perfect. Just like humans, they make mistakes. AI output can often be enhanced with your contributions, such as adding paragraphs, refining sentences, using more active verbs, or incorporating additional legal precedent. It’s not a matter of accepting or rejecting the output; your expertise can make even strong results great.

Balancing innovation and oversight

Legal professionals must adopt a proactive approach in overseeing their AI tools, so they can harness their full potential while minimising risks. By understanding the nuances of AI, lawyers can enhance their practice and deliver superior work product.

As AI continues to evolve, its potential to optimise legal practice becomes increasingly evident. From drafting documents to streamlining research, these tools offer unprecedented efficiency when used thoughtfully, responsibly, and with care.

It’s also important to understand that while AI can act as a valuable assistant, it is not a substitute for legal judgment, ethical standards, or time-honoured experience. The best outcomes arise when lawyers treat AI output as a starting point — carefully reviewing, tailoring to the legal and factual context, and ensuring it meets professional standards.

As with any powerful technology, the key to successful and effective use is balancing innovation with oversight. With diligence, critical thinking, and a commitment to excellence, legal professionals can integrate AI into their workflows to deliver faster, smarter, and more effective client service.

Learn more about professional-grade AI tools — discover CoCounsel Legal.

AI lawyers swear by

CoCounsel Legal UK

CoCounsel Legal UK is the only legal solution that combines advanced AI with Practical Law guidance and Westlaw authority — bringing research, drafting, and analysis together.