In recent months, legal professionals across the United States have found themselves increasingly scrutinised for their reliance on AI-powered tools like ChatGPT, with several attorneys facing penalties for submitting filings containing what courts have termed “bogus AI-generated research.” The trend is troubling: lawyers are integrating AI into their workflows, often with the intention of saving time or enhancing efficiency. However, the technology can produce “hallucinations”—fictitious case citations or other misleading information that can have serious consequences.
The crux of the issue lies in a significant lack of understanding among many attorneys regarding the nature and functioning of large language models (LLMs). A notable incident involved a lawyer who believed ChatGPT was akin to a "super search engine." Only when the attorney's filing included nonexistent citations did the reality become apparent: these AI models often generate seemingly plausible yet ultimately false content. The challenges of navigating this new landscape are compounded by the pressures of high-volume caseloads, which can lead to hasty decision-making and inadequate verification of citations.
Among the legal community, opinions on the use of generative AI tools are mixed. Andrew Perlman, dean of Suffolk University Law School, acknowledges that while AI hallucinations are a real concern, many lawyers are successfully using these tools without mishap. He believes that generative AI has the potential to transform legal services for the better. Supporting this view, a 2024 Thomson Reuters survey found that 63% of lawyers reported having used AI in their work. Many indicated they harness AI for tasks such as summarising case law or researching statutes, with half expressing a desire to explore further AI implementations in their practice. Perlman maintains that generative AI should not replace lawyers' judgment or expertise but can significantly bolster their capabilities.
However, the legal landscape is rife with cautionary tales. In one recent instance, lawyers representing journalist Tim Burke filed a motion to dismiss in a case raising First Amendment issues. The filing was found to contain multiple inaccuracies attributed to AI-generated content, prompting Judge Kathryn Kimball Mizelle to strike it from the record. She highlighted the severity and potential repercussions of relying on unreliable AI-generated information. Similarly, a federal judge in Alabama reprimanded the law firm Butler Snow after discovering false citations in its submissions in a complex inmate safety case. Partner Matt Reeves took responsibility for the errors, admitting to using ChatGPT without proper verification.
The prevalence of AI-generated misinformation is not confined to the legal profession. In a separate but thematically linked instance, an election campaign in Philadelphia recently faced backlash after it was revealed that over 30 positive articles about a political figure had been generated by ChatGPT, leading to concerns about the implications of AI in political discourse and public trust.
Aside from legal consequences, there is an ethical dimension. The American Bar Association has underscored the necessity for lawyers to maintain a baseline level of competence regarding these technologies. This includes understanding their capabilities and limitations, particularly in relation to accuracy. Given the grave potential for misinformation, the ABA has advised that lawyers must verify AI outputs, reinforcing the idea that relying solely on AI without human oversight is not acceptable.
Moreover, advancements are being made to address the issue of AI hallucinations. Researchers have developed methods to distinguish accurate from inaccurate AI-generated information with considerable success, though these techniques are not yet ready for widespread application, and critics remain cautious about overreliance on such evolving systems.
The conversation around AI in the legal field is only beginning, but its implications are profound. While Perlman and like-minded colleagues are optimistic about the transformative power of generative AI, others urge restraint. Judges caution against the uncritical outsourcing of research and writing tasks, highlighting that no technology can fully replace the nuanced understanding and analysis that experienced attorneys provide. As the legal profession grapples with these challenges, it seems clear that a balanced approach will be essential: leveraging the efficiencies of AI while safeguarding the integrity and accuracy of legal practice.
Source: Noah Wire Services