A barrister who acted as a lay advocate and later represented herself in a family welfare dispute has reported herself to the Bar Standards Board after a skeleton argument she filed cited four cases the court found did not exist; she says the citations were generated by an artificial intelligence tool. According to reporting by Legal Futures, Layla Parsons, an unregistered barrister who had been offering paid legal services to members of the public, withdrew the applications that relied on the spurious authorities.
Recorder Howard, sitting at Bournemouth Family Court, decided to name Ms Parsons in his ruling despite her having self-referred to the regulator and her objection that publicity would expose her to harassment. The judge said her self-reporting was “the responsible” course of action but concluded there remained a public interest in identifying her because of the risk she might again offer legal services.
The ruling records the judge’s concern that, notwithstanding her legal qualification, Ms Parsons “still does not really acknowledge or accept that her actions in not checking the citations and propositions she included in her skeleton argument were serious.” The judge treated her as a litigant in person for procedural purposes but emphasised that those who represent themselves are bound by the same duty not to mislead the court.
The decision also notes evidence that Ms Parsons had made herself available to purchasers of legal document packages from an unnamed website, reinforcing the judge’s assessment that there was “a real and not fanciful possibility that Ms Parsons will in the future offer legal services to members of the public”. He said that factor, combined with what he described as her failure to grasp the seriousness of including unverified authorities, was “a strong and overwhelming factor in favour of naming Ms Parsons”.
Beyond the individual case, legal regulators and tribunals have warned that AI tools can produce fabricated authorities if their outputs are not checked against reliable legal databases. The Upper Tribunal (Immigration and Asylum Chamber) has previously reprimanded a barrister after a fictitious judgment generated by ChatGPT was relied on in submissions, and has urged practitioners to verify every citation to avoid regulatory referral or worse. Industry training guidance increasingly stresses the skills of verifying AI outputs and documenting AI-assisted work.
Recorder Howard said he had sought to limit publication of personal details to what was strictly necessary, and rejected Ms Parsons’s argument that criticism of AI use risked discouraging disabled litigants in person from using assistive technologies. The case illustrates the tension courts face between protecting access to justice and safeguarding the integrity of proceedings when litigants rely on novel tools. Local family law practitioners said such matters underline the importance of careful case management and verification.
The judge concluded that naming Ms Parsons was necessary and proportionate, finding that the public interest outweighed the potential privacy risks. The episode adds to mounting judicial and regulatory guidance that legal professionals and lay representatives must exercise due diligence when using AI, and that failure to verify authorities can carry professional and regulatory consequences.
Source: Noah Wire Services