A legal scholar has described an unusual dilemma after using generative AI to compare two 19th-century transcripts of Aaron Burr’s treason trial: whether the resulting 22-page memo should remain an internal research aid, be posted online as a draft, or be treated as publishable scholarship. The document, produced after dozens of rounds of prompting, was designed to test whether his earlier 2021 article on the privilege against self-incrimination still held up when measured against a second transcript. According to his account, the core substantive points matched closely, with only small discrepancies one might expect from independent recorders of the same proceedings.
The method behind the memo was far more involved than a single prompt. He said he spent hours refining Claude’s output, correcting pagination problems, pushing the model to compare equivalent passages, and insisting on direct quotations and page references. At one stage, he said, the model completed only about a third of the requested comparisons correctly, and he eventually found that checking passages side by side with screenshots improved reliability. He also asked the system to identify any arguments or legal authorities that appeared in one transcript but not the other, and to assess whether his earlier summary had missed anything.
That experience left him with a second, more difficult question: what is this document, exactly? He suggested that it could be treated as a private research memo, used only as a guide before he does the comparison himself by hand. Alternatively, he considered placing it on SSRN as a draft without pursuing journal publication, so that researchers interested in the Burr trial could find it. A third option would be to seek publication, on the theory that the memo has enough scholarly value to warrant a formal home.
The author’s uncertainty mirrors a wider legal debate over AI-generated work. One recent SSRN paper proposes a special framework for protecting AI-created outputs through limited terms, registration and notice requirements, and a public-interest funding mechanism. Another argues that legal scholarship risks losing more than efficiency gains if writing itself is reduced to an instrumental exercise, warning that generative AI may erode the intellectual value of the drafting process. A separate paper by Andrew Perlman argues that the rise of generative systems forces legal academia to confront authorship anew, since the production of scholarship itself is changing.
Other scholars have gone further in asking whether AI can itself be treated as an author. One SSRN paper by Cheng Lim Saw and Duncan Lim argues that copyright doctrine should, in some circumstances, recognise AI authorship. Michael D. Murray, by contrast, focuses on AI as academic support, describing it as a tool that can explain and summarise material more quickly than traditional methods. And a recent paper by David M. Pereira suggests authorship should not be treated as a simple yes-or-no label, but as a qualitative threshold that may still be met by a human when AI is operating under that person’s intellectual control.
For now, the scholar behind the Burr trial comparison says his instinct is to decide first whether doing the work himself would take too long. If it would not, he would prefer to discard the AI memo as a publishable object and write the comparison in his own words. If it would, he says he may adopt a “prompter-director” role: write an introduction explaining how the project was made, attach the AI-generated memo, and make clear that the machine produced the underlying text. That leaves the larger issue unresolved, but it also captures the central tension in AI-assisted legal writing: the line between assistance, direction and authorship is becoming harder to draw.
Source: Noah Wire Services