Latest ‘Bluebook’ has ‘bonkers’ rule on citing to artificial intelligence
Updated: If you are unsure about how and when to cite content generated by artificial intelligence, a new citation rule is unlikely to clear up the confusion, according to experts who spoke with LawSites.
The 22nd edition of The Bluebook: A Uniform System of Citation, released in May, includes a new Rule 18.3 for citing output from generative AI. Critics argue that the new rule “is fundamentally flawed in both conception and execution,” LawSites reports.
Critics include Susan Tanner, a professor at the University of Louisville’s Louis D. Brandeis School of Law, who called the new rule “bonkers” in a post on Medium.
The rule requires that authors citing output from generative AI, such as ChatGPT conversations or Google search results, save a screenshot of that output as a PDF. The rule is divided into three sections, covering large language models, search results and AI-generated content, with slightly different citation requirements for each.
One problem, Tanner said, is that the rule treats AI as a citable authority, rather than a research tool.
“What would a sensible approach to AI citation look like?” Tanner wrote. “First, recognize that in 99% of cases, we shouldn’t be citing AI at all. We should cite the verified sources AI helped us find.”
In the rare case in which an AI output should be cited, the author should remember that the citation is documenting what was said by generative AI, not the truth of what was said, Tanner said. She provides this example: “OpenAI, ChatGPT-4, ‘Explain the hearsay rule in Kentucky’ (Oct. 30, 2024) (conversational artifact on file with author) (not cited for accuracy of content).”
Jessica R. Gunder, an assistant professor of law at the University of Idaho College of Law, provided another example of an appropriate citation to generative AI in her critique of Rule 18.3 posted to SSRN.
“If an author wanted to highlight the unreliability of a generative AI tool by pointing to the fact that the tool crafted a pizza recipe that included glue as an ingredient to keep the cheese from falling off the slice, a citation—and preservation of the generative AI output—would be appropriate,” she wrote.
Cullen O’Keefe, the director of research at the Institute for Law & AI, sees another problem. The rule differentiates between large language models and “AI-generated content,” but content generated by large language models is a type of AI-generated content.
In an article at the Substack blog Jural Networks, he suggested that one interpretation of the rule governing AI-generated content is that it applies to nontextual output such as images and audio recordings.
He also sees inconsistencies in whether the rule requires company names along with model names and in when it requires the date of generation and the prompt used.
“I don’t mean to be too harsh on the editors, whom I commend for tackling this issue head-on,” O’Keefe wrote. “But this rule lacks the typical precision for which The Bluebook is (in)famous.”
Updated Sept. 25 at 2:34 p.m. to accurately cite Cullen O’Keefe’s point about large language models. Updated on Sept. 27 at 8 a.m. to correct Gunder’s title.