Attorneys representing Anthropic implement new verification procedures after filing declaration containing AI-generated hallucinations

Issue 31
The latest tale of AI-generated hallucinations in court filings has a practical takeaway for lawyers.
On May 15, 2025, a Latham & Watkins attorney representing Anthropic filed a declaration in Concord Music Group, Inc. et al. v. Anthropic PBC, as ordered by the Court to address an error in a previously filed declaration.[i] The declaration at issue was filed on April 30, 2025, by an Anthropic data scientist in support of Anthropic’s proposal to produce a statistically significant sample of Claude.ai’s prompt and output records for purposes of discovery in the Concord Music Group case.[ii]
Anthropic’s attorney, Ivana Dukanovic, explained in her May 15 declaration that, while preparing the data scientist’s declaration, she asked Anthropic’s AI tool Claude.ai to generate a properly formatted citation for an article that supported the data scientist’s testimony.[iii] The citation generated by Claude.ai provided an incorrect title and incorrect authors for the article, an error that was not caught before the declaration was filed.[iv] Ms. Dukanovic also disclosed two additional citation errors generated by Claude.ai in the same declaration.[v] Ms. Dukanovic characterized her law firm’s failure to catch the hallucinated citations before filing as “an embarrassing and unintentional mistake.”[vi] She further explained that the firm had added new verification procedures involving multiple levels of review to ensure that such errors do not occur again.[vii]
The following day, plaintiffs responded to Ms. Dukanovic’s declaration, arguing that the reliability of the data scientist’s declaration and Anthropic’s position in the discovery dispute had been undermined, and that the declaration should be excluded or stricken.[viii] At the time this issue of the newsletter was prepared for publication, the Court had not yet ruled.
This is just one of many similar stories from the past few years in which lawyers have filed court documents containing AI-generated hallucinations. What’s notable here is the action Latham & Watkins took to mitigate the risk of such an embarrassing and unintentional mistake occurring again: adding new procedures involving multiple levels of review of AI-generated output.
As I’ve written before in this newsletter, law firms and other organizations that use AI to prepare court filings can reduce the risk of filing a document with hallucinations by going further than merely reminding lawyers that AI tools can hallucinate. Leaders should consider drafting and implementing written procedures that attorneys are expected to follow to verify the accuracy of AI-generated output.
If you are interested in exploring ways to reduce your risk of an AI-related mishap, you can sign up for my free resource, A Lawyer’s First Three Steps to Reduce AI Risk, here.
Thanks for being here.
Jennifer Ballard
[i] Declaration of Ivana Dukanovic Related to ECF No. 365 at 1, Concord Music Group, Inc. et al. v. Anthropic PBC, No. 5:24-cv-03811 (N.D. Cal. opened June 26, 2024).
[ii] Declaration of Olivia Chen in Support of Anthropic’s Sampling Proposal in Connection with Joint Discovery Dispute at 1, Concord.
[iii] Declaration of Ivana Dukanovic Related to ECF No. 365 at 2, Concord.
[iv] Id.
[v] Id.
[vi] Id.
[vii] Id. at 3.
[viii] Publishers’ Response to Declaration of Ivana Dukanovic Related to ECF No. 365 at 1-2, Concord.