Minnesota court rejects expert testimony tainted by AI-generated citations
Court excludes Stanford professor's expert testimony on deepfakes after discovering fake citations generated by the GPT-4o AI tool.
A federal district court in Minnesota has excluded expert testimony from a Stanford University professor in a case challenging the state's deepfake law, after discovering that the expert's declaration contained fake academic citations generated by artificial intelligence.
According to court documents filed on January 10, 2025, Professor Jeff Hancock, Director of the Stanford Social Media Lab, admitted using GPT-4o to help draft his expert declaration supporting Minnesota Attorney General Keith Ellison's defense of the state's law regulating deepfake content in political campaigns.
The case, Kohls v. Ellison, involves a First Amendment challenge to Minnesota Statute § 609.771, which prohibits the dissemination of deepfakes intended to injure political candidates or influence election results. District Judge Laura M. Provinzino called the situation "ironic," noting that Hancock, a credentialed expert on AI and misinformation, "has fallen victim to the siren call of relying too heavily on AI—in a case that revolves around the dangers of AI."
Court records show that Hancock's declaration cited two non-existent academic articles and incorrectly attributed the authors of a third. The errors stemmed from his use of GPT-4o during drafting: the tool generated fake citations, and Hancock failed to verify them before submitting the declaration under penalty of perjury.
The court highlighted a particularly troubling aspect of the incident: Hancock typically validates citations with reference software when writing academic articles but did not apply the same rigor to his court declaration. "One would expect that greater attention would be paid to a document submitted under penalty of perjury than academic articles," Judge Provinzino wrote.
Attorney General Ellison's office maintained it had no knowledge of the fake citations and attempted to remedy the situation by requesting leave to file an amended declaration. However, the court denied this motion, stating that the original declaration's errors "shatter his credibility with this Court."
The decision adds to growing judicial concern about AI-generated content in legal proceedings. The court cited several recent cases in which attorneys faced consequences for submitting AI-generated citations, among them Mata v. Avianca, Inc. (Southern District of New York), Park v. Kim (Second Circuit), and Kruse v. Karlan (Missouri Court of Appeals).
Judge Provinzino suggested that going forward, attorneys should specifically ask witnesses whether they used AI in drafting declarations and verify any AI-generated content. The court emphasized that while AI has potential benefits for legal practice, practitioners must maintain independent judgment and critical thinking skills rather than relying uncritically on AI-generated content.
The court did accept testimony from a second expert, Professor Jevin West from the University of Washington, finding his declaration provided reliable background information about AI, deepfakes, and their impacts on speech and democracy. Unlike Hancock's declaration, West's testimony contained properly verified sources and citations.
The ruling comes as courts nationwide grapple with maintaining evidentiary standards amid increasing AI use in legal practice. The Minnesota court's decision signals clear consequences for submitting unverified AI-generated content, even when it comes from highly qualified experts.
The underlying case continues as the court considers the constitutional challenge to Minnesota's deepfake law. The exclusion of Hancock's testimony may impact the state's ability to defend the statute's constitutionality, though the acceptance of West's testimony provides some expert support for the state's position.
The incident illustrates the challenges courts and legal practitioners face in integrating AI tools while upholding traditional standards of evidence and professional responsibility. The court's ruling serves as a warning that participants in legal proceedings, regardless of expertise or standing, must verify the accuracy of everything they submit, especially when AI tools are involved in document preparation.