AI Irony: Misinformation Expert Delivers Testimony Laced with AI Misinformation
Stanford professor Jeff Hancock is a recognised expert on misinformation, charging $600 an hour for that expertise. So it's more than a little awkward that an affidavit he drafted with the help of GPT-4o turned out to contain fabricated citations about misinformation itself.
What happened
Hancock submitted the affidavit in a Minnesota court case concerning the state's 2023 ban on using deepfakes to influence elections. Its purpose was to show how deepfakes amplify misinformation and erode trust in democratic institutions. Plaintiffs' attorneys spotted the problem: two cited articles simply didn't exist, and a third misattributed the authorship of a real study.
How the errors crept in
Hancock later admitted he had used GPT-4o and Google Scholar to assist with research and drafting. He had included placeholder tags ("[cite]") as reminders to insert correct references, but inadvertently allowed GPT-4o to generate fabricated citations in their place. "I express my sincere regret for any confusion this may have caused," he wrote, adding that he stood by the substantive points of the affidavit.
The irony is hard to miss
A misinformation expert's affidavit, about deepfakes, no less, undermined by misinformation. This isn't merely embarrassing; it's a clear warning about unchecked AI reliance in high-stakes environments. When AI hallucinates citations in a case designed to address digital deception, the consequences go well beyond irony.
This isn't an isolated case
Hancock's mistake is not unique. Many professionals have integrated AI into their workflows only to find it confident, capable, and occasionally wrong in ways that matter. In legal documents and expert reports, those errors carry serious repercussions.
Transparency and accountability
The case sparked a wider debate: should experts disclose when AI assists in drafting authoritative documents? Increasingly, the answer is yes. Some courts have begun requiring parties to disclose AI involvement in filings, and as AI becomes embedded in professional workflows, robust guidelines and ethical frameworks are essential, not optional.
The teaching angle
Hancock teaches "COMM 1: Introduction to Communication" and "COMM 324: Language and Technology" at Stanford, where he emphasises proper citation as a way of broadening whose work gets represented in the field. His students noted the irony of their professor being caught out by AI-generated fabrications. Even experts are not immune to AI's limitations.
The takeaway
Hancock's experience is a vivid illustration of AI's complex role in professional life. As AI becomes a standard part of how we work, transparency and clear disclosure aren't just good practice; they're necessary safeguards. Balancing innovation against responsibility demands honest conversation about where AI falls short.