Apparent AI Hallucinations in AI Misinformation Expert's Court Filing Supporting Anti-AI-Misinformation Law

By Eugene Volokh

Minnesota recently enacted a law restricting the use of misleading AI deepfakes to influence elections; the law is now being challenged on First Amendment grounds in Kohls v. Ellison. To support the law, the government defendants introduced an expert declaration, written by a scholar of AI and misinformation who is the Faculty Director of the Stanford Internet Observatory. Here is ¶ 21 of the declaration:

[T]he difficulty in disbelieving deepfakes stems from the sophisticated technology used to create seamless and lifelike reproductions of a person's appearance and voice. One study found that even when individuals are informed about the existence of deepfakes, they may still struggle to distinguish between real and manipulated content. This challenge is exacerbated on social media platforms, where deepfakes can spread rapidly before they are identified and removed (Hwang et al., 2023).

The attached bibliography provides this cite:

Hwang, J., Zhang, X., & Wang, Y. (2023). The Influence of Deepfake Videos on Political Attitudes and Behavior. Journal of Information Technology & Politics, 20(2), 165-182. https://doi.org/10.1080/19331681.2022.2151234

But the plaintiffs' memorandum in support of their motion to exclude the expert declaration alleges—apparently correctly—that this study "does not exist":

No article by the title exists. The publication exists, but the cited pages belong to unrelated articles. Likely, the study was a "hallucination" generated by an AI large language model like ChatGPT….

The "doi" URL is supposed to be a "Digital Object Identifier," which academics use to provide permanent links to studies. Such links normally redirect users to the current location of the publication, but a DOI Foundation error page appears for this link: "DOI NOT FOUND." … The title of the alleged article, and even a snippet of it, does not appear anywhere on the internet as indexed by Google and Bing, the most commonly used search engines. Searching Google Scholar, a specialized search engine for academic papers and patent publications, reveals no articles matching the description of the citation authored by "Hwang" that includes the term "deepfake." …

This sort of citation—with a plausible-sounding title, alleged publication in a real journal, and fictitious "doi"—is characteristic of an artificial intelligence "hallucination," which academic researchers have warned their colleagues about. See Goddard, J., Hallucinations in ChatGPT: A Cautionary Tale for Biomedical Researchers (2023) ….
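For the technically curious, the DOI check the memorandum describes is easy to reproduce: the DOI Foundation exposes its resolver over HTTP, so a few lines of code can ask whether a DOI is registered at all. Below is a minimal sketch, assuming Python 3 and only its standard library, that queries the doi.org handle API; a registered DOI returns a responseCode of 1, while an unregistered one (like the DOI cited in ¶ 21) gets an HTTP 404 "handle not found" response.

```python
# A minimal sketch (Python 3, standard library only) of the DOI check
# described in the memorandum: query the DOI Foundation's handle API
# at https://doi.org/api/handles/<doi>. A registered DOI answers with
# responseCode 1; an unregistered one answers HTTP 404 ("handle not found").
import json
import urllib.error
import urllib.request

def doi_is_registered(doi: str) -> bool:
    """Return True if the DOI Foundation's resolver knows this DOI."""
    url = f"https://doi.org/api/handles/{doi}"
    try:
        with urllib.request.urlopen(url) as resp:
            return json.load(resp).get("responseCode") == 1
    except urllib.error.HTTPError as err:
        if err.code == 404:  # handle not found
            return False
        raise

# The DOI cited in ¶ 21 of the declaration:
print(doi_is_registered("10.1080/19331681.2022.2151234"))  # prints False
```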

I also checked the other cited sources in the declaration, and likewise couldn't find the following one, which was cited in ¶ 19:

De keersmaecker, J., & Roets, A. (2023). Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Misinformation Acceptance. Computers in Human Behavior, 139, 107569. https://doi.org/10.1016/j.chb.2023.107569

Indeed, a cautionary tale for researchers about the illusion of authenticity (though an innocent mistake, I'm sure). I e-mailed the author of the declaration to get his side of the story; he replied that he will have a statement in a few days, and I will of course update this post, and likely post a follow-up, when I receive it.

