The article is here; the Introduction:
As articulated by Justice Brandeis in Whitney v. California (1927), a foundational assumption of First Amendment jurisprudence is that the best remedy for potentially harmful speech, including false and misleading speech, is "more speech, not enforced silence." This extended Oliver Wendell Holmes' "free trade in ideas" model of speech, in which the ultimate good is reached when people are free to exchange ideas in a marketplace without fear of government punishment (Nunziato 2018). However, the idea that an unregulated marketplace of ideas leads to the greatest public good has been increasingly challenged as our politics has become more contentious, polarized, and burdened with conspiracy theories that could spread unimpeded through online networks (e.g., Sunstein 2021).
The January 6 Capitol riot provides the most striking example of this state of affairs: Supporters of the sitting president, believing conspiracy theories about a stolen election (many of which were transmitted through social media), attacked the Capitol to disrupt the certification of the 2020 election. Of course, this is not an isolated incident—believers in conspiracy theories have been linked to numerous instances of societal harm. Supporters of the conspiracy theory-laden QAnon movement have engaged in harassment, kidnappings, domestic terrorism, and killings (Bump 2019). Those who believe COVID-19 conspiracy theories—of which there are many—refuse social distancing, masking, and vaccination (Romer and Jamieson 2020), allowing the virus to spread unhindered. If conspiracy theories are causing people to engage in violent or otherwise harmful actions, doesn't the government have the responsibility to prevent those harms by limiting the reach of conspiracy theories?
It is clear that conspiracy theories (and other similarly dubious ideas) are subject to existing jurisprudential doctrine regarding defamation, imminent lawless action, threats, and false statements (Han 2017, 178). Indeed, one could argue with relative ease that at least some conspiracy theories serve no purpose in contributing to the marketplace of ideas, promoting healthy democracy, or aiding in the search for the truth, and that any personal or societal harm stemming from such conspiracy theories outweighs the merits of protecting them. But as with all other forms of speech, circumstances matter, and under current legal frameworks, only particular conspiracy theories—those that fall into one of the categories of low-value speech listed above—will be denied constitutional protection. The result is that most conspiracy theories, even those that are intentional lies, will constitute protected speech.
Anxiety about the role that conspiracy theories have played in recent unlawful and normatively undesirable actions like those described above has prompted some legal scholars to argue that these theories should receive less protection under the First Amendment than they currently do (Sunstein 2021; Han 2017; Hay 2019; Waldman 2017; Schroeder 2019; Thorson and Stohler 2017). Their claim is that existing doctrine is antiquated and unsuited to ameliorating increasingly dire social ills in an era in which ideas can travel farther and faster than ever before.
This also appears to be the position of many policymakers (e.g., Klobuchar 2022). In recent years, the U.S. president and members of Congress have publicly browbeaten social media companies for promoting conspiracy theories (and other dubious ideas) on their platforms, calling for these companies to take "additional steps" and admonishing them for "killing people" (Bose and Culliford 2021). Congress has held hearings addressing the scope of conspiracy theories online, resulting in a number of proposals at the national and state levels to curb this type of potentially harmful speech through content moderation and legal penalties (Walker 2020; Riggleman 2020; Heilweil 2020). For example, Sen. Amy Klobuchar, D-MN, sponsored a bill that would remove the protections afforded by Section 230 of the Communications Decency Act if health misinformation, as defined by the Department of Health and Human Services, were algorithmically promoted by a platform (MacCarthy 2021).
In this paper, we argue that, from a normative perspective, laws restricting the dissemination of conspiracy theories should be permissible only if two conditions can be met: 1) "conspiracy theory" can be specifically defined, and ideas can be, with minimal error, classified as conspiracy theories; 2) the causal impact of conspiracy theories on unlawful and otherwise dangerous behavior can be empirically demonstrated. Satisfying the first condition prevents the limitation of speech based solely on the ideology of the ideas being expressed (i.e., viewpoint discrimination); satisfying the second condition ensures that there exists a reasonable societal interest in preventing the speech.
Drawing on an interdisciplinary body of literature about the basic nature, epistemology, and correlates of beliefs in conspiracy theories, we demonstrate that neither condition can be satisfied. Indeed, the concise definition of "conspiracy theory" is prevented by centuries-old epistemological quandaries, the accurate categorization of ideas as conspiracy theories is prohibited by a combination of definitional challenges and human psychology, and researchers' ability to either explain or forecast unlawful, dangerous behaviors using the communication of or belief in conspiracy theories is extremely weak.
Further, we challenge the premises underwriting the desire to construct a new legal framework for dealing with conspiracy theories. Specifically, we argue that conspiracy theories do not pose greater problems today than in the past, that social media and other new communication technologies have not ushered in an increase in conspiracy theorizing, and that the dangers that do spring from conspiracy theories are most realized when political leaders, rather than private citizens, traffic in them.
Finally, we argue that conspiracy theories oftentimes possess the qualities of protected speech; namely, they can promote, and historically have promoted, democracy and a search for the truth. Not only does this evidence preclude the construction of a new legal framework designed to limit conspiratorial speech, but it also showcases how other proposals in this vein would capriciously censor ideas based on personal viewpoints, cause a severe chilling effect, ensnare more speech than their stated intentions suggest, and do little to stymie the harms to be prevented.