[UPDATE 10/19/2023, 7:57 pm EST: The AG's office has withdrawn its request as to Rumble, writing: "While we wholeheartedly disagree with your analysis and the contentions in your letter, in view of your statement that Rumble has 'already provided its content-moderation policies' to the Attorney General's Office, and in the interest of avoiding any unnecessary dispute, this Office hereby withdraws the voluntary requests set forth in its October 12, 2023 letter to Rumble."]
From a letter sent by the New York Attorney General to "Google, Meta, X (formerly Twitter), TikTok, Reddit, and Rumble" on Oct. 12:
Re: Removing Calls for Violence on the [name of platform] Platform …
In the wake of the horrific terrorist attacks in Israel, there have been reports of growing antisemitism and Islamophobia, including threats of violence against Jewish and Muslim people and institutions. We have also become aware of press reports that terrorist groups and individuals that sympathize with them are disseminating calls for violence and other materials that may incite violence against Jewish and Muslim people and institutions on social media platforms. We are deeply concerned about this activity in light of the tragic history of such calls for violence. We would like to better understand how [your platform is] ensuring that its platform (the "Platform") is not being used to incite violence and further terrorist activities, including by describing how the company is identifying, removing, and blocking the re-uploading of such content.
We request that you promptly respond to the following questions by October 20, 2023.
1. What actions, if any, has the company taken to address the recent calls for violence against Jewish and Muslim people and institutions and the possibility that the Platform may be used to plan, encourage, or disseminate those acts?
2. Describe in detail the public-facing terms of service, community rules, or other policies that prohibit users from using your Platform to disseminate calls for violence, including copies of any documentation or policies.
3. Describe in detail the company-facing policies that govern the determination of whether content is a call for violence that should be removed, including copies of any documentation and policies.
4. Describe in detail the company's process for reviewing and removing calls for violence in response to reports from Platform users, including copies of any internal documentation.
5. Describe in detail the company's process for identifying and removing calls for violence other than in response to reports from Platform users, including copies of any internal documentation.
6. Describe in detail the company's methods for identifying and removing additional copies of content that has been removed as a result of the processes described in response to Question 4 and Question 5.
7. Describe in detail the company's methods for blocking the re-posting of content that has been removed as a result of the processes described in response to Questions 4, 5, and 6.
8. Describe in detail the company's policies regarding disciplining, suspending, and/or banning users for posting content that has been removed as a result of the processes described in response to Question 4 and Question 5.
9. Describe in detail whether content that has been removed as a result of the European Union's Digital Services Act is removed globally or whether such content remains accessible to users in the United States.
10. Explain whether the company has used the Global Internet Forum to Counter Terrorism hash-sharing database to identify content on the Platform calling for violence.
11. For all questions above, please identify the company or affiliate group responsible for such processes, including but not limited to the organizational chart for such group and the number of full- and part-time employees in such group.
The accompanying press release seems to me to make clear that this isn't mere research, but rather an attempt to get the platforms to "prohibit the spread of violent rhetoric that puts vulnerable groups in danger."
Here's an excerpt from the letter that FIRE (which is representing Rumble and me in Volokh v. James, a challenge to a New York law that would require platforms to, among other things, post policies on how they deal with supposed "hate speech") just sent in response; note that in that letter, FIRE is just representing Rumble and not me, since the AG's office only sent its letters to a limited group of platforms:
As the Attorney General's Office knows, Rumble, in addition to Locals Technology Inc. (Locals) and Eugene Volokh, is a plaintiff in a lawsuit against Attorney General James challenging New York's Online Hate Speech Law. The law targets "hateful" speech across the internet—defining "hateful" content as that which may "vilify, humiliate, or incite violence against" ten protected classes—requiring websites to develop and publish hate-speech policies and reporting mechanisms, and to respond to reports of hate speech. But Judge Andrew Carter of the U.S. District Court for the Southern District of New York enjoined enforcement of the law, including the law's investigation and enforcement provisions. Volokh v. James, No. 22 Civ. 10195, 2023 WL 1991435 (S.D.N.Y. Feb. 14, 2023).
The October 12th letters "request" information about the Investigated Platforms' editorial policies, processes, and decisions for content that "may incite violence." At a minimum and on their face, the letters plainly seek to allow the Office to "take proof and make determinations of fact" under the Online Hate Speech Law. And according to your October 13th press release, the letters go further by demanding that the Investigated Platforms disclose their actions to "stop the spread of hateful content" and "violent rhetoric," in a transparent effort to get them to "remove" protected speech. Because these demands, compounded by their vague references to hateful or violent speech, are within the scope of the Online Hate Speech Law's investigation provision, they violate the district court's injunction. Volokh, 2023 WL 1991435 at *1.
Rumble abhors violence, antisemitism, and hatred, and is horrified by the October 7th Hamas attacks on Israeli civilians. However, federal court orders and the First Amendment prohibit any investigation under the Online Hate Speech Law or any attempt to burden the protected speech of the Investigated Platforms and their users. Rumble, therefore, demands that the Attorney General rescind the October 12, 2023 investigation letters immediately. And in any event, in response to a similar June 2022 letter from the Attorney General's Office, Rumble already provided its content-moderation policies, which expressly prohibit the posting of content that promotes violence, illegal activities, and harm or injury to any group, including antisemitism. The policies, which are available online, speak for themselves, and Rumble respectfully declines to respond further at this time….
Further, the October 12th investigation letters unconstitutionally burden the Investigated Platforms' publication of First Amendment-protected content and the protected speech of third-party content creators. The First Amendment protects the rights of internet platforms as publishers of third-party content. Like newspapers and bookstores, websites have a First Amendment right to maintain the autonomy of their editorial judgment and discretion "in the selection and presentation of" content provided to the public.
Similarly, the First Amendment protects content creators and users from governmental burdens that are likely to chill their speech—exactly what the State's investigation letters seek to accomplish with their broad, vague, and inherently subjective language, coupled with references to removing content and "disciplining, suspending, and/or banning users." As the Court said in Volokh, "the state's targeting and singling out of [particular] speech for special measures certainly could make social media users wary about the types of speech they feel free to engage in without facing consequences from the state." See also Bantam Books, Inc. v. Sullivan, 372 U.S. 58, 68 (1963) ("People do not lightly disregard public officers' thinly veiled threats to institute … proceedings against them if they do not come around.").
The State's investigation letters—though styled as information requests—nevertheless signal that the Investigated Platforms should remove "hateful content" and ban "violent rhetoric." The letters, after all, are titled "Removing Calls for Violence on the [company] Platform." The word "remove," or variations of it, appears 10 more times in the two-page letter, with repeated references to "identifying" and "blocking" disfavored content.
Combined with the Attorney General's October 13th press release and her past rhetoric, the Investigated Platforms can draw only one conclusion: The chief law-enforcement agent in New York, with vast resources at her disposal, wants what she determines to be "hateful content" and "violent rhetoric" related to Jewish and Muslim people removed from online platforms—and there may be legal consequences if platforms do not do so to the State's satisfaction. This conclusion requires the Investigated Platforms, among others, to "steer far wider" in their content removal than they otherwise would to avoid the legal threat.
The investigation letters are centered around the dissemination of "calls for violence" and "other materials that may incite violence," exacerbating their First Amendment burdens by their use of these phrases and similarly overbroad, vague, and inherently subjective language. The State fails to define these phrases or terms, and thus fails to inform the Investigated Platforms of the type of conduct or speech that could be encapsulated by them. We are therefore forced to wonder, would a video created by a pro-Israeli activist calling for bombing Gaza qualify as a "call for violence"? Is a news report including a quotation from a pro-Palestinian protestor defending Hamas attacks on Israeli military equal to "disseminating calls for violence and other materials that may incite violence"? Do the statements of American elected officials, such as Representative Cori Bush's October 9th call to "end[] U.S. support for Israeli military occupation and apartheid" or Senator Lindsey Graham's October 10th statement that "[w]e're in a religious war … do whatever the hell you have to do to defend yourself. Level the Place!" qualify as speech that "may incite violence" or "encourage" violence?
Just as [the Online Hate Speech Law] was preliminarily enjoined as likely to violate the First Amendment—because it inhibits protected expression with viewpoint-discriminatory, overbroad, and vague speech regulations—so too does the State's investigation impinge the free publication and creation of protected speech on Investigated Platforms' websites….
Again, this is FIRE's letter on Rumble's behalf, not on mine, and I wasn't involved in its drafting; but I thought this was an important controversy worth noting.
UPDATE: See also FIRE's press release discussing this.
The post [UPDATED] New York AG Tells Platforms to Disclose What They Are Doing About "Calls for Violence and Other Materials That May Incite Violence" appeared first on Reason.com.