Yoel Roth, who used to be in charge of "trust and safety" at a social media company that used to be called Twitter, is worried about "coercive influences on platform decision making." But his concern is curiously selective. Unobjectionably, he sees coercion when foreign governments threaten to arrest uncooperative platform employees. More controversially, he also sees coercion when Republicans criticize content moderation decisions. When Democrats in positions of power pressure social media platforms to suppress politically disfavored content, however, Roth sees no cause for concern.
Roth begins his confused and confusing New York Times essay on this subject by airing a personal grievance that the headline also highlights: "Trump Attacked Me. Then Musk Did. It Wasn't an Accident." When Roth worked at Twitter, now known as X, he "led the team that placed a fact-checking label on one of Donald Trump's tweets for the first time." After the January 6, 2021, riot by Trump supporters at the U.S. Capitol, Roth "helped make the call to ban his account from Twitter altogether."
Because of Roth's involvement in that first decision, Trump "publicly attacked" him. After Elon Musk acquired Twitter in 2022, prompting Roth to resign from his position, Musk "added fuel to the fire." As a result, Roth says, "I've lived with armed guards outside my home and have had to upend my family, go into hiding for months and repeatedly move."
It goes without saying that no one should have to worry about threats to his personal safety because he made controversial decisions as an employee of a social media company. And it is certainly true that Trump, both before and after the riot he inspired, has never shown any concern about the risk posed by his combustible combination of inflammatory rhetoric, personal attacks, and reality-defying claims. But Musk's role in all of this is more ambiguous, since the "fuel" he added to "the fire" consisted mainly of internal Twitter communications that he disclosed to several journalists, who presented them as evidence that federal officials had pressured the platform to suppress speech those officials viewed as dangerous. Although a federal judge and an appeals court saw merit in the claim that such meddling violates the First Amendment, Roth conspicuously ignores that concern even as he bemoans "coercive influences on platform decision making."
The real threat, as Roth sees it, is that platforms have abandoned their responsibility to police "misinformation" and "disinformation" in response to conservative criticism, intimidation, and political pressure. He presents his own experience as emblematic of that problem.
That experience began with Roth's decision to slap a warning label on a May 2020 tweet in which Trump claimed that mail-in ballots are an invitation to fraud and that their widespread use would result in a "Rigged Election." After senior White House adviser Kellyanne Conway "publicly identified me as the head of Twitter's site integrity team" and the New York Post "put several of my tweets making fun of Mr. Trump and other Republicans on its cover," Roth notes, the president "tweeted that I was a 'hater.'" That triggered "a campaign of online harassment that lasted months, calling for me to be fired, jailed or killed."
That result, Roth avers, was "part of a well-planned strategy"—"a calculated effort to make Twitter reluctant to moderate Mr. Trump in the future and to dissuade other companies from taking similar steps." The strategy "worked," he says, as evidenced by Twitter CEO Jack Dorsey's reluctance to shut down Trump's account (as Roth recommended) based on mid-riot tweets—including one criticizing Vice President Mike Pence for refusing to interfere in the congressional certification of Joe Biden's victory—that egged on the rioters rather than trying to calm them. Trump "was given a 12-hour timeout instead" before he was finally banned from Twitter two days after the riot. Roth complains that Twitter also was unjustifiably patient with "prominent right-leaning figures" such as Rep. Marjorie Taylor Greene (R–Ga.), who "was permitted to violate Twitter's rules at least five times before one of her accounts was banned in 2022."
Since Trump is a petty, impulsive man who has always been quick to lash out at anyone who irks him, the suggestion that he was executing "a well-planned strategy" probably gives him too much credit. And without minimizing his rhetorical recklessness, which was at the center of the case against him in his well-deserved second impeachment, it is fair to question the comparison that Roth draws between Trump's pique at him and cases in which government officials have explicitly used their coercive powers to impose their will on social media platforms.
"Similar tactics are being deployed around the world to influence platforms' trust and safety efforts," Roth writes. "In India, the police visited two of our offices in 2021 when we fact-checked posts from a politician from the ruling party, and the police showed up at an employee's home after the government asked us to block accounts involved in a series of protests." However unseemly and ill-advised, calling Roth a "hater" on Twitter is qualitatively different from deploying armed agents of the state to intimidate the company.
Roth offers another example of government intimidation that he sees as analogous to what Trump did to him: "In 2021, ahead of Russian legislative elections, officials of a state security service went to the home of a top Google executive in Moscow to demand the removal of an app that was used to protest Vladimir Putin. Officers threatened her with imprisonment if the company failed to comply within 24 hours. Both Apple and Google removed the app from their respective stores, restoring it after elections had concluded." Again, Trump's complaint about Twitter's warning label is not in the same category as threatening tech company employees with imprisonment.
Roth not only glides over the distinction between criticism and threats; he equates constitutionally protected speech with government bullying. "In the United States," he says, "we've seen these forms of coercion carried out not by judges and police officers, but by grass-roots organizations, mobs on social media, cable news talking heads and—in Twitter's case—by the company's new owner."
Before we get into Roth's beef against Musk, it is worth emphasizing that "grass-roots organizations," "mobs on social media," and "cable news talking heads" are not engaging in the same "forms of coercion" as cops dispatched to enforce the government's will. In fact, they are not engaging in "coercion" at all; they are exercising their First Amendment rights. Their criticism may be misguided, unfair, or overheated, but it is undeniably covered by "the freedom of speech" unless it crosses the line into a legal exception such as defamation or "true threats."
As for Musk, Roth complains that he disclosed "a large assortment of company documents"—"many of them sent or received by me during my nearly eight years at Twitter"—to "a handful of selected writers." Although the "Twitter Files" were "hyped by Mr. Musk as a groundbreaking form of transparency," Roth says, they did not reveal anything significant about how Twitter decided which kinds of speech were acceptable. He cites Techdirt founder Mike Masnick's judgment that "in the end 'there was absolutely nothing of interest' in the documents."
Although Roth assures us there is nothing to see here, fair-minded people might disagree. The Twitter Files showed, for example, that the platform eagerly collaborated with the Biden administration's efforts to suppress "misinformation" about COVID-19, which sometimes involved truthful statements that were deemed inconsistent with guidance from the Centers for Disease Control and Prevention (CDC). This month the U.S. Court of Appeals for the 5th Circuit concluded that such automatic deference to the CDC, which also was apparent at other platforms, qualified as "significant encouragement" of censorship by a government agency, "in violation of the First Amendment."
The platforms "came to heavily rely on the CDC," the 5th Circuit noted. "They adopted rule changes meant to implement the CDC's guidance." In many cases, social media companies made moderation decisions "based entirely on the CDC's say-so." In one email, for example, a Facebook official said "there are several claims that we will be able to remove as soon as the CDC debunks them" but "until then, we are unable to remove them."
The 5th Circuit said the CDC's role in content moderation, although inappropriate, was "not plainly coercive," mainly because the agency had no direct authority over the platforms. But when it came to pressure exerted by the White House, the court saw evidence of "coercion" as well as "significant encouragement." According to the 5th Circuit, the administration's relentless demands that Facebook et al. do more to control "misinformation," which were coupled with implicit threats of punishment, crossed the line between permissible government speech and impermissible intrusion on private decisions.
Roth does not mention that decision, even to criticize it. But if President Trump was abusing his bully pulpit when he called Roth a "hater," what was President Biden doing when he accused social media companies of "killing people" by allowing speech that discouraged vaccination against COVID-19?
Surgeon General Vivek Murthy lodged the same charge while threatening Facebook et al. with "legal and regulatory measures" if they failed to do what the administration wanted. Other administration officials publicly raised the prospect of antitrust action, new privacy regulations, and increased civil liability for user-posted content. Meanwhile, behind the scenes, White House officials were persistently pestering social media companies, demanding that they delete specific posts and banish specific users while alluding to Biden's continuing displeasure at insufficiently strict speech regulation.
Roth portrays Trump's whining, conservative criticism, and Musk's avowed "transparency" as part of a coordinated "campaign" to discourage platforms from suppressing "misinformation." In his view, these are all "coercive influences on platform decision making." But he evidently sees nothing troubling about the Biden administration's crusade against "misinformation," which the 5th Circuit thought plausibly amounted to "coercion" because it was backed by implied threats of government retaliation. That definition of coercion seems a lot more reasonable than Roth's.
Speaking of definitions, Roth seems confident that "misinformation" can be readily identified, although that category is vague and highly contested. Even if he trusts the Biden administration to decide which speech qualifies as "misinformation," he should be concerned about how a second Trump administration—or the foreign authoritarians he mentions—might wield the same power.
Roth's double standard is also apparent when he decries the intimidating effect of congressional inquiries into the alleged anti-conservative bias of major social media platforms. He does not acknowledge that congressional pressure on tech companies is a bipartisan phenomenon, with Republicans arguing that platforms discriminate against right-wing speech and Democrats arguing that they should be doing more to suppress "misinformation" and "hate speech."
Members of both parties want to override the editorial judgment of social media platforms, which is supposed to be protected by the First Amendment. Their competing demands suggest the wisdom of a general rule against government interference with content moderation decisions, regardless of the ideological motivation behind it. But Roth is so focused on the people who have wronged him and the social media users who offend him that he cannot see the merits of that approach.