A U.S. Supreme Court decision allowing government officials to keep talking to social media companies about content moderation may have come too late to help stem the spread of political disinformation before the November election.
Why it matters: Disinformation campaigns targeting the 2024 U.S. elections are expected to be more numerous and far-reaching than those seen in past elections, experts warn.
- Nation-state hackers are already using AI-enabled tools in their disinformation campaigns, and geopolitical tensions have made Russia and China more invested in November's presidential vote.
Driving the news: The Supreme Court ruled Wednesday in Murthy v. Missouri to allow the Biden administration to keep talking to social media platforms about content moderation issues, including foreign influence operations.
- The decision concluded a years-long legal battle in which lower-court injunctions at times barred certain government agencies from talking with social media companies about these issues.
The big picture: Even with the court's decision, the legal battles have already damaged the broader ecosystem's ability to study and respond to disinformation.
- GOP lawmakers used the court case to target disinformation researchers at Stanford University so ferociously that the school appears to have wound down their program.
- Civil society groups have warned that, ahead of the 2024 vote, social media companies have pulled back on moderation policies the groups say are necessary to fight extremism and disinformation online.
- And the FBI only recently resumed its outreach to some American tech companies about disinformation after a more than six-month break, NBC News reported.
- "We have lost some valuable time that we should have been able to be working in a much more robust manner with the platforms and with state and local election officials," Suzanne Spaulding, a former Department of Homeland Security undersecretary who led the agency that later became the Cybersecurity and Infrastructure Security Agency, told Axios.
Flashback: In 2020, the FBI and CISA both played roles in alerting social media companies to potential mis- and disinformation on their platforms.
- CISA ran a "switchboarding" operation in which it relayed reports of online misinformation about the voting process from election officials to social media platforms.
- Academic researchers regularly published reports detailing the ways disinformation was spreading online — which the federal government used to help inform its work.
Between the lines: The biggest long-term impact of the Murthy case is its ripple effects on the broader disinformation research community, former CISA director Chris Krebs, who was fired from his post after the 2020 election, told Axios.
- Researchers have been the target of House committee inquiries and the "Twitter Files" exposés.
- Part of the goal of this research was to figure out who was behind the spread of falsehoods online, how they moved across social channels, and what their life cycle looked like, Krebs added.
- "We'll certainly miss it in terms of understanding what's happening in near real time, and we're going to be worse off because of that," he said.
What they're saying: "The networks spreading misleading notions remain stronger than ever, and the networks of researchers and observers who worked to counter them are being dismantled," Renée DiResta, former research director of the Stanford Internet Observatory, wrote in a New York Times op-ed this week.
The intrigue: CISA had already decided before the Murthy v. Missouri case to end switchboarding during the 2024 election cycle, according to the SCOTUS decision.
What we're watching: Rebuilding that muscle memory among social platforms, academics and the federal government wouldn't take long; it's just a matter of how they choose to do it, Spaulding said.
- The University of Washington's Center for an Informed Public — which worked heavily with the Stanford program — also has 20 researchers dedicated to debunking election rumors for 2024.