The Guardian - US
Technology
Kari Paul

US supreme court appears skeptical of social media content moderation laws

The US supreme court is hearing two cases pertaining to social media platforms and content moderation. Photograph: Kevin Lamarque/Reuters

Members of the US supreme court expressed skepticism during oral arguments on Monday about two state laws governing how social media platforms moderate content, in cases that could have broad implications for freedom of speech online.

Filed by NetChoice, an association representing the world’s largest social media firms, both cases challenge state laws blocking social media platforms from moderating certain user content or banning users. Arguments on Monday lasted longer than many experts anticipated, extending into a marathon four-hour session.

While justices did not seem convinced of the constitutionality of the laws, they were also “unpersuaded by the Internet companies’ broad arguments that almost everything they do is protected by the first amendment”, said James Grimmelmann, professor of digital and information law at Cornell University.

“I would predict that the court will issue relatively narrow rulings that make it clear that the most restrictive portions of the state laws are unconstitutional, and then let litigation play out to determine whether other provisions of these laws – or of other future laws – are constitutional,” he said.

The court first heard arguments in the case of Moody v NetChoice, which deals with a Florida law passed in 2021 that prevents platforms from “censoring” certain political candidates and media outlets by means of demonetization or removal. The law would also limit platforms’ ability to label and moderate misinformation from specific sources.

Justice Sonia Sotomayor said during oral arguments that the Florida law is written too broadly, and could affect online services such as Etsy and Uber in addition to the social media platforms it was meant to target.

“This is so, so broad. It’s covering almost everything,” Sotomayor said, according to reports. “The one thing I know about the internet is that its variety is infinite.”

Justices also questioned whether the removal of content by algorithms rather than by humans qualifies as censorship, with Justice Amy Coney Barrett asking specifically about TikTok’s algorithm boosting pro-Palestine posts over pro-Israel posts.

“If you have an algorithm do it, is it not speech?” she asked.

Justice Samuel Alito asked of content moderation: “Is it anything more than a euphemism for censorship?”

The second case is NetChoice v Paxton, targeting a Texas law that broadly prohibits social media platforms from “censoring on the basis of user viewpoint, user expression, or the ability of a user to receive the expression of others”.

NetChoice, whose members include Pinterest, TikTok, X and Meta, has argued that these laws violate the companies’ first amendment right to free speech. The association contends that the laws unconstitutionally restrict its members’ ability to decide what content is published on their platforms.

Both cases are rooted in a longstanding Republican argument that tech giants actively censor conservative political speech. These claims, though repeatedly debunked by experts, have been inflamed by high-profile incidents such as the removal of former president Donald Trump from Meta, X (then Twitter) and YouTube in 2021 after the January 6 Capitol riot. Trump previously filed a brief in support of the law at the center of NetChoice v Paxton, urging the court to uphold it.

Free speech and civil rights advocates have spoken out against the laws being evaluated, saying that they will significantly hinder social media firms’ ability to moderate content at a time when such practices are critical.

“In this pivotal election year, social media is already having a significant impact on our democracy,” said Nora Benavidez, senior counsel at social media watchdog group Free Press. “While we believe that the platforms should strengthen their content-moderation policies, the first amendment is clear: it’s not the government’s role to impose rules on how companies like Meta and Google should accomplish this.”

These cases are two of several the supreme court has agreed to hear this term that potentially threaten protections extended to social media companies under section 230 of the Communications Decency Act, which shields platforms from legal liability for content generated by their users and for how that content is or is not moderated. Another case will determine whether the White House can legally contact social media companies and request that content be removed.

The court in May 2023 declined to rule on two cases that would have affected section 230 protections and potentially upended the way the internet is moderated. With more internet freedom cases on the horizon, it is worth noting that justices have previously acknowledged their own lack of expertise on the topic, said Eric Goldman, a Santa Clara University law professor. Justice Elena Kagan once remarked that members of the court were not “the nine greatest experts on the internet”.

“Nevertheless, these non-experts will determine the Internet’s fate,” Goldman said. “The supreme court will be taking more internet law cases in the next few years, each of which has the potential to radically reshape the Internet. The odds the current Internet will survive this multi-staged review intact is very low.”
