If you've ever grown angry at the content circulating, or blocked, on the internet, you might see a change soon. The Supreme Court on Tuesday heard the first arguments challenging a law that has protected social media companies for decades. This could be the year free speech and liability get a drastic makeover on social media, rewriting the rules for players like Facebook, Twitter, Snap, Google's YouTube and TikTok.
Lawmakers and judges are closing in on fundamental changes to Section 230 of the Communications Decency Act. The law has shielded social media companies from liability for content on their sites for nearly three decades.
Congress has tried over and over to revise or repeal Section 230 without success. But now the matter goes to the Supreme Court in two separate cases. And how justices interpret the 1996 law — sometimes referred to as "the 26 words that built the internet" — could lead to the most significant makeover ever of the doctrine governing online speech.
The court will look at whether the statute, written before the internet became a part of daily life, is too generous to social media companies. In the crosshairs are Facebook owner Meta Platforms, Twitter, the YouTube unit of Google-parent Alphabet, Snapchat owner Snap and China-based TikTok.
But the fallout could extend far beyond giant social media companies. A host of online ventures, from restaurant review sites to community message boards, could feel the impact.
"Section 230 is the nightmare for Big Tech," Wedbush analyst Dan Ives told Investor's Business Daily. "Wall Street is laser-focused on the Supreme Court, which is a whole other ballgame (than Congress) and comes with much sharper risks if they act against Big Tech."
Supreme Court Review Of Section 230 Cases
The high court heard oral arguments Tuesday in the first case, Gonzalez v. Google. The hearing reportedly lasted nearly three hours, with some justices expressing concerns over a wave of litigation against social media operators over content control.
"You are creating a world of lawsuits," Justice Elena Kagan said, according to CNN. "Really, anytime you have content, you also have these presentational and prioritization choices that can be subject to suit."
The Gonzalez case involves the family of an American woman killed in the 2015 ISIS terrorist attacks in Paris. The family claims that Google, which owns YouTube, bears responsibility for the automated process that recommends videos, including videos that could contribute to radicalization.
On Wednesday, the high court is scheduled to hear arguments in Twitter v. Taamneh. The case examines whether internet firms should be liable under a federal anti-terrorism law when they fail to purge terrorist content from their sites.
The Taamneh case involves relatives of a Jordanian citizen killed in a 2017 ISIS terrorist attack in Istanbul. The family sued Twitter, claiming it failed to remove ISIS propaganda on its platform, thereby providing material support to terrorists in violation of the federal Anti-Terrorism Act.
"The Supreme Court's decisions will lay the foundation for a new and different internet, and it almost certainly will be worse than the internet we have today," Eric Goldman, a law professor and co-director of the High Tech Law Institute at Santa Clara University School of Law, told IBD. "I'm very nervous about the internet's future."
Proposed Laws Affecting Section 230
The fight against Section 230 reached a boiling point when Twitter, Facebook and Google's YouTube banned former President Donald Trump after Trump supporters stormed the U.S. Capitol on Jan. 6, 2021.
Because of their size and reach, Meta, Google and Twitter have come under attack from politicians and special interest groups. Some scholars and judges — notably Supreme Court Justice Clarence Thomas — suggest that many people read the law's liability protections too broadly. They call for narrowing its scope.
Over the past several years, Congress has introduced multiple bills about Section 230 that range from reforming the law to repealing it. Also, many state legislators have sought to override the statute. But partisan politics have stymied efforts to repeal or rewrite it.
Republicans believe that Meta, Google and Twitter (before Elon Musk bought it) methodically censored right-wing voices that have a right to be heard.
Democrats, on the other hand, call for blocking content promoting hate, racism, misinformation, violence and more. Conflicting views on what constitutes such content drive the debate.
"Both parties in Congress agree that changes to the bill are needed," said Caitlin Vogus, deputy director of the Free Expression Project at the Center for Democracy & Technology, in an interview with Investor's Business Daily. "But they don't agree on how that should be done. There's been a lot of discussion about Section 230 but we've not seen a lot of changes."
What Google Says About Section 230
Not surprisingly, Google and Facebook-parent Meta have a different point of view. Twitter and Snap did not respond to requests for comment.
"The stakes could not be higher," Google General Counsel Halimah DeLaine Prado said in a recent blog post. "Without Section 230, some websites would be forced to overblock, filtering content that could create any potential legal risk, and might shut down some services altogether."
"A decision undermining Section 230 would make websites either remove potentially controversial material or shut their eyes to objectionable content to avoid knowledge of it," Prado went on to say. "You would be left with a forced choice between overly curated mainstream sites or fringe sites flooded with objectionable content."
Meta says the sheer mass of user-generated content online requires the company to manage and prioritize that content. Further, its aim is to best serve all users, the company says.
"The sheer volume of user-generated content on the internet means that online companies have to make decisions about how to organize, prioritize, and deprioritize this content in ways that are useful to people and advertisers," Meta Chief Legal Officer Jennifer Newstead wrote in a recent blog post. This affects how Meta enforces policies against terrorism and other harmful content.
Meta Platforms Calls On Congress To Act
"Meta has invested billions of dollars to develop sophisticated safety and security systems that work to identify, block, and remove terrorist content quickly," Newstead said.
In the third quarter of 2022 alone, Meta blocked or removed nearly 17 million pieces of third-party content that it deemed in violation of its terrorism policies, she added, and it identified 99.1% of that content on its own. If such content evades Meta's first-line defenses, it has ways to mitigate the risk of showing it to others, Meta said in a "friend of the court" brief submitted to the Supreme Court on the Google case. But it offered no examples of how it would mitigate that risk.
Newstead calls for Congress to decide the matter.
"If Section 230 is truly to be converted into a regime at such profound odds with Congress' express findings and purposes, that decision should come from Congress, not this Court," Newstead wrote. "If online services risk liability for disseminating content but not for removing it, the only rational reaction is to err on the side of removal."
The 26 Words That Built The Internet
Section 230 says: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
The statute arrived when the internet was in its infancy. Lawmakers adopted it during the era of online message boards from bygone tech stars such as AOL, CompuServe and Prodigy.
With the protections Section 230 provided, the internet caught fire. A flurry of initial public offerings in tech stocks came during what eventually became the dot-com era.
The law protected a rapidly emerging cluster of upstart companies trying to get off the ground. Further, it aimed to shield them from a bombardment of litigation. Twitter, for example, was not legally responsible for the tweets of its users. Likewise, review site Yelp was protected if one of its users trashed a business.
The law benefited publishers such as Craigslist, Reddit and bloggers. Further, its protections extended to fringe sites such as 8chan, known for hosting hate speech and violent content.
Impact On Other Social Media Companies
A Supreme Court decision to alter or strike down Section 230 could rock the legal landscape for many companies. Among them are review sites like Yelp and neighborhood groups like Nextdoor.
"Section 230 is talked about as providing protections for Big Tech," said the Free Expression Project's Vogus.
"But it's also about providing protection for users and their free expression rights," Vogus said. "It hurts internet users and the public who could lose the ability to publish information or access it online."
Federal legislation to change Section 230 includes H.R. 2154, the Protecting Americans from Dangerous Algorithms Act. Another bill is H.R. 3421, the Safeguarding Against Fraud, Exploitation, Threats, Extremism and Consumer Harms Act.
State Politicians Target Section 230
Legislative battles are also underway at the state level, most notably in Texas and Florida.
In May 2021, Florida Gov. Ron DeSantis signed legislation that focused on what he sees as the suppression of conservative voices. DeSantis is a prominent Republican and prospective presidential candidate.
"Many in our state have experienced censorship and other tyrannical behavior firsthand in Cuba and Venezuela," DeSantis said at a signing ceremony for the law known as Senate Bill 7072. "If Big Tech censors enforce rules inconsistently, to discriminate in favor of the dominant Silicon Valley ideology, they will now be held accountable."
A few months later, Republican Texas Gov. Greg Abbott signed the state's House Bill 20. Called the Stop Social Media Censorship Act, the bill was proposed by Texas lawmakers shortly after social media sites suspended former President Donald Trump in the wake of the Jan. 6 insurrection at the U.S. Capitol.
The bill seeks to bar social-media platforms from blocking, removing or "demonetizing" content based on user views.
Conservative Viewpoints Silenced On Social Media?
"There is a dangerous movement by social media companies to silence conservative viewpoints and ideas," Republican State Sen. Bryan Hughes of Texas said at the bill's signing. "That is wrong, and we will not allow it in Texas."
The laws in Florida and Texas remain blocked by courts for now, awaiting the Supreme Court's ruling on the Google and Twitter cases.
The debate over content moderation among Big Tech companies has long simmered. Trump made several attempts to dismantle the law. Though he did not succeed, his actions made it a presidential election issue.
President Joe Biden also has called for change. Biden believes social media companies aren't doing enough to weed out misinformation, violent content or offensive speech.
Please follow Brian Deagon on Twitter at @IBD_BDeagon for more on tech stocks, analysis and financial markets.