On tech's biggest platforms, efforts to limit undesirable content are splintering as corporate priorities change.
Why it matters: Major online platforms that once competed to demonstrate their vigilance against misinformation, abuse and hate speech are now taking decidedly different paths in policing their content.
Driving the news: The Oversight Board that handles appeals of Facebook's content decisions announced Tuesday it would speed up some of its processes and take on more cases.
- The changes, which promise decisions "within days in urgent cases," could allow the independent organization to serve as more of a real-time participant in Facebook's enforcement of its rules.
- But the announcement is also a reminder that the company is increasingly willing to outsource critical decisions about its content policy.
The big picture: After the 2016 U.S. presidential election and Facebook's Cambridge Analytica controversy, large social media platforms all sought to show the public and lawmakers that they were cracking down on what critics identified as a deluge of misinformation and toxic posts.
- The companies, operating in parallel, tightened policies and hired legions of moderators in a campaign that continued through the COVID-19 pandemic, when platforms were flooded with medical misinformation.
But that consensus approach is ebbing today.
Meta's platforms, including Facebook and Instagram, increasingly rely on the Oversight Board to resolve or assist with the toughest questions they face — like the recent decision to allow former President Donald Trump back onto Facebook.
- The company still has large content moderation teams both in-house and through contractors. But its recent focus on building a metaverse has moved the attention of CEO Mark Zuckerberg and his key lieutenants away from content concerns.
Twitter under Elon Musk, meanwhile, has chosen a radically different course. The service recently offered a broad amnesty to accounts previously banned for violating its rules after a poll of Musk's Twitter followers supported the move.
- Musk has said he favors broad free speech principles, but critics argue his changes have fueled a surge of racist, antisemitic and anti-LGBTQ speech and other forms of extremism on Twitter.
- Musk's massive staff cuts at Twitter also decimated the teams that wrote and enforced its content policies, particularly outside the U.S.
At Google's YouTube, recent layoffs included several managers and experts on content policy, per the New York Times.
- "Responsibility remains our top priority," YouTube spokesperson Elena Hernandez said in a statement. "We’ll continue to support the teams, machine learning, and policies that protect the YouTube community, and pursue this work with the same focus and rigor moving forward."
Between the lines: Platforms like Facebook, Twitter and YouTube increasingly rely on automated systems to flag content that might violate their policies.
- But the policies are set by people, and human moderators must still review decisions, handle tougher cases and resolve complaints.
Our thought bubble: It's not surprising that, at a moment when an economic slowdown is triggering widespread layoffs in the industry, companies would pick content moderation as a prime area for cutbacks. After all, these departments are not directly responsible for revenue.
- But they do play a big role in placating advertisers who don't want their messages to run next to hate-filled screeds. And even sites dedicated to "free speech" need to enforce laws governing underage users, terrorist content and more.
What's next: The advent of generative AI could lead to a new onslaught of automated social media posts that further test platforms' ability to protect their users' conversations.