
Once meant for connecting with close friends, social media has long since evolved into an amorphous blob of adverts, AI slop, and rapid-cycle trends – all driven by aggressively addictive algorithms.
Journalist Cory Doctorow coined a name for it: enshittification. It refers to the process by which digital platforms become worse and worse to use as their owners prioritise profit over people.
We’ve seen this in the rise of hate speech on X following owner Elon Musk’s decision to relax moderation policies, and in Meta’s business-driven focus, which has led to a swell of targeted ads that make connecting with other people even harder.
All of this suggests we’re reaching a tipping point, with many users and content creators turning to alternatives like Reddit and messaging apps in search of more meaningful engagement – or attempting to swear off technology in general.
And while we can expect artificial intelligence (AI) to ramp up personalisation and the streamlining of tasks such as content moderation, the balance between its helpfulness and harms hangs heavy in the air.
So, as 2026 unfolds – and a fresh new batch of viral moments await – here’s a closer look at some of the key trends and topics that we believe will redefine social media this year.
Age restrictions and AI regulation
2025 was a landmark year for social media regulation, with the rapid rise of AI and growing concerns over harmful content leading to calls for greater transparency and online safety.
Following Australia’s world-first social media ban for under-16s, protecting minors will continue to be a core legislative priority, according to Paolo Carozza, a member and co-chair of Meta’s independent Oversight Board.
“Understanding better the way that we should both protect young people and protect young people’s freedoms to receive and impart information, that’s a really difficult tension,” Carozza told Euronews Next.
“These [platforms] are important ways in which people connect and get basic information about the world and their lives and education and social connection. Reconciling that – those tensions specifically in the youth space – is a high priority for the Oversight Board and for so many legal and regulatory environments around the world right at this moment,” he continued.
Managing AI’s integration is another key focus, with the Oversight Board emphasising Meta’s need for identification and transparency through methods such as labelling (a practice found to be ‘inconsistent’ last year).
“People need to be able to judge,” said Carozza. “Who are they talking to? Where is the information coming from?”
Providing users with more information and context allows them to make better personal judgements about what content to consume and share, according to Carozza.
“[That way] it's not simply a state authority or company on its own exercising a certain kind of substantive judgement.”
However, the sheer scale of AI-generated content across social media means that labelling alone won’t be enough, Carozza explained, with AI tools themselves requiring more stringent moderation before their output reaches other platforms.
“We have to think about the ecosystem of content moderation as being broadened now beyond traditional social media platforms to the AI companies themselves," he said. "That's something that the board is going to try to work hard on in this coming year to develop principles and best practices for how those can be integrated.”
Advancing AI integration
From analytics to content creation to SEO, AI is now a fundamental part of how social media platforms and their users operate. Its capabilities and scale, however, are already evolving dramatically as companies invest ever-larger sums of money to stay ahead.
At the start of the year, Meta announced its purchase of the Singapore-based AI firm Manus, which it plans to use to supercharge "general-purpose agents" – artificial assistants that help with complex tasks – across consumer and business products.
Meanwhile, Musk’s controversial Grok chatbot is due an upgrade on X soon via the release of Grok 5, xAI’s most powerful model to date. With a rumoured 6 trillion parameters, it promises enhanced reasoning capabilities and more nuanced responses.
However, upscaling AI technology to improve efficiency will remain in tension with protecting public safety for social media platforms – especially when it comes to tasks such as content moderation.
“AI allows us to moderate more effectively at scale; that can be a good thing. But we have to be really cautious, because by taking humans out of the loop, we are also putting certain things at risk in not having human judgements, especially on the difficult cases,” said Carozza.
As a recent scandal in which Grok generated thousands of sexualised fake images of women and children showed, how to guardrail the dangers of AI will continue to be a hotly debated subject in 2026.
Alternative social media platforms
Around half of all global social media users want to spend more time on alternative, community-driven platforms, according to social media management company Sprout Social’s 2025 Pulse Survey.
It’s a shift that has been noticeable since Musk’s takeover of X (formerly Twitter) in 2022, after which users flooded to substitute platforms Mastodon, Threads, and Bluesky. Since then, community-based services like Reddit, Discord, and messaging apps have all seen a significant surge in users, along with creator-driven platforms Substack and Patreon.
Fuelled by people’s desire for authenticity, niche subjects, and human connection, these spaces allow users to be more intentional about what they use social media for – and provide a reprieve from the excessive advertorials and toxic feed clutter of Instagram, Facebook, and X.
“In 2026, social media will move decisively toward depth over scale,” Scott Morris, CMO of Sprout Social, told Euronews Next.
“As AI-generated content floods feeds, people are becoming far more selective about what earns their trust. Audiences are actively seeking informed dialogue, nuance, and shared understanding rather than passive consumption, which is why we are seeing conversation-led platforms like Reddit continuing to grow,” he added.
Morris said that a similar shift is happening with content creators who, looking to escape the algorithm’s endless demand for content churn, are turning to platforms that offer a slower pace and align more closely with their expertise.
“Success in this era of social is driven by balancing visibility with meaningful engagement and an understanding of exactly where and how people want to participate,” Morris said.
Correction: This article has been amended to refer to Meta's Oversight Board as 'independent' rather than 'internal', to reflect the group's independent operations from Meta.