News that the world’s richest man, Elon Musk, is buying Twitter has sent the world into a spin, dividing those who advocate for unfettered free speech – like Musk – and those who believe that some platforms wield too much power and influence. But the ultimate outcome for Twitter may depend heavily on social media regulation.
Chris Philp, the UK minister for technology and digital economy, recently gave a speech about the government’s plan for digital regulation through its proposed online safety bill, promising a “light-touch” approach, but which “supports a thriving democracy”. Now the government has warned Musk that Twitter will have to comply with the new UK legislation.
This isn’t the only kind of regulatory challenge facing social media companies. After Facebook and Twitter removed Russian state media from their sites at the end of February, the Russian communications regulator Roskomnadzor blocked access to the platforms, citing discrimination.
With concerns mounting about state influence on media and information, we urgently need to understand what democratic social media regulation should look like. As philosophers in this field, our work looks at the theoretical foundations that underpin democracy.
The key insight at the heart of our ongoing research is that political freedom depends on public debate. We have spoken to policymakers, broadcasters, journalists, activists and regulators about how best to apply these insights and political theory to the public sphere.
Old media and new platforms
Democracy is much more than a system of regular multiparty elections. It also requires that citizens feel able, and have the resources, to challenge those in power. Ideally, it requires people to engage in honest discussion about the best ways to live and work together, and to assess each other’s contributions openly, inclusively and frankly.
This should result in people making political decisions based on reason rather than being swayed by power, bias or fear. Open debate is necessary for our choices at the ballot box to be well-informed, free choices.
In the philosophy of democracy, public debate counts as democratic under two key conditions: that citizens participate in it, and that it makes them more knowledgeable.
Traditionally the public sphere was made up of media like newspapers, radio and television. Now, social media platforms like Facebook, TikTok and Twitter are part of the fabric of our political debate too. And these new online communication methods have quite a mixed record when it comes to both knowledge and participation.
On the one hand, they enable certain audiences to hear voices that were historically excluded by media “gatekeepers” in the press and television, and in earlier days, newsreels. For example, the #MeToo and Black Lives Matter movements creatively use the internet and social media to spread their messaging.
Getting these messages out is a big advance both in knowledge and participation. But digital communication is also frequently blamed for online bullying, fake news, polarisation, loss of trust, foreign manipulation of national debates and new forms of silencing.
Traditional propaganda is now joined by attempts – as Donald Trump’s one-time adviser Steve Bannon put it – to “flood the zone with shit” as a way of spreading distrust. These tactics destroy knowledge and produce uneven participation, especially when some voices are amplified by “bots” – computer programs masquerading as humans.
Finding a way through
So how can we regulate this new, digital public sphere to support knowledge and participation? In our recent report, we suggest four basic norms that can act as guiding lights for policymakers:
1. Fair and equal access
Platforms, regulators and participants must ensure fair and equal access to public political discussion. This means giving everyone equal legal rights to participate in public debate, to stand for public office and to vote. But it also means making special efforts to elevate and highlight the contributions of those who are economically, socially or in other ways disadvantaged in public discussion.
2. Avoid obvious falsehoods
Platforms, regulators and participants must avoid posting or publishing things that are clearly not true. A claim or idea is obviously untrue when facts that falsify it are widely known. The obvious falsehoods at stake include denials of firmly established, verifiable facts – such as “the Earth is flat” or “COVID-19 is caused by 5G” – and denials of fundamental moral principles, such as the equality of all humans regardless of race or gender.
3. Offer and engage with reasons
Platforms, regulators and participants must provide and engage with reasons: to explain why they take the positions they do, and to consider the reasons offered by others in a debate, adjusting their own views when appropriate.
4. Take time to reflect
Participants, platforms and regulators must support time away from new and unfamiliar viewpoints – time to reflect – to decide whether and how those viewpoints should influence existing views. The goal is to take time to process new information so that we are better able to re-engage afterwards.
Leading by example
We have stressed the importance of these norms to the UK government, and believe that future regulation should be guided by them. As underpinning norms, they don’t just support formal and legal regulation, and they shouldn’t be taken to justify censorship. Rather, they can influence how policymakers and digital platforms design the regulation that structures public discussion, and in turn guide our own public contributions as citizens.
These norms are especially important for people whose roles involve representing the public – such as politicians and journalists. They help determine what counts as ethical conduct in democratic public life and good journalism that serves democracy.
Norms of some kind are always at play in guiding regulatory decisions, but they are not always made explicit. Our goal is to articulate norms that support democracy rather than undermine it. This is not an easy task, but if we are committed to democratic self-government, we all need to keep these norms in focus as our digital public sphere evolves.
Rowan Cruft is a member of the DCMS College of Experts (unpaid). He is also principal investigator for a project called Norms for the New Public Sphere at the Universities of Stirling and Warwick, funded by the Arts and Humanities Research Council.
Natalie Alana Ashton is research associate on a project at the Universities of Stirling and Warwick funded by the Arts and Humanities Research Council.
This article was originally published on The Conversation. Read the original article.