When supporters of former Brazilian President Jair Bolsonaro stormed the country’s Congress, comparisons were quickly drawn to the 2021 US Capitol riot. As in the American attack, the invaders were extremists acting in the name of an ousted right-wing president, and in both cases the insurrections followed elections rife with misinformation.

Regulating misinformation without suppressing free speech has been a persistent headache for governments around the world. Australia’s updated misinformation code still doesn’t capture all messenger services, while new laws in Turkey are “micro-managing and throttling social media” in the eyes of journalists.

If there was hope that social media platforms would step in where governments couldn’t, recent developments aren’t encouraging. When it comes to moderating misinformation, the signs are poor: Facebook’s parent company Meta is slashing jobs after underwhelming earnings and may be stepping away from news altogether. The ascendant TikTok is rife with misinformation, and Twitter has rarely escaped the headlines since billionaire Elon Musk took the reins.
Daniel Angus, Professor of Digital Communication at Queensland University of Technology, says Musk’s leadership has sent Twitter’s moderation backwards.
“He has trashed the platform’s online safety division and as a result misinformation is back on the rise,” he says.
“[Musk] looks to technological fixes to solve his problems. He’s already signalled upping the use of AI for Twitter’s content moderation. But this isn’t sustainable nor scalable, and is unlikely to be the silver bullet.”
Experts point out that no single billionaire can take down misinformation; it will take a multi-faceted, all-hands response from governments, businesses, civil society activists and consumers.
Cash in, cash out
Disinformation agents often do their damage by buying up advertising space to post misleading content. Publishers hosting the ads either don’t know (if the process is automated) or don’t care (high click rates make them money). Busting the cycle requires creative thinking.
The Global Disinformation Index, a not-for-profit group that compiles data to help advertisers make informed decisions about where to place their brands, has found it effective to engage with legitimate brands whose products were appearing next to disinformation.
Co-founder Daniel J. Rogers, who is also an Adjunct Assistant Professor at New York University, says: “Advertisers end up with their brands appearing alongside unsuitable content, harming their reputation and costing them money.
“We seek to balance out that equation. Advertisers were missing data on where on the web disinformation was occurring. With that information they could avoid those platforms, safeguarding their brand and directing funds away from disinformation peddlers.”
US trust-ratings agency NewsGuard is also fighting disinformation and has audited most media outlets in Australia and New Zealand. Its ratings are sent to advertising agencies and brands, urging them to support only those outlets that provide online safety for readers and support for democracies.
Striking a balance
When misinformation spreads, correcting it beats letting it fester, but corrections have their limits. Anya Schiffrin, senior lecturer at Columbia University, says it’s crucial to catch false information before it escapes into the world.
“Corrections may also aggravate the problem: due to the exposure effect, audiences seeing something twice may believe it more; or corrections may only enhance distrust in the media,” she says. “Establishing more prevalent fact-checking also helps create and support a culture of truth and signalling, and may build relationships among journalists. The creation of global standards for truth could help advertisers make better decisions about who they support, and build trust back in the media.”
Teaching fact from digital fiction
Ensuring citizens are media literate is a strong protective mechanism for limiting the damage. This could include widespread state programs that equip news consumers with the skills to think critically about the content they’re exposed to and to discern the credible from the untrustworthy.
Tanya Notley, Associate Professor of Media at Western Sydney University, says a media-literate citizen is a strong contributor to a democracy.
“A fully media-literate citizen will be aware of the many ways they can use media to participate in society. They will know how media are created, funded, regulated, and distributed and they will understand their rights and responsibilities in relation to data and privacy.”
Attack the causes
Misinformation and disinformation would be considerably defanged if people simply didn’t believe them. Conspiracy theories gained a foothold in many countries during the COVID-19 pandemic. In Australia, the far-right movement latched on to conspiracies alleging widespread vaccine deaths, implicating tech magnate Bill Gates and then-US Chief Medical Advisor Anthony Fauci.
Mario Peucker, senior research fellow at Victoria University, suggests understanding why some gravitate towards conspiracy theories is the key to unlocking a strategy to blunt their influence.
“Only then, can nuanced strategies be developed to prevent more people falling down misinformation rabbit-holes — and, possibly, restoring a space where robust public debate can replace ideologically parallel communities.”
PERSPECTIVES
Elon’s Twitter ripe for a misinformation avalanche
Daniel Angus, Queensland University of Technology
Seeing might not be believing going forward as digital technologies make the fight against misinformation even trickier for embattled social media giants.

Indonesia’s misinformation army ready for war in 2023
Ika Idris, Monash University Indonesia, Laeeq Khan, Ohio University, and Nuurrianti Jalli, Northern State University
With an election looming and controversial law reform on its way, Indonesia’s government is set to ratchet up its well-oiled propaganda machine.

Cracking the code to cut back misinformation
Anya Schiffrin, Columbia University
Balancing freedom of expression with targeting misinformation is an ongoing challenge, and a variety of approaches have been taken thus far.

Journalists step in where platforms have no answers
Eleonora Maria Mazzoli, London School of Economics and Political Science (LSE)
Google and Facebook are hoping to improve their news algorithms to provide a more balanced view. Media industry initiatives could have the solution they need.

Defunding the disinformation money machine
Daniel J. Rogers, New York University
Disinformation is a profitable business, and one of the most effective ways to slow its spread is to take away the advertising money unwittingly funding it.

Misinformation won’t go away, but media literacy can help fight it
Tanya Notley, Western Sydney University
Misinformation won’t disappear, but teaching the community to spot it can strip falsehoods of their power.

Indonesia’s misinformation program undermines more than it teaches
Ika Idris, Monash University Indonesia
Indonesia’s government has funded a thorough media literacy program. But rather than stopping misinformation, it serves to undermine independent thought.

Ubiquitous and mysterious, algorithms are ruling our lives
Daniel Angus, Queensland University of Technology
Algorithms hugely influence how we consume news, media and much more, but very little is known about how they work and how they shape what we read.

Gates, Fauci and the NWO: inside Australia’s far-right silos
Mario Peucker, Victoria University
Misinformation has found a home with conspiracy theorists, some exploiting political divisions and others convinced of its truth.

India’s Wire scandal a lesson for media
Sukumar Muralidharan, O.P. Jindal Global University
A scandal involving a fake email, two computer cryptographers and a politician is a lesson for all media before India’s run of elections.
Originally published under Creative Commons by 360info™.