The federal government this week introduced a new bill into parliament aimed at cracking down on the spread of misinformation and disinformation on the internet.
The government also this week announced plans to ban young people from social media platforms and to improve privacy protections. These moves have been criticised by experts, who say bans are ineffective and that the privacy reforms fall short of what is required in the digital age.
The government published a draft of the new misinformation and disinformation bill last year for public consultation. It received more than 24,000 responses (including from my colleagues and me).
The new version of the bill suggests the government listened to some expert recommendations from the consultation process, but ignored many others.
What’s in the bill?
The government has adopted an “information disorder” definition of misinformation and disinformation.
Misinformation is content that contains information that is reasonably verifiable as false, misleading or deceptive. It’s spread on a digital service and reasonably likely to cause or contribute to serious harm.
What makes disinformation different is the intent behind it. If there are reasonable grounds to suspect a person disseminating it intends to deceive, or if there is “inauthentic behaviour” such as the use of fake accounts, it may be disinformation.
Speaking to the ABC, Minister for Communications Michelle Rowland said the new bill:
goes to the systems and processes of the platforms and says they need to have methods in place to be able to identify and do something about [misinformation and disinformation].
The design of social media platforms means misinformation and disinformation can spread rapidly. The new bill, which is yet to be voted on, requires platforms to publish a report which assesses this inherent risk. It also requires them to publish a media literacy plan and their current policies about misinformation and disinformation.
The bill also provides stronger powers for the Australian Communications and Media Authority (ACMA). These powers would enable ACMA to make specific directives to platforms and impose penalties if they do not comply.
For example, ACMA could require platforms to implement media literacy tools and submit reports on their efforts to combat harmful content.
The new bill does not aim to regulate all misinformation and disinformation. Instead, its focus is on the kind of misinformation and disinformation which is “reasonably likely to cause or contribute to serious harm”.
The definition of serious harm includes:
- harm to the operation or integrity of the electoral or referendum process
- harm to public health
- vilification of a group or individual based on factors such as race, religion, sex or disability
- intentionally inflicted physical injury to an individual in Australia
- imminent damage to critical infrastructure or disruption of emergency services
- imminent harm to the Australian economy.
If a platform breaches the bill, it could face civil penalties of up to 5% of its annual global turnover. For a company such as Meta, which owns Facebook, this could easily run to billions of dollars.
What’s good about the bill?
It is good to see a focus on improving transparency and accountability for social media platforms. However, there is no explicit provision requiring that the data platforms share with ACMA be made available to researchers, academics or civil society.
This limits the potential for transparency and accountability.
One significant criticism of the draft legislation was that it had real potential to limit free speech. The bill remains cautious, with protections for political discourse and public interest communication. For example, there are protections for satire and humour, professional news content, and content for academic, artistic, scientific or religious purposes.
The application of these powers will also be reviewed regularly to assess the bill’s impact on freedom of expression.
Proposed limitations which would have meant the bill did not apply to electoral and referendum matters have also been removed.
This is a vitally important change. Misleading information played a significant role in the recent Voice referendum, and in other elections.
The bill also better addresses instances of coordinated activity under a definition of inauthentic behaviour. This begins to address circumstances where the problem is less the truthfulness of individual pieces of content and more that they form part of a collective effort to artificially amplify that content’s reach.
What’s bad about the bill?
The bill maintains a distinction between misinformation, which is spread by accident, and disinformation, which is spread deliberately.
As my colleagues and I argued in our submission to the government’s draft legislation last year, this distinction isn’t helpful or necessary. That’s because intent is very hard to prove – especially as content gets reshared on digital platforms. Regardless of whether a piece of false, misleading or deceptive content is spread deliberately or not, the result is usually the same.
The bill also won’t cover mainstream media. This is a problem because some mainstream media outlets such as Sky News are prominent contributors to the spread of misinformation.
Notably this has included climate change denial, which is a widespread and pressing problem. The bill does not include climate misinformation in its scope. This greatly diminishes its relevance in addressing the harm done by misinformation.
This bill makes many of the same mistakes as the government’s other recent attempts to reduce online harms. It goes against expert advice and neglects important issues. As a result, it’s unlikely to achieve its goals.
Daniel Angus receives funding from Australian Research Council through Discovery Projects DP200100519 ‘Using machine vision to explore Instagram’s everyday promotional cultures’, DP200101317 ‘Evaluating the Challenge of ‘Fake News’ and Other Malinformation’, and Linkage Project LP190101051 'Young Australians and the Promotion of Alcohol on Social Media'. He is an Associate Investigator with the ARC Centre of Excellence for Automated Decision Making & Society, CE200100005.
This article was originally published on The Conversation. Read the original article.