The announcement last week from the minister for communications about the planned introduction of a “digital duty of care” is a welcome development in the online safety policy debate. Australia’s approach to regulating the online world has mostly dealt with the surface-level problem of offending content. Embracing a digital duty of care signals the beginning of targeting the underlying features that truly make digital platforms unwieldy: “systems” such as algorithms, “like” buttons, and endless scroll.
A duty of care sets up a policy approach that combines communications regulation with something a bit closer to consumer protection, and moves the debate forward from piece-by-piece discussions about online material. But a systemic turn will not extinguish the need for online content regulation — and that’s a conversation Australians have not reacted well to.
The place for content regulation
If there’s been one lesson from the year’s events in online safety, it’s this: enforcing content regulation against big tech companies is both difficult and politically expensive. No-one disputes the need for rapid harm remediation when it comes to image-based abuse or child sexual abuse material. Victims confronting sexualised “deepfakes” made of themselves, or broadcasts of their home address, need that information taken off the internet as swiftly as possible.
These issues are relatively clear-cut, both in the sense that there’s very little substantive grey area when it comes to identifying the content, and few are going to argue for a public interest right to distribute such material. But judgment calls need to be made — such as the eSafety commissioner’s decision to classify a video of a young man stabbing a Wakeley bishop as “extreme violent content”. Most digital platforms complied with the request to remove this material; one, seeing an opportunity for a bitter free-speech showdown, did not.
One element desperately missing from the ferocious public debate around the video and the requests for its removal was recognition that the decision was far from a subjective act by a sole individual: it was a reasoned regulatory conclusion relying on the same principles used in offline content regulation. Safe to say, the personnel making classification decisions in the offline world are not subjected to the trans-continental digital “dogpiling” the commissioner herself endured over several waves of the Wakeley litigation.
Content-based vs systemic regulation
With the Wakeley takedown case in recent memory, the government reintroduced the Combatting Misinformation and Disinformation Bill, which relies on some level of informal content classification for the purpose of framing big tech’s risk assessments (importantly, the bill does not compel takedowns, and any claims it would incentivise them are not grounded in empirical evidence).
Maybe it was the proximity to a takedown decision, maybe it was the “screenshot culture” of reacting to bits of the bill out of context, maybe it was just a generally low understanding of the regulatory framework the bill is intended to live in. Whatever the case, free-speechers went berserk, inflamed by a dodgy title and efforts to define what misinformation and disinformation even are.
Some of the critiques have been justified. Many have not. The key source of confusion and conflation has arguably been the mixing of a content approach with a systems approach. While the bill was readjusted to emphasise the role of big tech’s systems in promulgating misinformation and disinformation, one hurdle has proved immovable: the content-based test at its core. This led to the enduring questions of “what is” misinformation and disinformation, and “who decides”.
Misinformation and disinformation are terms of art in digital platform regulation: they mean something quite specific in the regulatory context, and the more rigorous studies confine them to a precise technical meaning. Confusingly, they take on a much broader meaning in public debate. Politicians, campaigners, and activists alike use these words to characterise anything from robust political speech they disagree with to hostile campaigning tactics. In other words, the anxiety over how these wobbles on the internet would be governed is deeply connected to the imprecision in how a range of people talk about them.
The key point is, when you want to regulate a harm online, you have to define it. Even the most adamant free-speech advocates have admitted they have concerns about troll farms, bot armies, and acts of online interference at scale. These are the use cases for the bill, not your aunt posting on Facebook about the latest conspiracy.
The move towards systemic digital regulation will bring much-needed oversight to the tech features connected to keeping your kids up all night or scamming your dad out of his savings. This embrace of digital regulation as online consumer protection is sorely needed. But the spectre of online content — and how to classify it — won’t entirely disappear, and the policy debate would do well to find a more sensible way through than the reactive energy that has defined a lot of this year’s commentary.