Twitter, TikTok and Google have been hit with legal threats from Australia's eSafety commissioner, who is demanding information on what they are doing to combat the vile trade in child exploitation material on their platforms.
Legal notices were issued to the companies, as well as to Twitch and Discord, on Wednesday afternoon, giving them 35 days to respond or face daily fines of up to $700,000.
"We've been asking a number of these platforms for literally years: what are you doing to proactively detect and remove child sexual abuse material?" eSafety Commissioner Julie Inman Grant told the ABC.
"And we've gotten what I would describe, as, you know, not quite radical transparency."
It is the second time the commissioner has issued such legal notices, having pursued Microsoft, Apple and Meta — the parent company of Facebook, Instagram and WhatsApp — last year.
Ms Inman Grant said there were genuine concerns about how tech giants were monitoring harmful material on their sites — particularly platforms such as Twitter, which has been the subject of significant criticism since it was taken over by billionaire Elon Musk.
"This isn't a fishing expedition. There's been a lot of research and resources that go into this," she said.
"With the first set of basic online safety expectation notices … we had a lot of suspicions about what some of the big players like Apple, Microsoft and Meta were doing.
"This actually validated that we actually don't really understand the full scale and the scope of child sexual exploitation that might be on the common cloud services and email services we're using every day."
Microsoft not using its own detection tool
The commissioner said Microsoft had developed a tool, known as PhotoDNA, to detect and remove such material.
"They weren't even eating their own dog food, as we say in the tech industry," Ms Inman Grant said.
"They weren't using it on a number of their services like OneDrive, like Skype and some of the other platforms like Hotmail."
Microsoft was not the only target of the commissioner's criticism.
"Apple isn't scanning for iCloud, and they've got billions of handsets out there connected to iMessage and iCloud," Ms Inman Grant said.
"They reported to the [United States] National Center for Missing Exploited Children 800 instances of child sexual abuse material.
"By contrast, Meta reported about 29 million pieces. So to give Meta credit, at least they're scanning for it and they're finding it and they're trying to get it removed."
Federal Communications Minister Michelle Rowland said Australia was leading the way in issuing such demands to tech companies.
"We should recognise that Australia really is a world leader in this area," she said.
"It has been an area of bipartisanship and we know that whilst we have been first movers, we also have other countries — particularly in our region — who want to do more in this area."
Algorithms found recommending sexualised content
Michael Salter, an associate professor of criminology at the University of New South Wales, said the problem was getting "worse every year".
"The major social media companies have developed their services and platforms with very little effective child protection measures in place," he said.
Dr Salter argued the tech companies were exacerbating the situation.
"Very often they are using algorithms to actively recommend this content, and we have had situations where social media company algorithms have been actively recommending sexualised content of children, sexual interest in children," he said.
"Although technology companies and social media companies will always say that they have a zero-tolerance approach to child sexual exploitation, the fact is that often we are not seeing them do the basics.
"They are not using algorithms and AI in order to, for example, detect grooming. And there are ways in order to automatically detect grooming in terms of the sorts of words that offenders are using, the sorts of signals that they're using."
'Zero-tolerance approach to predatory behaviour'
The ABC contacted the five companies involved for comment.
In a statement, Google's senior manager of government affairs and public policy said child sexual abuse material had no place on the company's platforms.
"We utilise a range of industry-standard scanning techniques including hash-matching technology and artificial intelligence to identify and remove child sexual abuse material that has been uploaded to our services," Samantha Yorke said.
"We work closely with the eSafety Commissioner, the US based National Center for Missing and Exploited Children and other agencies around the world to combat this kind of abuse."
TikTok also responded, saying it had a "zero-tolerance approach to predatory behaviour and the dissemination of child sexual abuse material, as well as other content that potentially puts the safety of minors at risk".
"We have more than 40,000 safety professionals around the world who develop and enforce our policies, and build processes and technologies to detect, remove or restrict violative content at scale," country policy manager Jed Horner said.
Discord confirmed it had received the notice from the eSafety commissioner and would be responding to the demand.
"We have zero tolerance for content that threatens child safety online, and firmly believe this type of content does not have a place on our platform or anywhere in society," a spokesperson said.
"This is an area of critical importance for all of us at Discord, and we share the office's commitment to creating a safe and positive experience online."