Elon Musk-owned X is catching flak from Australia over its failure to curb harmful content circulating on its platform. Moreover, the country has accused X of failing to fully comply with a legal notice served to the platform.
In a recently released transparency report, the Australian eSafety Commissioner revealed that X (formerly Twitter) "has made deep cuts to safety and public policy personnel" since it was acquired by Musk in October 2022.
The report claims the social media giant's global trust and safety staff has been cut by roughly a third (30 per cent), including a staggering 80 per cent reduction in the number of safety engineers. In the Asia-Pacific (APAC) region, trust and safety staffing fell by 45 per cent, and the number of content moderators employed by the company was cut by 52 per cent, as per the report.
As if that weren't enough, the company has reduced its public policy staff by 78 per cent globally, 73 per cent in the APAC region and 100 per cent in Australia.
Is X failing to deal with matters relating to the safety of users?
"Adequate resourcing of trust and safety functions is important to ensure online safety. Companies with low numbers of trust and safety personnel may have reduced capacity to respond to online hate, as well as other online harms," eSafety said in its report.
As a result, the burden of safety falls on the user (or group) experiencing the abuse, when the platform should be taking responsibility for harmful content and conduct on its service, the report adds.
Responding to a question regarding whether X had staff dedicated to monitoring hateful conduct and removing such content, the company clarified that it had neither full-time staff dedicated globally to hateful conduct issues nor a specific team for this policy.
"It (X) said that instead, a broader cross-functional team has this in scope and collaborates on a set of policies that are related to toxicity more broadly," the report added.
X responded to the legal notice by confirming that the company's Trust and Safety Council was disbanded in December 2022 and it "had not replaced the Trust and Safety Council" with another advisory body to address matters relating to the safety of users, including hateful conduct.
The Centre for Countering Digital Hate (CCDH) has previously reported that X, which is also facing a lawsuit over non-payment of annual bonuses, did not act on 99 per cent of content involving Twitter Blue accounts that it considered to be hate and reported to the platform.
When it comes to posts from Twitter Blue accounts, X claims it does not artificially or manually amplify any accounts.