Americans are facing another tragedy – this time in Buffalo – as 11 Black and two white victims were shot in a “racially motivated hate crime” suspected to have been perpetrated by a young man who spent much of his life online, where he appears to have been radicalised. He used the same digital playbook as the Christchurch terrorist: livestreaming his attack and posting a “manifesto” that cited the same “great replacement” and other white supremacist theories. He mentions the Christchurch terrorist by name in his manifesto.
In our most recent research report looking at anti-Muslim hate, we found that the big social media companies were collectively failing to act on 89% of posts that advocated the great replacement theory – despite pledges made in the wake of the Christchurch terrorist attack and subsequent, ideologically driven violent extremism attacks at the Tree of Life Synagogue in Pittsburgh; in Halle, Germany; and in El Paso. These are not “lone wolf” attacks. The individuals are connected through online communities on social media, where they share ideas, tactics and content.
In a joint statement in 2019, Meta, Twitter and Google committed to uphold the Christchurch Call to eliminate terrorist and violent extremist content online. They stated that they would be “resolute in [their] commitment to ensure [they] are doing all [they] can to fight the hatred and extremism that lead to terrorist violence”.
The failure of social media companies to act sufficiently on known racist content connected with terrorism is a violation of their own terms and conditions, the pledges made to an international community when the cameras were rolling, and the dignity that the victims of Buffalo were entitled to have – the right to life.
Social media and online spaces are often where people meet, seek information and become radicalised through a rabbit-hole of lies, hate and misinformation. Those with fringe beliefs will be exposed to increasingly radical content as a result of recommendation algorithms. The failure of social media giants to effectively tackle online hate and misinformation has real-world impacts. Words can kill.
Perhaps the only thing that explains why – despite so many pledges, so many platitudes and commitments to voluntary frameworks – the social media platforms have failed to act is the memo, called the Ugly Truth, that Andrew Bosworth, now chief technology officer of Meta, wrote to his fellow employees on their internal messaging board. In it he said: “So we connect more people. That can be bad if they make it negative. Maybe it costs a life by exposing someone to bullies. Maybe someone dies in a terrorist attack co-ordinated on our tools. And still we connect people.”
It is, quite simply, a bald statement of personal indifference to the grief of people, families and our nations.
It is time for governments to act – much as they are doing with the online safety bill in the United Kingdom and the Digital Services Act in the European Union – to increase safety, transparency and democratic accountability, and ensure that the responsibility for harm is borne by social media companies, not just by our societies.
The holdout remains the US, where there is no real urgency to the debate beyond flaccid claims that one side or another is being “cancelled”. The grim, inevitable, reductive, partisan politicisation of yet another important debate has slowed progress. And yet it has become painfully clear – after 6 January 2021, after pandemic misinformation claimed tens of thousands of American lives, and now, after Buffalo – that if we leave things to social media companies, people all around the world will suffer. It is time for US politicians to take action.
Imran Ahmed is chief executive of the Center for Countering Digital Hate
Do you have an opinion on the issues raised in this article? If you would like to submit a letter of up to 300 words to be considered for publication, email it to us at guardian.letters@theguardian.com