Activity from terrorist organisations operating on Twitter has increased by at least 69% since multibillionaire Elon Musk took over the social network, according to researchers focused on online extremism. But these accounts have had to get creative to circumvent content moderation. From "sock puppet" accounts to coded messages and "broken" text, terrorist organisations have a wide range of new techniques at their disposal. They're putting them to use on Twitter, as well as other platforms like YouTube and Facebook.
Last March, Elon Musk proclaimed himself a "free speech absolutist". On Twitter, he polled his followers on whether they believed the platform "rigorously adheres" to the principle of free speech. More than 70% of respondents said "No".
But the billionaire's takeover of the social network on October 27 has rekindled fears that false information and hate speech – previously moderated to a certain extent – would come back in full force.
These fears are backed up by findings from researchers at the Institute for Strategic Dialogue (ISD), who found that content from terrorist groups and their supporters – particularly the Islamic State (IS) group – has exploded since early November.
Although this increase coincided with Musk's takeover of Twitter, similar surges in activity from accounts close to, and in support of, terrorist organisations also took place on Facebook and YouTube.
'What they will do is whitewash official content from Islamic State [group] channels'
Moustafa Ayad, executive director for the Institute for Strategic Dialogue, spoke to the FRANCE 24 Observers team:
In regards to the sort of unofficial support base across popular platforms such as YouTube, Twitter and Facebook: what we have been seeing is that there has been an increase in their ability to adapt to content moderation techniques. By shifting their strategies towards things like the setting up of fake media outlets. [...] They name those disinformation outlets or those media outlets a range of very generic names like Breaking News or Iraq News. And what they will do is whitewash official content from Islamic State [group] channels on Telegram through those channels, through those other sort of media outlets.
Tricking artificial intelligence by concealing recognisable content
To get past automated content moderation tools, these accounts rework all of their content, whether visual, textual or coded. This means hiding "what has now become synonymous with the Islamic State [group], but not necessarily its flag. So covering that up with emojis, for instance, or using after-effects to sort of draw or scribble on content in order to make it harder to find," Ayad explained.
Emojis, popular with unofficial supporters of terrorist groups, have also been used to spread propaganda in ways that make it harder for artificial intelligence to flag terrorist content.
Some extremist groups have even "coded" certain emojis to convey specific messages to their followers, making them harder for the average viewer – or a computer program – to decipher.
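The logic behind this cat-and-mouse game can be seen in how automated matching works. Below is a minimal sketch – not any platform's actual pipeline – of why pasting a sticker or scribble over a known image can defeat hash-based detection. It assumes the open-source Pillow and imagehash Python packages, and the images and threshold are purely illustrative.

```python
# A minimal sketch - not any platform's real pipeline - of why pasting a sticker or
# scribble over a known image can defeat hash-based matching. Assumes the open-source
# Pillow and imagehash packages (pip install Pillow imagehash).
from PIL import Image, ImageDraw
import imagehash

# Stand-in for a propaganda image already recorded in a moderation hash database.
reference = Image.new("RGB", (256, 256), "white")
ImageDraw.Draw(reference).rectangle([40, 40, 216, 216], fill="black")
reference_hash = imagehash.average_hash(reference)

# The same image with a small bright patch pasted over part of the recognisable
# element, mimicking an emoji sticker or a scribble added before re-uploading.
obscured = reference.copy()
ImageDraw.Draw(obscured).ellipse([100, 100, 180, 180], fill="yellow")
obscured_hash = imagehash.average_hash(obscured)

# Hamming distance between the two hashes: once it drifts past the matcher's
# threshold, the altered copy is no longer recognised as the same image.
print("hash distance:", reference_hash - obscured_hash)
```

Real detection systems combine more robust hashes with machine-learning classifiers, but the dynamic Ayad describes is the same: each visual alteration pushes a copy further away from the fingerprints moderators already know.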
'Broken text': a headache for content moderation
Some of the disinformation shared by terrorist-linked accounts comes in the most basic of formats: accounts adopt usernames that impersonate media outlets and disseminate content intended to glorify jihadist operations.
They're everywhere on social networks: pages that hide behind generic names such as "Breaking News", "Iraq News" or "ISW News" to spread terrorist propaganda.
Others use logos from real media outlets to get their message across.
But a more complex technique poses a major challenge to each of these social networks: "broken text" manages to slip past both automated and manual content moderation.
They'll do this in English, Arabic, as well as other languages where sensitive words or comments will be broken up by punctuation or other symbols. So a word like 'jihad' would have a dot in between the letters or a slash they'll use, for instance.
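A minimal sketch, not any platform's actual filter, shows why this kind of "broken text" slips past a naive keyword check and how normalising the text can recover the hidden word (the blocklist entry below is purely illustrative):

```python
# A minimal sketch, not any platform's actual filter: why "broken text" slips past
# a naive keyword check, and how normalisation can recover the hidden word.
import re
import unicodedata

BLOCKLIST = {"jihad"}  # purely illustrative keyword list

def naive_match(text: str) -> bool:
    """Plain substring check: defeated as soon as the word is broken up."""
    return any(word in text.lower() for word in BLOCKLIST)

def normalised_match(text: str) -> bool:
    """Normalise, then strip punctuation, symbols and invisible characters."""
    text = unicodedata.normalize("NFKC", text).lower()
    stripped = re.sub(r"[\W_]", "", text)  # keep only letters and digits
    return any(word in stripped for word in BLOCKLIST)

sample = "j.i/h\u200ba.d"  # the word broken up by dots, a slash and a zero-width space
print(naive_match(sample))       # False - the broken form evades the plain check
print(normalised_match(sample))  # True  - normalisation recovers the keyword
```

In practice, posters respond with look-alike characters and new separators, which keeps moderation teams locked in the same trial-and-error cycle Ayad describes.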
Meta, the company that owns Facebook, says it took action on more than 16.7 million pieces of terrorism-related content between June and September 2022, significantly more than in the previous quarter (13.5 million).
YouTube, for its part, says it removed 67,516 videos "promoting violence and extremism" over the same period, slightly fewer than in the previous quarter.
'Early adopters' of new platforms
The spread of terrorism, and more specifically online support for terrorism, is an ever-evolving activity. To circumvent moderation, whether manual or automated, accounts associated with terrorist groups use techniques that have been refined over time.
Supporters of terrorist groups, particularly those of the IS group, are often among the first to test and seize the potential of new platforms and techniques to convey their propaganda.
Accounts related to terrorist organisations regularly test new techniques to promote and disseminate violent content online to an ever-growing audience. The strategy amounts to "trial and error", according to Ayad. "What succeeds ultimately becomes the standard bearer for how you share content."
'YouTube was able to take down content in 30 minutes, Twitter in six to 12 hours, and Facebook sometimes days or months'
In November 2020, ISD timed the major platforms' ability to remove terrorist content after the release of a speech by an IS group spokesperson that was highly anticipated by supporters.
What we found was that YouTube was able to take down that content within 30 minutes. And this is brand-new content that was just released. That was followed by Twitter, which generally took about anywhere between six to 12 hours before it was flagged. And then Facebook came in last with the content sometimes lasting days or months, depending on how quickly it was flagged or if it was flagged automatically.
According to ISD, an archive of approximately 2.1 terabytes of Islamic State group content (about 500 hours of high-definition video) is believed to be readily available online today, primarily in Arabic but also in French, German and English. It is continuously disseminated and promoted by the group's online supporters.
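As a rough sanity check of that conversion (the implied bitrate is an inference, not an ISD figure), 2.1 terabytes spread over 500 hours works out to a little over 9 megabits per second, a plausible rate for high-definition video:

```python
# Rough sanity check of the "2.1 terabytes = about 500 hours of HD video" figure.
# The implied bitrate is an inference, not a number published by ISD.
archive_bytes = 2.1e12        # 2.1 terabytes
duration_s = 500 * 3600       # 500 hours in seconds
bitrate_mbps = archive_bytes * 8 / duration_s / 1e6
print(f"{bitrate_mbps:.1f} Mbit/s")  # ~9.3 Mbit/s, typical of HD video encodes
```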
Moderation well below European requirements
Other platforms hardly moderate the content that circulates on them. The encrypted messaging application Telegram, for example, remains the preferred digital tool of terrorist groups.
A Europol campaign to remove terrorist content, conducted in October 2018 in several Western European countries and in partnership with Telegram, had only short-term results.
What happened was the Islamic State sort of supporters and support channels shifted to Tam Tam [a messaging app]. And when the heat died down on Telegram, [they] shifted back to Telegram without relinquishing sort of the foothold on Tam Tam.
Quarterly figures published by the major social media platforms show an increase in detected content promoting terrorism since July 2022. However, these reports remain opaque about the content moderation process. Ayad cites the need for more transparency about content flagging – whether it comes from artificial intelligence or other users.
These figures also fall short of the goals of the April 2021 EU Regulation on Addressing the Dissemination of Terrorist Content Online. The regulation lays out a "one-hour rule" requiring providers to remove terrorist content within one hour of receiving a removal order, or face fines of up to 4% of their global turnover.
Another piece of new European legislation, the Digital Services Act, came into force on November 16 and aims to "create a safer digital space where the fundamental rights of users are protected".
But Ayad maintains that the terrorism content social media platforms are regulating is only the tip of the iceberg:
I wouldn't place the blame solely on the platforms, but the platforms do have a responsibility in ensuring that this content does not survive or thrive. This requires an investment in moderation and ensuring that there is equity in moderation across different languages.
This is just the sharp tip of the spear of online harms. We aren't even talking about hate in these languages, which is likely even higher. So if we can't target that content, how are we doing on things like violent misogyny? Or harassment?