In the final sprint to the US midterm elections, the social media giant TikTok risks becoming a major vector for election misinformation, experts warn, with the platform’s huge user base and its design making it particularly susceptible to such threats.
Preliminary research published last week from digital watchdog Global Witness and the Cybersecurity for Democracy team at New York University suggests the video platform is failing to filter large volumes of election misinformation in the weeks leading up to the vote.
TikTok approved 90% of advertisements featuring election misinformation submitted by researchers, including ads containing the wrong election date, false claims about voting requirements, and rhetoric dissuading people from voting.
TikTok has for several years prohibited political advertising on the platform, including branded content from creators and paid advertisements, and ahead of the midterm elections has automatically disabled monetization to better enforce the policy, TikTok’s global business president, Blake Chandlee, said in a September blog post. “TikTok is, first and foremost, an entertainment platform,” he wrote.
But the NYU study showed TikTok “performed the worst out of all of the platforms tested” in the experiment, the researchers said, approving more of the false advertisements than other sites such as YouTube and Facebook.
The findings sparked concern among experts, who point out that – with 80 million monthly users in the US and large numbers of young Americans indicating the platform is their primary source of news – such posts could have far-reaching consequences.
Yet the results come as little surprise, those experts say. During previous major elections in the US, TikTok had far fewer users, but misinformation was already spreading widely on the app. TikTok has also faced challenges moderating misinformation about elections in Kenya and the war in Ukraine.
And the company, experts say, is doing far too little to rein in election lies spreading among its users.
“This year is going to be much worse as we near the midterms,” said Olivia Little, a researcher who co-authored a Media Matters report on misinformation on the platform. “There has been an exponential increase in users, which only means there will be more misinformation TikTok needs to proactively work to stop or we risk facing another crisis.”
A crucial test
With Joe Biden himself warning that the integrity of American elections is under threat, TikTok has announced a slew of policies aimed at combatting election misinformation spreading through the app.
The company laid out guidelines and safety measures related to election content and launched an elections center, which “connect[s] people who engage with election content” to approved news sources in more than 45 languages.
“To bolster our response to emerging threats, TikTok partners with independent intelligence firms and regularly engages with others across the industry, civil society organizations, and other experts,” said Eric Han, TikTok’s head of US safety, in August.
In September, the company also announced new policies requiring government and politician accounts to be verified and said it would ban videos aimed at campaign fundraising. TikTok added it would block verified political accounts from using money-making features available to influencers on the app, such as digital payments and gifting.
Still, experts have deep concerns about the spread of election falsehoods on the video app.
Those fears are exacerbated by TikTok’s structure, which makes it difficult to investigate and quantify the spread of misinformation. Unlike Twitter, which offers a public application programming interface (API) that lets researchers extract platform data for analysis, or Meta, which provides the content-monitoring tool CrowdTangle, TikTok offers no tools for external audits.
However, independent research as well as the platform’s own transparency reports highlight the challenges it has faced in recent years moderating election-related content.
TikTok removed 350,000 videos related to election misinformation in the latter half of 2020, according to a transparency report from the company, and blocked 441,000 videos containing misinformation from user feeds globally.
The internet non-profit Mozilla warned in the run-up to Kenya’s 2022 election that the platform was “failing its first real test” to stem dis- and misinformation during pivotal political moments. The non-profit said it had found more than 130 videos on the platform containing election-related misinformation, hate speech and incitement against communities before the vote, which together gained more than 4m views.
“Rather than learn from the mistakes of more established platforms like Facebook and Twitter, TikTok is following in their footsteps,” Mozilla researcher Odanga Madung wrote at the time.
Why TikTok is so vulnerable to misinformation
Part of the reason TikTok is uniquely susceptible to misinformation lies in certain features of its design and algorithm, experts say.
Its For You Page, or general video feed, is highly customized to users’ individual preferences via an algorithm that’s little understood, even by its own staff. That combination lends itself to misinformation bubbles, said Little, the Media Matters researcher.
“TikTok’s hyper-tailored algorithm can blast random accounts into virality very quickly, and I don’t think that is going to change anytime soon because it’s the reason it has become such a popular platform,” she said.
Meanwhile, the ease with which users remix, record and repost videos – few of which have been fact-checked – allows misinformation to spread easily while making it more difficult to remove.
TikTok’s video-only format poses additional moderation hurdles, as automated systems may find it harder to scan video for misinformation than text.
Several recent studies have highlighted how those features have exacerbated the spread of misinformation on the platform. When it comes to TikTok content related to the war in Ukraine, for example, the ability to “remix media” without fact-checking it has made it difficult “even for seasoned journalists and researchers to discern truth from rumor, parody and fabrication”, said a recent report from Harvard’s Shorenstein Center on Media.
That report cited other design features in the app that make it an easy pathway for misinformation, including that most users post under pseudonyms and that, unlike on Facebook, where users’ feeds are filled primarily with content from friends and people they know, TikTok’s For You Page is largely composed of content from strangers.
Some of these problems are not unique to TikTok, said Marc Faddoul, co-director of Tracking Exposed, a digital rights organization investigating TikTok’s algorithm.
Studies have shown that algorithms across platforms are optimized to detect and exploit cognitive biases, favoring more polarizing content, and that any platform relying on algorithmic recommendations rather than a chronological feed is more susceptible to disinformation. But TikTok is the most accelerated model of the algorithmic feed yet, he said.
At the same time, he added, the platform has been slow to come to grips with issues that have plagued its peers like Facebook and Twitter for years.
“Historically, TikTok has characterized itself as an entertainment platform, denying they host political content and therefore disinformation, but we know now that is not the case,” he said.
Young user base is particularly at risk
Experts say an additional cause for concern is a lack of media literacy among TikTok’s largely young user base. A majority of young people in the US use TikTok, a recent Pew Research Center report showed. Internal data from Google revealed in July that nearly 40% of Gen Z – the generation born between the late 1990s and early 2010s – globally uses TikTok and Instagram as their primary search engines.
In addition to being more likely to get their news from social media, Gen Z also has far higher rates of mistrust in traditional institutions such as the news media and the government compared with past generations, creating a perfect storm for the spread of misinformation, said Helen Lee Bouygues, president of the Reboot Foundation, a media literacy advocacy organization.
“By the nature of its audience, TikTok is exposing a lot of young children to disinformation who are not trained in media literacy, period,” she said. “They are not equipped with the skills necessary to recognize propaganda or disinformation when they see it online.”
The threat is amplified by the sheer amount of time spent on the app: 67% of US teenagers use it, for an average of 99 minutes a day. Research conducted by the Reboot Foundation showed that the longer a user spends on an app, the less likely they are to distinguish between misinformation and fact.
To enforce its policies, which prohibit election misinformation, harassment, hateful behavior, and violent extremism, TikTok says it relies on “a combination of people and technology” and partners with factcheckers to moderate content.
The company directed questions about its election misinformation measures to a blogpost, but declined to share how many human moderators it employs.
Bouygues said the company should do far more to protect its users, particularly young ones. Her research shows that media literacy training and in-app nudges towards fact-checking could go a long way in combating misinformation. But, she argued, government action is needed to force such changes.
“If the TikToks of the world really want to fight fake news, they could do it,” she said. “But as long as their financial model is keeping eyes on the page, they have no incentive to do so. That’s where policymaking needs to come into play.”