From vampires and wendigos to killer asteroids, TikTok users are pumping out outlandish end-of-the-world conspiracy theories, researchers say, in yet another misinformation trend on a platform whose fate in the United States hangs in the balance.
In the trend reported by the nonprofit Media Matters, TikTok users seek to monetize viral videos that make unfounded claims about the US government secretly capturing or preserving mythical monsters that include -- wait for it -- King Kong.
It is the latest illustration of misinformation swirling on the platform -- a stubborn issue that has been largely absent in recent policy debates as US lawmakers mull banning the Chinese-owned app on grounds of national security.
Often accompanied by spooky background music, the videos -- many of which garner millions of views -- feature authoritative-sounding AI-generated voices, sometimes mimicking celebrities.
"We are all probably going to die in the next few years. Did you hear about this?" said a voice impersonating podcaster Joe Rogan in one viral video.
"There's this asteroid that is on a collision course with Earth," the voice claims, citing information leaked by a government official who stumbled upon a folder titled "keep secret from the public."
At least one account peddling that video appeared to have been deactivated after AFP contacted TikTok for comment.
Conspiracy theory videos, often posted by anonymous accounts, typically bore the tell-tale signs of AI-generated imagery, such as extra fingers and distortions, said TikTok misinformation researcher Abbie Richards.
Peddling such theories can be financially rewarding, Richards said, with TikTok's "Creativity Program" designed to pay creators for content generated on the platform.
It has spawned what she called a cottage industry of conspiracy theory videos powered by artificial intelligence tools including text-to-speech applications that are widely -- and freely -- available online.
A TikTok spokeswoman insisted that "conspiracy theories are not eligible to earn money or be recommended" in user feeds.
"Harmful misinformation is prohibited, with our safety teams removing 95 percent of it proactively before it's reported," she told AFP.
Still, tutorials on platforms such as YouTube show users how to create "viral conspiracy theory videos" and profit from TikTok's Creativity Program.
One such tutorial openly instructed users to start by making up "something outrageous" such as "scientists just got caught hiding a saber-toothed tiger."
"Financially incentivizing content that is both highly engaging and cheap to manufacture creates an environment for conspiracy theories to thrive," Richards wrote in the Media Matters report.
Such concerns, driven by rapid advancements in AI, are particularly high in a year of major elections around the world.
Last week, the European Union wielded its powerful Digital Services Act (DSA) to press several platforms including TikTok on the risks of AI -- including from deepfakes -- for upcoming elections in the 27-nation bloc.
In the United States, where the app has some 170 million users -- roughly half the country's population -- lawmakers last week overwhelmingly backed a bill that would ban TikTok unless Chinese parent company ByteDance divests the app within six months.
The bill, which still needs to pass the Senate, the more cautious upper chamber of Congress, risks riling young voters in a key election year.
US policymakers have repeatedly expressed concerns about TikTok's alleged ties to the Chinese government, the safety of user data and the app's potential impact on national security.
According to a report from the US Office of the Director of National Intelligence, the Chinese government is using TikTok to expand its global influence operations, promoting pro-Beijing narratives and seeking to undermine American democracy, including through disinformation.
"Disinformation should be part of the debate about TikTok," Aynne Kokas, a media studies professor at the University of Virginia, told AFP.
Many experts, however, as well as young users who rely on the app as their primary source of news, oppose banning TikTok, saying it's unfair to single out the platform.
"There's lots of misinformation on TikTok, just as there is on other social media platforms. Some of that misinformation is dangerous," Jameel Jaffer, director of the Knight First Amendment Institute at Columbia University, told AFP.
"(But) investing the government with the authority to suppress misinformation -- or to ban Americans from accessing platforms that host misinformation -- is not a sensible response to this problem. Nor would it be a constitutional one," he added.