Threads is keeping COVID-related searches knotted up for the foreseeable future.
The Meta-owned social network has blocked terms including “COVID,” “vaccines,” and “long COVID” as it focuses its resources on fighting misinformation about the war in Israel and Gaza.
The search filters have been in place since mid-September. In a statement on the platform, Adam Mosseri, head of Instagram and Threads, acknowledged them, calling the block “temporary,” but gave no timeline for when searches on those terms would reopen. It could be “weeks or months” before that occurs.
“The reality is that we have lots of important work to do. The team is moving fast, but we’re not yet where we want to be,” he wrote.
The safety focus at present, he added in another post, was “managing content responsibly given the war in Israel [and] Gaza.”
Beyond that, the Threads team continues to work on several fronts to grow the platform, which has seen slowing user engagement since bursting out of the gate in July. Among those, said Mosseri, are deeper integrations of Threads into Instagram and Facebook, graph building, EU compliance, support for the Fediverse (a collection of social networks that lets them communicate with each other), and trending topics.
Meta came under fire for its role in spreading misinformation about COVID at the height of the pandemic, but has been praised for its recent efforts to clamp down on it. Meanwhile, misinformation about Israel and Gaza is a problem for every social media network; TikTok and Twitter/X have both been called out for the volume of misinformation on their platforms.
Meta said last week that it had set up an operations center staffed with Arabic- and Hebrew-speaking experts to combat the flood of misinformation and hate speech. It also temporarily lowered the threshold at which its systems intervene, making it easier to stop potentially rule-violating content from being amplified.
“In the three days following October 7, we removed or marked as disturbing more than 795,000 pieces of content for violating these policies in Hebrew and Arabic,” the company wrote in a blog post. “As compared to the two months prior, in the three days following October 7, we have removed seven times as many pieces of content on a daily basis for violating our Dangerous Organizations and Individuals policy in Hebrew and Arabic alone.”