YouTube has become the latest social media platform to roll out labels flagging content on its site that contains artificial intelligence. The move, which YouTube announced on Nov. 14, aims to give users more transparency about the trustworthiness of the content they are viewing.
“We have long-standing policies that prohibit technically manipulated content that misleads viewers and may pose a serious risk of egregious harm,” said YouTube in a blog post. “However, AI’s powerful new forms of storytelling can also be used to generate content that has the potential to mislead viewers — particularly if they’re unaware that the video has been altered or is synthetically created.”
The social media giant said that in the next few months, it plans to require creators on its platform to disclose whether they used AI to alter their content. It also plans to give users the option to request the removal of videos containing AI-generated or other altered content that “simulates an identifiable individual, including their face or voice, using our privacy request process.”
YouTube is also rolling out the same protections for artists by allowing music partners to request deletion of AI-generated music content that “mimics an artist’s unique singing or rapping voice.”
“In determining whether to grant a removal request, we’ll consider factors such as whether content is the subject of news reporting, analysis or critique of the synthetic vocals,” said the company in the blog post.
The announcement of these additional AI protections from YouTube comes after other social media giants tightened their own rules around AI use on their platforms.
Last week, Meta, which owns Facebook, Instagram and Threads, announced that it will require political advertisers to publicly disclose any AI use in their ads. And in September, TikTok added a new label that discloses AI-generated content on its platform.
Social media platforms have been shoring up AI protections on their sites amid complaints from users and content creators about the spread of misinformation. The new protections also arrive ahead of the upcoming presidential election season.
X not yet cracking down on AI
The only other large social media platform that has yet to crack down on AI use is X. Elon Musk, the owner of X, has recently expressed his support for AI regulation but said it will be “annoying.”
Regulation “will be annoying, it’s true,” Musk said on stage at the UK AI Safety Summit earlier this month, “but I think we’ve learned over the years that having a referee is a good thing.”