YouTube is launching a new feature that will let users create their own Charlie Puth, Demi Lovato, or T-Pain track and make it the background music for a Short. It is one of several new AI-powered tools Google is working on for its social video platform.
Developed by Google's AI lab DeepMind, the new music features will be powered by a new music generation model called Lyria that the company says "excels at generating high-quality music with instrumentals and vocals."
To see how well this works in the real world, YouTube has signed partnership deals with Alec Benjamin, Charlie Puth, Charli XCX, John Legend, and others to license their style and sound for new, completely AI-generated text-to-music creations.
How does it work?
When creating a new Short, a creator will be able to select an artist from a carousel, enter a prompt, and have Lyria generate a 30-second soundtrack in the style of the selected artist. This will include lyrics, a backing track, and an AI-generated voice in the style of the artist.
As a demonstration, Google shared a 21-second generated track in the style of Charlie Puth, including the lyrics "Baby we've got nothing in common but I know that I wish you'd been wanting for so long now."
The voice sounds like Charlie Puth, but with heavy autotune and more distortion than necessary. I’m sure it will improve over time, and it’s perfect for Shorts, but there is an uncanny valley issue at work.
When will it be available?
YouTube hasn’t given a timeline for launch, so it isn’t yet clear exactly when it will be available to Premium users. However, when it goes live, it is expected to appear on the experiments screen in the iOS and Android apps.
That is, if it follows the pattern of previous AI experiments, such as summaries and comment organization on high-profile videos.
This was just one of two new AI music features Google DeepMind revealed in a blog post. The other, also powered by Lyria, is a suite of music AI tools that will be available to artists, songwriters, and producers in YouTube's Music AI Incubator program. These will, for example, let users sing a melody to create a horn line or transform MIDI chords into a vocal choir.
How are they protecting artists?
The other big question is: how do you protect artists from having AI-generated versions of their music created without their consent? For Dream Track, Google signed licensing agreements with the artists involved.
This move follows YouTube's responsible AI policy, announced yesterday, which includes labels flagging YouTube videos created using AI.
DeepMind has also been working on ways to watermark AI-generated content with a tool called SynthID. This embeds a signal in the audio that is inaudible to humans but can be detected by players and computers, allowing them to add labels or alerts if Lyria was used to generate a song.