MusicRadar
Clovis McEvoy

“It's awful, it's illegal, and it's bad for our society”: Have AI music generators been trained on your music?

A human robot band.

If you haven’t yet heard about the explosive impact of generative AI, could you please share your rock’s location, so that we can all join you under there.

Yes, in a dizzyingly short time span, entire creative disciplines have been upended, new business empires have emerged, and an ethical can of worms has been opened. The average observer has been left shell-shocked.

Recently, headlines have been grabbed by Suno and Udio, two companies that are currently leading AI music’s new frontier: full-length song generators. Sure, the tunes they spit out are fairly generic, and often unpleasantly lo-fi in their sound quality, but the ability to throw a handful of keyword prompts at a computer and conjure a complete piece of music in mere seconds isn’t just impressive, it’s kind of insane.

Large language models (LLMs), the technology that powers these marvellous feats, rely entirely on data. Image generators are ‘trained’ on images, text generators gobble up books and articles, and music generators analyse tens of thousands of musical works to extract meaningful patterns. Each model is only as good as the information fed into it during the training process – and it’s here that the first problems arise.

"Many of them don't want to admit they're using copyrighted material"

“One of the issues that we see across a lot of these generative AI companies is that they don't want to share their training data,” remarks Dr. Moiya McTier, senior adviser to the Human Artistry Campaign, and herself a science fiction author, astrophysicist, and scholar of folklore. “Many of them don't want to admit they're using copyrighted material. So we don't know where they're getting their data or even what all of their data is.”

Despite this uncertainty, the use of copyrighted content to train music models has become something of an open secret. Simply put: there is not a lot of free-to-use stock music out there, and what is available is not exactly inspiring. So, to generate a passable pastiche of pop music, you need a model built on large amounts of high-quality, commercial music.

According to the Recording Industry Association of America (RIAA), that's exactly what Suno and Udio did.

Representing the ‘big three’ labels – UMG, Sony and Warner Records – the RIAA filed a lawsuit in June of this year, alleging “wilful copyright infringement on an almost unimaginable scale.” Seeking damages of up to $150,000 for each infringed work, the suit is an existential threat for Suno, Udio, and other AI song generators.

It’s likely that music from emerging artists has been used for training data

The allegations made by the RIAA also seem to be well-supported. In a recent column, Ed Newton-Rex, the former VP of Audio at Stability AI and now CEO of the non-profit Fairly Trained, demonstrated how easy it is to generate near-clones of famous songs using the Suno platform. But it’s not just big-name artists who should be worried. Tools that scrape data from public websites are ubiquitous, and it’s likely that music from emerging artists, posted to YouTube, SoundCloud, or similar platforms, has been used for training data.

“There's always the possibility,” replies Samantha Tauber when asked if her music might have been sucked up into a large language model. Best known by her fictional alter-ego, VNCCII, Tauber is no stranger to cutting-edge technology; she’s spent much of her career embracing it. From 3D avatars, to the metaverse, to blockchain, Tauber has woven technology into her art, and into the fictional universe that surrounds VNCCII.

When it comes to AI, she makes clear that she has no beef with the technology itself: “I'm not anti-AI, I'm pro-AI – as long as it supports and respects creative integrity. From an artist's perspective, if someone does not grant explicit permission, and these models are being trained on work without consent, I don't think that's right.”

Suno begs to differ. In a bombastic legal filing, the company put rumours to rest and all but admitted to building its service on commercial music, asserting that doing so constituted ‘fair use’.

“Fair use is a defence against a copyright infringement claim,” says music lawyer Jonathan Larr, who teaches courses in music business at UCLA. “What you’re saying is: ‘I definitely infringed, but my infringement is okay because it's fair use.’”

Broadly speaking, fair use exemptions are reserved for areas like education, news coverage, or academic research. In this instance, Suno’s fair use argument rests on the claim that its model does not copy musical works; it simply ‘listens’ to them and ‘learns’ from them – in the same way that a fledgling songwriter might draw influence and inspiration from the back catalogue of a favourite band before penning an original tune.

The outputs from AI song generators are positioned to compete commercially with the very songs they were trained on

A key sticking point here is that, in many instances, the outputs from AI song generators are positioned to compete commercially with the very songs they were trained on. The RIAA lawsuit cites instances where users are already “publishing Udio-generated outputs on music streaming services, where they will compete for plays against real, copyrighted sound recordings.”

Suno’s CEO, Mikey Shulman, responded in a statement to Billboard that the company's model is “designed to generate completely new outputs, not to memorize and regurgitate pre-existing content. That is why we don’t allow user prompts that reference specific artists.”

Opinion seems to be divided on whether Suno’s fair use legal strategy will work. Some foresee a reckoning for AI song generators; others think the company’s arguments have a real chance of success.

As Larr points out, even by the notoriously complicated standards of music copyright law, fair use claims are “squishy” and generally decided on a case-by-case basis. In Larr’s opinion, Suno’s case looks weak, but he hastens to add: “I have to acknowledge my own bias here; I work with artists, and I don’t want artist-made music to be crowded out by AI-generated music. Beyond my own bias, I'm really not sure – fair use cases are hard to predict.”

This profoundly unsettled legal landscape certainly has not stopped companies from pushing ahead – in truth, many see the uncertainty as a golden opportunity to get products to market before regulators can react.

Recently, Microsoft's CEO of AI, Mustafa Suleyman, compared creative content on the open web to ‘freeware’ and implied that, unless otherwise stated, anything publicly posted online falls under the umbrella of fair use. This viewpoint is notable given that Microsoft has now integrated Suno directly into its Copilot AI service. In a similar vein, former Google CEO Eric Schmidt, speaking at Stanford, told students: “if it [the product] takes off, then you hire a whole bunch of lawyers to go clean up the mess. But if nobody uses your product, then it doesn't matter that you stole all the content.”

That sentiment, shocking when stated so plainly, is indicative of a tried-and-true business model that permeates Silicon Valley to its highest echelons. “I'm pretty concerned when I see people in leadership positions spreading that message, saying the quiet part out loud, and admitting that they're trying to move fast now to make a profit before laws and rules are put in place,” McTier says in response. “It's awful, it is illegal, and it is bad for our society.”

On the other side of the debate are those who say groundbreaking technologies like generative AI would be impossible to achieve in a more regulated environment that prohibits this kind of dubious behaviour.

However, as AI music trailblazers Holly Herndon and Matt Dryhurst point out, in the case of the EU at least, innovation for its own sake faces few legal barriers. “It is legal to train a model on anything on the internet for personal and creative use so long as you are not commercializing the model itself. So, there are few creative restrictions in place for people looking to make art or express themselves. The legitimate question is whether we impose common sense restrictions on private companies monetizing the data of people with no expectation of sharing in profits made.”

With projects like the groundbreaking Holly+ vocal model, Herndon and Dryhurst have spent years at the forefront of the AI music conversation. Their new exhibition, The Call, which opens in October at Serpentine Gallery in London, invites the public to interact with a vocal model trained on fifteen UK-based choirs. It’s not only innovative in its artistry; it also hints at alternative solutions to the issues raised by AI models.

Working with Serpentine Gallery, Herndon and Dryhurst established what they've described as a ‘Data Trust experiment’. The pair say they are now “working with the community of choristers to set up a precedent-setting data governance infrastructure that gives underrepresented creatives greater collective control and agency over their data.”

“We are less interested in what we can get away with,” Herndon and Dryhurst continue, “and more interested in demonstrating that AI training can be positive-sum and consenting. We feel that, long term, there is no reason why we can’t have very powerful AI models that are trained on consenting data, are free to use, and return benefits to their training subjects.”

"There needs to be a licensing structure and musicians need to be paid for their music"

Looking ahead, Larr sees a possible repeat of the early battles over music piracy, where legal action against platforms like Napster eventually created a space for licensed streaming platforms to take over. “There needs to be a licensing structure and musicians need to be paid for their music,” he says. “I think a positive step would be if this results in a Spotify-esque, licensed equivalent that becomes the dominant AI music company. Of course there are lots of complaints about how Spotify compensates artists, but that’s a different conversation.”

The RIAA lawsuit stands as a watershed moment. Should AI music companies be given legal cover under the fair use doctrine to train their models on any and all content, it would have seismic repercussions across the music industry. At the same time, it’s important to remember that a win for the RIAA is not a magic bullet when it comes to AI song generators.

Just as major labels are suing AI companies for using copyrighted music without consent, they are simultaneously negotiating with other tech giants to license music for very similar projects. Are major labels above generating songs from dead artists to compete with living ones? Would some labels license music that they own the rights to for AI training without the consent of the artist? History tells us that, when money is involved, major labels rarely side with the little guys.

“I see a lot of the paths forward ending up in a similar place,” warns McTier. “That place is one where artists don't feel safe putting their work out there because it might be scraped by these generators. They might not have enough of a market to put their work out because it's been so saturated by AI generated content. And the listeners, the viewers, would ultimately lose a lot of trust in the music and art scene if they don't know whether something was made by AI or made by a human that they respect and admire.”

It's not a rosy picture of tomorrow, but it's also not an unrealistic one. Licensed or otherwise, AI song generators are here to stay, and we all need to figure out how to live with them. If artists and fans want a different future for music, they can’t expect that change to come from on high; they will need to fight for it.

“This is my livelihood,” emphasises Tauber as she gears up to release her debut album. “As an artist, you just don't know what could happen to your art, but you have to continue to create; you can’t stop out of fear that your work will be used in training data. Creativity is one of the highest forms of intelligence in existence – if you lose that to AI, then what's our function, really, as humans?”
