Fortune
Jeremy Kahn

RIAA lawsuits against Suno and Udio, and Perplexity's broken pledges, show the perils of AI's 'move fast, break sh-t' culture

Guitarist and vocalist Billie Joe Armstrong brandishing a guitar on stage in front of a screen showing images of a large audience as he performs live with American punk band Green Day at the Isle of Wight Festival. (Credit: Dawn Fletcher-Park/SOPA Images/LightRocket—Getty Images)

This week two different stories hammered home the extent to which today’s generative AI startups have adopted a “move fast and break sh-t” approach and how this may come back to haunt them—and us.

The first story is that the Recording Industry Association of America (RIAA), which represents major music labels, including Sony, Warner Music, and Universal Music, filed copyright infringement lawsuits against two well-funded AI startups, Suno AI and Uncharted Labs, which both offer text-to-music genAI models. (Suno’s is eponymous, while Uncharted’s is called Udio.) The complaints in both suits contain a number of pretty damning examples in which the RIAA demonstrates that Suno and Udio can, when prompted with an artist’s name—or even just a simple description of the sort of vibe that artist is known for—and also given the lyrics to one of that artist’s songs, produce works that substantially overlap the original, copyrighted song in composition and rhythm.

Both companies have been evasive about what data they used to train their AI models. Uncharted has said that Udio is trained on “a large amount of publicly available and high-quality music” that was “obtained from the internet.” Suno CEO Mikey Shulman has said the company uses a “mixture of public and proprietary data” and, according to the RIAA’s complaint, told the association’s lawyers only that its training data was “confidential,” while not denying the RIAA’s contention that it had hoovered up vast libraries of copyrighted songs. In a statement responding to the RIAA suit, Shulman said the company’s AI model produced “transformative” works and noted that Suno does not allow users to prompt the model with an artist’s name. (The RIAA showed that this guardrail was easily overcome by simply adding spaces between the letters in a name.) Shulman also said that rather than engaging in a “good faith discussion, [the record labels] have reverted to their old lawyer-led playbook.”

The problem for Shulman and his counterparts at Udio: That old lawyer-led playbook works. The RIAA has deep pockets and a strong track record of winning similar lawsuits against companies that likewise claimed copyright law somehow didn’t apply to a novel technology. Remember, it was the RIAA that essentially drove Napster out of business in the early 2000s, and it eventually forced Spotify to strike a licensing and royalty deal for the music streamed across its platform. Those lawsuits established the ground rules of a new technological age. These suits will likely help do the same for the genAI era.

While it is possible that a judge will rule that using copyrighted material to train an AI system is, as Suno and other AI companies such as OpenAI and Stability AI have contended, “fair use,” it is highly unlikely any court will rule that creating outputs that substantially plagiarize the original, copyrighted works does not constitute infringement.

The troubling thing is that it has been obvious to anyone who used Suno’s or Udio’s text-to-music generators that they were trained on copyrighted material and could generate infringing outputs. And yet the most prestigious venture capital firms and investors in Silicon Valley have chosen to reward this “better to beg forgiveness than ask permission” business culture by doling out huge amounts of money to these ethically challenged startups. In May, Suno announced a $125 million venture capital round led by Lightspeed Venture Partners, Founder Collective, Matrix, and Nat Friedman and Daniel Gross.

The core ethical principle here is consent. It is a point that Neil Turkewitz, a former RIAA executive who has emerged as one of the most passionate voices railing against tech companies hoovering up data without permission to train AI models, has made repeatedly. If generative AI is the platform of the future, do we want it to be built on a foundation of theft and nonconsensual use? Or would we rather put consent at the heart of the business models on which the AI age is being constructed?

The other AI news item that caught my attention this week also revolves around consent. An investigation by Wired found that popular genAI search engine Perplexity was almost certainly violating its own pledge to respect robots.txt, the standard publishers use to tell AI companies not to scrape their content for free. A site’s robots.txt file is a way of asking the bots run by specific tech companies not to scrape the data found on its pages. The protocol is voluntary—just a courtesy, really—not something legally binding. But the point is that Perplexity said it would abide by the standard, and then didn’t. In fact, Wired caught it trawling websites from an IP address that Perplexity had not disclosed as belonging to one of its web-scraping bots—which certainly makes it seem like Perplexity knows exactly what it is doing. Perplexity told Wired in a statement that the publication’s questions “reflect a deep and fundamental misunderstanding of how Perplexity and the internet work,” but then provided no further explanation of how, exactly, Perplexity does work.
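
For the curious, here is a minimal sketch of how the robots.txt protocol works, using Python’s standard-library parser. The bot name (ExampleAIBot) and the URLs are hypothetical illustrations, not details from the Wired investigation; the point is that a well-behaved crawler checks the rules before fetching, while nothing technically stops a misbehaving one from ignoring them.

```python
# Minimal sketch of the (voluntary) robots.txt protocol, using Python's
# standard library. ExampleAIBot and example.com are hypothetical names.
from urllib.robotparser import RobotFileParser

# A publisher asking one specific company's bot to stay away entirely,
# while leaving the site open to all other crawlers.
robots_txt = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A compliant crawler consults the rules before fetching a page.
print(parser.can_fetch("ExampleAIBot", "https://example.com/story"))  # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/story"))  # True
```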

This is in some ways even worse than what Suno and Udio have done. The AI music companies at least had the decency to stare awkwardly at their feet and mumble when confronted about taking copyrighted works without consent. Perplexity is looking us in the eye and saying, “Of course we won’t scrape your data (psych!).” Wired points out that summarizing news stories, which is what Perplexity does, is legally protected. (The story notes some other issues with Perplexity too, such as the fact that its AI models sometimes summarize news stories incorrectly, and that other times the summaries basically plagiarize the original work, which could be a copyright violation in some circumstances.) But I am more interested in the issue of ethics. And here, consent is key.

The corollary to Turkewitz’s question is: What happens if we don’t put consent at the heart of our data-gathering practices? Well, what happens is that we all—as consumers or customers of this technology—become complicit. Silicon Valley’s louche ethics become ours.

This collective complicity is why we should all insist on tech companies doing better—and since they are unlikely to clean up their act unless forced to do so, why we need regulation. The EU’s AI Act, which requires that companies disclose if they have used any copyrighted material in training, is a start. The U.S. should consider similar rules.

But we should also insist, as consumers and customers, that tech companies come clean about what they are doing, and require that they certify they have consent to use the data their AI models are ingesting. A few weeks ago in London, I met with Ed Newton-Rex, a composer and entrepreneur who for a time headed Stability AI’s music generation efforts before resigning over the company’s approach to consent and copyright. Newton-Rex has launched a new venture, Fairly Trained, that seeks to be a kind of Good Housekeeping Seal of Approval for AI companies’ data procurement practices. So far, though, only about 14 companies’ AI models have been certified. More work is clearly needed.

With that, here’s more AI news. 

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

Before we get to the news... Join me and Fortune as we delve into AI’s role in reshaping business operations and tackling challenges unique to Asian markets. We’ll be discussing AI’s impact on the bottom line and on everything from geopolitics to the environment at Fortune Brainstorm AI Singapore, July 30-31. Enjoy two days of dynamic interviews, fast-paced panels, live demos, and meaningful networking opportunities at the Singapore Ritz Carlton. Don’t miss out—reserve your spot today to stay ahead in the new race for AI. And for readers of Eye on AI, I can offer you a special discount code, good for 50% off the normal delegate price. Just use the code BAI50JeremyK when you register here.

Finally, a reminder to preorder my forthcoming book Mastering AI: A Survival Guide to Our Superpowered Future. It is being published by Simon & Schuster in the U.S. on July 9 and in the U.K. by Bedford Square Publishers on Aug. 1. You can preorder the U.S. edition here and the U.K. edition here.
