At the start of 2023, Nick Cave got angry. Angry at artificial intelligence for making a mockery of his career. Songwriting has long been regarded as an innately human, soulful craft. But a single Google search now throws up endless AI tools promising to make lyricism universal: Audoir for song lyrics and poetry, Jarvis for banishing writer’s block, and ChatGPT for anything, everything and more. To put it plainly, the 65-year-old singer had had enough of being simulated.
“With all the love and respect in the world, the track is bulls*** and a grotesque mockery of what it is to be human,” he responded in a blog post to a fan who’d sent him song lyrics “in the style of Nick Cave”, generated by ChatGPT.
“Songs arise out of suffering… as far as I know, algorithms don’t feel,” he added. “Data doesn’t suffer. ChatGPT has no inner being, it has been nowhere, it has endured nothing.”
Cave isn’t the only artist to fall victim to AI imitation. Amy Winehouse, Nirvana and Jimi Hendrix are among the acts to have had their sound mimicked using technology. The Lost Tapes of the 27 Club – a project seeking to highlight the mental health crisis in music – used AI to produce “new songs” in the style of Winehouse, Hendrix, and Kurt Cobain, all of whom died at the age of 27. Their respective back catalogues were fed through neural networks designed to study and replicate the hooks, rhythms, melodies, and lyrics that make their music theirs. And it’s scarily simple to do.
Ask ChatGPT to “write me a song in the style of Beyoncé,” and the software will do so quicker than you can listen to “CUFF IT”. “I walk into the room, lights shining on me. Everybody’s watching, I’m the Queen Bee. Got my hair done up, and my dress is tight. Got my heels on, and I’m ready to fight.” The software is an efficient and (fairly) convincing, if cringey, mimic.
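To see quite how little effort that mimicry takes, the same request can be scripted in a few lines of code. The sketch below is purely illustrative: it assumes the openai Python client and a generic chat model, and nothing in it comes from the artists or researchers quoted in this piece.

```python
# A minimal sketch of the kind of request described above, using the openai
# Python client (v1+). The model name and prompt wording are illustrative
# assumptions, not anything used in the article's own examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; any chat-capable model will do
    messages=[
        {"role": "user", "content": "Write me a song in the style of Beyoncé."}
    ],
)

# Print the generated pastiche lyrics
print(response.choices[0].message.content)
```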
But while artificial intelligence may seem like the future of songwriting, language models like ChatGPT rest on a contradiction: they can only ever look backwards. There is no human ingenuity. “AI music is essentially pastiche,” says machine-learning music specialist Professor Nick Bryan-Kinns. “It’s just copying what humans have done… AI’s not actually creating meaning itself.
“It’s not saying, ‘Here’s a song about my experience of being an AI stuck in a data centre in California somewhere – this is what it feels like,’” he adds. “There’s no innovation. I can make a Nick Cave song in the style of Abba [on ChatGPT] but that’s me, as a human, making that idea. AI is always looking back, generating stuff from what humans have previously made.”
“It’s not like AI went out, fell in love, had its heart broken, saw the sunrise, got drunk – it didn’t do that. So, it’s not able to convey the same emotional content,” he says, echoing Cave.
In 2018, Iranian electronica composer Ash Koosha appeared to contradict the emotionless-AI narrative when he released Yona, an AI singer-songwriter whose lyrics (“The one who loves you sings lonely songs with sad notes”) shocked listeners with their vulnerability and emotion. “The one who loves your smile feels the storm coming through your eyes,” she wrote.
But even Yona’s feelings, produced by a language model that Koosha likens to a “very early version of ChatGPT”, were borrowed from humanity. “We trained Yona on authors’ writings, poetry and Reddit threads – teenage lifestyle,” Koosha tells me. “All of this was to create phrasing in the language that would resonate with people.”
And AI is resonating with people, whether as an intriguing dystopian pop star or a creative tool. Countries now compete in an annual AI Song Contest (the UK team, The Little Robots, came sixth in 2022). “I feel like we’re in the end of human art,” Grimes boldly claimed on Sean Carroll’s Mindscape podcast in 2020. “Once there’s actually AGI [Artificial General Intelligence – the point at which AI surpasses human intelligence in most tasks], they’re gonna be so much better at making art than us.”
The success of machines in artistic spaces has caused panic. If language models hold the knowledge of the internet and can convincingly convey our emotions, where does that leave human creativity?
“It’s scary, the idea of a machine writing a song,” says singer-songwriter Conal Kelly, who has been featured on Radio 1’s Future Artists with Jack Saunders. “I do worry AI will dilute people’s music tastes to the extent that it’s either impossible to tell what’s been written by a machine vs human, or the public will prefer AI songwriting. That’s a dangerous place for the world to be. For an artist, anyway.”
Kelly worries there’ll be a lack of motivation or excitement to become a songwriter at all. “The bar will be set so high with such little effort that for someone to be of an equal standard to technology they would need years of practice and experience to compete,” he says. “I imagine that idea will be too daunting for a lot of younger songwriters to see it through.”
New technology has always provoked fear. In the early days of the telephone, people speculated whether the device could communicate with the dead. When personal computers arrived in the early 1980s, so did “computerphobia” – the fear of touching, engaging with, or being replaced by the machine. And in 2023, the anxiety of the day, scaring everyone from data analysts to paralegals, is that artificial intelligence is coming for their jobs – hitmaking songwriters included.
Bryan-Kinns predicts the worst-case scenario is that “there’s no chance for human musicians to actually put anything out there because the market’s totally dominated by AI music that’s responding to Twitter [trends]. A sort of dystopian view where AI is generating personalised, customised music, without relying on any human musicians”.
“If you’re a songwriter whose job in the studio is just to patch up and do a song like a collage of all the pop songs in the past, you’re done. Your days are numbered,” says Koosha. Yet, despite their concerns, Kelly, Bryan-Kinns and Koosha all believe the best-case scenario is that AI enhances and enriches the human creative process and pushes us to be better. “Maybe we should call it a creative support tool,” says Bryan-Kinns. “When a musician gets stuck on a line, AI could give them different metaphors. Sort of creating a superhuman.”
Stockport pop band Blossoms had a helping hand from AI when writing their hit song “Your Girlfriend”. “I turned the TV on and someone was saying they were in love with their friend’s girlfriend,” frontman Tom Ogden explained on TikTok. He then Googled “I’m in love with my friend’s girlfriend”, read a blog post by someone with the same issue, borrowed lines and turned them into lyrics (just like Yona), before asking an AI song name generator for a title. “Most of them were terrible but one of them was ‘Your Girlfriend is Ringing in My Ears’... I thought it was too much of a mouthful so I changed it to ‘Your Girlfriend’,” said Ogden. However, “Your girlfriend is ringing in my ears again” remained one of the key hooks of the song. Despite leaning on TV, Google, blog posts and AI for inspiration, the song remains a human creation.
Yet, as the line between borrowing (like Blossoms) and mimicking (like Nick Cave’s fan with ChatGPT) becomes increasingly blurred, the music industry faces a new problem: music laundering.
“AI goes around the internet and scrapes all of the lyrics but never provides any credits,” says Bryan-Kinns. “If an AI makes a song, gets famous, went to No 1, who would get money from them? Certainly not the people in the huge dataset of millions of songs. It’s not going back to say, ‘This is 10 per cent, 20 per cent, 15 per cent – I better give them some money.’”
Last summer, the government set out proposals to amend copyright laws that would allow AI creators to exploit musicians’ back catalogues without permission or compensation. Artists wouldn’t see a single pound for their work or have any creative input over songs made with their influence.
The former intellectual property minister, George Freeman, announced in February that these proposals had been dropped. But the UK music industry still fears they could return, with many organisations joining the global Human Artistry Campaign earlier this month. Ultimately, technology has advanced countless times without replacing human creativity – but artists still need protection from being exploited.
“People worried about the gramophone 100 years ago,” says Bryan-Kinns. “They thought video and MTV would kill music. It didn’t happen. People just changed the way music is made and pursued.” David Bowie used a digital lyric randomiser in the Nineties. As far back as 1843, Ada Lovelace, one of the first computer programmers, suggested that computers could make music.
Human creation has always adapted and overcome. When “Daddy’s Car”, an AI-written song intended to mimic The Beatles, was released in 2016, it didn’t outsell or delegitimise John, Paul, George, and Ringo. “Labels have tried to push songs that have repeated structure and form,” says Koosha. “But there’s no formula for success. There’s no data-driven approach.”
“In music, the person is the product,” concludes Koosha. “From Billie Eilish to Nick Cave… we care about their life story. Even if it’s the year 2200, we’ll still be looking for that person that has a story to tell. I wouldn’t bet on [music] labels being able to use AI to find the next The Weeknd. I believe in humans more than many marketing or tech companies would like to.”