The rapid online spread of deepfake pornographic images of Taylor Swift has renewed calls, including from US politicians, to criminalise the practice, in which artificial intelligence is used to synthesise fake but convincing explicit imagery.
The images of the US popstar have been distributed across social media and seen by millions this week. The images first circulated on the messaging app Telegram before spreading to X, where one was viewed 47m times before it was removed.
X said in a statement: “Our teams are actively removing all identified images and taking appropriate actions against the accounts responsible for posting them.”
Yvette D Clarke, a Democratic congresswoman for New York, wrote on X: “What’s happened to Taylor Swift is nothing new. For yrs, women have been targets of deepfakes [without] their consent. And [with] advancements in AI, creating deepfakes is easier & cheaper. This is an issue both sides of the aisle & even Swifties should be able to come together to solve.”
Some individual US states have their own legislation against deepfakes, but there is a growing push for a change to federal law.
In May 2023, Democratic congressman Joseph Morelle unveiled the proposed Preventing Deepfakes of Intimate Images Act, which would make it illegal to share deepfake pornography without consent. Morelle said the images and videos “can cause irrevocable emotional, financial, and reputational harm – and unfortunately, women are disproportionately impacted.”
In a tweet condemning the Swift images, he described them as “sexual exploitation”. His proposed legislation has not yet become law.
Republican congressman Tom Kean Jr said: “It is clear that AI technology is advancing faster than the necessary guardrails. Whether the victim is Taylor Swift or any young person across our country, we need to establish safeguards to combat this alarming trend.” He has co-sponsored Morelle’s bill, and introduced his own AI Labeling Act that would require all AI-generated content (including more innocuous chatbots used in customer service settings, for example) to be labelled as such.
Swift has not spoken publicly about the images. Her US publicist had not replied to a request for comment as of publication time.
Convincing deepfake video or audio has been used to imitate some high-profile men, particularly politicians such as Donald Trump and Joe Biden, and artists such as Drake and the Weeknd. In October 2023, Tom Hanks told his Instagram followers not to be lured in by a fake dentistry advert featuring his likeness.
But the technology is overwhelmingly targeted at women, and in a sexually exploitative way: a 2019 study by DeepTrace Labs, cited in the proposed US legislation, found that 96% of deepfake video content was non-consensual pornographic material.
The issue has considerably worsened since 2019. Fake pornography, where photo editing software is used to place a non-consenting person’s face into an existing pornographic image, is a longstanding problem. But a new frontier has opened up thanks to the sophistication of artificial intelligence, which can be used to generate entirely new and highly convincing images, including by using simple text commands.
High profile women are particularly at risk. In 2018, Scarlett Johansson spoke about widespread fake pornography featuring her likeness: “I have sadly been down this road many, many times. The fact is that trying to protect yourself from the internet and its depravity is basically a lost cause, for the most part.”
The UK government made non-consensual deepfake pornography illegal in December 2022, in an amendment to the Online Safety Bill that also outlawed any explicit imagery taken without someone’s consent, including so-called “downblouse” photos.
Dominic Raab, then deputy prime minister, said: “We must do more to protect women and girls from people who take or manipulate intimate photos in order to hound or humiliate them. Our changes will give police and prosecutors the powers they need to bring these cowards to justice and safeguard women and girls from such vile abuse.”