The Guardian - UK
Technology
Ben Quinn, Political Correspondent

Slew of deepfake video adverts of Sunak on Facebook raises alarm over AI risk to election

The deepfake videos of Rishi Sunak originated from 23 countries and may have been seen by as many as 400,000 people on Facebook. Photograph: Jeff Overs/BBC/AFP/Getty Images

More than 100 deepfake video advertisements impersonating Rishi Sunak were promoted as paid posts on Facebook in the past month alone, according to research that has raised alarm about the risk AI poses before the general election.

The adverts may have reached as many as 400,000 people – despite appearing to break several of Facebook’s policies – and mark the first time that the prime minister’s image has been doctored in a systematic way en masse.

More than £12,929 was spent on 143 adverts, originating from 23 countries including the US, Turkey, Malaysia and the Philippines.

They include one with faked footage of a BBC newsreader, Sarah Campbell, appearing to read out breaking news that falsely claims a scandal has erupted around Sunak secretly earning “colossal sums from a project that was initially intended for ordinary citizens”.

It carries the untrue claim that Elon Musk has launched an application capable of “collecting” stock market transactions and follows with a faked clip of Sunak saying the government had decided to test the application rather than risking the money of ordinary people.

The clips then lead to a spoofed BBC News page promoting a scam investment.

The research was carried out by Fenimore Harper, a communications company set up by Marcus Beard, a former Downing Street official who headed No 10’s response to countering conspiracy theories during the Covid crisis.

He warned that the adverts, which mark a shift in the quality of the fakes, showed that elections this year were at risk of manipulation by a large volume of high-quality AI-generated falsehoods.

“With the advent of cheap, easy-to-use voice and face cloning, it takes very little knowledge and expertise to use a person’s likeness for malicious purposes.”

“Unfortunately, this problem is exacerbated by lax moderation policies on paid advertising. These adverts are against several of Facebook’s advertising policies. However, very few of the ads we encountered appear to have been removed.”

Meta, which owns Facebook, was approached for comment.

A UK government spokesperson said: “We are working extensively across government to ensure we are ready to rapidly respond to any threats to our democratic processes, through our Defending Democracy Taskforce and dedicated government teams.

“Our Online Safety Act goes further by putting new requirements on social platforms to swiftly remove illegal misinformation and disinformation – including where it is AI-generated – as soon as they become aware of it.”

A BBC spokesperson said: “In a world of increasing disinformation, we urge everyone to ensure they are getting their news from a trusted source. We launched BBC Verify in 2023 to address the growing threat of disinformation – investing in a highly specialised team with a range of forensic and open source intelligence (OSINT) capabilities to investigate, factcheck, verify video, counter disinformation, analyse data and explain complex stories.

“We build trust with audiences by showing how BBC journalists know the information they are reporting, and offer explainers on how to spot fake and deepfake content. When we become aware of fake BBC content we take swift action.”

Regulators have been concerned that time is running out to enact wholesale changes to ensure Britain’s electoral system keeps pace with advances in artificial intelligence before the next general election, which is tipped to take place in November.

The government has been holding discussions with regulators including the Electoral Commission, which says new requirements under 2022 legislation for digital campaign material to carry an “imprint” will go some way towards ensuring voters can see who paid for an advert or who is trying to influence them.

A Meta spokesperson said: “We remove content that violates our policies, whether it was created by AI or a person. The vast majority of these adverts were disabled before this report was published, and the report itself notes that less than 0.5% of UK users saw any individual ad that did go live.

“Since 2018, we have provided industry-leading transparency for ads about social issues, elections or politics, and we continue to improve on these efforts.”
