More than 100 deepfake videos impersonating Rishi Sunak were promoted on Facebook last month, a study has found.
Communications company Fenimore Harper has raised the alarm about the threat AI poses ahead of the next general election.
Fake adverts about the Prime Minister may have reached as many as 400,000 people despite appearing to break Facebook’s rules.
The videos mark the first time that the Prime Minister’s image has been doctored systematically en masse, the Guardian reports.
One video included faked footage of BBC newsreader Sarah Campbell appearing to read breaking news about a fake scandal that had erupted around Mr Sunak earning "colossal sums from a project that was initially intended for ordinary citizens".
Nearly £13,000 was spent on 143 adverts from 23 countries including the US, Turkey, Malaysia and the Philippines, the study found.
It comes after deepfake videos of Sir Keir Starmer were shared online at the beginning of the Labour Party conference in October.
Fenimore Harper was set up by Marcus Beard, a former Downing Street official who headed No 10’s response to countering conspiracy theories during the Covid crisis.
Mr Beard has warned that the adverts mark a shift in the quality of fakes and that elections this year are at risk of being manipulated by AI-generated fakes.
He said: "With the advent of cheap, easy-to-use voice and face cloning, it takes very little knowledge and expertise to use a person’s likeness for malicious purposes.
"Unfortunately, this problem is exacerbated by lax moderation policies on paid advertising. These adverts are against several of Facebook’s advertising policies. However, very few of the ads we encountered appear to have been removed."
In 2024, more than 40 countries — home to over 40 per cent of the world's population — will hold national elections, making it the biggest year for global democracy to date.
A UK government spokesperson told the Guardian: "We are working extensively across government to ensure we are ready to rapidly respond to any threats to our democratic processes, through our defending democracy taskforce and dedicated government teams.
"Our Online Safety Act goes further by putting new requirements on social platforms to swiftly remove illegal misinformation and disinformation – including where it is AI-generated – as soon as they become aware of it."
A BBC spokesperson told the newspaper: "In a world of increasing disinformation, we urge everyone to ensure they are getting their news from a trusted source.
"We launched BBC Verify in 2023 to address the growing threat of disinformation – investing in a highly specialised team with a range of forensic and open source intelligence (OSINT) to investigate, factcheck, verify video, counter disinformation, analyse data and explain complex stories.
"We build trust with audiences by showing how BBC journalists know the information they are reporting, and offer explainers on how to spot fake and deepfake content. When we become aware of fake BBC content we take swift action."
A spokesperson for Meta, Facebook's parent company, told the Standard: "We remove content that violates our policies whether it was created by AI or a person.
"The vast majority of these adverts were disabled before this report was published and the report itself notes that less than half a percent of UK users saw any individual ad that did go live.
"Since 2018, we have provided industry-leading transparency for ads about social issues, elections or politics, and we continue to improve on these efforts.”