At a time when concern about manipulated media has reached a new pitch, the Princess of Wales photo furore underlines the sensitivity around image doctoring.
Catherine was the subject of an image-editing row in 2011, when Grazia altered a photo of her on her wedding day – but that was before breakthroughs in artificial intelligence put everyone on edge.
There has been a deluge of AI-generated deepfakes in recent years, from a video of Volodymyr Zelenskiy telling his soldiers to surrender to explicit images of Taylor Swift. Earlier examples of image manipulation were often clunky – from Argentinian footballers clutching handbags to Stalin’s missing underlings – but AI-generated content now carries an alarming credibility.
Catherine’s attempts to adjust a family photo, amid frenzied social media speculation about her wellbeing, have run straight into widespread concerns about trust in images, text and audio in a year when half the world is going to the polls.
“This photo is a prime example of why 2024 is a crucial year for spotting – and stopping – manipulated media,” says Shweta Singh, an assistant professor of information systems at Warwick Business School.
“Whilst this may have been some low-level photoshopping, much of the edited media currently circulating can be more sinister. With elections in both the UK and the US this year, the importance of media being genuine has never been higher. Suspect photoshopping like this only undermines the faith of the public in the media they are presented with, and risks seriously damaging public trust.”
Michael Green, a senior lecturer in digital media at the University of Kent, says the Wales family photo is “clearly amateur-level editing” using software such as Adobe Photoshop, although the princess, who admitted on Monday that she had altered the photo, has not confirmed which tools she used.
Green says Catherine’s changes “go beyond a touch-up” and represent significant edits, although he adds that the end result underlines her inexperience with whatever software she was using.
Amid an online uproar, some of the world’s biggest picture agencies withdrew the photo after concluding it breached their guidelines. Associated Press, which pulled the image, says a photograph “must not be altered in Photoshop or by any other means”, while Agence France-Presse says photos must not be “manipulated or edited”.
Despite these guidelines, the picture got through. Nic Newman, a senior research associate at the Reuters Institute for the Study of Journalism, says the Wales snap is a reminder that media institutions will have to vet all their material more rigorously in an age of high technological sophistication.
He says: “For years these agencies have been really focusing on verifying pictures that have come through less reliable sources. This is a reminder that we have to be careful about all sources going forwards because everyone has these tools now.”
Experts have also lined up to rule out the photo being AI-generated, in the kind of double-check that is fast becoming standard.
“There is no evidence that this image is entirely AI-generated,” says Hany Farid, a professor at the University of California, Berkeley, and a specialist in deepfake detection.
AI makes the challenge of spotting manipulated or faked media even greater, not only because of its sophistication but also because of its ability to mass-produce convincing content at alarming speed. Speaking at a tech conference in London on Monday, the tech secretary, Michelle Donelan, admitted that in tackling AI-generated deepfakes there was “no one technological solution that is the answer to everything”.
Instead, there will be a patchwork approach to spotting AI-made disinformation from disruptors whose arsenal of ruses now ranges from fake Joe Biden robocalls to newscaster avatars.
Efforts to combat the problem include the Coalition for Content Provenance and Authenticity, a grouping whose members include Adobe, the BBC and Google, and which is developing technical standards for identifying AI-generated disinformation. Mark Zuckerberg’s Meta has announced it will label AI-generated photos on Facebook, Instagram and Threads, while Google is trialling a technology that flags AI-generated images made by its own image-making tools.
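Provenance standards such as C2PA work by embedding signed metadata in the file itself; even today, ordinary metadata can hint at an image’s history. As a minimal sketch, assuming Python with the Pillow library and a hypothetical filename, the EXIF “Software” tag often records which application last saved an image:

```python
from PIL import Image, ExifTags

# Open the image and read its EXIF metadata, if any is present.
# "family_photo.jpg" is a hypothetical filename used for illustration.
img = Image.open("family_photo.jpg")
exif = img.getexif()

# Translate numeric EXIF tag IDs into readable names and print them.
for tag_id, value in exif.items():
    name = ExifTags.TAGS.get(tag_id, hex(tag_id))
    print(f"{name}: {value}")

# Tag 0x0131 ("Software") records the application that last saved the
# file. Plain EXIF is easily stripped or forged, which is why C2PA
# proposes cryptographically signed provenance instead.
print("Last saved by:", exif.get(0x0131) or "unknown")
```

The absence of such a tag proves nothing, since metadata can be stripped in seconds – which is precisely the gap that signed provenance standards aim to close.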
Dame Wendy Hall, a professor of computer science at the University of Southampton and a member of the UN’s advisory body on AI, adds that the Wales family photo story is not about deepfakes and AI but about a longstanding issue that is here to stay.
She says: “It is about who we can trust in the telling of the story as it unfolds. This is an issue that is as old as time; it’s just the technology that is different these days.”