Tom’s Guide
Nigel Powell

Will digital watermarking save the world from fake news?

Ideogram generated image of a robot labeling a painting which is also AI generated.

What do you do when AI fakery becomes more realistic than real life? Well, according to one recent MIT report, you label it with a watermark. But is that enough?

A growing number of governments and concerned agencies are driving a push to identify fake media at the source. The potential for abuse is obvious: how can people tell whether something is real or fake in today’s media-obsessed world?

The problem is not a new one. Those old enough to remember when Photoshop entered the market in 1990 will recall the shock and horror that greeted the first photographers who altered their work with the tool.

Of course, any product that can so easily remove wrinkles was never going to disappear, but for a time the retouching issue reached fever pitch across the world. And to a certain extent, it still exists. ‘Photoshopped’ is now an accepted verb everywhere.

AI just elevates the problem to a new level of potential pain. Not just images, but audio, and soon video. And for the first time, it’s not just retouching here and there, but creating new, completely non-existent, people, places and events. Where does it all stop?

How do you solve a problem like AI?

(Image credit: Google DeepMind)

Of course, the answer is that it doesn't stop, so we just have to deal with it. The good news is that the work has already started. YouTube recently mandated that AI-created videos be labeled as such: creators who upload a video must ‘disclose content that is meaningfully altered or synthetically generated when it seems realistic.’

TikTok has taken it one step further by implementing technology that will automatically label all AI content uploaded to the service, even when the creator has not identified it as such.

We all know the driving force behind these moves, especially in an election year. Fake news has become a popular rallying cry, and the whole situation threatens to careen out of control without some sort of industry or legislative policing.

These early moves by the content platforms are obviously an attempt to deflect calls for legislation, but they may be too little, too late.

The problem will get worse before it is solved

The problem is that the new AI tools are becoming too popular, and the majority of the material is not created for sinister purposes. Advertising, fashion, product marketing, even news services are using AI to enhance media content, often in innovative and valuable ways. And each channel comes with its own potential for abuse.

The most cohesive move against fake content has come from the Coalition for Content Provenance and Authenticity (C2PA). This is a project launched by the Joint Development Foundation, a Washington-based non-profit that aims to tackle AI-based misinformation and manipulation. Its Content Credentials initiative includes major players like Adobe, X, OpenAI, Microsoft and the New York Times.

The move follows on from a presidential Executive Order issued by President Biden late last year which aimed to “protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content.”

Google and Meta also have their own initiatives in place or arriving shortly, called SynthID and Stable Signature respectively. Whether these attempts will be enough remains to be seen.
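To give a sense of what "invisible watermarking" means in principle, here is a deliberately simplified sketch. This is not how SynthID, Stable Signature, or C2PA actually work — production systems embed statistical signals designed to survive cropping, resizing and re-encoding — but a classic least-significant-bit scheme shows the core idea: a label can hide inside pixel data without visibly changing the image. All names and values below are illustrative.

```python
# Toy illustration of invisible watermarking (NOT any vendor's real scheme):
# hide a short tag in the least-significant bits of pixel values.
# Changing only the lowest bit shifts each pixel by at most 1 out of 255,
# which is imperceptible to the eye -- but trivially destroyed by
# re-compression, which is why real systems are far more sophisticated.

def embed(pixels, tag):
    """Write each bit of `tag` (bytes) into the LSB of successive pixels."""
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the tag")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # clear the low bit, then set it
    return out

def extract(pixels, length):
    """Read `length` bytes back out of the LSBs."""
    tag = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        tag.append(byte)
    return bytes(tag)

image = [137, 58, 201, 14] * 16          # stand-in for an 8x8 grayscale image
marked = embed(image, b"AI")
assert extract(marked, 2) == b"AI"       # the hidden label survives
assert all(abs(a - b) <= 1 for a, b in zip(image, marked))
```

The fragility of this toy scheme is exactly the hard problem the industry efforts above are trying to solve: a watermark is only useful if it survives the everyday screenshotting, cropping and re-uploading that viral media goes through.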

A problem that will sort itself out

The fact is the best watermarking or legislative regulation in the world will not stop something from going viral if it’s addictive enough. And by that time, the provenance of the media is a minor part of the equation.

Who reads retractions in newspapers when they admit they got something wrong in a prior report? But in the end, the one thing that might possibly solve the issue is the public’s growing ‘spidey-sense’ about true and false.

In the same way that many people can spot obviously Photoshopped images from their outrageous composition or improbable subject matter, so with AI it may eventually be possible to sense strangeness about a piece of media, no matter how well it’s done.

Or as someone clever once said, perhaps we should assume that all media content is AI faked, unless it’s been incontrovertibly identified as human-made.
