Tom’s Guide
Don Reisinger

OpenAI to launch deepfake detector as realism of AI-generated content grows

Still from a video created with a text prompt by OpenAI Sora.

It's no secret that AI-generated content is changing how people interact with the digital world. However, it also poses its fair share of risks as people create images and videos for potentially malicious reasons. And now, OpenAI says it's planning to address that.

OpenAI on Tuesday (May 7) launched a new image detection tool to help people identify whether an image was generated by its DALL-E 3 image-generation tool or if it was created without AI's assistance. In addition, OpenAI will make the tool available to a limited number of testers who are using its platform so they can integrate the image-detection feature into their apps.

In a blog post, OpenAI said that its tool could identify when an image was generated by AI in approximately 98 percent of cases and only returned false positives — flagging a real image as one created by AI — in 0.5 percent of cases.

The new tool's announcement came alongside OpenAI's acknowledgment that it's been integrating metadata into the images and videos users create with its DALL-E 3 and Sora image- and video-creation tools, respectively. But as OpenAI acknowledged, with a little bit of know-how, malicious actors can remove that metadata, making its tool all the more necessary.

Deepfakes, or synthetically generated content designed to dupe users into believing it's human-generated, have become an increasingly concerning problem on the Internet. With an ever-increasing number of people turning to AI to create fake videos, images, and audio recordings, it's quickly becoming clear that verifying the veracity of content is exceptionally important.

While OpenAI's new tool is a step in the right direction, it's by no means a panacea. Although it apparently works well, it's only been trained on DALL-E 3-generated images. In other words, if bad actors create images with other AI image generators, there's no guarantee the OpenAI tool will work as well — if it works at all. It's also worth noting that while OpenAI touted the tool's image-detection performance, deepfake videos designed to dupe users can be far more difficult to identify.

Still, at least OpenAI is doing something. In a world where bad actors are looking to fool users, companies like OpenAI need to find ways to protect users, or reality itself could fall prey to AI.

And in keeping with that, OpenAI also said on Tuesday that it's joined the steering committee of the Coalition for Content Provenance and Authenticity (C2PA), the group behind a widely used standard for certifying digital content.

OpenAI stopped short of saying exactly how it'll impact C2PA, but did say that it "looks forward to contributing to the development of the standard, and we regard it as an important aspect of our approach."
