In a major step towards transparency and authenticity in AI-generated visuals, OpenAI has announced it will embed watermarks directly into images created with ChatGPT on the web and with the company's popular image generation model, DALL-E 3.
The move comes amid the spread of AI-generated deepfakes and misinformation online. To combat this, OpenAI said it will now include "C2PA metadata" in images generated by ChatGPT and DALL-E 3, allowing people to identify images created with AI tools.
The decision is also the AI startup's response to growing demand for standardised methods of monitoring and labelling AI content across social media platforms. In the same vein, Meta recently confirmed it is developing a tool to identify AI-generated content on Facebook, Instagram and Threads.
OpenAI noted that the watermark will include key information such as the C2PA logo and the time the image was generated. However, the Sam Altman-led AI company acknowledges that incorporating the Coalition for Content Provenance and Authenticity (C2PA) standard into DALL-E 3 and ChatGPT is "not a silver bullet to address issues of provenance".
The company noted that metadata like C2PA can be removed either accidentally or intentionally. Most social media platforms, for instance, strip metadata from uploaded images, and an action as simple as taking a screenshot produces a new image that carries no metadata at all.
In other words, there's no way to ensure that an image lacking this metadata wasn't generated with ChatGPT or OpenAI's API. While alternative methods like reverse image search, metadata investigation and image analysis can offer hints, their accuracy is not guaranteed.
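To make the "metadata investigation" approach concrete, here is a minimal sketch of how one might check a PNG file for an embedded C2PA manifest. It assumes the manifest is stored in a PNG chunk of type `caBX`, as described in the C2PA specification; the demo-image builder below is purely illustrative and is not OpenAI's implementation.

```python
import struct
import zlib

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def png_chunks(data):
    """Yield (chunk_type, payload) pairs from a PNG byte stream."""
    assert data[:8] == PNG_SIGNATURE, "not a PNG file"
    pos = 8
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        yield ctype.decode("ascii"), data[pos + 8:pos + 8 + length]
        pos += 8 + length + 4  # length field + type + payload + CRC

def has_c2pa_chunk(data):
    # The C2PA spec designates the 'caBX' chunk type for the
    # manifest store in PNG files (assumption stated in the text above).
    return any(ctype == "caBX" for ctype, _ in png_chunks(data))

def _chunk(ctype, payload):
    # Standard PNG chunk layout: length, type, payload, CRC-32 of type+payload.
    return (struct.pack(">I", len(payload)) + ctype + payload
            + struct.pack(">I", zlib.crc32(ctype + payload)))

def make_demo_png(with_manifest=False):
    """Build a 1x1 grayscale PNG, optionally with a fake 'caBX' chunk."""
    ihdr = _chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
    idat = _chunk(b"IDAT", zlib.compress(b"\x00\x00"))  # filter byte + pixel
    extra = _chunk(b"caBX", b"fake-manifest") if with_manifest else b""
    return PNG_SIGNATURE + ihdr + extra + idat + _chunk(b"IEND", b"")
```

A re-encode that copies only the core chunks (IHDR, IDAT, IEND) would silently drop the manifest, which illustrates why the absence of this metadata proves nothing about an image's origin.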
Furthermore, C2PA metadata slightly increases the file size of AI-generated images, though it does not affect image quality. In its latest blog post, OpenAI shared illustrative examples of how image sizes might change with the addition of this data:
- 3.1 MB → 3.2 MB for PNG through the API (3% increase)
- 287 KB → 302 KB for WebP through the API (5% increase)
- 287 KB → 381 KB for WebP through ChatGPT (32% increase)
This change will roll out to mobile users by February 12, 2024.
Watermarking may not be enough
OpenAI has been sparing no effort to restrict the spread of misleading content made using its AI tools. For instance, the company recently banned the developer of Dean.Bot, an AI-powered bot that mimicked US presidential candidate Dean Phillips.
Likewise, watermarking images generated with ChatGPT and DALL-E 3 is a step in the right direction. However, it may not be enough, given that AI is being used to spread misinformation and create fake content ahead of the 2024 US election.
Explicit images of pop star Taylor Swift circulated online a few weeks ago; they were later revealed to be AI deepfakes, reportedly generated using Microsoft Designer.
So, it is crucial to put guardrails in place and tighten moderation of tools that use AI to generate images.