Photographers are upset that Meta is labelling their edited photos on Instagram, Facebook and Threads as “made with AI”, raising concerns that the company’s “one-size-fits-all” approach to labelling artificial intelligence (AI) content is too blunt.
The tech giant’s rollout of automated AI content tagging has prompted a backlash from the photography community and kicked off a debate about what qualifies as being made with AI.
In April, Meta announced it would start labelling images, audio and videos that it detected were AI-generated based on “industry-shared signals of AI images”. The decision follows rapid advances in generative AI that have made it possible for anyone with a smartphone to instantly create photorealistic images for free, including through Meta’s own Meta AI platform.
Over the past few weeks, people on Instagram and Threads have begun to notice images they posted being tagged as “Made with AI”. In some cases, they’re not happy about it. One viral post on Threads came from photographer Matt Seuss, who shared a photo he took of Utah’s spectacular Mesa Arch that had been labelled on Instagram and Threads as made with AI. While acknowledging that he used Adobe Photoshop’s generative AI feature to “remove a small distracting dust flare”, Seuss took umbrage at the label: “Photo was made with camera and slightly edited with AI — big difference from made with AI,” he replied to one user.
Meta’s platforms and other online communities like r/Instagram are littered with complaints from photographers and other artists who dispute that the label should be applied to their work. In some but not all cases, they’ve used Photoshop’s AI tools. In another Reddit thread, users claim that their 3D product renderings have been tagged, too.
While the “Made with AI” tag doesn’t mean that a post is removed or penalised in Meta’s algorithms, some have argued that it undermines their work because it suggests the entire image is AI-generated or fake. “The AI label undermines the photographer, suggesting they have somehow contrived the image by using a controversial generative AI program like Midjourney,” wrote online photography publication PetaPixel’s Matt Growcoot.
Meta did not answer questions by deadline about how it detects and applies the tag on its platforms. But tests by users and PetaPixel suggest that Meta’s systems rely on an image’s metadata — details embedded in a file, like a digital equivalent of writing the date on the back of a physical photo. When Photoshop’s generative AI tools are used, even to a minuscule degree — such as generative fill, which lets users select an area of an image and replace it with AI-generated graphics from a prompt — the file is tagged as “Made with AI” when uploaded.
One reason the “Made with AI” tag has chafed photographers is that the scheme has significant holes that still allow people to post heavily edited or even entirely AI-generated images without being branded with what one user deemed the “scarlet letter”.
Other Photoshop features that can significantly edit an image do not trigger the tag, including content-aware fill, which fills in a selected section of an image using an algorithm to match the rest of the image. Nor does the tag appear on images uploaded from well-known image generation services including OpenAI’s DALL-E and Midjourney, even when they are obviously fake to the eye. In fact, all it takes to avoid the tag is wiping the metadata — which can be done as easily as screenshotting an image and uploading the screenshot. Simply put, Meta’s AI detection is exceedingly easy to sidestep.
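The detection-and-evasion dynamic described above can be sketched in a few lines. This is a deliberate simplification, not Meta’s actual (unpublished) pipeline: it assumes the signal is the IPTC “digital source type” term that Adobe tools reportedly embed in a file’s XMP metadata when generative fill is used, and it checks merely for that marker’s presence in the raw file bytes.

```python
# Hypothetical sketch of metadata-based AI detection, assuming the signal is
# the IPTC digital-source-type term written into XMP metadata by generative
# AI editing tools. Not Meta's actual method.

AI_MARKER = b"compositeWithTrainedAlgorithmicMedia"

def looks_ai_edited(image_bytes: bytes) -> bool:
    """Return True if the file's embedded metadata carries the AI marker."""
    return AI_MARKER in image_bytes

# An edited photo whose metadata records the generative-AI edit:
edited = b"\xff\xd8<jpeg pixel data><xmp>" + AI_MARKER + b"</xmp>"

# "Screenshotting" keeps the same pixels but drops the metadata:
screenshot = b"\xff\xd8<jpeg pixel data>"

print(looks_ai_edited(edited))      # True
print(looks_ai_edited(screenshot))  # False
```

Because the check depends entirely on bytes the uploader controls, stripping the metadata defeats it — which is exactly the screenshot loophole users have demonstrated.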
Tech creator Tyler Stalman experimented by posting a photo of a streetscape with a sign edited out using Photoshop’s generative fill, and then the same photo with cartoonish monsters and a burning truck added and the metadata removed. Meta tagged the former as “made with AI” but not the latter.
The confusion and angst over how this label is being rolled out has its roots in thorny questions about the definitions of photographs, AI and reality. The distinction between Photoshop’s generative AI tools and its older, non-generative but still algorithmic features is a fine and technical one; both use automation to edit an image in a photorealistic way. Similarly, modern smartphones apply computational photography — algorithmic changes to the image — when capturing a scene.
Plus, while the average person may assume a “Made with AI” label means the image doesn’t reflect reality, it is another question altogether whether they would consider a photograph fake if it had been edited with AI tools merely to remove a blur or brighten colours.
RMIT senior lecturer in visual communication and digital media Dr T.J. Thomson is grateful that Meta has taken some steps to improve transparency and context around images, but is worried that Meta’s “one-size-fits-all” approach may do more harm than good.
He would like to see more specific labels that might show which parts of an image have been edited or how it was edited. But even this wouldn’t solve the problem: “Machines won’t be able to guess intent so whether an edit is innocuous or meant to mislead or deceive will still require critical thinking skills and media literacy knowledge,” Thomson said in an email.