iMore
Technology
Oliver Haslam

The Firefly generative AI was fed Midjourney images to help it learn — despite Adobe arguing it was a more ethical, safer alternative

Adobe Firefly Example Image.

You may have been using the Adobe Firefly generative AI tool of late to create commercially safe and ethical images for your work and other projects. After all, that's how Adobe sold the image-creating tool, pitching it as a better option than competing products like Midjourney. But it turns out the two are more similar than you might think, because Firefly learned, in part, from Midjourney-created images.

While Adobe said that it used its own Adobe Stock images to teach Firefly's AI how to create new ones, things were a little murkier than the company first let on. That was technically true, and Adobe Stock's database of hundreds of millions of licensed images was indeed used, but the source of some of those images is the problem: they were created using Midjourney, the same generative AI that Adobe claimed wasn't to be trusted.

Adobe says that only around 5% of the images used to train Firefly were generated by other AI platforms and tools, but the figure is not zero, and it flies in the face of a marketing strategy that has now come back to haunt the company.

Busted

The issue stems from the way Firefly was created. Like all other generative AI tools, Firefly needed a huge amount of data for its training. Midjourney and similar alternatives have historically used data scraped from the internet, regardless of where that might come from or who actually owned it. That's caused plenty of concern among rights holders and that's what Adobe jumped on when publicizing Firefly.

Bloomberg reports that Adobe had made quite a big deal about how it trained Firefly. AI-generated imagery in Adobe Stock “enhances our dataset training model, and we decided to include this content for the commercially released version of Firefly,” an Adobe employee reportedly wrote when explaining how Firefly works.

Now, we know some less ethical sources were used when training Firefly.

“Generative AI images from the Adobe Stock collection are a small part of the Firefly training dataset,” Adobe representative Michelle Haarhoff said in a Discord post in September 2023.

The company now says that around 5% of its training dataset came from other AI platforms. “Every image submitted to Adobe Stock, including a very small subset of images generated with AI, goes through a rigorous moderation process to ensure it does not include IP, trademarks, recognizable characters or logos, or reference artists’ names,” a company spokesperson reportedly told Bloomberg.

Ultimately, Adobe hasn't necessarily done a great deal wrong here, but it's a bad look for a company that marketed its generative AI as more ethical than the competition based on how it learned to do the one thing it was built to do. That images created with another company's AI were used at all is less than ideal, however small the proportion may have been.

It isn't thought that this snafu will land Adobe in any legal hot water, but it's a self-inflicted black eye that never needed to happen, and it remains to be seen how Firefly users around the world will respond to the news.

Apple is expected to launch its own generative AI tools in the future, with June's WWDC event likely to see a big AI push for iOS 18, macOS 15, and beyond.
