Unless you've been in Loompaland for the past week, you will have seen news of the disastrous Willy Wonka Experience. Billed as an "exhilarating and immersive" live experience in Glasgow, it turned out to be an event so abysmal and so flagrantly misadvertised that it made the 2017 Fyre Festival look like a paragon of ethical promotion.
Adverts for The House of Illuminati's 'Willy's Choclate Experience' (sic; even the event website's meta title seems to have been written by AI) used AI-generated imagery to depict an enchanting fantasy world very vaguely inspired by Roald Dahl's stories but with much worse spelling. However, when guests arrived, they discovered that the magical and whimsical place was a virtually derelict warehouse forlornly decorated with a few homemade props.
Disappointed visitors were reportedly given refunds after police were called in, but we're left with the feeling that there are lessons to be learned here. The first lesson is perhaps for the general public: never attend an event or buy a product whose advertising relies on obviously AI-generated imagery depicting scenes that could not possibly be recreated in real life, especially when the promoter hasn't even bothered to proofread the name of their product.
But as the public learns to treat AI imagery with skepticism, and while we wait for regulators to step in, brands that don't want to damage their reputations need to ensure they are being honest in their use of AI.
Experts at Bynder, a global leader in Digital Asset Management (DAM), point out that brands need to be wary of the ethics of using AI in advertising, not only out of respect for their customers, but also for the good of their reputation.
Steve Vinall, Director of Global Brand and Communications, told us: “The Wonka experience shows that an increased accessibility to high-quality AI image generation could cause issues for brands. All ads need to adhere to strict advertising standards, however in the case of using AI to create realistic looking imagery, false advertising could be on the rise.”
So what should brands do? Vinall suggests that although advertising laws don't yet specifically address concerns regarding AI-generated images, brands would be well advised to disclose their use of the tech.
“It’s in brands' best interests, as well as their customers', to ensure all AI generated content is disclosed as such, in order to ensure transparency and not partake in false advertising practices," he says.
“It could be argued that AI doesn’t pose any false advertising risks which weren’t there before it became mainstream. However, the sheer volume of people who now have access to AI to create these high quality images may mean that they are used more frequently in ads.”
“Currently advertising laws don’t specifically address AI-generated content, but with the ever changing landscape and growth of the technology we can only expect this to change soon. Despite this, advertising laws still apply to this AI-generated content, meaning advertisers should be aware of the risks associated with misleading consumers.”
“As more and more people get access to these tools it’s important that we all have an understanding of responsible AI use and when and where this must be disclosed."
AI is surely going to have a place in the creation of advertising – in the use of digital twins, for example. But the Wonka debacle shows that brands need to be careful about egregious and misleading uses of the technology, especially when they don't have a product to back it up.
Meanwhile, there's a new kid on the block among AI image generators. Some are suggesting that Ideogram could beat Midjourney and DALL-E 3 in some areas.