Newsroom.co.nz
John Williams

Time to get over the shock of the 'new'

'AI-generated images of humans are not unusual in the advertising industry. It's cheaper, faster and better than paying for stock image libraries or a photoshoot featuring human models.' Photo: Getty Images

It’s unclear why AI-generated images in advertising are more or less deceptive or ethically questionable than using models

Opinion: There has been a recent furore about the National Party's use of AI-generated images of people in a political advertisement. My immediate reaction was that this was a storm in a teacup.

AI-generated images of humans are not unusual in the advertising industry. It's cheaper, faster and better than paying for stock image libraries or a photoshoot featuring human models.

I expect the use of AI for image creation will become standard practice very soon, probably by the end of next year, unless the industries being disrupted react dramatically, for example if Getty Images and similar stock libraries, along with photographers and models, drop their prices by 50 percent or more.

There are several vendors of software capable of generating photo-realistic images featuring people, including OpenAI’s DALL·E, Stability AI’s Stable Diffusion and Midjourney, so it is likely competitive pressure will keep AI image generation price-competitive with stock image libraries.

Furthermore, for advertisers who have hitherto been unable to find images that suit their requirements in commercial or free online image libraries, and have therefore spent time and effort on custom photography, generative AI has been a massive boon, not only in cost but in sparing them the logistical headaches of planning and executing a photoshoot.

From an ethical point of view, it’s unclear why an AI-generated image is more or less deceptive or otherwise ethically questionable than using actors or models posing as nurses, road workers, etc. 

Do current political ads disclose whether the people in the image are paid models or actors rather than real people? If they don't, why should they have to disclose that the people shown in an ad don't exist at all?

One aspect of the reporting made me chuckle, however. According to The Guardian, an unnamed National Party spokesman called the use of generative AI “an innovative way to drive our social media”.

This practice is hardly innovative, having been possible for the past couple of years, which on the time-scale of the fast-moving, ever-changing world of digital marketing technology feels like about a decade.

So what could account for this furore, other than cheap political point-scoring? There has been a lot of concern, and some outright hysteria, since OpenAI released ChatGPT to the general public in November 2022; it went on to become the most rapidly adopted consumer technology of all time.

Because of this, more people have become aware of the capabilities of AI to generate text and images. Some of that concern has been warranted, for example over the possibility of humans being displaced from employment, and over lethal autonomous weapons (LAWs), but some of it is probably a reaction to the shock of the new. Those of us involved with industries that use AI, notably marketing, see this as the continuing improvement of a long-standing trend, not a tech breakthrough.

I expect the general public will calm down once the shock has worn off. Not too long ago there was similar concern about deepfakes, but one hardly sees them in the news any more.

That may be because the cost-of-living crisis, climate change and the Ukraine war are more important, but it may also be because most people have got over the shock of the new and put the technology in its proper place: alongside “Photoshopped” or “doctored” images, which a decade or two ago had people worried that they could no longer trust their eyes, and that anyone could make malicious images purporting to show politicians in compromising positions.

We don’t worry about that any more, and pretty soon we won’t worry about deepfakes or AI-generated humans in still images, or (within the next year or two, probably) the commodification of photo-realistic video featuring humans.

A recent tech demo by Nvidia and Epic Games showed that it is now possible to use a mobile phone to capture a few seconds of motion and a consumer-grade laptop to generate almost photorealistic footage in a few minutes, a process that until very recently required covering someone’s face with glowing dots, a day of motion-capturing an actor’s expressions and movements, and weeks or months of processing. Not quite photorealistic yet, but it’s easy to see that will be achieved shortly.
