The Guardian - UK
Comment
Charlie Beckett

GPT-4 has brought a storm of hype and fright – is it marketing froth, or is this a revolution?

The GPT-4 logo on a mobile phone screen. ‘It is too late to put this technology back in the box.’ Photograph: Jaap Arriens/NurPhoto/Rex/Shutterstock

The recent flurry, or rather blizzard, of announcements of new variants of generative AI has brought a storm of hype and fright. OpenAI’s ChatGPT already appeared to be a gamechanger, but this week’s new version, GPT-4, is another leap ahead. GPT-4 can generate enough text to fill a book, write code in many programming languages and – most remarkably – “understand” images.

If your mind is not boggled by the potential of this, then you haven’t been paying attention. I have spent the past five years researching how artificial intelligence has been changing journalism around the world. I’ve seen how it can supercharge news media to gather, create and distribute content in much more efficient and effective ways. It is already the “next wave” of technological change. Now generative AI has moved that potential up a gear or two.

But hang on. This is not a breakthrough to “sentient” AI. The robots are not coming to replace us. However, these large language models (LLMs) – the technology behind ChatGPT – are accelerants that operate at such scale and speed that they can appear to do whatever you prompt them to do. And the more that we use them and feed them data and questions, the faster they learn to predict outcomes.

A million startups are already claiming to use this secret sauce to create new products that will revolutionise everything from legal administration to share dealing, from gaming to medical diagnosis. A lot of this is marketing froth. As with every tech breakthrough, there is a hype cycle, along with unexpected good and bad consequences. But I have seen enough to know that it’s going to alter our lives. Just think what these tools could do in the hands of creative people in fashion or architecture, for example.

Artificial intelligence – in forms such as machine learning, automation and natural language processing – is already part of our world. For example, when you search online you are using machine-learning-driven algorithms trained on vast datasets to give you what you are looking for. Now the pace of change is picking up. In 2021 alone, global private corporate investment in AI doubled, and I expect the generative AI breakthroughs to double that again.

Now take a breath. I don’t recommend that anyone use ChatGPT or GPT-4 to create anything right now – at least not anything that will be used without a human checking to make sure it is accurate, reliable and efficient, and does no harm. AI is not about the total automation of content production from start to finish: it is about augmentation, giving professionals and creatives tools to work faster and freeing them up to spend more time on what humans do best.

We know that there are some real extra risks in using generative AI. It has “hallucinations” where it makes things up. It sometimes creates harmful content. And it will certainly be used to spread disinformation or to invade privacy. People have already used it to create new ways to hack computers, for example. You might want to use it to create a wonderful new video game, but what if some arch-villain uses it to create a deadly virus?

We know about those risks because we can see the flaws when we try out the prototypes that the technology companies have made publicly available. You can have a lot of fun getting it to write poems or songs or create surreal images. Ask it a straight question and you usually get a sensible, safe answer. Ask it a stupid or complex question and it will struggle. A lot of tech experts and journalists have had fun testing it to destruction and making it respond in bizarre and disturbing ways. The AI boffins will be delighted, because all of this helps them refine their models. They are, in effect, conducting their experiments partly in public.

We also know about the risks because OpenAI itself has listed them on its “system card” that explains the new powers and dangers of this tech, and how it has sought to ameliorate them with each new iteration. Who decides in the end what risks are acceptable or what we should do about them is a moot question.

It is too late to put this technology “back in the box”. It has too much potential for helping humans meet the global challenges we face. It is vital that we have an open debate about the ethical, economic, political and social impact of all forms of AI. I hope that our politicians educate themselves about this fast-emerging technology more quickly and more thoroughly than they have in the past, and that we all become more AI-literate. But ultimately, my main hope is that we take the time and effort to think carefully about the best ways it can be used positively. You don’t have to believe the hype to have some hope.

  • Charlie Beckett is a professor in the Media and Communications Department at the LSE. He is director of Polis, the LSE’s journalism thinktank and leader of the LSE Journalism and AI project.
