The Guardian - UK
Comment
Kenan Malik

ChatGPT can tell jokes, even write articles. But only humans can detect its fluent bullshit

The latest version of ChatGPT has just been released by OpenAI. Photograph: NurPhoto/Getty Images

As the capabilities of natural language processing technology continue to advance, there is a growing hype around the potential of chatbots and conversational AI systems. One such system, ChatGPT, claims to be able to engage in natural, human-like conversation and even provide useful information and advice. However, there are valid concerns about the limitations of ChatGPT and other conversational AI systems, and their ability to truly replicate human intelligence and interaction.

No, I didn’t write that. It was actually written by ChatGPT itself, a conversational AI software program, after I asked it to create “an opening paragraph to an article sceptical about the abilities of ChatGPT in the style of Kenan Malik”. I might quibble about the stolid prose but it’s an impressive attempt. And it is not difficult to see why there has been such excitement, indeed hype, about the latest version of the chatbot since it was released a week ago.

Fed huge amounts of human-created text, ChatGPT looks for statistical regularities in this data, learns what words and phrases are associated with others, and so is able to predict what words should come next in any given sentence, and how sentences fit together. The result is a machine that can persuasively mimic human language.
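To see what that kind of prediction amounts to in miniature, here is a deliberately tiny sketch in Python: count which word follows which in a corpus, then guess the likeliest successor. The three-sentence corpus and the function names are invented for the illustration; ChatGPT itself is a vast neural network trained on billions of words, not literal counts like these.

    from collections import Counter, defaultdict

    # An invented toy corpus; real systems train on billions of words.
    corpus = (
        "the cat sat on the mat . the dog sat on the rug . "
        "the cat chased the dog ."
    ).split()

    # Count how often each word follows each other word (a bigram model).
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def predict_next(word):
        """Return the statistically likeliest next word, if any was seen."""
        counts = follows[word]
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("the"))  # -> 'cat' (most often seen after 'the')
    print(predict_next("sat"))  # -> 'on'

The sketch captures the article's point: the program has only the statistics of word sequences to go on, with no notion of what a cat or a mat is.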

This capacity for mimicry allows ChatGPT to write essays and poetry, think up jokes, formulate code, and answer questions, whether for a child or an expert. And to do it so well that many over the past week have been both celebrating and panicking. “Essays are dead,” wrote the cognitive scientist Tim Kietzmann, a view amplified by many academics. Others claim that it will finish off Google as a search engine. And the program itself thinks it may be able to replace humans in jobs from insurance agent to court reporter.

And yet the chatbot that can write grade A essays will also tell you that if one woman can produce one baby in nine months, nine women can produce one baby in one month; that one kilo of beef weighs more than a kilo of compressed air; and that crushed glass is a useful health supplement. It can make up facts and reproduce many of the biases of the human world on which it is trained.

ChatGPT can be so persuasively wrong that Stack Overflow, a platform for developers to get help writing code, banned users from posting answers generated by the chatbot. “The primary problem,” wrote the mods, “is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good.” Or, as another critic put it, it’s a fluent bullshitter.

Some of these problems will be ironed out over time. Every conversation involving ChatGPT becomes part of the databank used to improve the program. The next iteration, GPT-4, is due next year and is expected to be more persuasive and to make fewer errors.

Nevertheless, beyond such incremental improvement also lies a fundamental problem that faces any form of artificial intelligence. A computer manipulates symbols. Its program specifies a set of rules with which to transform one string of symbols into another, or to recognise statistical patterns. But it does not specify what those symbols or patterns mean. To a computer, meaning is irrelevant. ChatGPT “knows” (much of the time at least) what appears meaningful to humans, but not what is meaningful to itself. It is, in the words of the cognitive scientist Gary Marcus, a “mimic that knows not whereof it speaks”.

Humans, in thinking and talking and reading and writing, also manipulate symbols. For humans, however, unlike for computers, meaning is everything.

When we communicate, we communicate meaning. What matters is not just the outside of a string of symbols but its inside too, not just the syntax but the semantics. Meaning for humans comes through our existence as social beings, embodied and embedded in the world. I only make sense of myself insofar as I live in, and relate to, a community of other thinking, feeling, talking beings.

Of course, humans lie, manipulate, are drawn to and promote conspiracy theories that can have devastating consequences. All this is also part of being social beings. But we recognise humans as being imperfect, as potentially devious, or bullshitters, or manipulators.

Machines, though, we tend to view either as objective and unbiased, or potentially evil if sentient. We often forget that machines can be biased or just plain wrong, because they are not grounded in the world in the way humans are, and because they need to be programmed by humans and trained on human-gathered data.

We also live in an age in which surface often matters more than depth of meaning. An age in which politicians too often pursue policy not because it is necessary or right in principle but because it fares well in focus groups. An age in which we often ignore the social context of people’s actions or speech and are bedazzled by literalness. An age in which students are, in the words of the writer and educator John Warner, “rewarded for… regurgitating existing information” in a system that “privilege[s] surface-level correctness” rather than “develop[ing] their writing and critical thinking skills”. That ChatGPT seems so easily to write grade A essays, he suggests, “is mainly a comment on what we value”.

None of this is to deny the remarkable technical achievement that is ChatGPT, or how astonishing it feels to interact with it. It will undoubtedly develop into a useful tool, helping to enhance both human knowledge and creativity. But we need to maintain perspective. ChatGPT reveals not just the advances being made in AI but also its limitations. It also helps to throw light on both the nature of human cognition and the character of the contemporary world.

More immediately, ChatGPT raises questions, too, about how to relate to machines that are far better at bullshitting and at spreading misinformation than humans themselves. Given the difficulties in tackling human misinformation, these are not questions that should be delayed. We should not become so mesmerised by ChatGPT’s persuasiveness that we forget the real issues that such programs may pose.

• Kenan Malik is an Observer columnist

  • Do you have an opinion on the issues raised in this article? If you would like to submit a letter of up to 250 words to be considered for publication, email it to us at observer.letters@observer.co.uk
