The Guardian - UK
Entertainment
Ella Creamer

‘Hallucinate’ chosen as Cambridge dictionary’s word of the year

‘It’s so easy to anthropomorphise these systems’ … large language model AIs are notorious for ‘hallucinating’ false information. Photograph: Andrew Ostrovsky/Alamy

Cambridge dictionary’s word of the year for 2023 is “hallucinate” – a verb that gained an additional meaning this year.

The original definition of the chosen word is to “seem to see, hear, feel, or smell” something that does not exist, usually because of “a health condition or because you have taken a drug”. It now has an additional meaning: when an artificial intelligence system such as ChatGPT, which generates text that mimics human writing, “hallucinates”, it produces false information.

The word was chosen because the new meaning “gets to the heart of why people are talking about AI”, according to a post on the dictionary site. Generative AI is a “powerful” but “far from perfect” tool, “one we’re all still learning how to interact with safely and effectively – this means being aware of both its potential strengths and its current weaknesses”.

The dictionary added a number of AI-related entries this year, including large language model (or LLM), generative AI (or GenAI), and GPT (an abbreviation of Generative Pre-trained Transformer).

“AI hallucinations remind us that humans still need to bring their critical thinking skills to the use of these tools,” continued the post. “Large language models are only as reliable as the information their algorithms learn from. Human expertise is arguably more important than ever, to create the authoritative and up-to-date information that LLMs can be trained on.”

Henry Shevlin, an AI ethicist at the University of Cambridge, said it was “striking” that, rather than choosing a computer-specific term like “glitches” or “bugs” to describe the mistakes that LLMs make, the dictionary team decided on a “vivid psychological verb”. He said this may be because “it’s so easy to anthropomorphise these systems, treating them as if they had minds of their own”.

Shevlin also said that this year will probably be the “high watermark of worries” about AI hallucinations because AI companies are making efforts to curb the frequency of mistakes by drawing on human feedback, users are learning what kinds of tasks to trust LLMs with, and models are becoming increasingly specialised.

The dictionary provides two usage examples of “hallucinate” as it relates to AI: “LLMs are notorious for hallucinating – generating completely false answers, often supported by fictitious citations” and “The latest version of the chatbot is greatly improved but it will still hallucinate facts.”

Cambridge’s decision follows Collins dictionary’s naming of “AI” as its word of the year.
