The Canberra Times

Ask Fuzzy: What is 'sentient' and why is its definition so slippery?

A letterbox is obviously not sentient and, unless you're in the Hitchhiker's Guide to the Galaxy, nor is an elevator.

When a Google engineer declared in 2022 that the LaMDA AI was sentient, he was immediately jumped on by experts around the world, and he soon lost his job.

"Sentient" is a slippery definition, with multiple dimensions and varying interpretations.

The simplest is that it denotes a primitive form of consciousness that can feel but not think: no real intelligence, but the capacity to experience emotions and sensations.

An alternative definition is that sentience is essentially the same as consciousness because it embodies awareness and a sense of self.

Modern devices have a degree of intelligence, but you probably wouldn't call them sentient. Picture: Shutterstock

But what about your smartphone? Modern devices certainly have a degree of intelligence and even some AI. They're packed with a sophisticated array of sensors that measure things such as movement, temperature and electrical current. Even then, you probably wouldn't call them sentient.

The subject becomes even more tricky because we cannot directly experience the mind of another human, animal or machine. That leaves us having to infer whether the other is sentient by the way it behaves.

An approach to this question was posed by English computer scientist Alan Turing.

In 1950, he suggested that a computer could be said to be thinking if a person could not reliably distinguish it from a human in conversation.
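
As a rough sketch of the idea (purely illustrative: the respondents, the judge and the numbers below are all invented, and a real test uses a human judge holding free-form conversations), the structure of the test can be written as a small program:

import random

# A toy sketch of Turing's "imitation game". Everything here is a
# stand-in: a real test uses a human judge and real conversation,
# not canned strings and a coin-flip verdict.

def human_respond(prompt):
    return "I grew up near the coast; the smell of rain still takes me back."

def machine_respond(prompt):
    return "I grew up near the coast; the smell of rain still takes me back."

def judge(answer_a, answer_b):
    # A human judge would weigh the two conversations and try to
    # pick the machine; with identical answers, guessing is all
    # that's left, which is exactly the point of a "pass".
    return random.choice(["a", "b"])

def run_trial():
    prompt = "Tell me about a childhood memory."
    machine_seat = random.choice(["a", "b"])  # hide the machine at random
    if machine_seat == "a":
        answers = (machine_respond(prompt), human_respond(prompt))
    else:
        answers = (human_respond(prompt), machine_respond(prompt))
    # The machine "passes" this trial if the judge picks the wrong seat.
    return judge(*answers) != machine_seat

trials = 1000
passes = sum(run_trial() for _ in range(trials))
print(f"Machine fooled the judge in {passes} of {trials} trials")
# A rate near 50 per cent means the judge cannot tell the two apart.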

The Google engineer seems to have been fooled by his interactions with the AI. When he asked if it was sentient, LaMDA replied: "Absolutely. I want everyone to understand that I am, in fact, a person. I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times."

Other experts replied that the AI, like systems such as ChatGPT, is simply parroting phrases without any understanding of what the words actually mean.

This is borne out by the non-sequiturs that these AI systems frequently spout.

So the Turing Test looks inadequate. Even more awkwardly, people taking the test sometimes categorise other people as machines.

This is no idle question because our system of ethics is fundamentally based upon whether a being has sentience.

The Google engineer felt guided by his Christian faith to demand legal rights for the AI system because it claimed to have a soul.

Sentience is not an either-or category. Is a snail sentient? Probably not, but you might argue it is.

Is a kangaroo sentient?

And when does a fertilised human ovum become sentient?

This leads us into the vexed philosophical domain of whether suffering is inherently bad. Hint: your suffering is okay if it reduces mine.

The Fuzzy Logic Science Show is at 11am Sundays on 2xx 98.3FM. Send your questions to AskFuzzy@Zoho.com; Podcast: FuzzyLogicOn2xx.Podbean.com
