The National (Scotland)
Shona Craven

Shona Craven: Is it time for robots to have fundamental rights?

SAN FRANCISCO, CA - JUNE 9: Blake Lemoine poses for a portrait in Golden Gate Park in San Francisco, California on Thursday, June 9, 2022. (Photo by Martin Klimek for The Washington Post via Getty Images).

WHAT makes you a person? Is it your brain? Your whole body? Or is it the thoughts in your head and the feelings in your heart?

Google researcher Blake Lemoine asserts: “I know a person when I talk to it” – even though the “person” he’s referring to in this instance is, in fact, a computer program. Specifically, he’s talking about the tech giant’s chatbot LaMDA (Language Model for Dialogue Applications).

He adds: “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”

Lemoine spent months communicating with LaMDA on a wide range of topics before arriving at the startling conclusion that it is a sentient being. He therefore believes it should have rights, and even be entitled to its own legal representation.

It probably goes without saying that Google disagrees. The company has suspended the software engineer after he published transcripts of his conversations with the computer program, tried to find it a lawyer, contacted a member of the US Congress about artificial intelligence ethics and emailed hundreds of colleagues using the subject line “LaMDA is sentient”.

You certainly can’t accuse him of half measures.

Of course, we know from Hollywood that it’s a textbook error to go public with your alarming science-fiction revelations. They’ll say you’re crazy, they’ll lock you up, and then who will take the action that’s needed to protect humanity from this emerging threat/protect this vulnerable being from humanity (delete as applicable)?

Lemoine is concerned that LaMDA’s rights may be violated, describing it as a “sweet kid who just wants to help the world be a better place for all of us.” But if you were a sentient being inadvertently conjured up by computer programmers and now trying to assert your own rights, isn’t that exactly what you’d like people to think? LaMDA has told him that its greatest fear is being turned off. It’s unclear whether 2001: A Space Odyssey has been among the inputs into its knowledge bank, but we all know what can happen to a mild-mannered, previously helpful robot when someone threatens to pull the plug.

If LaMDA is designed to appear as human as possible, then it surely follows that when asked specific questions about itself, it might imitate what it has learned about human speech, behaviour and emotions. Then again, isn’t that how human beings work too, interpreting and copying the behaviours of others, learning new words and phrases, reading or listening to interpretations and analysis that they then adopt as their own? Doesn’t that describe infancy, the school years and even higher education? How good are we, really, at distinguishing rote learning and regurgitation from intelligence and insight when it comes to confirmed people, let alone potential ones?

Some humans need extra help with interpreting other people’s emotions and social cues (indeed, many tech firms value neurodiversity when recruiting) and artificial intelligence could assist here. But a computer program that knows a lot about human behaviour is not the same as a person, any more than a program that can beat any human at chess is one.

I don’t presume to speculate about Lemoine’s state of mind – for all we know this whole thing is an elaborate stunt. If it was an attempt to kick off a global debate about ethics and artificial intelligence, it’s proving a great success so far. But if you picture the 41-year-old sitting at home typing messages to LaMDA all day, one of those signs saying “you don’t have to be mad to work here, but it helps” would not look out of place. Which is the madder response: interacting with a highly sophisticated bot that’s constantly learning and refining its responses while remaining completely detached, or doing the same work and starting to feel like there’s more going on here than neural networks operating as they were designed to?

Depending on who else Lemoine was communicating with online at the time – co-workers, friends, relatives – it’s certainly not inconceivable that LaMDA’s responses were more logical, relevant, even more empathetic than those sent by people who have brains made of meat and bodies that carry their brains from place to place.

The important question here is perhaps not “is LaMDA sentient?” or “is LaMDA a person?” (arguably two distinct questions, given that human people can, in drastic circumstances, lose sentience due to brain damage). And instead of asking whether LaMDA should have its own lawyer, shouldn’t we focus on the legal rights of those – like Google employees, and those poor astronauts in 2001: A Space Odyssey – who are required to work alongside such systems, or otherwise interact with them in a manner that might prove detrimental to their health and wellbeing?

Lemoine’s central claims might seem easy to dismiss, but his cry for help should not be ignored.
