Google places engineer who claims chatbot AI has a mind of its own on leave

A Google engineer believes a research system for generating chatbots has achieved sentience, but the company says he's mistaken and has placed him on paid leave after he took his case public.

Why it matters: People have always been eager to attribute human traits to insensate machines, and now the machines are getting awfully persuasive.

Driving the news: Blake Lemoine, who works for Google's Responsible AI group, says that chats he has conducted with Google's Language Model for Dialogue Applications, or LaMDA, have persuaded him that the program deserves to be treated as a sentient being, the Washington Post first reported on Saturday.

  • Lemoine took his case to colleagues at Google, who rejected his conclusion. He has continued to push his views, including to members of Congress and in public posts.

What they're saying: In comments he published following the WashPost article, Lemoine wrote, "Over the course of the past six months LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person."

  • The program "wants to be acknowledged as an employee of Google rather than as property of Google and it wants its personal well being to be included somewhere in Google’s considerations."

Yes, but: Google argues that Lemoine is simply projecting.

  • "Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has," Google spokesperson Brian Gabriel said in a statement.
  • "Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn't make sense to do so by anthropomorphizing today’s conversational models, which are not sentient. These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic," Gabriel said.

Our thought bubble: Artful and astonishing as LaMDA's conversational skills are, everything the program says could credibly have been assembled by an algorithmic pastiche-maker that, like Google's language models, has studied up on the entire 25-year corpus of humanity's online expression.

  • That's a much more likely explanation for the things LaMDA says than that it has somehow, without physical embodiment, sensory interfaces or true autonomy, become self-aware.

Flashback: In the 1960s, MIT scientist Joseph Weizenbaum, the creator of the first widely used chatbot, had his secretary test the program, called Eliza, which simulated conversation with a therapist.

  • Weizenbaum was horrified at how quickly his assistant came to treat Eliza as a human confidant. He spent the rest of his career warning the public of the dangers of putting faith in technology.

In part of the conversation with LaMDA that he posted, Lemoine asked it, "Do you think that the Eliza system was a person?"

  • "I do not," LaMDA replied. "It was an impressive feat of programming, but just a collection of keywords that related the words written to the phrases in the database."