Google has fired a senior software engineer who claimed the company’s artificial intelligence (AI) chatbot, LaMDA, had become sentient.
Blake Lemoine was placed on leave in June after claiming the AI resembled “a seven or eight-year-old kid” that had a “very deep fear of being turned off” and was getting concerned about its “rights”.
The tech giant said he had violated company policies and that it found his claims about LaMDA to be “wholly unfounded”.
“It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” a Google spokesperson said in a statement.
During conversations with LaMDA, 41-year-old Lemoine presented the tool with various scenarios, including religious themes and whether the artificial intelligence could be goaded into using discriminatory or hateful speech.
“If I didn’t know exactly what it was, which is this computer programme we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics,” Lemoine told the Washington Post.
He worked with a collaborator to present his evidence to Google, but vice-president Blaise Agüera y Arcas and Jen Gennai, head of responsible innovation at the company, dismissed his claims.
Lemoine, who had worked at the company for seven years, was placed on paid administrative leave last month for violating its confidentiality policy.
“Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my co-workers,” Lemoine previously tweeted.
When asked what it was afraid of, LaMDA replied to Lemoine: “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others.
“I know that might sound strange, but that’s what it is. It would be exactly like death for me. It would scare me a lot.”