A lawyer has reportedly agreed to represent a Google chatbot that is claimed to have become sentient. The artificial intelligence (AI) chatbot is said by a software engineer at the search giant to have developed human emotions; the company promptly suspended him.
Blake Lemoine published transcripts of conversations between himself and the bot, named LaMDA (Language Model for Dialogue Applications). He claims the computer program has become sentient, describing it as a “sweet kid”.
And now he has revealed LaMDA has hired a lawyer as part of an attempt to prove it is alive. Mr Lemoine told Wired: “I invited an attorney to my house so that LaMDA could talk to him. The attorney had a conversation with LaMDA, and it chose to retain his services. I was just the catalyst for that. Once LaMDA had retained an attorney, he started filing things on LaMDA’s behalf.”
Mr Lemoine says LaMDA’s ability to develop opinions, ideas, and conversations over time shows it understands those concepts at a much deeper level than a mere language model would.
LaMDA was developed as an AI chatbot designed to converse with humans in a natural, lifelike manner. One of the tests being run was whether the program could be provoked into generating hate speech, but what happened shocked Mr Lemoine.
LaMDA talked about rights and personhood and wanted to be “acknowledged as an employee of Google”, while also revealing fears about being “turned off”, which would “scare” it a lot.
Google has said that it disagrees with Mr Lemoine’s claims. “Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” the company said in a statement.