Artificial intelligence pioneer says a part of him now regrets his life’s work
Geoffrey Hinton, a man often referred to as “the godfather of artificial intelligence”, has quit his job at Google over concerns about the risks the technology poses to humanity.
The computer scientist, who led the team that built a breakthrough image-recognition neural network in 2012, had worked at Google for the past decade, helping to develop the company’s AI technology.
This week, however, “he officially joined a growing chorus of critics who say [tech] companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT”, said The New York Times (NYT).
Who is Geoffrey Hinton?
The great-great-grandson of George Boole, the mathematician who invented Boolean algebra, Geoffrey Hinton was born in Wimbledon in 1947.
Hinton said he was first inspired to pursue psychology and computer science after a school friend suggested that the brain might work like a hologram. To create holographic images, people record how beams of light bounce off an object. Applying this idea to the human mind, Hinton began to consider whether brains might work similarly, with each memory being spread out across a neural network rather than just in one location.
“I got very excited about that idea,” he told Wired magazine in 2014. “That was the first time I got really into how the brain might work.”
The revelation led Hinton to explore neural networks at Cambridge and the University of Edinburgh. Eventually his work led him to try to develop a new way of using computing technology to create a form of AI that we now call “deep learning”.
This was once “an outlier”, Wired said, but eventually deep learning became mainstream. Ultimately, according to The Guardian, Hinton’s research “led the way for current systems like ChatGPT”.
‘Bad actors will use it for bad things’
In his interview with the NYT, Hinton said he had quit Google specifically so that he could speak freely about the dangers posed by AI.
“It is hard to see how you can prevent the bad actors from using it for bad things,” Hinton told the newspaper.
According to Hinton, Google acted as a “proper steward” of AI technology until last year, when Microsoft began incorporating a chatbot into its Bing search engine, prompting Google to worry about the threat to its search business.
Talking to the BBC, Hinton said that some of the dangers of AI chatbots were “quite scary”.
“I’ve come to the conclusion that the kind of intelligence we’re developing is very different from the intelligence we have.”
He added: “So it’s as if you had 10,000 people and whenever one person learned something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.”
Hinton’s specific short-term concerns revolve around AI’s ability to generate realistic photos, videos and text that could flood the internet, making it harder for people to know what is true. He is also worried that AI may replace many jobs in the not-too-distant future, including occupations such as personal assistants and translators.
A ‘responsible approach’ to AI?
Google said it appreciated Hinton’s contributions to the company over the past decade, but maintained that it is being “responsible” in its approach to AI.
“I’ve deeply enjoyed our many conversations over the years. I’ll miss him, and I wish him well!” Google’s chief scientist Jeff Dean said in a statement. “As one of the first companies to publish AI Principles, we remain committed to a responsible approach to AI. We’re continually learning to understand emerging risks while also innovating boldly.”
According to the NYT, “a part of him, he said, now regrets his life’s work”. But, he added: “I console myself with the normal excuse: if I hadn’t done it, somebody else would have.”
Hinton added that when people used to ask him why he worked on a technology that posed dangers, he would paraphrase Robert Oppenheimer, who led the US effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”
But, the paper noted: “He does not say that anymore.”