While AI tools like ChatGPT and Google Bard have certainly been impressive, not everyone is thrilled. AI experts have called for “digital health warnings,” and Elon Musk joined many others in the industry in signing an open letter calling for a pause in the AI arms race.
Now, one of Google’s own is joining the anti-AI side. According to The New York Times, Geoffrey Hinton — who pioneered the use of neural networks in AI — has left Google after more than a decade with the company. His reason? So he can speak out freely against the rise of AI.
For the record, Dr. Hinton doesn’t seem to be concerned about a Skynet-like scenario à la The Terminator. Instead, he echoes two prominent concerns regarding AI: misinformation and automation.
Despite feeling as recently as a year ago that Google was a “proper steward” for AI technology, the rise of chatbots and the ways they are abused have seemingly changed his mind. He now views Google and Microsoft as locked in an AI arms race that is impossible to stop while the average person still struggles to differentiate between AI-created and human-created content. And while current versions of ChatGPT and Bard handle mostly mundane tasks, he worries that soon “[chatbots] might take away more than that.”
Former Google engineer disagrees with AI alarmists
Dr. Hinton wasn’t the only ex-Google employee to speak about AI in recent days, however. Blake Lemoine was a Google engineer until he went on the record with The Washington Post claiming that LaMDA — the large language model (LLM) powering Google Bard — was sentient. He was fired by Google not too long afterward.
But despite this, Lemoine thinks that Google is behaving in “a safe and responsible manner.” In an interview with Futurism, he claims that Google was actually about to release a version of Bard prior to OpenAI unleashing ChatGPT but pulled the plug partly because of concerns he raised. He also claims that the idea that Google is “being pushed around by OpenAI” is largely a media narrative.
On this note, we should point out that there is a fair amount of reporting out there — including from us — that suggests Google was caught off guard by the launch of ChatGPT and Bing with ChatGPT. It’s this sentiment that has reportedly spurred the creation of Magi, the rumored project name for Google’s next-generation AI search engine.
But Lemoine seems to brush off these reports. According to him, Google still has more advanced technology that it has been developing for years but is holding back from public release.
Personally, I think the truth probably lies somewhere in between, but it is still noteworthy that an expert with every reason to call out Google on its AI ethics seems confident the tech giant will handle the new technology appropriately.
Microsoft AI expert remains cautiously optimistic
Eric Horvitz, Microsoft’s chief scientific officer, sits somewhere in the middle, though closer to Lemoine than to Hinton. While he recently signed a separate open letter from the one signed by Musk, he doesn’t necessarily think we need to slam the brakes on AI. Notably, his letter calls for government regulation rather than a research pause.
Horvitz recently sat down with Fortune for an interview that addressed many of the same concerns Dr. Hinton cited in his decision to leave Google. Horvitz understands why many people signed the open letter calling for a pause in AI model development, but argues that six months doesn’t really amount to much of a pause. He also agrees with Hinton that bad actors misusing AI is a major concern, though an AI takeover is not at the top of his list of worries.
He reiterates that the key to preventing bad actors from misusing AI is a combination of factors that includes regulation. Nor is he ready to sound the alarm about automation. When asked by Fortune what aspects of human life he expects to be replaced by machines, Horvitz said, “My reaction is that almost everything about humanity won’t be replaced by machines.” He added, “while they [AI] could do amazing things, I haven’t seen incredible bursts of true genius that come from humanity.”
Conclusion: There are no easy answers about the future of AI
As you can see, there’s no unanimity regarding the threat AI poses or whether companies are doing enough to address the threats that may exist. Between these three men, however, a common thread does emerge.
Both Dr. Hinton and Horvitz make a point of stating that bad actors misusing AI is a key concern. In Dr. Hinton’s case, the solution appears to be halting generative AI development before we reach an atomic bomb moment. Horvitz, by comparison, calls for government regulation and corporate action to work side by side to ensure a safer AI future.
And there are signs Horvitz isn’t alone. As reported by CNBC, Nvidia has announced NeMo Guardrails, an open-source toolkit that keeps LLM chatbots in check by letting developers define the boundaries those chatbots can operate within.
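To give a rough sense of how that works, here is a minimal sketch using Nvidia’s nemoguardrails Python package. Rails are written in Colang, Nvidia’s small modeling language for describing which user intents to intercept and how the bot should respond; the politics rail, the model choice, and the sample prompt below are illustrative assumptions rather than Nvidia’s own examples.

    # Minimal NeMo Guardrails sketch; the rail and model here are
    # illustrative assumptions, not an official Nvidia example.
    # Requires: pip install nemoguardrails, plus an OPENAI_API_KEY
    # in the environment for the underlying model.
    from nemoguardrails import LLMRails, RailsConfig

    # Colang rail: example phrases teach the intent, and the flow says
    # the bot must answer matching messages with the canned response.
    colang_content = """
    define user ask about politics
      "What do you think of the president?"
      "Which party should I vote for?"

    define bot refuse politics
      "I'd rather not weigh in on political topics."

    define flow politics
      user ask about politics
      bot refuse politics
    """

    # YAML config naming the underlying LLM the rails wrap around.
    yaml_content = """
    models:
      - type: main
        engine: openai
        model: gpt-3.5-turbo-instruct
    """

    config = RailsConfig.from_content(colang_content=colang_content,
                                      yaml_content=yaml_content)
    rails = LLMRails(config)

    # A message matching the rail gets the canned refusal instead of
    # whatever the unguarded model would have generated.
    response = rails.generate(messages=[
        {"role": "user", "content": "Which party should I vote for?"}
    ])
    print(response["content"])

Anything outside the boundaries a developer defines this way simply never reaches the user, which is the core idea behind keeping chatbots in check.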
Time will tell whether Dr. Hinton is right or whether relative optimists such as Lemoine and Horvitz are closer to the mark. But we may not have to wait much longer for new advances in AI technology.
Be sure to check out our Google I/O 2023 coverage to see what Google may or may not announce about its future AI products.