Evening Standard
Technology
Alan Martin

Google CEO Sundar Pichai admits people ‘don’t fully understand’ how chatbot AI works

As Google plans the further integration of artificial intelligence into its core search product, CEO Sundar Pichai has admitted that the company doesn’t “fully understand” how its Bard AI comes up with some of its answers.

“There is an aspect of this which we call — all of us in the field call it as a ‘black box,’” Pichai told CBS’ 60 Minutes programme, when asked how Bard began fluently conversing in Bengali — a language it wasn’t trained in.

“You know, you don’t fully understand. And you can’t quite tell why it said this, or why it got it wrong,” he continued. “We have some ideas, and our ability to understand this gets better over time. But that’s where the state of the art is.”

When pressed by interviewer Scott Pelley on why Google would believe technology it doesn’t completely understand is ready for public consumption, Pichai defended its choice. He compared it to our knowledge of the human psyche. “I don’t think we fully understand how a human mind works either,” he said.

While the new wave of AI chatbots is impressively lifelike in its responses, users are advised not to take everything the bots say as gospel. One common issue is a phenomenon known as “hallucination”, where an AI will invent facts to fit a user’s question.

One such hallucination proved very expensive for Google: the company saw $100 billion (£80.5bn) wiped off its market value when Bard claimed the James Webb Telescope captured “the very first image of a planet outside our solar system”.

That sounds plausible to the uninitiated, but the honour in fact belongs to the NACO instrument on the European Southern Observatory’s Very Large Telescope, which captured such an image 17 years before the James Webb Telescope launched.

Such hallucinations still appear to be an active problem. As part of its report, 60 Minutes asked Bard about inflation, and the bot duly wrote an essay on economics, recommending five books in the process. It turns out that none of those books exists.

Such oddities, Pichai says, are “expected”.

“No one in the field has yet solved the hallucination problems,” he said. “All models do have this as an issue,” he continued, but stated his belief that “we’ll make progress.”


After being quizzed on this and the inherent risks of disinformation, Pichai was asked whether Bard is safe for society. His answer was perhaps more equivocal than you might expect from a company poised to go big on AI in the near future.

“The way we have launched it today, as an experiment in a limited way, I think so,” Pichai said. “But we all have to be responsible in each step along the way.”

For some, including the 1,000-plus AI experts, business leaders and technologists who signed a recent open letter on the subject, the only “responsible” thing to do is to pause training of “AI systems more powerful than GPT-4” for at least half a year.

It doesn’t sound like Google will be doing that, though it is at least making cautious noises. Pichai told Pelley that the company is testing more advanced versions of Bard that can reason and plan, but is deliberately not rushing their rollout.

Slowing down rather than pausing may not be quite what the signatories had in mind, but the reasoning is largely the same. Asked whether the caution was to allow society to get used to AI, Pichai said that this was only part of the calculation.

“That’s one part of it. One part is also so that we get the user feedback. And we can develop more robust safety layers before we build, before we deploy more capable models.”

But with ChatGPT already boasting more than 100 million users, there’s a risk that consumer use will quickly outpace society’s ability to legislate against the more dangerous implementations — and that’s before the environmental impact is considered.

On these bigger questions, Pichai ended his interview by saying that to “develop AI systems that are aligned to human values”, humanity needs a full spectrum of expertise.

“I think the development of this needs to include not just engineers, but social scientists, ethicists, philosophers, and so on,” he said.

“I think these are all things society needs to figure out as we move along. It’s not for a company to decide.”
