The Guardian - US
Technology
Edward Helmore in New York

‘We are a little bit scared’: OpenAI CEO warns of risks of artificial intelligence

Sam Altman issued the warning as OpenAI released the latest version of its language AI model, GPT-4. Photograph: The Washington Post/Getty Images

Sam Altman, CEO of OpenAI, the company that developed the controversial consumer-facing artificial intelligence application ChatGPT, has warned that the technology comes with real dangers as it reshapes society.

Altman, 37, stressed that regulators and society need to be involved with the technology to guard against potentially negative consequences for humanity. “We’ve got to be careful here,” Altman told ABC News on Thursday, adding: “I think people should be happy that we are a little bit scared of this.

“I’m particularly worried that these models could be used for large-scale disinformation,” Altman said. “Now that they’re getting better at writing computer code, [they] could be used for offensive cyber-attacks.”

But despite the dangers, he said, it could also be “the greatest technology humanity has yet developed”.

The warning came as OpenAI released the latest version of its language AI model, GPT-4, less than four months after the original version was released and became the fastest-growing consumer application in history.

In the interview, the artificial intelligence engineer said that although the new version was “not perfect”, it had scored in the 90th percentile on the US bar exam and achieved a near-perfect score on the high school SAT math test. It could also write computer code in most programming languages, he said.

Fears over consumer-facing artificial intelligence, and artificial intelligence in general, focus on humans being replaced by machines. But Altman pointed out that AI only works under direction, or input, from humans.

“It waits for someone to give it an input,” he said. “This is a tool that is very much in human control.” But he said he had concerns about which humans had input control.

“There will be other people who don’t put some of the safety limits that we put on,” he added. “Society, I think, has a limited amount of time to figure out how to react to that, how to regulate that, how to handle it.”

Many users of ChatGPT have encountered a machine whose responses are defensive to the point of paranoia. In tests offered to the TV news outlet, GPT-4 conjured up recipes from the contents of a fridge.

The Tesla CEO, Elon Musk, one of the first investors in OpenAI when it was still a non-profit company, has repeatedly issued warnings that AI or AGI – artificial general intelligence – is more dangerous than a nuclear weapon.

Musk voiced concern that Microsoft, which hosts ChatGPT on its Bing search engine, had disbanded its ethics oversight division. “There is no regulatory oversight of AI, which is a *major* problem. I’ve been calling for AI safety regulation for over a decade!” Musk tweeted in December. This week, Musk fretted, also on Twitter, which he owns: “What will be left for us humans to do?”

On Thursday, Altman acknowledged that the latest version uses deductive reasoning rather than memorization, a process that can lead to bizarre responses.

“The thing that I try to caution people the most is what we call the ‘hallucinations problem’,” Altman said. “The model will confidently state things as if they were facts that are entirely made up.

“The right way to think of the models that we create is a reasoning engine, not a fact database,” he added. While the technology could act as a database of facts, he said, “that’s not really what’s special about them – what we want them to do is something closer to the ability to reason, not to memorize.”

What you get out depends on what you put in, the Guardian recently warned in an analysis of ChatGPT. “We deserve better from the tools we use, the media we consume and the communities we live within, and we will only get what we deserve when we are capable of participating in them fully.”
