When OpenAI debuted in 2015, it did so as a non-profit research lab. Elon Musk and Sam Altman had seats on the board, but the team was led by Ilya Sutskever, a prominent researcher who had previously worked at Google Brain.
The company's goal at the time was to "advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return."
"Since our research is free from financial obligations, we can better focus on a positive human impact," the company wrote in a 2015 blog post.
OpenAI has come a long way from those early, idealistic days. Musk is no longer on the board, Altman has become CEO, and in 2019 the company morphed out of its non-profit beginnings into a new "capped-profit" structure, one that has since attracted a multi-billion-dollar investment from Microsoft.
And last year, the company launched ChatGPT, a consumer-facing chatbot powered by a large language model (LLM) that has since become a household name, setting off a race among competitors to outdo it and among regulators to rein it in.
OpenAI's mission, however, has not changed: "To ensure that artificial general intelligence (AGI) benefits all of humanity, primarily by attempting to build safe AGI and share the benefits with the world."
The means of getting there, though — if, indeed, AGI is even possible — seem to be changing.
Sam Altman's road to AGI
Last year, Gary Marcus, a leading AI researcher, suggested in an essay that deep learning might be about to hit a wall. He argued that scaling was not everything, that hallucination and reliability remained problems, that misinformation remained unsolved, and that LLMs would "not get us to AGI."
LLMs, the models that power chatbots like ChatGPT, are a type of AI model that, after being trained on enormous data sets, can recognize patterns in language and generate new text.
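In practice, that generation step is exposed through a simple prompt-and-response interface. The sketch below, which assumes the OpenAI Python client is installed and an API key is set in the environment (the model name and prompt are illustrative, not drawn from the article), shows the basic pattern:

```python
# Minimal sketch: asking an LLM to generate text via OpenAI's Python client.
# Assumes the `openai` package (v1+) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model choice
    messages=[
        {"role": "user", "content": "Summarize OpenAI's stated mission in one sentence."}
    ],
)

# The model returns text generated in response to the prompt.
print(response.choices[0].message.content)
```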
Much of the AI community pushed back sharply on Marcus' claims at the time, arguing, in short, that he was off-base.
In the time since, however, some have started to come around.
Yann LeCun, an AI executive at Meta (META), said in February that LLMs are an "off-ramp" on the highway toward human-level AI. The comment marked a shift from his earlier stance: "Not only is AI not 'hitting a wall,' cars with AI-powered driving assistance aren't hitting walls, or anything else, either."
Robotaxi company Cruise recently paused its operations pending an internal investigation into its safety processes.
Microsoft co-founder Bill Gates said in October that he doesn't expect GPT-5 to be much better than GPT-4.
And Altman, speaking at Cambridge on Nov. 1, echoed arguments Marcus has been making for months.
"We need another breakthrough. We can still push on large language models quite a lot, and we will do that," Altman said, noting that the peak of what LLMs can do is still far away.
But within reason, he said, pushing hard on language models won't result in AGI.
"If superintelligence can't discover novel physics, I don't think it's a superintelligence. And teaching it to clone the behavior of humans and human text - I don't think that's going to get there," he said. "And so there's this question which has been debated in the field for a long time: what do we have to do in addition to a language model to make a system that can go discover new physics?"
Marcus, in response to Altman's statement, said that the "sooner we stop climbing the hill we are on, and start looking for new paradigms, the better."
Altman has regularly highlighted fears of the existential risk of an out-of-control superintelligent AI. He has maintained, however, that the potential benefits of creating a benevolent AGI model outweigh such risks.
Experts, including Dr. Suresh Venkatasubramanian, an AI researcher, professor and former White House tech advisor, have said that such fears have no basis in science.
"It's a ploy by some. It's an actual belief by others. And it's a cynical tactic by even more," Venkatasubramanian, referring to such existential fears, told TheStreet in September. "It's a great degree of religious fervor sort of masked as rational thinking."
"I believe that we should address the harms that we are seeing in the world right now that are very concrete," he added. "And I do not believe that these arguments about future risks are either credible or should be prioritized over what we're seeing right now. There's no science in X risk."