The villain of a little-known 2015 film -- "Avengers: Age of Ultron" -- is an artificially intelligent system gone horribly wrong. Programmed to protect the world, Ultron became self-aware and, in fulfilling its perceived mandate, attempted to destroy what it deemed the greatest threat to the world: humanity itself.
In speaking about AI recently, my brother wondered aloud if -- or when -- ChatGPT will, like Ultron, decide to just wipe humanity out.
The long answer: It's ... complicated.
The short answer: It can't. Yet.
The difference comes down to the gap between large language models (LLMs) like ChatGPT and artificial general intelligence (AGI), which has also been referred to as artificial superintelligence.
What Exactly Is AGI?
AGI -- a hypothetical evolution of AI that would be equal to or greater than human intelligence -- does not yet exist, though Microsoft (MSFT)-backed OpenAI is working on developing it.
"The risks could be extraordinary," OpenAI CEO Sam Altman said of AGI. "A misaligned superintelligent AGI could cause grievous harm to the world."
Despite the apparent "existential risk" of developing such a technology, Altman thinks the possible upsides of AGI are too significant to stop trying, even though he isn't sure when, or if, AGI will ever be achieved, let alone what it might look like.
For AI expert Professor Gary Marcus, the inherent risk of a potential AGI model revolves around control.
"How do we control an intelligence that is smarter than us?" Marcus said in a recent podcast.
In the Ultron example, where that level of intelligence was achieved only through the power of a magical space staff, an AI smarter than its creators became uncontrollable.
And while AGI is somewhere on the horizon, LLMs like ChatGPT are a far cry from human-like intelligence.
The Road From ChatGPT to AGI
At their core, LLMs are language models: they learn by predicting the next word across enormous data sets of text. And language, AI expert Professor John Licato told The Street, is only one element of human intelligence.
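To make that concrete, here is a minimal, illustrative sketch of the next-word-prediction objective that underlies models like ChatGPT. The tiny vocabulary, model and data below are toy stand-ins chosen for the example, not OpenAI's actual setup.

```python
# Toy illustration of the next-word-prediction objective behind LLMs.
# Vocabulary size, model and "sentence" here are stand-ins, not a real system.
import torch
import torch.nn as nn

vocab_size, embed_dim = 1000, 64            # toy vocabulary and embedding size
model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),    # map token IDs to vectors
    nn.Linear(embed_dim, vocab_size),       # score every possible next token
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# A "sentence" as token IDs; the training signal is simply the next token.
tokens = torch.randint(0, vocab_size, (32,))
inputs, targets = tokens[:-1], tokens[1:]

logits = model(inputs)            # (31, vocab_size) next-token scores
loss = loss_fn(logits, targets)   # how wrong were the guesses?
loss.backward()                   # adjust the model to guess better next time
optimizer.step()
```

Repeated across trillions of words, that simple guessing game is what produces a model that can hold a conversation; nothing in the loop requires seeing, hearing or touching the world.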
Human knowledge, Licato said, is developed from sensory-motor data. AI systems would need to interact with the world at a sensory level to develop human-like intelligence, and models like ChatGPT don't currently have the kind of processing ability underlying language that humans do.
However, he explained, the road to AGI starts with LLMs; the next step in the process will be integrating new modalities that allow AI to process images, then video, then sound and, eventually, through robotics, a sense of touch.
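As a rough illustration of what adding a modality can look like, the sketch below projects image features into the same space as text features so a single model can process both together. The dimensions and layers are assumptions chosen for the example, not a description of any particular company's architecture.

```python
# Illustrative sketch of fusing two modalities: project image features into the
# same space as text features, then let one model attend over the combined sequence.
import torch
import torch.nn as nn

text_dim, image_dim, shared_dim = 768, 1024, 512   # assumed feature sizes

text_proj = nn.Linear(text_dim, shared_dim)    # map text features to a shared space
image_proj = nn.Linear(image_dim, shared_dim)  # map image features to the same space
fusion = nn.TransformerEncoderLayer(d_model=shared_dim, nhead=8, batch_first=True)

text_feats = torch.randn(1, 20, text_dim)      # e.g. 20 word embeddings
image_feats = torch.randn(1, 50, image_dim)    # e.g. 50 image-patch embeddings

combined = torch.cat([text_proj(text_feats), image_proj(image_feats)], dim=1)
fused = fusion(combined)                       # one sequence mixing both modalities
print(fused.shape)                             # torch.Size([1, 70, 512])
```

Each added sense (video, sound, touch) would, in this framing, be another projection into that shared space.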
An AI model capable of that more varied level of data processing could have human-adjacent intelligence. And that could be achieved in as little as a decade.
"I would say it's realistic to have something fully human level within the next 10 years," Licato said. "You have to take that with a grain of salt because AI experts have been making this prediction since the 1950s at least, but I'm pretty convinced that 10 years is a generous timeframe."
Licato went on to emphasize the vital importance of preparing, as a society, for this new technology, because there's no real way of knowing when AGI might be achieved. There might be hundreds of small switches necessary to get to AGI, but there might just be one major switch that will allow everything else to fall into place.
"It is a technology that is possibly more consequential than any weapon that we've ever developed in human history," Licato said. "So we absolutely have to prepare for this. That's one of the few things that I can state unequivocally."
Part of that preparation, he said, involves government regulation; the other part involves transparent, ethical and public research into AI. When the bulk of that research is done by for-profit companies, like OpenAI, or foreign governments, "that's when all the things that we're afraid could happen with AI are much more likely to happen."
And though LLMs have made plenty of progress, in their current form, they remain nowhere near human intelligence.
"The best-performing language models are doing really well at some subset of these tests that we designed to measure broad intelligence," Licato said. "But one thing that we're finding is a lot of it may be more hyped than the popular conception leads us to believe."
The Risks
If AGI is achieved, the risks it could pose to humanity are great.
"We're talking about a technological change where everything that humans could possibly become in order to adapt to that change is already something that AI could do if that technology is there," Licato said. "The type of adjustment that we'd have to do as a species is fundamentally different than anything that's ever happened before."
The good news is that we're not there yet. The not-so-great news is that we don't know when, or if, we'll ever get there. The bad news is that the models we currently have -- LLMs like ChatGPT -- pose a whole host of far less existential, though no less serious, risks.
Marcus explained that these current models, beyond peddling misinformation to the public, can serve as a tool to enhance criminal activity, mass-generate propaganda and amplify online fraud.
"Current technology already poses enormous risks that we are ill-prepared for," he said. "With future technology, things could well get worse."
As one commenter put it, "the risk right now is not malevolent AI but malevolent humans using AI."
"Ultron isn’t here now. And I don't think anybody will say that LLMs are capable of it, or that any AI tool is capable of it right now," Licato said. "But we do need to remember that it could be that there's just one little thing that we're missing. Once you figure that out, then all the little pieces of AI that we have can be rapidly put together."