Elon Musk owns and/or operates plenty of companies, from Twitter to SpaceX, xAI, Neuralink, The Boring Company and Tesla (TSLA). The world's richest man holds bachelor's degrees in both physics and economics. But just as Musk is no astrophysicist when it comes to questions about space, he is likewise not an expert when it comes to artificial intelligence.
Despite this, he has discussed the topic on numerous occasions recently, perhaps most significantly by adding his signature to the open letter that called for a six-month moratorium on the development of more powerful AI models.
Musk jumped more officially into the sector earlier in July when he launched a new company -- xAI -- whose goal, beyond providing a rival option to Microsoft and Google, is to "understand the universe."
His intention with xAI is to build a safe AI model. His theory of how to do that involves "growing" a "curious and truth-seeking" model, for the simple reason that, as he put it, "I think to a superintelligence, humanity is much more interesting than not humanity." If humanity is the most interesting thing in existence, and an AI model is designed to be curious and truth-seeking, then the risk of a rogue AI attacking humanity, in Musk's view, is diminished.
This approach, he added, is one that ought to go hand in hand with government regulation.
Additionally, Musk explained in a Twitter Spaces session on July 14 that xAI is intent on avoiding what he calls the "inverse morality problem," which some have dubbed the "Waluigi problem": "If you make Luigi, you risk making Waluigi at the same time."
The crux of the so-called Waluigi problem is this: "When you spend many bits of optimization locating a character, it only takes a few extra bits to specify their antipode."
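To make that "few extra bits" intuition concrete, here is a toy illustration (my own construction, not drawn from the article or from any formal statement of the claim): if a persona is pinned down by a long list of traits, its exact opposite can be specified by reusing that same list plus one cheap "flip everything" instruction.

```python
# Toy sketch of the "few extra bits" intuition (illustrative only, not from
# the article): specifying Luigi takes a long trait list, but specifying his
# antipode reuses the same list plus one cheap inversion step.

LUIGI_SPEC = {
    "helpful": True,
    "honest": True,
    "polite": True,
    # ...imagine many more traits, each one costing bits to pin down
}

def antipode(spec):
    """Flip every boolean trait -- the 'extra bits' amount to one rule."""
    return {trait: not value for trait, value in spec.items()}

WALUIGI_SPEC = antipode(LUIGI_SPEC)
print(WALUIGI_SPEC)  # {'helpful': False, 'honest': False, 'polite': False}
```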
Musk's Claims Through The Expert Lens
Musk's approach to developing what he says will be a safe AI system doesn't make perfect sense to some experts. For one thing, they argue, AI can't actually be curious, despite Musk's assertions.
"I don't think that attributing human attributes to AI models is a good idea, or accurate in any way. Models can't be curious because they're not sentient," AI expert and researcher Dr. Sasha Luccioni told The Street. "They can have a higher learning rate, or be presented with more training data, or have a specific architecture that allows them to explore a wider information space ... but they're not curious."
This approach seems like throwing a coin, to me.

It could entirely be that our presence is not interesting but a pest for curious AGI.

To be honest, I'm less afraid of the 'Waluigi problem' (creating evil AGI as soon as we create good, benevolent AGI) then of this bet.

— Knut Jägersberg -- e/acc (@JagersbergKnut) July 17, 2023
Dr. John Licato, a professor of computer science and engineering at the University of South Florida, agreed with Luccioni. AI models, he said, are generally defined by the objectives of their learning functions.
"The priorities (and subsequent behaviors) of the learning algorithm are defined relative to those objectives, so in a way, all of existing AI has a type of curiosity built-in already," he said. "It's very questionable to say that more curiosity equals systems that are more "truth-seeking."
And, though the intention might not be totally off-base, Musk's Waluigi problem doesn't really apply to the world of large language models.
Studies have revealed a dual-use problem in the context of an AI model being used to discover beneficial drugs (and then "mega-poisons" as well), Luccioni said. "But given the fact that generative AI models such as language models are, by definition, multi-purpose, the dual-use ('Waluigi') problem doesn't apply to them."
And while Licato said that creating benevolent AI goes hand in hand with developing the more dangerous kinds of AI, he offered a caveat: "It's often impossible to know what the potential harms (or benefits) of a new technology will be until we actually create and deploy it, but then when we finally learn what the harms are it's often too late to stop them."