Elon Musk has called for a six-month pause in the training of advanced artificial intelligence models, arguing they could pose “profound risks to society and humanity.”
The billionaire joined more than 1,000 experts in signing an open letter organised by the Future of Life Institute, a nonprofit whose major donors include Musk's own charitable foundation.
Musk, who runs Tesla, Twitter and SpaceX and was an OpenAI co-founder and early investor, has long expressed concerns about AI's existential risks.
Other signatories included Apple co-founder Steve Wozniak.
The letter calls for an industrywide pause until proper safety protocols have been developed and vetted by independent experts.
Risks, they say, include the spread of “propaganda and untruth,” job losses, the development of “nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us,” and the risk of “loss of control of our civilisation.”
Artificial intelligence powers chatbots like ChatGPT, Microsoft’s Bing and Google’s Bard.
They can hold humanlike conversations, write on an endless variety of topics and carry out more complex tasks, such as writing computer code.
The fear is that millions of jobs around the world could be at risk in the future.
The letter warns that AI systems with "human-competitive intelligence can pose profound risks to society and humanity" - from flooding the internet with disinformation and automating away jobs to more catastrophic future risks straight out of science fiction.
It says "recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one - not even their creators - can understand, predict, or reliably control."
"We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4," the letter says. "
This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium."
James Grimmelmann, a Cornell University professor of digital and information law, criticised the Tesla billionaire for signing the letter.
He said: "A pause is a good idea, but the letter is vague and doesn't take the regulatory problems seriously.
"It is also deeply hypocritical for Elon Musk to sign on given how hard Tesla has fought against accountability for the defective AI in its self-driving cars."
A number of governments are already working to regulate high-risk AI tools.
The UK released a paper on Wednesday outlining its approach, which it said "will avoid heavy-handed legislation which could stifle innovation."
Lawmakers in the EU have been negotiating passage of sweeping AI rules.