Open letter says powerful new systems should only be developed once it is known they are safe
Tech leaders and experts including Elon Musk, Apple co-founder Steve Wozniak and engineers from Google, Amazon and Microsoft have called for a six-month pause in the development of artificial intelligence systems to allow time to make sure they are safe.
“AI systems with human-competitive intelligence can pose profound risks to society and humanity,” said the open letter titled Pause Giant AI Experiments.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” it said.
“We call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4,” it added.
The letter also said that in recent months AI labs have been “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control”.
“The warning comes after the release earlier this month of GPT-4… an AI program developed by OpenAI with backing from Microsoft,” said Deutsche Welle (DW). The latest iteration from the makers of ChatGPT has “wowed users by engaging them in human-like conversation, composing songs and summarising lengthy documents”, added Reuters.
The open letter has been signed by “major AI players”, according to The Guardian, including Musk, who co-founded OpenAI, Emad Mostaque, who founded London-based Stability AI, and Wozniak.
Engineers from Amazon, DeepMind, Google, Meta and Microsoft also signed it. Among those who have not yet put their names to it are OpenAI CEO Sam Altman, as well as Sundar Pichai and Satya Nadella, the CEOs of Alphabet and Microsoft respectively.
The letter “feels like the next step of sorts”, said Engadget, from a 2022 survey of over 700 machine learning researchers. It found that “nearly half of participants stated there’s a 10 percent chance of an ‘extremely bad outcome’ from AI, including human extinction”.
But the letter has also attracted criticism. Johanna Björklund, an AI researcher and associate professor at Umeå University in Sweden, told DW: “I don’t think there’s a need to pull the handbrake.” She called for more transparency rather than a pause.