Evening Standard
Jacob Phillips

Tech chiefs including Elon Musk and Steve Wozniak call on scientists to pause development of AI systems

Technology experts including Elon Musk have urged scientists to pause developing artificial intelligence (AI) to ensure it does not pose a risk to humanity.

Tech chiefs including Apple co-founder Steve Wozniak and Skype co-founder Jaan Tallinn have signed an open letter demanding all labs stop training AI systems for at least six months.

The prevalence of AI has increased massively in recent years, with systems such as chatbot ChatGPT quickly becoming part of everyday life.

The letter said: “Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no-one – not even their creators – can understand, predict or reliably control.

“Contemporary AI systems are now becoming human-competitive at general tasks and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?”

It added: “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

The technology chiefs want a halt to the training of any AI systems more powerful than the newly released GPT-4, and called on researchers to focus on making sure the technology is accurate, safe and transparent.

US tech firm OpenAI released GPT-4, the latest version of the technology behind its AI chatbot ChatGPT, earlier this month.

ChatGPT was launched late last year and has become an online sensation for its ability to hold natural conversations as well as to generate speeches, songs and essays.

The bot can respond to questions in a human-like manner and understand the context of follow-up queries, much like in human conversations. It can even admit its own mistakes or reject inappropriate requests.

According to OpenAI, GPT-4 has “more advanced reasoning skills” than ChatGPT but, like its predecessors, GPT-4 is still not fully reliable and may “hallucinate” – a phenomenon where AI invents facts or makes reasoning errors.

The letter said humanity can enjoy an “AI summer”, reaping the rewards of these systems, but only once safety protocols have been put in place.

The letter added: “Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an ‘AI summer’ in which we reap the rewards, engineer these systems for the clear benefit of all and give society a chance to adapt.

“Society has hit pause on other technologies with potentially catastrophic effects on society.

“We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.”
