The Independent UK
Via AP news wire

Artificial intelligence threatens extinction, experts say in new warning

Copyright 2023 The Associated Press. All rights reserved

The heads of two of the leading AI firms have once again warned of the existential threat posed by advanced artificial intelligence.

DeepMind chief executive Demis Hassabis and OpenAI chief executive Sam Altman added their names to a short statement published by the Centre for AI Safety, which argues that regulators and lawmakers should take the “severe risks” of the technology more seriously.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement read.

The Centre for AI Safety is a San Francisco-based non-profit which aims “to reduce societal-scale risks from AI”, claiming that the use of AI in warfare could be “extremely harmful” as it could be used to develop new chemical weapons and enhance aerial combat.

Signatories of the short statement, which did not specify what might face extinction, also included business and academic leaders in the field.

Among them were Geoffrey Hinton, the AI pioneer sometimes nicknamed the “Godfather of AI”, and Ilya Sutskever, co-founder and chief scientist of ChatGPT-developer OpenAI.

The list also included dozens of senior bosses at companies like Google, the co-founder of Skype, and the founders of AI company Anthropic.

AI has entered the global consciousness after several firms released new tools allowing users to generate text, images and even computer code simply by describing what they want.

Experts say the technology could take over jobs from humans – but this statement warns of an even deeper concern.

The emergence of tools like ChatGPT and Dall-E has resurfaced fears that AI could one day wipe out humanity if it surpasses human intelligence.

Earlier this year, tech leaders called on leading AI firms to pause development of their systems for six months in order to work on ways to mitigate risks.

“AI systems with human-competitive intelligence can pose profound risks to society and humanity,” the open letter from the Future of Life Institute stated.

“AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.”

Additional reporting from agencies
