Fortune
Marco Quiroz-Gutierrez

AI scientists warn the technology could become uncontrollable ‘at any time’

Yoshua Bengio, professor at the Montreal Institute for Learning Algorithms, speaks at a lecture event on AI. (Credit: Graham Hughes—Bloomberg via Getty Images)

The world’s leading AI scientists are urging world governments to work together to regulate the technology before it’s too late.

Three winners of the Turing Award—often described as the Nobel Prize of computer science—who helped spearhead the research and development of AI joined a dozen top scientists from around the world in signing an open letter calling for stronger safeguards on advanced AI.

The scientists claimed that as AI technology rapidly advances, any mistake or misuse could bring grave consequences for the human race.

“Loss of human control or malicious use of these AI systems could lead to catastrophic outcomes for all of humanity,” the scientists wrote in the letter. They also warned that, given the rapid pace of AI development, these “catastrophic outcomes” could come any day.

The scientists outlined the following steps to begin addressing the risk of malicious AI use immediately:

Government AI safety bodies

Governments need to collaborate on AI safety precautions. Some of the scientists’ ideas included encouraging countries to develop specific AI authorities that respond to AI “incidents” and risks within their borders. Those authorities would ideally cooperate with each other, and in the long term, a new international body should be created to prevent the development of AI models that pose risks to the world.

“This body would ensure states adopt and implement a minimal set of effective safety preparedness measures, including model registration, disclosure, and tripwires,” the letter read.

Developer AI safety pledges

Another idea is to require developers to be intentional about guaranteeing the safety of their models, promising that they will not cross red lines. Developers would vow not to create AI “that can autonomously replicate, improve, seek power or deceive their creators, or those that enable building weapons of mass destruction and conducting cyberattacks,” as laid out in a statement by top scientists during a meeting in Beijing last year.

Independent research and tech checks on AI

Another proposal is to create a series of global AI safety and verification funds, bankrolled by governments, philanthropists, and corporations, that would sponsor independent research to help develop better technological checks on AI.

Among the experts imploring governments to act on AI safety were three Turing Award winners: Andrew Yao, the mentor of some of China’s most successful tech entrepreneurs; Yoshua Bengio, one of the most cited computer scientists in the world; and Geoffrey Hinton, who taught OpenAI cofounder and former chief scientist Ilya Sutskever and spent a decade working on machine learning at Google.

Cooperation and AI ethics

In the letter, the scientists lauded existing international cooperation on AI, such as a May meeting between leaders from the U.S. and China in Geneva to discuss AI risks. Yet they said more cooperation is needed.

The development of AI should come with ethical norms for engineers, similar to those that apply to doctors or lawyers, the scientists argued. Governments should think of AI less as an exciting new technology and more as a global public good.

“Collectively, we must prepare to avert the attendant catastrophic risks that could arrive at any time,” the letter read.
