A growing number of people — including AI pioneers and other prominent tech figures — want to stop the development of AI that can outperform all humans.
- A group of scientists, policymakers and actors is calling for a pause on superintelligence until it's proven safe and controllable.
Why it matters: AI development is moving at breakneck speed with minimal oversight and with the full-throated endorsement of the Trump administration.
- AI "doomers" have lost their foothold with U.S. policymakers. But they're still trying to be heard, and are highly involved in global AI policy debates.
Driving the news: The call to action, organized by the Future of Life Institute, has more than 800 signatures from a diverse group, including:
- AI pioneers Yoshua Bengio and Geoffrey Hinton, Apple co-founder Steve Wozniak, Sir Richard Branson, Steve Bannon, Susan Rice, will.i.am and Joseph Gordon-Levitt.
- The group also released polling showing that three-quarters of U.S. adults want strong regulation of AI development, and that 64% want an "immediate pause" on advanced AI development, per a survey of 2,000 adults conducted Sept. 29 to Oct. 5.
Yes, but: In early 2023, the Future of Life Institute and many of the same signatories published a similar letter calling for a six-month pause on training any models more powerful than GPT-4.
- That call was largely ignored, and development continued.
What they're saying: "We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in," a statement from the group's website reads.