Windows Central
Kevin Okemwa

Former Google lead says we should "seriously think" about pulling the plug on AI once it starts self-improving: "It’s going to be very difficult to maintain that balance"

Former CEO & Chairman of Google, Eric Schmidt, talks at Columbia University’s School of International and Public Affairs.

Aside from safety and privacy concerns, the possibility of generative AI ending humanity remains a critical question as the technology rapidly advances. Recently, Roman Yampolskiy, AI safety researcher and director of the Cyber Security Laboratory at the University of Louisville (well known for his estimate of a 99.999999% probability that AI will end humanity), indicated that the coveted AGI benchmark is no longer tied to a specific timeframe. Instead, he argued, it will be achieved by whoever has enough money to buy sufficient computing power and data centers.

As you may know, OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei both predict AGI will be achieved within the next three years, with powerful AI systems surpassing human cognitive capabilities across a range of tasks. While Altman believes AGI will arrive on current hardware sooner than anticipated, former Google CEO Eric Schmidt says we should consider pulling the plug on AI development once it begins to self-improve (via Fortune).

In a recent interview with the American television network ABC News, the executive said:

“When the system can self-improve, we need to seriously think about unplugging it. It’s going to be hugely hard. It’s going to be very difficult to maintain that balance.”

Schmidt's comments on the rapid progression of AI come at a critical moment, as several reports suggest OpenAI may have already achieved AGI following the broad release of its o1 reasoning model. OpenAI CEO Sam Altman has further indicated that superintelligence might be only a few thousand days away.

Is Artificial General Intelligence (AGI) safe?

OpenAI's ChatGPT mobile app. (Image credit: Getty Images | NurPhoto)

However, a former OpenAI employee warns that while OpenAI may be on the verge of achieving the coveted AGI benchmark, the ChatGPT maker may not be equipped to handle everything that comes with an AI system that surpasses human cognitive capabilities.

Interestingly, Sam Altman says the safety concerns raised about AGI will not materialize at the "AGI moment," and that AGI will whoosh by with surprisingly little societal impact. He does, however, predict a long stretch of development between AGI and superintelligence, with AI agents and AI systems that outperform humans at most tasks arriving in 2025 and beyond.
