Fortune
Chloe Taylor

‘Sapiens’ author says A.I. is an alien threat that could wipe us out

Prof. Yuval Noah Harari speaks during a demonstration in Tel Aviv. (Credit: Eyal Warshavsky/SOPA Images/LightRocket via Getty Images)

Billions of dollars are being poured into the development of A.I., with the technology being hailed as a “revolution”—but famed historian and philosopher Yuval Noah Harari sees it as an “alien species” that could trigger humanity’s extinction.

“A.I. is fundamentally different from anything we’ve seen in history, from any other invention, whether it’s nuclear weapons or the printing press,” Harari—the bestselling author of Homo Deus and Sapiens: A Brief History of Humankind—told an audience at CogX Festival in London on Tuesday.

“It’s the first tool in history that can make decisions by itself. Atom bombs could not make decisions. The decision to bomb Hiroshima was taken by a human.”

The risk that comes with this ability to think for itself, Harari said, is that superintelligent machines could ultimately end up usurping the human race as the world’s dominant power.

“Potentially we are talking about the end of human history—the end of the period dominated by human beings,” he warned. “It’s very likely that in the next few years, it will eat up all of human culture, [everything we’ve achieved] since the Stone Age, and start spewing out a new culture coming from an alien intelligence.”

This raises questions, according to Harari, about what the technology will do not just to the physical world around us, but also to things like psychology and religion.

“In certain ways, A.I. can be more creative [than people],” he argued. “In the end, our creativity is limited by organic biology. This is a non-organic intelligence. It’s really like an alien intelligence.

“If I said an alien species is coming in five years, maybe they will be nice, maybe they will cure cancer, but they will take our power to control the world from us, people would be terrified.

“This is the situation we’re in, but instead of coming from outer space, [the threat is] coming from California.”

A.I. evolution

The phenomenal rise of OpenAI’s generative A.I. chatbot ChatGPT over the past year has been a catalyst for major investment into the space, with Big Tech entering into a race to develop the most cutting-edge artificial intelligence systems in the world.

But it’s the pace of development in the A.I. space, according to Harari—whose written works have examined humanity’s past and future—that “makes it so scary.”

“If you compare it to organic evolution, A.I. now is like an amoeba—in organic evolution, it took them hundreds of thousands of years to become dinosaurs,” he told the crowd at CogX Festival. “With A.I., the amoeba could become a T. rex within 10 or 20 years. Part of the problem is we don’t have time to adapt. Humans are amazingly adaptable beings … but it takes time, and we don’t have this time.”

Humanity’s next ‘huge and terrible experiment’?

Conceding that previous technological innovations, such as the steam engine and airplanes, had sparked similar warnings about human safety and that “in the end it was OK,” Harari insisted that when it came to A.I., “in the end is not good enough.”

“We are not good with new technology, we tend to make big mistakes, we experiment,” he said.

During the industrial revolution, for example, mankind had made “some terrible mistakes,” Harari noted, while European imperialism, twentieth-century communism and Nazism had also been “huge and terrible experiments that cost the lives of billions of people.”

“It took us a century, a century and a half, of all these failed experiments to somehow get it right,” he argued. “Maybe we don’t survive it this time. Even if we do, think about how many hundreds of millions of lives will be destroyed in the process.”

Divisive technology

As A.I. becomes increasingly ubiquitous, experts are divided on whether the technology will deliver a renaissance or doomsday.

At the invitation-only Yale CEO Summit this summer, almost half of the chief executives surveyed said they believed A.I. has the potential to destroy humanity within the next five to 10 years.

Back in March, 1,100 prominent technologists and A.I. researchers—including Elon Musk and Apple co-founder Steve Wozniak—signed an open letter calling for a six-month pause on the development of powerful A.I. systems. They pointed to the possibility that such systems are already on a path to a superintelligence that could threaten human civilization.

Tesla and SpaceX co-founder Musk has separately said the tech will hit people “like an asteroid” and warned there is a chance it will “go Terminator.” He has since launched his own A.I. firm, xAI, in what he says is a bid to “understand the universe” and prevent the extinction of mankind.

Not everyone is on board with Musk’s view that superintelligent machines could wipe out humanity, however.

Last month, more than 1,300 experts came together to calm anxiety around A.I. creating a horde of “evil robot overlords,” while one of the three so-called Godfathers of A.I. has labeled concerns around the tech becoming an existential threat “preposterously ridiculous.”

Top Meta executive Nick Clegg also attempted to quell concerns about the technology in a recent interview, insisting that large language models in their current form are “quite stupid” and certainly not smart enough yet to save or destroy civilization.  

‘Time is of the essence’

Despite his own dire warnings about A.I., Harari said there was still time for something to be done to prevent the worst predictions from becoming a reality.

“We have a few years, I don’t know how many—five, 10, 30—where we are still in the driver’s seat before A.I. pushes us to the back seat,” he said. “We should use these years very carefully.”

He suggested three practical steps to mitigate the risks around A.I.: don’t give bots freedom of speech, don’t let artificial intelligence masquerade as humans, and tax major investments in A.I. to fund regulation and institutions that can keep the technology under control.

“There are a lot of people trying to push these and other initiatives forward,” he said. “I hope we do [implement them] as soon as possible, because time is of the essence.”

He also urged those working in the A.I. space to consider whether unleashing their innovations on the world was really in the planet’s best interests.

“We can’t just stop the development of technology, but we need to make the distinction between development and deployment,” he said. “Just because you develop it, doesn’t mean you have to deploy it.”
