Fortune
Chloe Taylor

Spies, scientists, defense officials and tech founders are divided on how to keep superintelligent AI under control: ‘We’re running at full speed toward a cliff’

(Credit: Valerie Plesch—Bloomberg via Getty Images)

As billions of dollars continue to be funneled into the race to build superintelligent machines, the question of regulation has emerged as an urgent—and polarizing—topic.

That much has been clear at CogX Festival, a major AI expo taking place in London this week.

Speaking at the event on Tuesday, Stuart Russell, a world-renowned British computer scientist who has been researching artificial intelligence for decades, argued Big Tech and the wider technology sector had hampered regulation for so long that officials were now “running around like headless chickens” trying to figure out how to keep AI under control.

“For the last decade or so the aliens have been sending us emails from outer space saying they’re going to arrive in a decade or two, and we have been sending back an out-of-office auto-reply saying we’ll get back to you when we return,” he said.

Governments need to step up now to ensure unsafe AI systems can never “cause havoc on our species,” according to Russell, who has written several books on artificial intelligence and now teaches as a professor of computer science at the University of California, Berkeley.

“Would you like to get on a plane that no one has ever tested?” he asked the audience on Tuesday. “Even sandwiches are much more regulated than AI systems.”

Big Tech’s influence

Part of the reason the world was so far behind on AI regulation, Russell argued, was Big Tech’s influence.

“Governments have been lulled into inactivity by decades and decades of being told that regulation stifles innovation. I think tech companies have put something into their earpieces that says that every night as they go to bed,” he said. “The technical community has to stop getting into this defensive crouch. It is not anti-science [to think about the potential downsides of AI]. It is not anti-physics to say nuclear power plants could explode if not taken care of sufficiently.”

While tech giants have pushed back against regulatory proposals in the past, many have voiced support for legislation governing how AI can be used—though some argue regulation should be “balanced” to ensure it does not suppress innovation. Over the summer, Amazon, Google, Microsoft, and Facebook parent company Meta agreed to follow a set of AI safety rules brokered by the Biden administration.

“While self-regulation is vital, it is not enough,” Google said in a document published earlier this year. “As our CEO Sundar Pichai has noted, AI is too important not to regulate. The challenge is to do so in a way that is proportionately tailored to mitigate risks…while still enabling innovation.”

Russell’s proposed regulations would essentially be a series of red lines laid down by governments across the globe. He said those boundaries should encompass bans on AI systems engaging in “unallowable behaviors,” like replicating themselves or hacking into another person’s computer.

Sanctions for breaking such rules should see the offending AI system immediately removed from the market, Russell suggested, while the developer should be fined 10% of its annual revenue.

“That would create a huge incentive for companies to design their AI systems so that they don’t do what they’re not allowed to do,” he said. “It puts the existential risk on the company. If you violate the red lines, your company is at risk.”

He pointed out that while China has already introduced rules for generative AI tools like ChatGPT, other countries including the U.S. and the U.K. have “no red lines whatsoever.”

Last fall, the U.S. government unveiled its Blueprint for an AI Bill of Rights, but Washington has yet to formalize any regulation specifically designed to govern artificial intelligence.

Meanwhile, Britain’s government has published proposals outlining a “pro-innovation” approach to regulating AI, but the country still has no holistic legislation in place to regulate the technology.

“We can’t afford to take 30 or 40 years, we can’t afford mistakes,” said Russell, co-author of the 1995 textbook Artificial Intelligence: A Modern Approach. “Governments have left the tech sector to do what it wants for too many decades, and now it has to address the problem.”

Russell’s sentiment was echoed by Yuval Noah Harari, the famed historian and bestselling author of Sapiens: A Brief History of Humankind, who used the event in Britain’s capital as an opportunity to warn about AI’s potential for malice and harm.

He called for a tax on major investments into AI to fund regulation and institutions that can keep the technology under control.

Regulation vs. innovation

Speaking at CogX Festival from a defense perspective, NATO official David van Weel agreed with Russell’s take that more needed to be done to protect society from the potential harms that could be inflicted by superintelligent machines.

“We should quickly regulate, we are too slow to keep up with technology,” said van Weel, the military alliance’s assistant secretary general for emerging security challenges.

However, he noted that crafting regulation for a technology still in its infancy was tricky.

“If we regulate AI on the basis of where it stands now, in two years’ time it will be obsolete and probably inhibit innovation,” van Weel explained. “We need to be very tech savvy about what’s coming towards us and be quick with regulation that keeps up with innovation.”

Earlier this year, more than 1,000 tech luminaries—including Tesla CEO Elon Musk and Apple co-founder Steve Wozniak—signed a letter urging a six-month pause in AI development, arguing that decisions around the technology “must not be delegated to unelected tech leaders.”

At CogX Festival on Wednesday, LinkedIn co-founder Reid Hoffman dismissed that letter as “foolish” and “anti-humanist.” He argued that if anything, the development of AI needed to be sped up.

Meanwhile, Alex Younger, the former head of MI6—Britain’s secret intelligence service—warned too much regulation could have unintended consequences.

“By the time laws go through the system, they’re [often] out of date,” he told an audience at CogX Festival. “I can understand…looking at this in ethical terms which I applaud, but if the net result of that is it becomes too difficult to innovate, we will become dependent on China for our basic goods and services, and we’ll be weakened economically.”

He went on to suggest that Europe’s tougher stance on the tech sector had weakened its domestic technology industry.

“There are no global scale tech companies in Europe… I’m afraid there is a link,” he said.

Roeland Decorte, founder and CEO of Decorte Future Industries, a startup that uses AI to extract health data from sound, told Fortune on the sidelines of the conference that he was “scared the founder voice will get lost” as governments clamp down on artificial intelligence.

“The kind of things we’re seeing tends to be the same group of large corporates, academics, politicians, and policymakers [discussing regulation], and the actual AI founder is never really part of that conversation,” he said.

“AI is like any technology that has lots of potential—it can also be used for wrong. But in order for us to regulate that effectively, you don’t just go to the corporates who scale technologies or the academics who germinate it—you go to the startups that try out the first commercial applications.”

Decorte also argued that by the time a technology has been scaled by a corporation, it will already be “too late,” as the tech will have reached the public—who can choose to use it “for bad or for good.”

“If you actually believe, like Elon Musk, this AI is a threat to the future of humanity, then the answer to that is not really regulation, because the regulators will never have the expertise to actually even know what's in an individual algorithm,” he said. “If your focus is on wanting to avert the Terminator, investing in Explainable AI is the natural route forward.”
