Artificial Intelligence (AI) systems could be powerful enough to “kill many humans” within just two years, Rishi Sunak’s tech adviser has warned.
Matt Clifford, who is helping the prime minister set up the government’s AI taskforce, said policymakers should be prepared for threats such as cyberattacks or the creation of bioweapons.
He warned of the dangers if mankind fails to find a way to control the expanding technology, telling TalkTV: "You can have really very dangerous threats to humans that could kill many humans, not all humans, simply from where we’d expect models to be in two years’ time.
“The kind of existential risk that I think the letter writers were talking about - what happens once we effectively create a new species, you know an intelligence that is greater than humans."
Mr Clifford, who chairs the government’s Advanced Research and Invention Agency (Aria), said that this “sounds like the plot of a movie” but was a real concern.
He added: “If we try and create artificial intelligence that is more intelligent than humans and we don’t know how to control it, then that’s going to create a potential for all sorts of risks now and in the future - it’s right that it should be very high on the policymakers’ agendas.”
It comes as Rishi Sunak is eager to promote the UK as a possible hub for a future global regulator, modelled on the nuclear body the International Atomic Energy Agency.
What concerned him about the present situation, he said, was that “the people who are building the most capable systems freely admit that they don’t understand exactly how they exhibit the behaviours that they do”. When asked whether this was “quite terrifying”, Mr Clifford replied: “Absolutely”.
However, Mr Clifford said that AI also had the potential to be an overwhelming force for good — provided that we find ways to control it. “If it goes right . . . you can imagine AI curing diseases, making the economy more productive, helping us get to a carbon neutral economy.”
Arvind Narayanan, professor of computer science at Princeton, wrote last week: “The history of technology to date suggests that the greatest risks come not from technology itself, but from the people who control the technology using it to accumulate power and wealth.
“We should be wary of Prometheans who want to both profit from bringing the people fire, and be trusted as the firefighters.”
Others believe that prominent computer scientists, such as Geoffrey Hinton and Yoshua Bengio, two of the AI “godfathers” who have recently warned about the technology’s threats, are being treated like “hero scientists”.
Kyunghyun Cho, a prominent AI researcher and associate professor at New York University, told VentureBeat: “I think we are seeing the negative side of the hero scientist.
"They’re all just individuals. They can have different ideas. Of course, I respect them and I think that’s how the scientific community always works. We always have dissenting opinions. But now this hero worship, combined with this AGI doomerism. . . I don’t know, it’s too much for me to follow.”
Azeem Azhar, an industry expert at the research group Exponential View, said in his newsletter: “Just because people are experts in the core research of neural networks does not make them great forecasters, especially when it comes to societal questions or questions of the economy, or questions of geopolitics.”