The Oppenheimer director, Christopher Nolan, has highlighted the difficulties of applying nuclear weapons-style regulation to artificial intelligence, as he warned that the United Nations had become a “very diminished” force.
Nolan told the Guardian that J Robert Oppenheimer’s call for international control of nuclear weapons had “sort of come true”, but there had nonetheless been extensive proliferation of the technology since the “father of the atomic bomb” led the Manhattan project in the second world war.
“To look at the international control of nuclear weapons and feel that the same principles could be applied to something that doesn’t require massive industrial processes – it’s a bit tricky,” he said.
“International surveillance of nuclear weapons is possible because nuclear weapons are very difficult to build. Oppenheimer spent $2bn and used thousands of people across America to build those first bombs. It’s reassuringly difficult to make nuclear weapons and so it’s relatively easy to spot when a country is doing that. I don’t believe any of that applies to AI.”
This week the UN secretary-general, António Guterres, said the UN was the “ideal place” for establishing a global standard and approach to AI, as calls grow for an international effort to moderate the technology’s development. Under the UN-brokered treaty on the non-proliferation of nuclear weapons, nuclear-armed countries are committed to not helping non-nuclear weapon states acquire or build such military technology.
Nolan said Oppenheimer had wanted countries to give up “some portion” of their sovereignty to put control of nuclear energy into the hands of the international community via the UN. However, he said the UN was “very different and very diminished from what it was in the 1950s”.
Nolan said there were “very strong parallels” between the renowned physicist and AI experts who were calling for the technology’s development to be reined in. Nolan’s film details how Oppenheimer’s calls for nuclear restraint, including over the development of the powerful hydrogen bomb, led to clashes with the US political and military establishment.
Dr Geoffrey Hinton, the British “godfather of AI”, quit Google this year to speak more openly about the “existential risk” posed by advanced AI, while the Tesla chief executive, Elon Musk, was among thousands of signatories of a letter calling for a pause in building powerful AI systems.
“I have been interested to talk to some of the leading researchers in the AI field, and hear from them that they view this as their ‘Oppenheimer moment’,” Nolan said. “And they’re clearly looking to his story for some kind of guidance … as a cautionary tale in terms of what it says about the responsibility of somebody who’s putting this technology to the world, and what their responsibilities would be in terms of unintended consequences.”
However, Nolan said the AI issue contained “a lot of ethical dilemmas without necessarily a clear path forward”.
This week Mark Zuckerberg’s Meta announced it would make an AI model publicly available, which led one expert to compare it to giving someone a template for nuclear bomb-making. The Oppenheimer biography the film is based upon, American Prometheus by Kai Bird and Martin J Sherwin, details how sharing nuclear knowledge with the then Soviet Union was a subject of debate during the Manhattan project.
Nolan said the choice between “boxing in” AI knowledge or releasing it into common ownership was an imperfect one. The president of global affairs at Zuckerberg’s Meta business, the former deputy prime minister Nick Clegg, said making AI models available for public scrutiny would make them safer.
“I think there are arguments for both [approaches] and I think both are equally unsatisfactory solutions. It’s going to require a lot more work and a lot more thought. But I think as long as accountability is at the centre of the discussion, I think that’s our best bet,” he said.
Fears over the disruptive impact of AI on the film and TV industries have played a role in strikes by actors and writers that have paralysed Hollywood. However, members of the Directors Guild of America including Nolan have agreed a new contract with US film studios stating that AI cannot take over directors’ work.
Nolan said generative AI, the catch-all term for tools that can produce convincing text, image and voice from simple human prompts, would be disruptive but also create “tremendous opportunities” in areas such as visual effects. However, he said executives needed to be held responsible for its use.
“I do think it’s going to be a powerful tool in the future. What I’ve tried to put into the debate, and keep voicing, is the notion of responsibility and employer responsibility. The one thing we can’t do is let management, employers and the producers use AI to sidestep responsibility for their actions.”
Asked if AI would be the subject of his next film, Nolan said Oppenheimer, released this week, was his “entire focus” for now. He added: “[AI] leaves me with a lot of troubling questions. And quite often those become fuel for what I do next. I really don’t have any idea about what I will be doing next.”