In some ways the last two days have marked a real step forward in AI regulation. There is now an explicit consensus that the risks posed by AI are a shared international responsibility, and plans for further meetings in South Korea and France over the next year to continue the conversation. However, critics have said that the summit's focus on hypothetical existential threats, rather than more immediate problems – such as insurance companies using AI to assess your risk, or AI's potential to take people's jobs – dilutes its impact.
The big moments of the summit
The big breakthrough was the creation of an agreed framework around the nature of the risks of AI. It’s a first, but “all very high level at the moment”, says Dan. “There is not much granular detail.”
Bringing the US and China together to sign this declaration is a diplomatic coup for Sunak: it is the first time that China has met western governments to discuss the issue. The prime minister has batted away criticism for allowing China to join the summit – former prime minister Liz Truss wrote that she was “deeply disturbed” at the invitation and urged Sunak to rescind it. The government argued that China is a key player in the development of AI and that it should be at the table for the discussion to be truly global.
The US vice-president, Kamala Harris, stole some of Sunak’s thunder by arriving with a freshly minted White House executive order on AI that laid out actionable plans for the US government, including setting up its own institute to police artificial intelligence. For all the noise and bluster of the summit, Sunak had no plans anywhere near as detailed to match. Though he insisted that he welcomed Harris’s intervention, it made clear that the UK was not setting the agenda.
There is a sense now that “structures need to be put in place from individual countries to tackle AI”, Dan says, and Harris was signalling that the US has the blueprint for those plans.
Musk and Sunak
Elon Musk is not necessarily at the forefront of the commercial side of AI, but this was nonetheless a consequential meeting that gave the prime minister’s event a further level of importance.
Many did not know what to expect of the conversation: a world leader interviewing a tech billionaire one-on-one is not a regular occurrence. But once things got going, it quickly became clear that this was not going to be a difficult discussion. Instead, the pair traded compliments, with some commentators noting that Sunak seemed to be angling for Musk’s approval. Kiran Stacey wrote in his analysis that the prime minister played “the role of eager chatshow host, keen to elicit drawled lessons on love, life and technology from the billionaire superstar sitting next to him”.
Before the summit, Musk, a co-founder of OpenAI, said that he would like a “third party referee”, a point he reiterated in his 40-minute conversation with Sunak at Lancaster House. There was a chumminess throughout, with the prime minister piling on the praise while Musk publicly backed some of Sunak’s decisions – like bringing in China.
Musk highlighted the potential benefits of AI while also issuing stark warnings about “humanoid robots”, and predicted that eventually there would be no jobs because AI would have taken them all. Despite his bleakest predictions, though, Musk said that he believed AI to be a force for good.
What happens next?
There is momentum behind the idea that artificial intelligence, in all its forms, needs more regulation and much closer oversight: “Government oversight is certainly a lot stronger than it was last week because of this summit,” Dan says.
Sunak has also insisted that he wants the safety issues dealt with in a way that avoids hampering innovation in the tech sector. He stressed that AI has the capacity to make people’s lives much better – in healthcare, education and the economy – but later added that it could also wreak havoc on the same scale as a pandemic or nuclear war.
The real change, however, will come in the months ahead. Dan will be closely monitoring how much further the government is willing to go in looking under the hood of AI products funded by private companies. “It’s a fast-moving industry, so the other question is how will the legislation keep up with all the changes,” he adds. “Governments are worried about what will happen next year when new, more powerful AI models come out.”
Sunak said last week that “only governments can properly assess the risks to national security [that AI poses]. And only nation states have the power and legitimacy to keep their people safe.” But he added that we should not rush to regulation without fully understanding the risks. It is no secret that the prime minister wants tech companies to invest and develop their products in the UK, with the hope of economic benefits. How he plans to balance the goals of regulation and innovation remains unclear.