The increased international co-operation on AI safety sparked by the UK-created AI safety summits will help shape domestic legislation, Technology Secretary Michelle Donelan has said.
The UK is currently co-hosting the AI Seoul Summit with South Korea, where more than a dozen AI firms have agreed to create new safety standards, while 10 nations and the EU have agreed to form an international network of publicly backed safety institutes to further AI safety research and testing globally.
The summit comes six months after the UK held the inaugural AI Safety Summit at Bletchley Park, where world leaders and AI firms agreed to focus on the safe and responsible development of AI, and carry out further research on the potential risks around the technology.
Ms Donelan said these regular gatherings and discussions were helping to place AI safety “at the top” of national agendas around the world, as many countries consider how best to legislate on the emerging technology.
“What we’ve said is that legislation needs to be at the right time, but the legislation can’t be out of date by the time you actually publish it, and we have to know exactly what is going into that legislation – we have to have a grip on the risks – and that’s another thing that this summit process helps us to achieve,” she told the PA news agency.
“I think what we have done in the UK by setting up this long-term process of summits – starting with Bletchley and now here in Seoul and then there’ll be the one in France – is to create a long-term process to convene the world on the very topic of AI safety and innovation and inclusivity, which are all intertwined together, so that we can really focus on and keep this at the top of other countries’ and nations’ agendas.”
The Technology Secretary added that “AI doesn’t respect geographical boundaries” and that it “isn’t enough” to work on AI safety only domestically, with the “interoperability” of the new network of international safety institutes, and the knowledge it shares, helping governments be “much more strategic” in managing the risks of AI.
“That said, of course we have a domestic track in this area, which revolves around adding to the resources and the skills and support of our existing regulators, as well as making sure that when the time comes we do actually legislate,” she said.
The Technology Secretary added that the next step in discussions at the summit in Seoul, where she will co-chair a discussion with other technology ministers from around the world on Wednesday, would be on how to further embed safety into AI development.
“How I see it is that phase one was basically Bletchley until Seoul, and what we managed to achieve there was the ‘Bletchley effect’ so that rocketed AI up the agenda in many different countries,” she said.
“It also demonstrated the UK’s global leadership in this area, and we set up the framework as to how we can do (AI model) evaluations via the institutes.
“Now in phase two – Seoul and beyond, to France – we need to also look at not just how can we make AI safe but how can we make safety embedded throughout our society, what I call systemic safety.”