Countries attending the U.K.’s AI Safety Summit have released a declaration named after the venue, Bletchley Park, where codebreakers including the brilliant and tragic Alan Turing shortened World War II by a couple of years.
The Bletchley Declaration is, in itself, nowhere near as much of a game changer as Turing’s bombe was. Unsurprisingly, given the flurry of lobbying that’s taken place in the run-up to the event, it mostly just serves as a pretty good snapshot of what 28 countries (and the EU) currently understand AI’s promises and risks to be.
The communiqué talks about the importance of “alignment with human intent” and points out that we really need to work on better understanding AI’s full capabilities. It notes the potential for “serious, even catastrophic, harm, either deliberate or unintentional,” but also recognizes that “the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection needs to be addressed”—these are broad strokes, but they acknowledge the concerns that many have about AI’s immediate impact, as opposed to more arcane fears about the potential misdeeds of a future, rogue artificial general intelligence.
Civil society must play a part in working on AI safety, the document declares, despite the complaints of civil society groups that they have been shut out of the summit. Companies building “frontier” AI systems “have a particularly strong responsibility for ensuring the safety of these AI systems, including through systems for safety testing, through evaluations, and by other appropriate measures.”
There’s not much in here in the way of firm commitments and tangible measures, which is what you might expect from a declaration that: a) is the first of its kind; and b) is a compromise between frenemies and outright rivals with conflicting imperatives and legal systems, like the U.S., the U.K., the European Union, and China.
British commentators have noted the U.S.’s decision to use the summit to announce its own AI Safety Institute, which they say takes the shine off British Prime Minister Rishi Sunak’s recent announcement of a U.K. AI Safety Institute as a way to “advance the world’s knowledge of AI safety.” But I’m not so sure—the White House was careful to note that the U.S. institute will collaborate with its British counterpart, so I don’t really see how anyone’s a loser in this scenario.
As for China, the British government has been keen to keep the superpower in the room, but at arm’s length—Deputy Prime Minister Oliver Dowden talked up China’s attendance, but also said it “might not be appropriate for China to join” certain sessions “where we have like-minded countries working together.”
The Financial Times also notes that several of the Chinese academics attending the summit have signed on to a statement calling for stricter measures than those included in the Bletchley Declaration—or U.S. President Joe Biden’s executive order earlier this week—to address AI’s “existential risk to humanity.” This isn’t the official Chinese line just yet, but it may indicate where that line is heading. There’s certainly a lot of scope for discord as the U.S. and China race for so-called AI supremacy, whatever that means.
So not everyone is entirely on the same page, but that was never going to be the case. I’d call this a promising start for international cooperation on a subject that—let’s not forget—was on very few people’s radars as a serious threat before this year. Crucially, these summits will be regular occurrences; the next one will take place in Korea in six months, and another in France a year from now. Let’s just hope those events are as inclusive as the Bletchley Declaration promises.
More news below.
David Meyer
Want to send thoughts or suggestions to Data Sheet? Drop a line here.