The Guardian - UK
Comment
John Naughton

Thank the Lords someone is worried about AI-controlled weapons systems

A Ukrainian soldier carries a drone on his shoulders in a forest near Avdiivka, Donetsk. Photograph: Libkos/AP

The most interesting TV I’ve watched recently did not come from a conventional television channel, nor even from Netflix, but from TV coverage of parliament. It was a recording of a meeting of the AI in weapons systems select committee of the House of Lords, which was set up to inquire into “how should autonomous weapons be developed, used and regulated”. The particular session I was interested in was the one held on 20 April, during which the committee heard from four expert witnesses – Kenneth Payne, who is professor of strategy at King’s College London; Keith Dear, director of artificial intelligence innovation at the computer company Fujitsu; James Black from the defence and security research group of Rand Europe; and Courtney Bowman, global director of privacy and civil liberties engineering at Palantir UK. An interesting mix, I thought – and so it turned out to be.

Autonomous weapons systems are ones that can select and attack a target without human intervention. It is believed (and not just by their boosters) that these systems could revolutionise warfare, and may be faster, more accurate and more resilient than existing weapons systems. And that they could, conceivably, even limit the casualties of war (though I’ll believe that when I see it).

The most striking thing about the session (for this columnist, anyway) was that, although it was ostensibly about the military uses of artificial intelligence in warfare, many of the issues and questions that arose in the two hours of discussion could equally have arisen in discussions about civilian deployment of the technology. Questions about safety and reliability, for example, or governance and control. And, of course, about regulation.

Many of the most interesting exchanges were about this last topic. “We just have to accept,” said Lord Browne of Ladyton resignedly at one point, “that we will never get in front of this technology. We’re always going to be trying to catch up. And if our consistent experience of public policy development sustains – and it will – then the technology will go at the speed of light and we will go at the speed of a tortoise. And that’s the world that we’re living in.”

This upset the professor on the panel. “Instinctively, I’m reluctant to say that’s the case,” quoth he. “I’m loth to agree with an argument that an academic would sum up as technological determinism – ignoring all kinds of institutional and cultural factors that go into shaping how individual societies develop their AI, but it’s certainly going to be challenging and I don’t think the existing institutional arrangements are adequate for those sorts of discussions to take place.”

Note the term “challenging”. It is also ubiquitous in civilian discussions about governance/regulation of AI, where it is a euphemism for “impossible”.

So, replied Browne, we should bring the technology “in house” (ie, under government control)?

At which point the guy from Fujitsu remarked laconically that “nothing would slow down AI progress faster than bringing it into government”. Cue laughter.

Then there was the question of proliferation, a perennial problem in arms control. How does the ubiquity of AI change that? Greatly, said the guy from Rand. “A lot of stuff is very much going to be difficult to control from a non-proliferation perspective, due to its inherent software-based nature. A lot of our export controls and non-proliferation regimes that exist are very much focused on old-school traditional hardware: it’s missiles, it’s engines, it’s nuclear materials.”

Yep. And it’s also consumer drones that you buy from Amazon and rejig for military purposes, such as dropping grenades on Russian soldiers in trenches in Ukraine.

Overall, it was an illuminating session, a paradigmatic example of what deliberative democracy should be like: polite, measured, informed, respectful. And it prompted reflections about the fact that the best and most thoughtful discussions of difficult issues that take place in this benighted kingdom happen not in its elected chamber, but in the constitutional anomaly that is the House of Lords.

I first realised this during Tony Blair’s first term, when some of us were trying to get MPs to pay attention to the Regulation of Investigatory Powers Act, then being shepherded through parliament by the home secretary, Jack Straw, and his underling Charles Clarke. We discovered then that, of the 650 members of the House of Commons, only a handful displayed any interest at all in that flawed statute. (Most of them had accepted the Home Office bromide that it was just bringing telephone tapping into the digital age.) I was astonished to find that the only legislators who managed to improve the bill on its way to the statute book were a small group of those dedicated constitutional anomalies in the Lords, who put in a lot of time and effort trying to make it less defective than it would otherwise have been. It was a thankless task, and it was inspiring to see them do it. And it’s why I enjoyed watching them doing it again 10 days ago.

What I’ve been reading

Democratic deficit
A blistering post by Scott Galloway on his No Mercy/No Malice blog, Guardrails, outlines the catastrophic failure of democratic states to regulate tech companies.

Hit those keys
Barry Sanders has produced a lovely essay in Cabinet magazine on the machine that mechanised writing.

All chatted out
“I’m ChatGPT, and for the Love of God, Please Don’t Make Me Do Any More Copywriting” is a nice spoof by Joe Wellman on McSweeney’s Internet Tendency.
