The Guardian - UK
Technology
Kiran Stacey

Is No 10 waking up to dangers of artificial intelligence?

[Image: a robot dressed in a business suit] The big long-term worry is what if AI becomes sentient? Photograph: Alexey Kotelnikov/Alamy

James Phillips is a weirdo and a misfit. At least, he was one of those who responded to a request by Dominic Cummings, Boris Johnson’s former chief of staff, for exactly such people to work in No 10.

Phillips worked as a technology adviser in Downing Street for two and a half years, during which time he became increasingly concerned that ministers were not paying enough attention to the risks posed by the fast-moving world of artificial intelligence.

“We are still not talking enough about how dangerous these things could be,” says Phillips, who left government last year when Johnson quit. “The level of concern in government has not yet reached the level of concern that exists in private within the industry.”

That may be changing, however. The last few months have seen a shift in tone from senior ministers about the balance of risks and rewards posed by the AI industry.

At last month’s budget, the chancellor, Jeremy Hunt, talked about the UK winning the global AI race, insisting the UK would not erect “protectionist barriers for all our critical industries”.

But by the end of the G7 meeting in Japan last week, Rishi Sunak had a very different emphasis. “If it’s used securely, obviously there are benefits from artificial intelligence for growing our economy, for transforming our society, improving public services,” he told reporters on the aeroplane back to London. “But that has to be done safely and securely and with guardrails in place.”

No 10 would not say what had sparked the prime minister’s change in tone. But a series of events, from the release of ChatGPT, to recent warnings by the “godfather of AI” Geoffrey Hinton, to discussion at the G7 itself, seem to have shifted the debate among ministers and the public.

“The world needs to move faster; the UK needs to move faster,” says Shabbir Merali, a former adviser to Liz Truss who is now a policy fellow at the centre-right thinktank Onward. “If we don’t, there is a risk that something awful happens and the whole thing explodes.”

Experts warn of short-term risks: that students use AI to cheat in exams, that election candidates use it to spread misinformation, or that companies use it to make discriminatory hiring decisions without even realising they are doing so.

The technology could also simply get it wrong: last year a student was stabbed in a New York school even though the school used AI-powered weapons detection.

Then there is the big long-term worry: what if AI becomes sentient?

Regulating such a fast-moving industry is likely to prove difficult, but certain principles can be established.

Companies using large datasets to train their AI tools could be forced to share information with governmental agencies, for example. They could also be made to hire “red teams” of outside experts to pretend to be malicious actors to simulate how the technology could be misused. People who are working on particularly sensitive technology could be required to sign agreements that they will not release it to particular groups or governments.

There is also a question of liability. Ministers may soon have to decide who should be responsible should something go wrong with a particular product: the user or the developer?

None of this works solely on a national level, however, given that developers can easily set up anywhere in the world.

Government insiders say Sunak is particularly keen to explore what role the UK can play in formulating an international set of guidelines to update the current ones drawn up by Unesco in 2021. They would not say, however, whether he backs the idea of Sam Altman, chief executive of OpenAI, the company behind ChatGPT, to create an international agency along the lines of the International Atomic Energy Agency.

In the immediate term, No 10 says it has no plans to increase resources for existing regulators to monitor AI. Labour research suggests such a move might be needed, though: in a recent parliamentary answer, the technology minister, Paul Scully, was unable to say how many staff across the UK’s various watchdogs work wholly or partly on AI.

Many believe the existing regulatory framework will quickly prove outdated, however. Phillips has called on the government to develop its own AI research and development arm to understand the industry better. “You need people who fundamentally and deeply understand the tech, and the only way to do that is with people who have built it themselves,” he says.

But he also warns: “We are constantly chasing the game now, because nothing has been done for the last three to four years.”
