The Conversation
Albert Sanchez-Graells, Professor of Economic Law and Co-Director of the Centre for Global Law and Innovation, University of Bristol

The UK wants to export its model of AI regulation, but it's doubtful the world will want it

On AI, the UK hopes that it can strike the right balance between addressing risks and fostering innovation. PopTika / Shutterstock

Recent claims that artificial intelligence (AI) poses an existential threat to humanity seem to have jolted Prime Minister Rishi Sunak into action. Despite being seen as having a “pro-technology” stance, he appears to be quickly shifting position.

The Centre for AI Safety recently called for mitigating the risk of extinction from AI to be treated as a global priority. Against this background of caution, Sunak now reportedly wants the UK to lead in the development of guardrails to regulate AI growth.

During a trip to the US, Sunak was expected to try to persuade US president Joe Biden that the UK should play such a leading role on global AI guidelines, pitching the UK as the ideal hub for AI regulation. He seems to have met with limited success. So, is the case strong enough to persuade the US and other global leaders?

Part of the anticipated pitch is that “the UK could promote a model of regulation that would be less ‘draconian’ than the approach taken by the EU, while more stringent than any framework in the US”. This is likely to raise some eyebrows and ruffle some feathers.

In part, this is because the UK’s “principles-based” approach can hardly be considered stringent at all. In its March 2023 white paper, the UK government laid out its “pro-innovation approach” to AI regulation. White papers are policy documents setting out plans for future legislation. The plans have been criticised for being too lax, already outdated, and lacking in meaningful detail.

Fit for export?

Even the Information Commissioner’s Office (ICO), one of the UK’s regulators affected by the white paper, was quick to point out its shortcomings. In this light, it does not seem to be a prime candidate for regulatory export.

Moreover, the US and the EU are making significant strides in coordinating their approaches to technology regulation. Only last week, they launched three joint expert groups to move forward with their December 2022 joint AI roadmap. It is unclear what the UK would bring to this table.

Finally, other major players have a much more credible track record of AI and digital regulation. The EU is close to completing the legislative process for its AI Act, initiated in 2021. This will give it a first-mover advantage in the jostle for position to advance a global standard for AI regulation.

Rishi Sunak has been a supporter of the UK tech sector. Sussex Photographer / Shutterstock

Japan developed a principles-based approach to AI regulation back in 2019, which provides a clear alternative to the UK’s similar framework. While the international community still seems to accept that the UK could punch above its weight in tech matters, it is far from clear that other countries would hand it the keys to global AI regulation.

Rishi Sunak’s bid to position the UK as a prime hub for AI regulation could also be seen as a calculated move to boost the country’s tech sector, which the Prime Minister has been bullish in promoting. This is evident from the launch in March 2023 of the Foundation Model Taskforce. With a budget of £100 million and the mission “to ensure sovereign capabilities and broad adoption of safe and reliable foundation models”, this is the PM’s push for the development of a “British ChatGPT”.

A country invested in promoting the development of “British AI”, and playing catch-up with US and Chinese AI giants, could be seen as trying to secure an advantageous position in the race to regulate AI. This would help steer the development of global AI standards in ways that support the UK’s digital strategy, rather than reflecting genuine worry about dubious existential threats from AI.

Fear of missing out?

Such “fantasy concerns” have been readily dismissed as lacking evidence. Experts broadly put the risk of AI wiping out humanity at “close to zero” and have rejected the “doomer narratives” advanced by the tech industry. Earlier studies of AI-related existential risks have shown that they depend on human use or abuse of AI.

There has been no breakthrough to suggest any new need for regulation. The PM’s sudden change of heart could easily be read as an opportunistic intervention to reposition the UK in the global AI scene, as the current “pro-innovation” approach is clearly out of kilter.

The sincerity of the concerns behind the move is also questionable. The UK’s approach to AI regulation has consistently sidelined the very real and current risks posed by AI – such as algorithmic discrimination and environmental impacts – which experts agree should be the primary focus of regulation.

Some of the ways in which the UK is seeking to generate a digital Brexit dividend pose serious threats to individual rights, such as the data protection and digital information (No. 2) bill currently under discussion in parliament. This is at odds with a genuine will to put adequate guardrails in place to protect the public from AI-related harms.

So, all in all, the case looks weak. However, AI regulation will not be sorted in one go. If the UK wants to play a leading role in the future, it would do well to get its house in order. Seriously revising the March 2023 white paper and the data protection and digital information (No. 2) bill would be a good place to start.

Only by implementing effective protections and showing strong and decisive action domestically can the UK government hope to build the credibility needed to lead international efforts on AI regulation.


Albert Sanchez-Graells received funding from the British Academy. He is one of the Academy's 2022 Mid-Career Fellows (MCFSS22\220033, £127,125.58). His research and views are, however, not attributable to the British Academy.

This article was originally published on The Conversation. Read the original article.
