Evening Standard
Technology
Mary-Ann Russon

UK needs ‘urgent rethink’ on AI regulation and human rights, researchers warn

AI researchers are calling on the Government for an “urgent rethink” of its current proposals for regulating artificial intelligence (AI) in the UK.

A new report by the Ada Lovelace Institute, summarising the UK’s current plans for new AI laws, makes 18 recommendations. In particular, it finds that the legal protections allowing private citizens to seek redress when an AI system goes wrong or makes a discriminatory decision are severely limited.

This follows a survey of 4,000 UK adults carried out by the same body in June, which found that 62 per cent would like to see laws and regulations guiding the use of AI technologies, 59 per cent would like clear procedures for appealing to a human against an AI decision, and 54 per cent want “clear explanations of how AI works”.

UK Prime Minister Rishi Sunak is keen for the UK to host the world’s first AI safety summit this autumn and will look for bilateral support at the event to help improve AI regulation.

The researchers fear that protections for the public will weaken unless changes are made to draft legislation such as the Data Protection and Digital Information Bill, which is currently going through the House of Commons.

The report suggests a range of solutions and protections for the UK to implement, including:

  • Investing in pilot projects to improve Government understanding of trends in AI research and technology development
  • Clarifying the law around AI liability
  • Establishing an AI ombudsman to resolve disputes, similar to those in the financial and energy sectors
  • Enabling civil society groups like unions and charities to be a part of regulatory processes
  • Expanding the definition of “AI safety”
  • Ensuring that existing GDPR and intellectual property laws are enforced

“If you’re a business and you make an important decision about an individual’s access to products or services like mortgages or loans using AI, or you’re an employer and you terminate someone’s employment because AI makes a decision about their productivity — at the moment, it’s prohibited by law, there has to be human oversight,” Matt Davies, UK public policy lead at the Ada Lovelace Institute, told The Standard.

“Instead, there will be an expectation that there are safeguards in place. It’s changing in the draft legislation: instead of the burden of proof being on the organisation to show that they didn’t do this, the burden of proof is now on the individual.”

However, a Government spokesman told The Standard that all existing protections will continue to apply.

The researchers would like diverse sections of society to be represented at the AI safety summit, not just politicians.

Alex Lawrence-Archer, a solicitor with London-based law firm AWO, which provided a legal analysis of UK AI regulations for the report, told The Standard: “Weak regulation means that when things go wrong, the burden of finding out and putting it right is placed on those who can least afford to bear it.”

He added that he felt the Government’s data protection reforms “are taking us in the opposite direction”.

“We’re very sympathetic towards what the Government is doing with the Data Protection and Digital Information Bill — they want to make it easier for businesses to use technologies, including AI, but we think some parts of the bill, including automated decision-making, need a rethink, as they weren’t designed with these systems in mind,” said Mr Davies.

Among other things, the researchers warned that international agreements are unlikely to be effective in making AI safer and preventing harm unless they are underpinned by “robust domestic regulatory frameworks” capable of shaping corporate incentives and, in particular, the behaviour of AI developers.

A spokesman for the Department for Science, Innovation and Technology said: “As set out in our AI White Paper, our approach to regulation is proportionate and adaptable, allowing us to manage the risks posed by AI whilst harnessing the enormous benefits the technology brings.

“The Data Protection and Digital Information Bill preserves protections around automated decision-making. The existing safeguards will continue to apply to all relevant use of data, and ensure individuals are provided with information about automated decisions, can challenge them, and have such decisions corrected, where appropriate.”

Media and political AI rhetoric not helping

Debates over the so-called ‘existential risks’ that AI poses are not helping lawmakers to create sensible new rules, experts and the tech industry are warning (Levart_Photographer / Unsplash)

The report also highlights the need to avoid “speculative” claims about AI systems. Rather than panicking about “existential risks”, such as the idea that AI could kill mankind within just two years, it argues that solutions to any harms can be found by working more closely with AI developers as they develop new products.

“In some cases, these harms are common and well-documented — such as the well-known tendency of certain AI systems to reproduce harmful biases — but in others they may be unusual and speculative in nature. Some commentators have argued that powerful AI systems may pose extreme or ‘existential’ risks to human society, while others have condemned such claims as lacking a basis in evidence,” the report says.

Professor Lisa Wilson, a member of International Cyber Expo’s Advisory Council, believes the UK has left AI lawmaking “a little too late”, and says many of the recent conversations among the media and politicians have been “highly polarised”.

“There are those who can see the incredible benefits and those who are, in essence, petrified for society moving forward. In reality, there are many more pieces of the puzzle, which I also think include two other dimensions — inclusion and design,” she told The Standard.

“Global aging is one of the greatest issues around technology. We have significantly more non-digital natives being exposed to AI and, in many ways, it is devoid of the inclusion of their data as well as the billions still not connected.”

Dr Clare Walsh, director of education at the Institute of Analytics, told The Standard that part of the problem is that AI does not have its own dedicated set of regulations; she says most AI rules sit in a “patchwork” of at least 12 existing laws relating to topics like human rights, data privacy, and equal opportunities.

“What many working in AI would like from organisations like Ada Lovelace is clearer guidance, and that is understandable. The intention to produce trustworthy ethical AI far outstrips the capacity of many firms to build that level of internal audit into their practices because we have a huge shortage of people trained to assist. External review would be even better,” she said.

However, Dr Walsh added that there are not enough people in the tech industry who specialise in AI assurance or risk management, and that no one can really anticipate all the emerging risks that could be discovered in the coming years.

“Ultimately, given the complexity and temporary nature of the AI landscape now, we need to fall back on AI assurance professionals, rather than one law to rule them all... nobody is better placed to explain what could go wrong, or where models should never be used, than the person who built that model and worked on that data.”
