Amid pressure to act on AI safety, the UK government has moved towards stricter AI regulation with the publication of its response to the AI Regulation White Paper consultation.
The Department for Science, Innovation and Technology announced on February 6 that the plan will be backed by more than £100 million in funding to advance AI research and innovation, including in healthcare and drug discovery. As part of the plan, the Sunak government has asked UK regulators to publish their plans for tackling AI risks and opportunities by the end of April.
Through the consultation response, the UK government seeks to introduce, in future, targeted legally binding requirements for developers of general-purpose AI systems. The move is intended to create an agile regulatory regime that backs UK regulators with the skills and tools they need to tackle AI risks.
As part of the initiative to equip UK regulators, £10 million has been announced to upskill them so that they can address the risks and seize the opportunities of AI technology. The government said the funding will help regulators in sectors from healthcare and telecoms to finance and education develop practical tools to monitor AI risks and fraud, including technical tools for monitoring AI systems.
Regulators have already begun to act. The Information Commissioner's Office, for example, has updated its guidance on how data protection law applies to AI systems, covering the protection of personal data and fairness, and making clear that organisations remain accountable if their AI systems break the law.
Now, the Sunak government is going further, setting out a detailed regulatory approach that equips UK regulators to assess emerging risks while leaving them room to innovate and tackle the issues.
To create an agile AI regulatory framework that helps UK businesses and enhances transparency, the government has urged regulators such as Ofcom and the Competition and Markets Authority (CMA) to publish their approach to AI by April 30. Each organisation must set out the AI risks in its area of expertise, its current capability to address them, and a detailed plan for how it will regulate AI in the coming year.
The white paper set out the UK's approach to regulation in this emerging field, aiming to keep the country at the forefront of AI safety while reducing regulatory burdens on business that could stifle innovation. The government believes this approach will give Britain a competitive edge over other nations as AI safety, innovation, research and evaluation take centre stage.
The government has made clear that, with AI developing rapidly and enabling new types of fraud and scams that are not yet fully understood, it will work towards effective legislation rather than quick-fix solutions that would soon become outdated. Its context-based approach to AI regulation relies on UK regulators addressing AI risks in a targeted way within their own domains.
This comes shortly after the Online Safety Act came into force to protect children and consumers in the UK from online harms.
Robust AI regulation to empower UK businesses and public services?
Speaking about the announcement, the Secretary of State for Science, Innovation and Technology, Michelle Donelan, said: "The UK's innovative approach to AI regulation has made us a world leader in both AI safety and AI development."
Donelan underlined how artificial intelligence has the potential to transform UK public services, grow the economy by unlocking the "advanced skills and technology" that will power the "British economy of the future", and help treat currently incurable diseases such as dementia and cancer.
"By taking an agile, sector-specific approach, we have begun to grip the risks immediately, which in turn is paving the way for the UK to become one of the first countries in the world to reap the benefits of AI safely," Donelan added.
The Tech Secretary further announced £90 million in funding to create nine research hubs across the country and a UK-US partnership on responsible AI. The hubs will help develop Britain's AI expertise in fields such as mathematics, chemistry and healthcare.
Some £2 million has been allocated through the Arts and Humanities Research Council (AHRC) for AI research in policing, education and creative industries. This is part of the AHRC's Bridging Responsible AI Divides (BRAID) programme.
A further £19 million has been allocated to 21 projects developing responsible AI and machine learning solutions that help deploy AI systems and other technologies to drive productivity, which the government describes as a crucial need for the UK. The projects will be funded through the Accelerating Trustworthy AI Phase 2 competition, delivered by the Innovate UK BridgeAI programme.
To strengthen AI regulation, a steering committee will be launched in the spring of this year to guide the government in the activities of a formal regulator coordination structure.
The measures announced on Tuesday will support the £100 million investment in creating the world's first AI Safety Institute, announced at the AI Safety Summit held at Bletchley Park in November last year. The International Scientific Report on Advanced AI Safety, launched at the summit, will be used to build an evidence-based understanding of frontier AI.
The government has also announced a £9 million investment through the International Science Partnerships Fund to bring AI researchers and innovators from the UK and the US together to develop responsible AI.
The UK's approach is to identify the small number of organisations developing the most capable general-purpose AI systems and place targeted binding requirements on them, so that they are held accountable for making these technologies safe. This measure will build on the steps taken by UK regulators in response to AI risks in their areas of expertise.
Reacting to the announcement, Hugh Milward, Vice President of External Affairs at Microsoft UK, said: "The decisions we take now will determine AI's potential to grow our economy, revolutionise public services and tackle major societal challenges and we welcome the government's response to the AI White Paper."
"Seizing this opportunity will require responsible and flexible regulation that supports the UK's global leadership in the era of AI," Milward added.
Aidan Gomez, CEO of Cohere, lauded the UK government for taking the lead in AI policy, saying the announcement speaks of "its commitment to an agile, principles-and-context based, regulatory approach to keep pace with a rapidly advancing technology".
"The UK is building an AI-governance framework that both embraces the transformative benefits of AI while being able to address emerging risks," said Gomez.
Lila Ibrahim, Chief Operating Officer of Google DeepMind, welcomed the next steps for AI regulation in the UK, saying they strike a balance "between supporting innovation and ensuring AI is used safely and responsibly".
Ibrahim further underlined how the hub and spoke model "will help the UK benefit from the domain expertise of regulators, as well as provide clarity to the AI ecosystem".
"I'm particularly supportive of the commitment to support regulators with further resources," Ibrahim added.
The AI Policy Advisor at the Centre for Long-Term Resilience, Tommy Shaffer Shane said: "We're pleased to see this update to the government's thinking on AI regulation, and especially the firm recognition that new legislation will be needed to address the risks posed by rapid developments in highly-capable general-purpose systems."
According to Shane, the government must move quickly while thinking carefully about the details if it is to balance supporting AI innovation with mitigating AI risks.
Meanwhile, techUK CEO Julian David said the government now needed "to move forward at speed, delivering the additional funding for regulators and getting the Central Function up and running".
"Our next steps must also include bringing a range of expertise into government, identifying the gaps in our regulatory system and assessing the immediate risks," David added.
Amazon's UK Country Manager, John Boumphrey, said: "Amazon supports the UK's efforts to establish guardrails for AI, while also allowing for continued innovation. As one of the world's leading developers and deployers of AI tools and services, trust in our products is one of our core tenets and we welcome the overarching goal of the white paper."
"We encourage policymakers to continue pursuing an innovation-friendly and internationally coordinated approach, and we are committed to collaborating with government and industry to support the safe, secure, and responsible development of AI technology," Boumphrey added.
Markus Anderljung, Head of Policy at the Centre for the Governance of AI, said the UK's approach to AI regulation is evolving in a positive direction, as "it heavily relies on existing regulators, takes concrete steps to support them, while also investing in identifying and addressing gaps in the regulatory ecosystem".
"I am particularly pleased that the response acknowledges the need to address one such gap that has become more apparent since the white paper's publication: how the most impactful and compute-intensive AI systems are developed and deployed onto the market," said Anderljung.
According to Anderljung, the white paper also highlighted strong support for the five cross-sectoral principles underpinning the UK's approach to AI: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.