European Union policymakers have agreed on landmark legislation to regulate artificial intelligence (AI), paving the way for the most ambitious set of standards yet to control the use of the game-changing technology.
Friday’s agreement on the “AI Act” came after nearly 38 hours of negotiations between lawmakers and policymakers.
“The AI Act is a global first. A unique legal framework for the development of AI you can trust,” European Commission President Ursula von der Leyen said.
“And for the safety and fundamental rights of people and businesses. A commitment we took in our political guidelines – and we delivered. I welcome today’s political agreement.”
Efforts to pass the “AI Act”, which was first proposed by the EU’s executive arm in 2021, have accelerated since the release last year of OpenAI’s ChatGPT, which thrust the rapidly developing field of AI into the public consciousness.
The law is widely seen as a global benchmark for governments hoping to take advantage of the potential benefits of AI while guarding against risks that range from disinformation and job displacement to copyright infringement.
The legislation, which had been delayed by divisions over the regulation of language models that scrape online data and the use of AI by police and intelligence services, will now go to member states and the European Parliament for approval.
Under the law, tech companies doing business in the EU will be required to disclose data used to train AI systems and carry out testing of products, especially those used in high-risk applications such as self-driving vehicles and healthcare.
The legislation bans indiscriminate scraping of images from the internet or security footage to create facial recognition databases, but includes exemptions for the use of “real-time” facial recognition by law enforcement to investigate terrorism and serious crimes.
Tech firms that break the law will face fines of up to seven percent of global revenue, depending on the violation and the size of the firm.
The EU law is seen as the most comprehensive effort yet to regulate AI amid a growing patchwork of guidelines and regulations globally.
In the United States, President Joe Biden in October issued an executive order focused on AI’s impact on national security and discrimination, while China has rolled out regulations requiring AI to reflect “socialist core values”.
Other countries such as the UK and Japan have taken a largely hands-off approach to regulation.