The European Union has taken the first step towards becoming the first body to pass laws governing the use of artificial intelligence (AI).
The Internal Market Committee and the Civil Liberties Committee’s draft negotiating mandate passed by 84 votes to seven, with 12 abstentions. The aim, the EU says, is to ensure that AI systems “are overseen by people, are safe, transparent, traceable, non-discriminatory, and environmentally friendly”.
While the last point is an often-overlooked aspect of AI systems like ChatGPT, it’s the practices that could be banned outright that will attract more attention.
If passed unamended, the rules could change the way biometrics are used in the EU, with a ban on “real-time” remote use in public spaces and the outlawing of “post” remote use except for the prosecution of serious crimes with judicial authorisation. The use of biometric categorisation based on gender, race, ethnicity, and other sensitive characteristics would also be banned.
For law enforcement, predictive-policing systems based on profiling, location, and past behaviour would be out of bounds. Emotion-recognition systems would also be forbidden, not just in policing, but also in border management, the workplace, and educational institutions.
Finally, the “indiscriminate scraping of biometric data from social media or CCTV footage to create facial-recognition databases” would also be blocked.
Beyond outright bans, the draft legislation also hopes to put guardrails in place for what the EU calls “high-risk” AI implementation. The definition has been expanded to “include harm to people’s health, safety, fundamental rights or the environment,” and AI systems designed to influence political campaigns. Large social media platforms (more than 45 million users) would also have their recommendation engines scrutinised as high-risk.
Finally, the legislation seeks greater transparency on general-purpose AI. The likes of ChatGPT, for example, would “have to comply with additional transparency requirements, like disclosing that the content was generated by AI, designing the model to prevent it from generating illegal content, and publishing summaries of copyrighted data used for training.”
The point on AI disclosure will be hard to enforce: there’s nothing stopping someone from generating text with ChatGPT and then pasting it elsewhere without any indication that it was produced by AI.
The draft now needs to be voted on by the whole of the EU Parliament, something that is expected to occur during the June 12-15 parliamentary session. Assuming it passes, further negotiations on the final form of the law will take place — and it will be interesting to see whether the text is strengthened or weakened once the various wings of parliament give their full scrutiny.
Will the UK comply with EU AI rules?
As the UK is no longer part of the European Union, any laws passed won’t automatically apply in Great Britain and Northern Ireland.
Tim Wright, an AI regulatory partner at London law firm Fladgate, believes that the UK is torn between the US and EU approaches when it comes to AI.
“The US tech approach (think Uber) is typically to experiment first and, once market and product fit is established, to retrofit to other markets and their regulatory framework,” he says. “This approach fosters innovation, whereas EU-based AI developers will need to take note of the new rules and develop systems and processes which may take the edge off their ability to innovate.
“The UK is adopting a similar approach to the US, although the proximity of the EU market means that UK-based developers are more likely to fall into step with the EU ruleset from the outset; however the potential to experiment in a safe space — a regulatory sandbox — may prove very attractive.”