PC Gamer
Jacob Ridley

EU outlines rules aiming to make it clear when content has been generated by an AI

OpenAI logo displayed on a phone screen and ChatGPT website displayed on a laptop screen are seen in this illustration photo taken in Krakow, Poland on December 5, 2022.

The European Parliament today agreed on how its proposed AI rules will look ahead of their formal adoption by EU member states. The new rules aim to make it easier to spot when content has been AI generated, including deepfake images, and would outright ban the use of AI in biometric surveillance, emotion recognition, and predictive policing.

The new rules would mean AI tools such as OpenAI's ChatGPT would have to make clear that content is AI generated, and their makers would bear some responsibility for ensuring users know whether an image is a deepfake or the real deal. That seems a mighty task, as once an image is generated it's tough to limit how a user shares it, but it may be something these AI companies have to figure out in the near future.

If these new rules were to pass through the European Parliament as is, the developers of AI models would need to release "detailed summaries" of the copyrighted data used in training to the public. For OpenAI specifically, this would force it to disclose the training data for the massive GPT-3 and GPT-4 models used today, which is currently not available to peruse. Some big datasets used for training AI models already make this information available, such as LAION-5B.

There would also be AI uses that are entirely prohibited, specifically those that could encroach on EU citizens' privacy rights.

  • "Real-time" and "post" remote biometric identification systems in publicly accessible spaces.
  • Biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation).
  • Predictive policing systems (based on profiling, location or past criminal behaviour).
  • Emotion recognition systems in law enforcement, border management, the workplace, and educational institutions.
  • Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases (violating human rights and right to privacy).

These rules are yet to be enshrined into law. Ahead of that, member states get to jump in with any propositions of their own, and that process will begin later today. Expect the finalised rules for AI to look similar to these proposed ones, however. The EU seems dead set on making sure it has the jump on AI and its potential uses—insofar as any government can, anyway.
