
The European Parliament on Thursday approved a ban on artificial intelligence tools that generate sexualised deepfakes without consent, as lawmakers also voted to delay key parts of the EU’s landmark AI law. The move follows outrage over explicit fake images produced using Elon Musk’s chatbot Grok.
The measures, adopted by an overwhelming majority, target so-called “nudifier” systems that use AI to create or manipulate sexually explicit or intimate images of real, identifiable people without their consent.
The ban forms part of broader changes to the Artificial Intelligence Act, the bloc’s main effort to regulate the fast-growing technology. EU member states have already backed similar measures, and negotiations will now begin to agree a final version of the law.
The new rules define nudifier systems as those that use AI “to create or manipulate images that are sexually explicit or intimate and resemble an identifiable real person” without consent, the European Parliament said in a statement.
Systems with safeguards will be exempt: lawmakers said AI tools with “effective safety measures preventing users from creating such images” would remain allowed.
Deepfake backlash
The European Parliament vote passed with 569 lawmakers in favour and 45 against.
EU countries had already approved a similar ban earlier in March, paving the way for negotiations between member states and parliament on a final text.
The move follows widespread anger earlier this year after Grok, an AI chatbot available on X, was used to generate highly realistic fake nude images of women and children from real photos.
The controversy triggered an ongoing EU investigation. X said in January it would introduce changes to stop the creation of sexualised deepfakes of children and women.
Delayed AI rules
Lawmakers also backed delaying the implementation of rules for high-risk AI systems, which are considered to pose potential threats to safety, health or citizens’ fundamental rights.
These rules were originally due to come into effect in August. Under the proposal, standalone high-risk systems would face compliance deadlines from 2 December 2027, while AI tools embedded in other products would have until 2 August 2028.
Parliament cannot change EU law on its own, meaning negotiations with the Council of the European Union, which represents the 27 member states, will determine the final outcome before the rules are formally adopted.
(with newswires)