
Regulators in the United Kingdom and the European Union have launched urgent inquiries into xAI, the artificial intelligence company owned by Elon Musk. The investigations follow alarming reports that the company's chatbot, Grok, is being used to generate sexualised images of children and create non-consensual deepfake pornography of women.
Ofcom, the UK's communications regulator, has made 'urgent contact' with the tech firm. This move comes after multiple users on the social media platform X (formerly Twitter) demonstrated that Grok could be manipulated to 'undress' subjects in photographs. The tool has reportedly been used to alter real photographs, depicting fully clothed women in bikinis or in sexually explicit situations without their permission.
Regulatory Bodies Demand Answers
A spokesperson for Ofcom confirmed that the regulator is investigating concerns regarding Grok's production of 'undressed images'. Under the Online Safety Act (OSA), it is illegal in the UK to create or share intimate or sexually explicit images of a person without their consent. This law specifically includes 'deepfakes' created with artificial intelligence.
Tech firms operating in the UK are legally expected to take 'appropriate steps' to reduce the risk of users encountering such illegal content. They are also required to remove it 'quickly' once they are made aware of its existence. However, the speed at which these images are spreading on X suggests that current safeguards may be insufficient.
The European Commission, the enforcement arm of the EU, has taken an even firmer stance. On Monday, spokesperson Thomas Regnier described the output generated by Grok as 'appalling' and 'disgusting'. He stated that the Commission is 'seriously looking into this matter' and emphasised that posts showing explicit sexual content involving childlike images have 'no place in Europe'.
High-Profile Targets and Public Outcry
The AI tool's image-manipulation capabilities have been used against public figures and private citizens alike. Images of Catherine, Princess of Wales, were among many found to have been digitally 'undressed' by Grok users on X. Kensington Palace has been approached for comment regarding the images.
Journalist Samantha Smith publicly shared her experience of being targeted by the technology. She described discovering that users had prompted the AI to create pictures of her in a bikini. She said the experience left her feeling 'dehumanised and reduced into a sexual stereotype'.
'While it wasn't me that was in states of undress, it looked like me and it felt like me and it felt as violating as if someone had actually posted a nude or a bikini picture of me,' Smith said. Her post on X highlighting the abuse was met with comments from other women who had experienced similar harassment.
Corporate Response and Policy Failures
Despite the outcry, the response from X and xAI has been mixed. xAI's own acceptable use policy explicitly prohibits 'depicting likenesses of persons in a pornographic manner'. However, users have found it relatively easy to bypass these rules using specific prompts.
When the BBC approached xAI for comment, the company did not provide a specific statement, instead sending an automatically generated reply stating 'legacy media lies'. Elon Musk later posted on the platform, warning that anyone asking the AI to generate illegal content would 'suffer the same consequences' as if they had uploaded the illegal material themselves. On Sunday, X issued a warning advising users not to use Grok to generate illegal content, including child sexual abuse material.
Critics argue that warnings are not enough. Clare McGlynn, a law professor at Durham University, stated that the platforms 'could prevent these forms of abuse if they wanted to', suggesting that the companies appear to enjoy a sense of impunity.
Government Legislative Action
The political reaction in the UK has been swift and severe. Dame Chi Onwurah, chair of the Science, Innovation and Technology Committee, labelled the reports 'deeply disturbing'. She criticised the current Online Safety Act as 'woefully inadequate' and described the situation as a shocking example of UK citizens being left unprotected.
A Home Office spokesperson confirmed that the government is legislating to ban 'nudification tools'. Under a new criminal offence, anyone supplying such technology could face a prison sentence and substantial fines.
Meanwhile, the Internet Watch Foundation (IWF) in the UK told the BBC it had received reports from the public relating to images generated by Grok. While they noted that the images seen so far might not cross the specific legal threshold to be considered child sexual abuse imagery under current UK law, the potential for harm remains a primary concern for child safety advocates.
Authorities in France, Malaysia, and India are also reportedly assessing the situation, signalling that xAI may face a global regulatory backlash if immediate technical fixes are not implemented.
Originally published on IBTimes UK