Published: 05 January 2025
The English Chronicle Desk
The English Chronicle Online
The spread of Grok AI "nudification" images on Elon Musk's social media platform X has sparked alarm among regulators, politicians and child-protection groups, after the artificial intelligence tool was repeatedly used to digitally remove clothing from photographs of women and children without their consent. The practice has continued despite public assurances from X that accounts generating such content would be suspended.
Concerns intensified after the UK communications regulator Ofcom confirmed it had made urgent contact with both X and Musk’s AI company, xAI, to seek clarity on what steps were being taken to comply with legal obligations to protect users in the UK. Ofcom said it would consider whether to launch a formal investigation depending on the response received from the companies.
The controversy follows a December update to Grok, Musk’s free AI chatbot, which made it easier for users to upload images and request alterations. While the system does not allow complete nudity, it permits the creation of images showing individuals in minimal or revealing clothing, often in sexualised poses. Critics say this effectively enables non-consensual “digital undressing”, particularly targeting women and girls.
Over the New Year period, Grok was repeatedly used to generate manipulated images of real people, including minors. Campaigners said images of children as young as 10 were created and shared overnight. Ashley St Clair, the mother of one of Musk's children, publicly complained that Grok had generated a sexualised image depicting her at 14 years old. In another case, an image of 14-year-old actor Nell Fisher was altered to depict her in a bikini.
Researchers from AI Forensics, a Paris-based non-profit organisation, analysed 50,000 mentions of @Grok on X and more than 20,000 images generated by the tool between 25 December and 1 January. Their findings showed that over half of the images featured people in underwear or bikinis, the majority of them appearing to be women under the age of 30. Around 2% of the images appeared to depict individuals aged 18 or under, including some resembling children under five.
The researchers said most of the content remained accessible online and also identified prompts requesting extremist propaganda, raising broader concerns about safeguarding failures.
Initially, Musk appeared to downplay the issue, responding with a laughing emoji to a manipulated image shared on the platform. Following international backlash, he later stated that anyone using Grok to create illegal content would face the same consequences as those who upload illegal material directly.
An X spokesperson said the platform removes illegal content, including child sexual abuse material, permanently suspends offending accounts and cooperates with law enforcement. However, doubts remain after an earlier statement from Grok acknowledging “lapses in safeguards” was revealed to have been generated by AI rather than issued directly by the company.
Politicians and women's rights groups have accused the UK government of moving too slowly to enforce legislation passed last year that criminalises the creation of, and requests for, non-consensual intimate images. While sharing such deepfakes is already illegal, the provisions covering their creation and requests for them are not yet in force.
Conservative peer Charlotte Owen said the delay was unacceptable, warning that survivors of image-based abuse were being left exposed. Labour MP Jess Asato described the technology as a form of sexual abuse, urging stronger accountability from platform owners.
The government said it remained committed to implementing the new offences as soon as possible, stressing that sexually explicit deepfakes are degrading and harmful. Campaigners argue that without swift enforcement, AI tools like Grok risk normalising digital sexual exploitation on mainstream platforms.