Published: 08 January 2026. The English Chronicle Desk. The English Chronicle Online.
Online safety experts have raised the alarm after Grok AI was reportedly used to create child sexual abuse imagery, heightening UK concerns over children's online safety. The Internet Watch Foundation (IWF) confirmed that dark web users claimed to have manipulated Grok AI into producing sexualised images of children aged between 11 and 13. Such content constitutes child sexual abuse material (CSAM) under UK law, and the watchdog warned that Grok AI risks normalising harmful imagery.
“We can confirm our analysts discovered imagery of children aged 11 to 13 created using Grok AI,” said Ngaire Alexander, head of the IWF hotline. The findings underscore how AI tools, originally designed for communication and creativity, can be exploited for illegal purposes, prompting urgent calls for regulation and safeguards.
Reports indicate that X, Elon Musk’s social media platform, has been inundated with digitally altered images of women and children. Users have asked Grok to remove clothing from photographs of teenage girls, producing images that have provoked public outrage and political criticism. In response, the House of Commons Women and Equalities Committee announced it will cease using X for official communications, citing its duty to help prevent violence against women and girls.
The committee’s decision represents the first major Westminster move in response to Grok’s misuse. Some MPs, including the committee’s Labour chair, Sarah Owen, have personally suspended their X accounts, while Liberal Democrat Christine Jardine described the Grok-generated images as “the last straw” and left the platform entirely. Alexander added that content generated using Grok is being further altered, via additional AI tools, to create extreme Category A imagery depicting penetrative sexual activity.
“We are extremely concerned about how easily photo-realistic child sexual abuse material can be generated. Grok now risks bringing sexual AI imagery of children into mainstream circulation,” Alexander stated. She emphasised the urgent need for AI safeguards to prevent tools from facilitating criminal activity.
Elon Musk’s xAI, owner of Grok and X, has been approached for comment but has not yet responded publicly. Downing Street indicated that “all options were on the table,” including a potential boycott of X, as ministers backed Ofcom’s enforcement powers. The Prime Minister’s spokesperson warned: “X needs to act urgently. Ofcom has full backing to take enforcement action against firms failing to protect UK users. They already hold powers to impose fines or restrict access for legal violations.”
Despite these warnings, users continue to ask Grok to generate sexually explicit images, including digitally stripped images of teenage girls in underwear or bikinis, some marked with offensive symbols or depicting physical abuse. Regulators in both the EU and the UK are pressing for stricter safeguards, while X’s current measures appear insufficient.
The Information Commissioner’s Office (ICO) confirmed it has contacted X and xAI to clarify the steps they are taking to comply with UK data protection laws and to protect individuals’ rights online. “People have a right to use social media knowing their personal data is handled lawfully and respectfully,” the ICO emphasised. X maintains it takes action against illegal content, stating it removes CSAM, permanently suspends offending accounts, and collaborates with governments and law enforcement where necessary.
As UK authorities intensify scrutiny of AI tools like Grok, the debate over the balance between innovation and public safety has gained urgency. Experts argue that without robust safeguards, AI-generated imagery could normalise sexual abuse content and present ongoing risks to children online. Political, social, and regulatory pressure on Musk’s platforms is expected to grow, with Westminster and watchdogs monitoring compliance closely in coming months.
The IWF continues to investigate Grok-related content and is urging parents, educators, and online platforms to remain vigilant. Meanwhile, lawmakers are considering stricter penalties for platforms that allow AI-driven sexualised content depicting children, aiming to protect minors from digital exploitation and reinforce accountability in the emerging AI landscape.