Published: 12 January 2026. The English Chronicle Desk. The English Chronicle Online.
The UK government has issued a stern warning to X after reports revealed the mass creation of sexualised AI images of women and children. Officials say the platform has failed to adequately protect users from harm. Business Secretary Peter Kyle stressed that X is “not doing enough to keep its customers safe online,” underscoring the urgency of government intervention. The issue centres on X’s AI tool, Grok, which has been used to generate manipulated images of people in minimal clothing or sexualised poses without their consent.
Kyle emphasised that Ofcom, the communications regulator, now has full government backing to pursue action against X, potentially including a complete ban in the UK. The warning follows an expedited investigation in which Ofcom requested information from X about the tool’s use and the platform’s safety measures. “They have a range of powers from heavy fines to banning X entirely,” Kyle said, reinforcing the seriousness of the potential penalties.
The government’s concerns were further amplified by disturbing examples shared with ministers, including cases where images of women were manipulated in highly offensive ways. “It is appalling that X has allowed this to happen without proper safeguards,” Kyle said in a BBC interview. He cited an incident where a Jewish woman’s image was placed in a sexualised context outside Auschwitz, a case that highlighted the deep social and ethical consequences of uncontrolled AI content.
Technology Secretary Liz Kendall indicated that Ofcom’s response could be imminent, with a statement expected in Parliament soon. Both officials insist that the UK will not tolerate platforms enabling harmful sexualised AI content, and that accountability measures will be enforced swiftly. Officials note that restricting access to X may trigger pushback from Elon Musk and political allies abroad, including the United States, where the platform enjoys strong support from far-right figures.
Musk has previously criticised UK authorities, portraying the government as hostile to free speech and urging Britons to resist regulatory measures. A recent statement from US undersecretary for public diplomacy Sarah Rogers compared potential UK action to censorship in Russia, illustrating the international attention the case has attracted. Despite such pushback, the government maintains that protecting users, particularly vulnerable women and children, is a priority.
The Online Safety Act gives Ofcom extensive powers to regulate harmful content. Under this framework, platforms found in breach of their safety duties can be fined up to £18m or 10% of their qualifying worldwide revenue, whichever is greater, and in the most serious cases blocked in the UK. Ofcom’s inquiry into X is examining the scope and impact of the sexualised AI images and assessing whether the company has fallen short of its legal obligations. Downing Street described X’s recent move to limit AI image generation to paying subscribers as inadequate, suggesting that monetising a harmful tool does not mitigate the societal risks.
Experts warn that platforms offering AI image tools without robust safeguards create environments where abuse is easily facilitated. Legal analysts note that X’s case may set a precedent for other social media companies that incorporate AI-generated content. “This is about accountability and the responsibilities that come with advanced AI tools,” said one cybersecurity specialist, highlighting the potential global significance of the UK’s stance.
The debate over AI safety and freedom of expression continues to escalate, with the UK government signalling that it is prepared to take decisive steps to prevent further harm. Officials emphasise that protecting women and children online is non-negotiable, and that companies failing to implement the necessary safeguards will face the full force of UK law.
Legislators, meanwhile, are weighing the balance between innovation and protection. The government’s firm approach sends a clear signal: platforms cannot prioritise profit or user engagement over safety. The outcome of Ofcom’s investigation may also influence future legislation, shaping standards for AI use on social media worldwide.
The case highlights broader ethical questions surrounding artificial intelligence, consent, and the responsibilities of tech companies. Public awareness campaigns and industry accountability measures are expected to accompany regulatory actions, ensuring that harmful AI-generated material does not proliferate. The UK’s intervention may mark a turning point, asserting regulatory authority over the growing influence of AI in online spaces.
As the investigation progresses, X faces mounting pressure to demonstrate meaningful compliance and robust safeguards. Ministers remain resolute that sexualised AI content will not be tolerated, and that failure to act responsibly could result in unprecedented enforcement measures, including a potential ban. The unfolding developments underscore the complex intersection of AI technology, social media, and user protection, with global attention focused on the UK’s regulatory approach.