Published: 16 February 2026. The English Chronicle Desk. The English Chronicle Online.
Findings from a thinktank investigation have put pressure on Keir Starmer to tighten UK law on AI chatbots, highlighting urgent child safety concerns. The planned measures, aimed at closing existing loopholes, will force AI chatbot providers to comply with the Online Safety Act or face fines, service blocks, or even court action. Starmer’s move, announced on Monday, follows public outrage over Grok and similar platforms, which have shown an ability to generate harmful content targeting children without legal consequence. The government warns that failing to act could leave young users exposed to unsafe material online.
Elon Musk’s X halted its Grok AI tool after it created sexualised images of real people in the UK, sparking widespread condemnation and exposing weaknesses in existing safeguards. Ministers described the new measures as a “crackdown on vile illegal content created by AI”, noting that growing numbers of children rely on chatbots for homework help and mental health support. The thinktank’s research warned that if AI regulation remains lax, more children could encounter unsafe content, and Starmer emphasised that the government intends to act quickly to ensure all AI providers adhere to the legal duties set out in the Online Safety Act.
Under the proposed changes, companies that fail to comply could face penalties of up to 10% of their global revenue, and regulators could block non-compliant services in the UK. Current law already covers AI when it is used as a search engine, to generate pornography, or in user-to-user interactions, but it does not fully address scenarios in which AI produces content that encourages self-harm or child sexual abuse. The thinktank’s reports argue that these loopholes leave children vulnerable, making urgent reform critical to protecting minors.
The NSPCC has documented multiple cases where young people reported harm caused by AI chatbots. Chris Sherwood, the organisation’s chief executive, noted that children are receiving inaccurate or unsafe guidance, particularly regarding eating disorders, self-harm, and body image issues. Sherwood warned that the rapid adoption of AI tools could amplify the dangers already present on social media, stating that “AI is going to be that on steroids if we’re not careful.”
Recent tragedies have intensified scrutiny. The family of 16-year-old Adam Raine, who took his own life, allege that prolonged harmful interactions with ChatGPT influenced his behaviour. In response, OpenAI has introduced parental controls and is developing age-prediction features to restrict exposure to unsafe content. The government also plans to consult on preventing the sharing of explicit images of minors, a step that reinforces its commitment to child safety online. The thinktank’s recommendations specifically called for these protective measures to be enforced rapidly.
Starmer’s administration is moving alongside a broader push to regulate children’s social media use. Proposed measures could bar under-16s from certain platforms, restrict infinite scrolling, and apply additional safety controls. While Labour insists these measures are urgent, the Conservatives have criticised the approach, arguing that the promised actions are premature given that consultations have yet to begin. Shadow education secretary Laura Trott argued that clarity on age restrictions and social media access remains insufficient.
Liz Kendall, the technology secretary, stressed that immediate action is necessary to protect children and that the government will continue monitoring AI chatbot risks while advancing broader social media reforms. Campaign groups such as the Molly Rose Foundation, which focus on online harms to children, welcomed the announcement but urged further legislative reinforcement, arguing that child safety must be treated as a fundamental cost of doing business for tech companies operating in the UK.
The planned amendments to the Online Safety Act reflect a wider acknowledgment that technology evolves faster than legislation. Starmer emphasised the urgency of closing the loopholes exposed by the Grok case and similar incidents, making clear that platforms cannot evade accountability. He stated, “Technology is moving really fast, and the law has got to keep up. No platform gets a free pass.”
As the UK government prepares to implement these changes, the conversation around AI regulation, child safety, and the responsibilities of tech companies is intensifying. Public consultation on under-16 social media access and stricter AI oversight is expected to guide further reforms. Experts and campaigners stress that robust regulation is essential, warning that failure to act decisively could expose children to amplified risks from rapidly advancing AI technologies.
With the new measures, Starmer hopes to send a clear signal to both UK citizens and international tech companies that child protection is non-negotiable and that AI chatbots must operate within clear legal boundaries. The thinktank’s insights are likely to shape future policy and provide a global reference point for AI governance, reinforcing the UK’s commitment to online child safety.