Published: 12 November 2025. The English Chronicle Desk. The English Chronicle Online
MPs should stop relying on ChatGPT to draft their parliamentary speeches, according to Kanishka Narayan, the UK government minister responsible for artificial intelligence. While Mr Narayan confirmed he uses AI tools for “core research” and background work, he emphasised that he does not use them to write his speeches.
The warning comes amid growing evidence that parliamentarians have increasingly turned to AI-powered language models to prepare their speeches in the House of Commons. Analysts have noted that phrases and stylistic choices commonly suggested by ChatGPT, such as “I rise to speak” or “I rise today,” have become noticeably prevalent in recent parliamentary debates, raising concerns about the authenticity and independence of MPs’ contributions.
Mr Narayan told The Telegraph: “I think MPs should write their own speeches. Colleagues commenting on colleagues’ speeches never goes down well. But I will say simply this: I use ChatGPT for core research. I use it for background stuff. I have not been culpable amongst some MPs in using AI to write speeches and standing up and saying, ‘I beg to rise’, which I think is the classic phrase the chatbot suggests. And so I am not so far guilty of it.”
Political commentators and bloggers have noted that the adoption of AI tools in parliamentary work has been gradual but pervasive since 2022, when ChatGPT became widely available. Beyond speechwriting, MPs have reportedly used AI for policy research, drafting briefings, and even preparing responses to questions, with the aim of saving time or improving the clarity of their statements.
The issue has prompted criticism from senior politicians. Tom Tugendhat, the former security minister, argued that MPs’ dependence on AI to write speeches “shows the Commons has become absurd,” suggesting that the use of automated tools risks undermining parliamentary debate and authenticity. Meanwhile, Science Secretary Peter Kyle revealed that he had asked ChatGPT to provide insights into why small businesses had been slow to adopt AI, illustrating the broader use of the technology beyond speechwriting.
Despite the cautionary message on speechwriting, Mr Narayan remains a strong advocate for the responsible use of AI across the public sector. He highlighted the potential of AI to improve public services, especially in areas such as employment support and education. “I am full of excitement about the potential of AI and public services,” he said. “One area that I’m particularly very passionate about personally is to support people into quicker and better job matching, into better education opportunities. I think there’s huge potential to use AI to be able to support learning and working.”
Mr Narayan, who was first elected to Parliament in 2024 and appointed as minister last month, also oversees online safety and has recently announced a major legal change aimed at curbing the risks posed by AI-generated content. Under the new law, trusted organisations will be allowed to test AI models to ensure they cannot be used to produce indecent images and videos of children.
Currently, UK law prohibits the creation or possession of child sexual abuse material (CSAM), which has inadvertently made it illegal for developers to test AI models for safety against this risk. The legislative change, described as one of the first of its kind globally, enables child protection agencies and AI companies to examine the underlying technology of chatbots and image generators under controlled conditions. This aims to ensure that the AI tools cannot generate harmful content and that potential abuse is prevented at the source.
The Internet Watch Foundation, which monitors online CSAM, reported that instances of AI-generated child sexual abuse material more than doubled in the past year, increasing from 199 in January 2024 to 426 in January 2025. The organisation has stressed the growing sophistication of AI-generated content and its capacity to cause significant harm, particularly to children.
Mr Narayan described the law change as a preventive measure: “This is ultimately about stopping abuse before it happens. Experts, under strict conditions, can now spot the risk in AI models early. It will also free up police time because we will be stopping the content from being created in the first place.”
Childline, the UK’s helpline for children, has reported a surge in counselling sessions mentioning AI, ranging from issues like bullying and body-image concerns to cases where AI-generated deepfake images were used for online blackmail. Between April and September this year, Childline provided 367 counselling sessions where AI and related technologies were mentioned, a fourfold increase from the same period last year. Half of these involved mental health and wellbeing, demonstrating the wide-reaching impact of AI on young people.
While the use of AI for research and efficiency is widely acknowledged as beneficial, the minister’s message to MPs underscores the importance of retaining personal responsibility, authenticity, and critical judgment in public duties. By limiting AI to research and background work, MPs are expected to preserve the integrity of parliamentary debate while continuing to explore how the technology can enhance the wider public sector.
Mr Narayan’s dual approach—caution in personal use of AI and proactive regulation to prevent harm—reflects the government’s broader stance on AI governance. It illustrates the balance policymakers are trying to strike between harnessing AI’s potential benefits and mitigating its societal risks, particularly for vulnerable populations.