Published: 18 October 2025. The English Chronicle Desk. The English Chronicle Online.
British lawmakers have issued a stark warning that the United Kingdom could once again face widespread unrest like the 2024 summer riots unless the government takes decisive action to combat the spread of online misinformation. Members of Parliament on the Commons Science, Innovation and Technology Committee have accused ministers of complacency, saying the country remains dangerously exposed to viral falsehoods capable of fuelling violence and public disorder.
The warning follows the government’s response to the committee’s report, Social Media, Misinformation and Harmful Algorithms, which examined how false and inflammatory material spread online after the tragic Southport murders last year. Three children were killed in those attacks, and within hours, misleading and AI-generated images began circulating across social media platforms, stoking anger and sparking a wave of violence that left several communities reeling.
Committee chair Chi Onwurah said the government had failed to learn lessons from that crisis. “The government urgently needs to plug gaps in the Online Safety Act, but instead seems complacent about harms from the viral spread of legal but harmful misinformation,” she said. “Public safety is at risk, and it is only a matter of time until the misinformation-fuelled 2024 summer riots are repeated.”
The committee had previously warned that the profit-driven business models of major social media companies encourage the rapid amplification of harmful and divisive content. It called for stronger regulation of generative artificial intelligence tools that make it easier to produce false or inflammatory material, and for tighter oversight of online advertising systems that monetise such content.
However, ministers rejected key recommendations, arguing that new legislation was unnecessary. In its official reply, the government stated that the existing Online Safety Act already covers AI-generated material and that further laws could delay the implementation of the current framework. Officials also dismissed calls for an independent body to regulate social media advertising, saying they preferred to work with industry partners to improve transparency and accountability voluntarily.
That response has done little to reassure the committee. Lawmakers said they were “disappointed” by the government’s refusal to introduce new powers, particularly over digital advertising, which they argue remains one of the main engines of the online misinformation economy. Sites that spread false claims about the identity of the Southport attacker, for example, reportedly profited from ad revenue during the unrest — a situation MPs say should never be allowed to happen again.
The committee’s concerns were echoed by communications regulator Ofcom, which admitted that not all AI systems fall under the Online Safety Act’s remit. Officials from the regulator told MPs that further consultation with the technology sector was needed to understand how emerging platforms, including AI chatbots and image generators, interact with misinformation ecosystems. The committee said this proved the law still contains serious gaps.
In its statement, the government said it “acknowledges the concerns” about the opacity of the online advertising market and promised to continue reviewing industry practices. It pointed to the creation of an online advertising taskforce that aims to increase transparency, crack down on illegal advertisements, and enhance protection for children. Still, MPs said this piecemeal approach does not go far enough.
The government also refused to commit to producing an annual report to Parliament on the state of online misinformation, arguing that doing so could hinder operational work by revealing sensitive details about ongoing counter-disinformation efforts. Critics dismissed that explanation as evasive, saying transparency is essential to understanding the scale of the threat.
In its report, the committee drew a clear distinction between “misinformation” — the unintentional spread of false information — and “disinformation,” which involves the deliberate creation and dissemination of falsehoods to cause harm or chaos. MPs warned that both phenomena have been magnified by advances in AI, particularly generative models capable of producing convincing fake images, videos, and written content at scale.
Onwurah described the government’s lack of urgency as “deeply troubling,” saying the response on AI regulation and digital advertising was particularly weak. “It’s disappointing to see no real commitment to act,” she said. “The committee is not convinced by the government’s argument that the Online Safety Act already covers generative AI. The technology is developing so fast that more will clearly need to be done to tackle its effects on online misinformation.”
She added that social media platforms’ advertising-based business models remain a key part of the problem. “Without addressing the systems that reward and amplify sensational or false content, we cannot stop the cycle of misinformation that threatens social stability,” she said.
The 2024 riots, which began after a flurry of misleading online claims about the Southport attack, were among the worst episodes of civil unrest in recent British history. The violence spread rapidly across major cities, fuelled by viral misinformation and false rumours that inflamed racial tensions and distrust. Analysts say the same social and digital conditions that enabled that unrest still exist today — and that without firmer regulation, similar events could easily erupt again.
While ministers insist the Online Safety Act provides sufficient protection, experts and campaigners argue that the legislation remains untested against the full force of AI-driven content creation and algorithmic amplification. The committee’s report highlights the growing challenge of balancing free expression with public safety in a digital landscape where falsehoods can reach millions within minutes.
For now, the government maintains that its current framework is adequate. But the growing chorus of voices from Parliament, regulators, and civil society suggests otherwise. Unless stronger safeguards are put in place, the UK risks reliving the chaos of 2024 — a warning that, if ignored, could carry grave consequences for both public trust and national cohesion.