Published: 30 January 2026. The English Chronicle Desk. The English Chronicle Online.
The growing influence of AI-generated news has prompted calls for mandatory AI news “nutrition” labels to guide public understanding. Experts warn that without clear markers, AI systems could distort perceptions of current events and undermine trust in journalism. The Institute for Public Policy Research (IPPR) said the labels should show exactly which sources and datasets were used, including peer-reviewed studies and established news organisations, so that readers know how AI answers are compiled. Such transparency, it argued, could help curb misinformation and strengthen accountability in AI-driven reporting.
The IPPR urged UK regulators to implement licensing schemes requiring AI companies to pay publishers for content used in training and answering queries. Roa Powell, senior research fellow at IPPR, explained that AI firms are becoming the new internet “gatekeepers,” shaping public knowledge while profiting from journalism. She said fair compensation and clear rules are essential to protect pluralism, trust, and the sustainability of independent news outlets. Powell stressed that AI news cannot replace human editorial judgement and warned that reliance on unlicensed material risks harming smaller publishers and local media.
Under the proposed guidelines, AI-generated news would carry standardised labels indicating which sources were consulted, whether peer-reviewed research or professional journalism. The IPPR said these labels could appear alongside summaries on platforms such as ChatGPT, Google AI Overviews, Google Gemini, and Perplexity. The thinktank tested these AI tools by entering 100 news queries and analysing the more than 2,500 resulting links. The findings revealed significant variation in sourcing: some platforms relied heavily on licensed outlets such as the Guardian and the Financial Times, while others used content from publishers without agreements, raising ethical and financial concerns.
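The IPPR has not published its analysis code, but the sketch below illustrates the kind of tally such a study involves: counting what share of an AI tool's cited links come from each news domain. The platform names echo those in the report; the sample URLs and the helper `source_shares` are hypothetical, for illustration only.

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical sample of (platform, cited link) records, standing in for the
# 2,500-plus links the IPPR collected across its 100 test queries.
citations = [
    ("ChatGPT", "https://www.theguardian.com/uk-news/example-story"),
    ("ChatGPT", "https://www.ft.com/content/example-analysis"),
    ("Perplexity", "https://www.bbc.co.uk/news/example-report"),
    ("Google AI Overviews", "https://www.example-local-gazette.co.uk/council-vote"),
]

def source_shares(records):
    """For each platform, return the fraction of cited links per news domain."""
    per_platform = {}
    for platform, url in records:
        domain = urlparse(url).netloc
        per_platform.setdefault(platform, Counter())[domain] += 1
    return {
        platform: {d: n / sum(counts.values()) for d, n in counts.items()}
        for platform, counts in per_platform.items()
    }

# Print each platform's sourcing breakdown, most-cited domain first.
for platform, shares in source_shares(citations).items():
    for domain, share in sorted(shares.items(), key=lambda kv: -kv[1]):
        print(f"{platform}: {share:.0%} of cited links from {domain}")
```

Run on the thinktank's real dataset, a tally like this would surface exactly the kind of skew the report describes, such as one outlet appearing in nearly 60% of a platform's responses.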
The IPPR highlighted that Google’s AI summaries now reach roughly two billion users monthly, influencing how people consume news and reducing traffic to original publishers. A quarter of internet users reportedly consult AI tools for information, making source transparency crucial. The thinktank warned that unregulated AI news could lock out smaller publications, as licensed organisations may dominate AI-generated answers. This could undermine diversity in the media ecosystem and centralise influence in the hands of a few major outlets.
To address these challenges, the IPPR recommended a licensing framework allowing publishers to negotiate compensation collectively with AI firms. The UK Competition and Markets Authority (CMA) could oversee enforcement, leveraging new powers to prevent platforms like Google from scraping content without permission. Collective licensing deals would ensure broad participation from diverse media organisations, balancing AI innovation with the sustainability of local and investigative journalism.
The report emphasised that copyright law should remain intact to support a thriving licensing market. It also urged the government to back alternative business models for journalism that do not depend on AI companies, including funding for the BBC and local news providers. Powell explained that public investment in investigative reporting could maintain journalistic integrity in an AI-driven era, since licensing alone may not guarantee a healthy ecosystem. New revenue streams would also reduce dependency on tech giants, preventing a sudden loss of income if copyright protections were weakened.
Testing revealed discrepancies in how the AI tools used sources. ChatGPT and Gemini, for example, did not include BBC journalism, as the broadcaster had restricted AI access to its content. Conversely, Google AI Overviews and Perplexity used BBC content without formal licensing, illustrating the complex legal and ethical landscape. Other findings showed that the Guardian, which has a licensing agreement with OpenAI, appeared in nearly 60% of ChatGPT responses, while the Financial Times also featured prominently. Publications without deals were cited less than 4% of the time, highlighting inequities in AI visibility and potential bias in user-facing answers.
The IPPR report further suggested that AI companies should disclose financial relationships with news organisations, as these deals may influence answer prominence. If licensed outlets dominate AI responses, smaller and local news providers risk being excluded, potentially reducing the range of voices and perspectives accessible to users. Powell warned that while licensing could partially offset lost advertising revenues, it would not fully sustain a healthy media landscape, making public support for investigative and local reporting vital.
To future-proof journalism, the thinktank recommended combining licensing frameworks with AI “nutrition” labels and public funding initiatives. Labels would provide clear information on source origin, methodology, and editorial oversight, allowing readers to critically evaluate AI news. Government backing for innovative AI projects at organisations like the BBC could also foster a safer, more reliable integration of AI into the media. By implementing these measures, UK policymakers could ensure that AI news strengthens, rather than undermines, journalistic standards and public trust.
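The report does not prescribe a label format, but a minimal machine-readable sketch of the fields it mentions (source origin, methodology, editorial oversight) might look like the following. Every field name and value here is an illustrative assumption, not a published standard.

```python
# A hypothetical, minimal "nutrition" label for one AI-generated news answer.
# Field names are illustrative assumptions, not a published specification.
ai_news_label = {
    "generated_by": "ExampleAI Assistant",      # which AI system produced the answer
    "generated_at": "2026-01-30T09:00:00Z",
    "sources": [
        {"outlet": "The Guardian", "type": "professional journalism", "licensed": True},
        {"outlet": "Example University Press", "type": "peer-reviewed research", "licensed": False},
    ],
    "methodology": "retrieval-augmented summary of the cited articles",
    "editorial_oversight": "none; machine-generated without human review",
}

# Readers, or auditing tools, could then check at a glance how many cited
# sources sit behind licensing agreements.
licensed = sum(s["licensed"] for s in ai_news_label["sources"])
print(f"{licensed} of {len(ai_news_label['sources'])} cited sources are licensed")
```

A structured label of this kind would also make the financial-disclosure point above enforceable, since the licensed status of each cited source is stated rather than hidden.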
The IPPR concluded that without these reforms, AI-generated news risks eroding transparency, plurality, and accountability. Labels, licensing, and public support for independent journalism would collectively create a balanced AI news ecosystem. Powell emphasised that careful regulation and ethical frameworks are necessary to harness AI’s potential responsibly, preserving the integrity of UK media for both national and global audiences. As AI increasingly shapes how people access information, transparent sourcing and fair compensation will become essential pillars of the evolving news landscape.