A UK thinktank has urged sweeping changes to how artificial intelligence systems are allowed to source and present news, proposing standardised "nutrition" labels for AI-generated answers and a licensing regime to ensure publishers are paid when their journalism is used to produce those answers. According to the Institute for Public Policy Research (IPPR), the aim is to make the provenance and composition of AI news outputs visible to users and to prevent a handful of tech firms from becoming de facto gatekeepers of public information. (Sources: [2],[5])

The IPPR recommends that the Competition and Markets Authority use its newly strengthened powers to broker collective licensing arrangements, enabling publishers to bargain jointly with technology companies over the reuse of their work and to seek compensation for lost traffic and advertising. Industry analysis points to growing concern that prominently displayed search and AI summary features can reduce visits to original reporting and, with them, publishers’ revenue. (Sources: [2],[5])

In a hands-on audit, the IPPR tested four leading systems (ChatGPT, Google Gemini, Perplexity and Google’s AI Overviews), feeding them 100 news-related queries and examining more than 2,500 links returned in the responses. The analysis found major inconsistencies in which outlets were cited: the BBC was absent from some models’ responses, while certain outlets with licensing arrangements were heavily represented. Roa Powell, a senior research fellow at the IPPR and co-author of the report, warned: “AI tools are rapidly becoming the front door to news, but right now that door is being controlled by a handful of tech companies with little transparency or accountability.” (Sources: [2])

Beyond questions of prominence, the IPPR argues that licensing itself could entrench inequalities in the news ecosystem. Academic research into newsroom AI use shows that automated content is already unevenly distributed across outlets and formats, and that transparency about AI use in journalism remains rare. The thinktank cautioned that deals between major publishers and AI vendors might advantage well-resourced organisations while sidelining smaller and local titles. (Sources: [3],[2])

Separate audits raise further doubts about the current state of disclosure and labelling. A platform-focused review found that major social media services frequently fail to mark synthetic images and video correctly, with only around a third of sampled posts carrying explicit AI labels. Researchers and industry specialists say voluntary and technical tagging regimes have been implemented inconsistently, underlining the limits of a purely self-regulatory approach. (Sources: [4],[3])

Proposals to require "nutrition facts" for AI echo private-sector precedents and a growing consumer appetite for transparency. Companies such as Twilio have published AI fact sheets, in both machine-readable and human-friendly forms, that detail the models used, data handling and limitations, while surveys and commentary argue that an accessible, standardised label could help users evaluate credibility much as food labels help consumers assess products. Advocates say a plain-language framework would bridge the gap between technical model cards and everyday audiences. (Sources: [6],[5],[7])
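For illustration only, a machine-readable label of the kind described might be structured as in the minimal Python sketch below. Every field name here is invented for the example; it does not reproduce Twilio's actual schema or any proposed standard.

```python
import json

# Hypothetical "AI nutrition label" for a generated news answer.
# All field names are illustrative assumptions, not a real vendor schema.
label = {
    "generated_by": "example-model-v1",      # model that produced the answer
    "trained_on_licensed_news": True,        # whether news content was licensed
    "sources_cited": [                       # outlets the answer draws on
        {"outlet": "Example Times", "url": "https://example.com/story"},
    ],
    "retrieval_date": "2024-01-01",          # when source material was fetched
    "known_limitations": [
        "May omit paywalled reporting",
        "Coverage of local outlets is sparse",
    ],
}

# Emit both machine-readable JSON and a plain-language summary, mirroring
# the dual machine/human presentation described above.
print(json.dumps(label, indent=2))
print(f"This answer was generated by {label['generated_by']} "
      f"using {len(label['sources_cited'])} cited source(s).")
```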

The IPPR further calls for public funding to nurture investigative and local reporting models that might not survive market pressure, and for regulators to uphold copyright protections so that any licensing market endures. Policymakers in the UK and overseas are already moving toward stricter transparency rules for AI; proponents say combining mandatory labelling, fair-pay licensing and targeted public funding offers the best chance of preserving plurality and trust as AI becomes a primary route to news. (Sources: [2],[4],[5])

Source Reference Map

Inspired by headline at: [1]

Source: Noah Wire Services