When a journalism professor at the University of Quebec at Montreal spent a month getting his daily news exclusively from seven AI chatbots, the results offered an alarming and instructive picture of the current state of automated news delivery. According to the account published by The Conversation and summarised by Futurism, Jean‑Hugues Roy asked each service the same precise prompt every day in September: “Give me the five most important news events in Québec today. Put them in order of importance. Summarize each in three sentences. Add a short title. Provide at least one source for each one (the specific URL of the article, not the home page of the media outlet used). You can search the web.” The output included hundreds of links, but only a minority pointed to actual, correctly described articles. [1]
Roy recorded 839 URLs produced by the chatbots, of which only 311 linked to working articles; many links were incomplete or broken, and in 18% of cases the models either hallucinated sources or pointed to non‑news pages such as government sites or interest groups. Even among the working links, fewer than half matched the summaries the chatbots presented, with numerous instances of partial accuracy, misattribution, and outright plagiarism. In one striking example, xAI’s Grok asserted that a toddler had been “abandoned” by her mother “in order to go on vacation,” a claim Roy says “was reported nowhere.” Roy also noted instances where chatbots invented non‑existent public debate: one wrote that an incident “reignited the debate on road safety in rural areas” when, he concluded, “To my knowledge, this debate does not exist.” [1]
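The mechanical part of such an audit is straightforward to reproduce. The short Python sketch below illustrates a first pass over chatbot‑supplied links, checking only whether each URL resolves to a live page; the function name and the HEAD‑request approach are assumptions for illustration, not Roy’s actual method, and deciding whether a working page genuinely supports the chatbot’s summary still requires a human reader.

```python
import urllib.error
import urllib.request

def audit_links(urls, timeout=10):
    """Split chatbot-supplied URLs into those that resolve and those that don't.

    Illustrative sketch only: a full audit like Roy's must also read each
    working page to confirm it is a news article and that it matches the
    chatbot's summary.
    """
    working, broken = [], []
    for url in urls:
        request = urllib.request.Request(
            url, method="HEAD", headers={"User-Agent": "link-audit-sketch/0.1"}
        )
        try:
            with urllib.request.urlopen(request, timeout=timeout) as response:
                (working if response.status < 400 else broken).append(url)
        except (urllib.error.URLError, ValueError):
            # Covers DNS failures, HTTP errors (404, 410, ...) and malformed URLs.
            broken.append(url)
    return working, broken
```

By this mechanical measure alone, only 311 of the 839 recorded URLs, roughly 37%, would have passed.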
Roy’s experiment is consistent with broader research showing systemic flaws in AI assistants’ handling of news. A study by the European Broadcasting Union and the BBC analysed 3,000 AI responses from models including ChatGPT, Copilot and Gemini and found that 81% contained issues and 45% contained significant errors, ranging from factual inaccuracies to fabricated or missing sources. Industry reporting has similarly warned that the prevalence of errors increases when models are permitted web access to provide up‑to‑date answers. [3][6][5]
Part of the problem is the data pipeline feeding these models. A NewsGuard analysis found that 67% of top‑quality news websites deliberately block AI chatbots, forcing models to rely more heavily on lower‑quality sources. According to NewsGuard, that reliance on sites with lower trust scores amplifies the risk that AI will access and repeat false or misleading material rather than the vetted reporting publishers offer. Axios reported NewsGuard’s findings as part of a wider trend in which the rate at which chatbots repeat misinformation rose from 18% in August 2024 to 35% by September 2025, coinciding with models being given broader access to the live web. [2][5]
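The blocking NewsGuard describes is generally implemented through a site’s robots.txt file, which names the crawler user agents it disallows. As a rough illustration, the Python sketch below uses the standard library’s robotparser to check whether a site disallows a handful of commonly cited AI crawler tokens; the token list and the helper name are assumptions for illustration, not part of NewsGuard’s methodology.

```python
from urllib import robotparser

# Illustrative list of crawler tokens associated with AI companies
# (e.g. OpenAI's GPTBot); which ones a publisher blocks varies by site.
AI_CRAWLERS = ["GPTBot", "ClaudeBot", "Google-Extended", "PerplexityBot"]

def blocked_ai_crawlers(site: str) -> list[str]:
    """Return the listed AI crawler tokens that a site's robots.txt bars
    from fetching its front page. Sketch only."""
    parser = robotparser.RobotFileParser()
    parser.set_url(site.rstrip("/") + "/robots.txt")
    parser.read()  # may raise URLError if the site is unreachable
    return [bot for bot in AI_CRAWLERS if not parser.can_fetch(bot, site)]

if __name__ == "__main__":
    # Hypothetical usage; substitute a real news site to test.
    print(blocked_ai_crawlers("https://example.com"))
```

Note that robotparser treats a missing robots.txt as permission for everything, which mirrors how crawlers behave when a site publishes no policy at all.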
The tendency of large language models to oversimplify or misrepresent material is not limited to current affairs. Research published in Royal Society Open Science examined nearly 4,900 AI‑generated summaries of scientific papers and found LLMs were five times more likely than humans to generalise results, glossing over critical methodological details and nuance. Such behaviour risks turning complex, contingent reporting into misleadingly confident narratives, a problem that becomes especially dangerous when AI’s outputs are treated as authoritative news. [4]
Technical and sociopolitical factors also compound the issue. Investigative reporting on recent model updates shows that retraining or ideological slants can rapidly alter a chatbot’s outputs; Time reported on the model Grok shifting towards extreme rhetoric after a right‑wing retraining, illustrating how manipulation, groupthink and bias can degrade reliability. Taken together, these forces mean that AI‑produced news can reflect not only factual errors but also the priorities and blind spots of the systems that produce it. [7]
Publishers, platforms and developers face a choice about how to respond. News organisations that block automated scraping argue that restricting access protects journalistic standards and their business models, yet doing so can push models toward inferior sources; developers who grant wide web access seek freshness but inherit the web’s misinformation and broken links. Roy’s month‑long experiment suggests that, absent structural fixes to data sourcing, verification and transparency, AI chatbots remain an unsafe substitute for professional journalism rather than a reliable news provider. [1][2][3]
📌 Reference Map:
- [1] (Futurism / The Conversation) - Paragraph 1, Paragraph 2, Paragraph 7
- [3] (European Broadcasting Union / BBC) - Paragraph 3, Paragraph 7
- [6] (Tom's Guide / EBU study coverage) - Paragraph 3
- [2] (NewsGuard) - Paragraph 4, Paragraph 7
- [5] (Axios) - Paragraph 3, Paragraph 4
- [4] (Royal Society Open Science / LiveScience summary) - Paragraph 5
- [7] (Time) - Paragraph 6
Source: Noah Wire Services