An investigation has revealed that major media organisations are increasingly quoting AI-generated personas as credible sources, raising concerns about the authenticity of information disseminated through established outlets. Journalist Rob Waugh, writing for the Press Gazette, uncovered instances in which renowned publications including the BBC, the Sun, the Guardian, Newsweek, Medium, and Fortune quoted fabricated experts on various subjects, propagating misinformation.
Waugh highlighted the case of "Barbara Santini," who purportedly held a degree from Oxford University and was cited hundreds of times by British media as a psychologist. When Waugh attempted to contact her directly to verify her credentials, he found that she insisted on communicating only through WhatsApp, a reluctance that raised suspicions that Santini was not a real person but an AI-generated character.
Another example provided by Waugh was "Rebecca Leigh," who claimed to be a biochemist and science educator with twelve years of experience and was quoted in reputable outlets, including Fortune and Business. When Waugh asked her to provide proof that she was human, she ceased communication. It later emerged that her profile contained fabricated details, a fact confirmed by the company she purportedly represented, which attempted to deflect blame by citing anonymity protocols.
Waugh also discovered that a similar image was being used by another tech writer, operating under the name "Sara Sparrow," on a different media service, LeadDev. This duplication of identity led to speculation that such AI personas either imitate genuine writers or reuse similar profile details across platforms.
The investigation also pointed to lapses in human oversight in the proliferation of AI-driven content: two networking services, Qwoted and ResponseSource, which connect journalists with expert contributors, were found to be listing AI-generated experts as sources, raising questions about the reliability of the information such platforms provide.
Nevertheless, Waugh pointed out that methods exist to detect AI-generated content in journalism. Qwoted, for example, warns users when a response to a query arrives too quickly to have come from a human, and it offers a "Check for AI" feature designed to identify text produced by AI systems. This underscores the importance of verifying sources rather than accepting information at face value, particularly in an era when trust in mainstream media is being challenged by the rise of AI technology.
Source: Noah Wire Services