The Mississippi Free Press has said it removed an opinion column published on 7 April after discovering that it had been generated with artificial intelligence and submitted under a false identity. The outlet said the writer’s invoice did not match the name on the byline, prompting checks of the person’s email trail, social media links and résumé details. Those checks turned up dead or non-existent accounts, and the profile image supplied for the author also appeared to be AI-generated. The newsroom later found a series of similar submissions from other supposed new writers, all of them apparently produced outside the United States, none of which was published.

In an editor’s note, the paper’s voices editor said the episode exposed a failure of judgement on his part and underlined the need for stronger verification before accepting freelance opinion work. He said the publication expects columnists, like reporters, to submit original work that can be checked and trusted, and argued that journalism carries a different level of responsibility from posts on social media or personal blogs. The paper said it had already withdrawn three forthcoming columns after spotting warning signs resembling those in the now-removed piece.

The newsroom said it is now preparing a formal artificial intelligence policy, along with staff training aimed at improving detection and review practices. It also plans to tighten editorial standards for opinion submissions, place more emphasis on Mississippi-based subjects and expand its pool of local freelance writers. The editor acknowledged that AI detectors are not dependable enough to serve as a simple fix, and said the organisation would have to rely instead on closer scrutiny and a stronger emphasis on authentic voice. The outlet said it would continue to publish without using artificial intelligence in its reporting or commentary.

The case comes amid a broader scramble by publishers, courts and academic journals to manage the risks of synthetic text. In Mississippi, federal Judge Henry T. Wingate acknowledged in 2025 that staff had used AI in drafting a court order that contained factual errors, while The Washington Post reported that other federal judges faced similar problems with AI-assisted legal documents. Inside Higher Ed has also reported a rise in academic submissions containing invented citations generated by AI. The Mississippi Free Press itself has previously written about plagiarism allegations and fraudulent emails, both of which speak to the same underlying pressure on editors to verify not only what is written, but who is writing it.


Source: Noah Wire Services