A Canadian public relations executive has argued that the rush to police artificial intelligence is creating a problem of its own: human writing is increasingly being treated with suspicion. Jennifer Farr, senior account director at Earnscliffe, said she pitched an op-ed to a major Canadian publication, only to see the draft rejected after an AI detection tool flagged it, even though it had been written collaboratively with her client during a live video meeting. Her account captures a growing unease in communications and publishing, where the appearance of polish can now be mistaken for machine authorship.

The concern is not hard to understand. As generative AI becomes more widely used, editors and publishers are under pressure to avoid running material created by software rather than a person. Yet AI detectors have their own limitations. Research and industry explainers note that these systems rely heavily on statistical signals, such as how predictable the word choices are and how much sentence structure varies, which makes them prone to false positives when human prose happens to look too structured or uniform.
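Commercial detectors keep their exact methods proprietary, but one commonly cited statistical signal is "burstiness", the variation in sentence length and structure across a passage. The toy Python sketch below is purely illustrative (the threshold and function names are invented for this example, not taken from any real detector); it shows how a crude statistical heuristic of this kind can flag tightly edited human copy simply because its sentences are uniform.

```python
import statistics

def burstiness_score(text: str) -> float:
    """Standard deviation of sentence lengths in words.

    Low values mean very uniform sentences, a pattern some
    detectors associate with machine-generated text.
    """
    # Naive sentence split on terminal punctuation; fine for a toy example.
    normalized = text.replace("!", ".").replace("?", ".")
    sentences = [s.strip() for s in normalized.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

def flag_as_ai(text: str, threshold: float = 3.0) -> bool:
    # Illustrative heuristic: uniform sentence lengths -> flag as machine-like.
    # The threshold is arbitrary, chosen only for this demonstration.
    return burstiness_score(text) < threshold

# Tightly edited human copy: every sentence is exactly four words long.
polished = ("The plan is clear. The goals are set. "
            "The team is ready. The work can begin.")

# Looser human copy with varied sentence lengths.
varied = ("Short. But then a much longer, winding sentence follows, "
          "full of clauses. Another brief one. And finally something "
          "of moderate length to close.")

print(flag_as_ai(polished))  # True: uniform prose trips the heuristic
print(flag_as_ai(varied))    # False: varied prose passes
```

The polished passage is unambiguously human-written, yet the heuristic flags it, which is exactly the false-positive failure mode the agencies quoted above are worried about.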

That creates a particular headache for agencies and other collaborative writing environments. Drafts are often shaped through discussion, editing and repeated tightening, producing clean copy that can resemble the style associated with AI-generated text. Analysts have also warned that some detectors may penalise non-native English writing and other forms of straightforward, formal prose, while still struggling to identify AI text that has been lightly edited to sound more human.

Farr’s point is that authenticity has become harder to define in practice. In her view, the question is no longer simply whether a piece was written by a person or a model, but whether the process behind it was transparent, credible and defensible. That ambiguity matters because the industry still lacks a reliable rulebook for separating genuine human drafting from machine-assisted writing.

Academic research has added weight to that uncertainty. A recent study available on ScienceDirect reported that most of the AI-detector flags it examined were false positives, reinforcing doubts about how much confidence publishers should place in automated screening. The broader lesson, according to reviewers of the technology, is that detection tools may be useful as a warning system, but they are not yet precise enough to serve as a final arbiter of authorship.

Source: Noah Wire Services