Ars Technica has set out a reader-facing policy on generative AI, drawing a clear line between assistance and authorship. In the explanation, the publication says its journalism remains human-written and that AI cannot take the place of the judgement, creativity or originality that editors and reporters bring to their work. According to the policy, any use of AI sits within a supervised workflow, with people making every editorial decision.

The newly published guidance also broadens that principle beyond text. Ars Technica says the rules cover research, source attribution, imagery, audio and video, reflecting an effort to define where machine tools may help and where they may not. When AI-generated material is used as an example, the policy says, it is visually separated from the surrounding journalism and disclosed as close to the material as possible.

The publication says the standards are not a fresh invention but a formal public explanation of practices that have governed its newsroom since generative AI became available. It added that the point of publishing the policy is to make its internal rules visible, rather than asking readers to take them on trust. Ars Technica also said it will update the document if its practices change in a material way, with those changes noted on the policy page.

The move comes amid wider media and platform debates over synthetic content and disclosure. Ars Technica itself has recently reported on organisations taking harder lines on AI-generated material, including Bandcamp’s ban on music produced wholly or substantially by AI, while the outlet has also faced scrutiny over its own coverage standards in a separate retracted story earlier this year. Together, those episodes underline how quickly publishers are being forced to turn broad principles about AI into specific newsroom rules.

Source: Noah Wire Services