According to the original report, The New York Times filed a lawsuit in the U.S. District Court for the Southern District of New York on Friday, accusing Perplexity AI of copying, distributing and displaying millions of its articles without permission to power the startup’s generative “answer engine.” The complaint contends that Perplexity’s responses sometimes reproduce Times material verbatim and that the product functions as a commercial substitute for the newspaper’s journalism. [1][2][4]

The Times argues that the conduct violates the U.S. Copyright Act by appropriating its expressive, original journalism, from news and opinion to culture, business and lifestyle coverage, and that the startup persisted despite repeated objections. “Perplexity has engaged in illegal conduct that threatens this legacy and impedes the free press’s ability to continue playing its role in supporting an informed citizenry and a healthy democracy,” the complaint states. [1][5]

Perplexity promotes itself as an alternative to traditional search, pitching an “intelligent research assistant” that streamlines information gathering and reduces the need to click through to full reporting. The Times and several other publishers say that pitch masks the reality that Perplexity’s answers rely on proprietary reporting and, at times, produce fabricated or falsely attributed material, so-called “hallucinations”, presented alongside the paper’s trademarks. The complaint seeks damages and an injunction to stop the alleged copying and misuse. [1][2][3]

The dispute joins a wave of litigation between news organisations and AI firms over how proprietary content is used to build and power generative systems. Publishers including the Chicago Tribune, Dow Jones and Encyclopaedia Britannica have separately challenged Perplexity’s practices; Reuters and other outlets note the cases reflect broader tensions over whether indexing publicly available pages is lawful when the resulting product reproduces or substitutes for paid journalism. [1][2][3][5]

Perplexity has previously articulated a contrasting position, saying it indexes publicly available web pages rather than scraping content to train foundation models, and the company has run programmes aimed at sharing revenue with publishers. Industry observers say the lawsuits will test the boundary between lawful indexing and the unauthorised commercial exploitation of copyrighted news content, and could shape how generative AI products cite, compensate or license source material. [2][5]

Legal outcomes will turn on factual demonstrations of how Perplexity sources, stores and serves content, and whether its answers constitute permissible transformation or unlawful substitution. According to the original report, the Times alleges both large-scale copying and instances where Perplexity displayed fabricated text attributed to the newspaper, allegations that, if proved, would strengthen the paper’s claims for injunctive relief. [1][2][3]

The case is likely to be watched closely by news organisations, AI companies and courts for precedent on content use, attribution and economic harm. Industry data shows publishers are increasingly pursuing litigation as one path to protect revenue streams and compel licensing arrangements; the Times says this action is part of that broader effort to hold AI firms accountable for unlicensed use of journalism. [2][5]

📌 Reference Map:

  • [1] (JURIST) - Paragraph 1, Paragraph 2, Paragraph 3, Paragraph 6
  • [2] (Reuters) - Paragraph 1, Paragraph 3, Paragraph 4, Paragraph 6, Paragraph 7
  • [3] (The Guardian) - Paragraph 3, Paragraph 4, Paragraph 6
  • [4] (TheWrap) - Paragraph 1
  • [5] (TechCrunch) - Paragraph 2, Paragraph 5, Paragraph 7
  • [6] (Yahoo) - Paragraph 1

Source: Noah Wire Services