The New York Times has launched a major federal suit against Perplexity AI in the Southern District of New York, accusing the artificial‑intelligence answer engine of large‑scale, systematic intellectual property violations that go beyond typical copyright claims. The complaint, filed this month, alleges that Perplexity scraped millions of articles, multimedia items and other content from nytimes.com using automated crawlers, then copied, distributed and displayed that material to power its generative products. [1][2][5]

At its core, The Times' case frames the conduct as "massive, methodical, and unlawful" copyright infringement, with the complaint asserting that Perplexity's outputs are verbatim or strikingly similar to original Times journalism and therefore supplant users' need to visit the newspaper's site. The filing available via Courthouse News details allegations that the startup used the Times' stories, videos, podcasts and images both to train models and to generate the responses shown to users. TheWrap and TechCrunch report that The Times says it raised concerns with Perplexity repeatedly for nearly two years before filing suit. [1][4][3][5]

The lawsuit advances a novel legal theory by asserting trademark claims under the Lanham Act alongside copyright counts. According to the complaint, Perplexity's interface sometimes displays The New York Times' registered marks alongside AI‑generated text that the paper says is fabricated, partial or misleading, creating false attribution and, the Times alleges, diluting and tarnishing its brand. The complaint argues those practices risk associating errors or omissions with the newspaper's reputation for accuracy. Engadget and the original filing both emphasise that trademark dilution and misleading attribution are central to the Times' expanded theory of harm. [1][4][7]

The Times frames the alleged harm as both qualitative and economic. Industry coverage notes the paper claims Perplexity's use of its journalism undermines traffic, subscription incentives and advertising revenue by providing users with comprehensive answers that reduce clicks to original reporting. Axios has reported similar pushback from other news organisations, including a separate Chicago Tribune suit, suggesting a pattern of publishers seeking redress for perceived commercial displacement by answer engines. [1][2][6]

If the court accepts the Times' trademark theory, the case could expand the frontiers of intellectual property litigation against generative AI. Legal observers and the complaint itself argue that a ruling for the plaintiff could hold AI operators liable not only for unauthorised copying but also for the commercial consequences of attributing erroneous AI outputs to third‑party brands. TechCrunch and the court filing both note that such an outcome would force engineers and companies to rethink how sources and trademarks are identified and displayed in automated answers. [1][4][5]

The litigation comes amid a rising wave of publisher enforcement. Reporting shows the Chicago Tribune has filed a related copyright suit and other outlets are scrutinising AI firms that ingest newsroom content; industry coverage describes a growing coalition of legacy media pushing for clearer rules and compensation mechanisms. TheWrap and Engadget detail how publishers contend that repeated warnings to Perplexity went unheeded, a fact the Times highlights to support claims of wilful infringement. [3][7][6]

Practical consequences for AI developers are immediate. Commentators recommend tighter attribution controls, robust IP‑clearance procedures, filtering for famous trademarks and technical measures to prevent or flag hallucinations, so that third‑party names are not used to lend credibility to unreliable outputs. The complaint itself seeks injunctive relief and damages, while commentators advise operators to document compliance policies and trademark‑use procedures. Tech reporting stresses that firms will need both legal and engineering responses. [1][5][3]

The case will test how established doctrines of copyright and trademark law apply to automatically generated content and the automated use of third‑party marks. According to the filings and contemporaneous reporting, a successful claim by The New York Times could broaden liability for GenAI platforms and reshape industry practice; conversely, a defence victory could preserve broader latitude for automated indexing and model training. The legal fight is now joined in court, where judges will confront questions about attribution, consumer confusion and the commercial role of generative systems. [1][4][2]

## Reference Map:

  • [1] (JD Supra) - Paragraph 1, Paragraph 2, Paragraph 3, Paragraph 4, Paragraph 5, Paragraph 7, Paragraph 8
  • [2] (The Guardian) - Paragraph 1, Paragraph 4, Paragraph 8
  • [3] (TheWrap) - Paragraph 2, Paragraph 6, Paragraph 7
  • [4] (Courthouse News Service, complaint PDF) - Paragraph 2, Paragraph 3, Paragraph 5, Paragraph 8
  • [5] (TechCrunch) - Paragraph 1, Paragraph 2, Paragraph 5, Paragraph 7
  • [6] (Axios) - Paragraph 4, Paragraph 6
  • [7] (Engadget) - Paragraph 3, Paragraph 6

Source: Noah Wire Services