The New York Times on Friday sued Perplexity AI, accusing the San Francisco startup of illegally copying, distributing and displaying millions of the newspaper’s articles without permission to train and operate its generative AI tools. According to the original report, the complaint also alleges Perplexity has displayed New York Times content alongside the paper’s registered trademarks, creating the misleading impression that fabricated AI-generated material is attributable to the newspaper. [1][2]
The Times says Perplexity’s business model depends on large-scale scraping of publisher content, including paywalled material, to power its answer-engine and generative features. Industry data shows other major publishers have made similar claims against Perplexity in recent months. [1][2]
Perplexity is already the target of a wave of lawsuits. Publishers and reference firms, including Dow Jones and the New York Post, the Chicago Tribune, Merriam‑Webster and Encyclopedia Britannica, have brought suits alleging copyright infringement and trademark misuse by the AI firm. Reuters reporting notes several plaintiffs say Perplexity’s product reproduces material verbatim or diverts traffic and revenue from original sites. [1][3][4]
Those cases sit alongside other legal and commercial disputes: social media company Reddit has sued over alleged data scraping, and Amazon has filed suit claiming Perplexity covertly accessed user accounts for an AI shopping feature. Perplexity has denied many of these allegations, saying it indexes publicly available pages rather than scraping to build foundation models. [1][3]
Perplexity has attracted heavy investment while under legal pressure, raising roughly $1.5bn over three years and closing a $200m round in September that valued it at about $20bn, according to reporting. High‑profile backers include Nvidia and Jeff Bezos, underscoring how capital has flowed into AI firms even as legal risks mount. [1][2]
The New York Times also invokes the Lanham Act, alleging false association and trademark violations where AI‑generated “hallucinations” are displayed alongside the paper’s marks. Legal experts say such claims amplify the debate over whether traditional IP and trademark law can police the new behaviours of generative AI systems. According to the original reporting, plaintiffs seek injunctive relief and damages to prevent further use of their content. [1][2][3]
The unfolding litigation illustrates escalating tensions between content owners and AI developers over the legal and ethical boundaries of training and deploying generative systems. Industry observers say these cases could set important precedents for how courts treat large‑scale scraping, attribution, and the commercial use of proprietary content in AI products. [1][2][3]
📌 Reference Map:
- [1] (The Guardian) - Paragraphs 1-7
- [2] (Reuters) - Paragraphs 1, 2, 5, 6, 7
- [3] (Reuters) - Paragraphs 3, 4, 6, 7
- [4] (CNBC) - Paragraph 3
Source: Noah Wire Services