When lawmakers and rights holders consider how to regulate generative artificial intelligence, they would do well to recall the online‑piracy battles of the early 2000s, a new European Parliament‑commissioned report argues. According to the report, heavy reliance on litigation and ad‑hoc enforcement proved a blunt instrument against file‑sharing services such as Napster; its author, Professor Christian Peukert, contends that repeating that approach for AI would be economically damaging and socially counterproductive. [1][2]

The report, titled "The Economics of Copyright and AI", sets out a central premise: restricting access to vast amounts of published material for model training will slow innovation and reduce public welfare. Industry data and historical comparisons show that litigation against distribution platforms produced only temporary shifts in behaviour until lawful, convenient services met consumer demand, a pattern the report suggests could recur if policymakers attempt to block AI access to copyrighted content. [1]

Peukert recommends a statutory, compulsory licensing regime as the most practicable solution. Under this model, AI developers would gain a guaranteed right to use published works for training while an independent authority would determine fair royalty rates to compensate creators. The report frames this as a pragmatic way to balance widespread model‑building needs with creators’ economic interests. [1]

The analysis emphasises scale as a decisive factor: unlike music‑streaming deals negotiated with a finite set of labels, AI training typically requires ingesting billions of texts, images and videos from the open web. The report notes that individually negotiating licences with millions of rightsholders is effectively impossible and would disproportionately disadvantage smaller AI startups. [1][2]

Perhaps most controversially, the report rejects "opt‑out" systems that let rightsholders exclude works from training datasets. It argues that opt‑outs create gaps that bias models and reduce overall societal value, and from an economic‑welfare perspective it ranks opt‑out as the least desirable policy outcome, worse even than the status quo. The paper therefore recommends against permitting opt‑outs for training use. [1]

The report also reinterprets the Napster precedent. Whereas early courts rejected statutory licences as a way for infringers to "pay to keep breaking the law", Peukert argues the modern context differs: generative AI delivers large net benefits to society, estimated in some studies at tens of billions annually, and a regime permitting continued operation subject to fees could preserve that value while funding rights holders. According to the report, such a reversal of logic is justified by the net social gains from AI deployment. [1]

Other recent scholarship and policy analysis map complementary approaches. One academic paper proposes multilayered pre‑training filtering pipelines to shift protection from post‑training detection to pre‑training prevention, aiming to safeguard creator rights while enabling AI innovation; legal commentators note striking differences between the EU’s closed list of exceptions and the US’s flexible fair‑use approach, each with distinct implications for enforceability and market certainty. These perspectives underline that any European statutory scheme would need technical, administrative and cross‑jurisdictional design work to be effective. [3][5][6]

Whether rights holders and legislators will accept compulsory licensing remains uncertain. The report provides a policy road‑map rooted in economic welfare modelling and historical lessons, but implementation would require political agreement on scope, remuneration and governance, and careful calibration to avoid unintended market distortions. As the author warns, repeating the litigation‑first playbook from the Napster era risks the same cycle of costly enforcement followed by piecemeal remedies; the alternative he offers is statutory licensing coupled with independent rate‑setting and mechanisms to ensure broad access for training. [1][2][3]

## Reference Map

  • [1] (TorrentFreak / Peukert, European Parliament) - Paragraph 1, Paragraph 2, Paragraph 3, Paragraph 4, Paragraph 5, Paragraph 6, Paragraph 8
  • [2] (TorrentFreak summary) - Paragraph 1, Paragraph 4, Paragraph 8
  • [3] (arXiv paper) - Paragraph 7
  • [5] (EU Directive on Copyright in the Digital Single Market) - Paragraph 7
  • [6] (Hannes Snellman analysis) - Paragraph 7

Source: Noah Wire Services