For decades the media business revolved around distribution; then it shifted to monetisation. Today a more fundamental contest is unfolding: who owns the raw material that teaches the machines now shaping culture, commerce and information? According to the original report, generative AI systems were trained on vast troves of journalism, photographs, books and archives, much of it created and maintained by media organisations, and that realisation has catalysed a broad industry push to set the rules of engagement. [1]
What began as quiet unease has hardened into litigation, licensing negotiations and sharper regulatory scrutiny. Leading publishers have chosen different tactics: some, such as The New York Times and Getty Images, have gone to court alleging unauthorised copying; others, including Axel Springer, the Associated Press and Reuters, have struck licensing deals that grant controlled access to archives in return for payment and usage limits. Industry leaders now argue that training data is infrastructure with economic value and can no longer be treated as free. [1]
Those commercial and legal fights are multiplying. Recent lawsuits include Ziff Davis’ claim against OpenAI alleging unauthorised use of publisher content, Entrepreneur Media’s suit against Meta over training of large language models, and the Chicago Tribune’s complaint against Perplexity AI for distributing its journalism in ways the paper says undercut traffic and ad revenue. These actions reflect an industry strategy that mixes courtroom pressure with bargaining for licensing terms. [6][5][2]
Hollywood and entertainment companies are also front and centre. Disney has both pushed back against alleged unauthorised training and moved to participate on its own terms: the company sent a cease-and-desist to Google accusing it of using Disney content to train models without compensation, while separately announcing a reported $1 billion investment and three-year partnership with OpenAI to permit controlled use of its characters in AI-generated short videos. Such moves illustrate the dual approach studios are taking: litigating where they contend infringement has occurred, and striking commercial deals that monetise their intellectual property. [4][3][7]
Regulators and courts are beginning to weigh in, complicating the landscape further. European policymakers are debating how copyright exceptions for text and data mining should apply to machine learning, while U.S. courts face arguments over whether large-scale training is fair use or mass infringement. Governments are also considering rules that would force AI developers to disclose training datasets, a transparency measure that could reshape how AI systems are built and how creators are compensated. The report notes that these debates may determine whether media companies can convert their archives into bargaining power. [1]
Media executives argue the stakes extend beyond near-term revenue. Newsrooms spent decades building trusted archives and original reporting; if AI systems replicate that reportage without attribution or payment, incentives to fund investigative work could weaken and public accountability could suffer. The industry frames its campaign as one of sustainability: seeking compensation, consent and accountability so that the institutions that create high-quality content can survive and continue to underpin the credibility of future AI outputs. [1]
At the same time, commercial partnerships raise questions about market concentration and creative control. Deals that let major studios or publishers license characters or archives to dominant AI developers could accelerate new storytelling forms, as Disney's reported partnership with OpenAI promises, but critics warn such arrangements could lock smaller creators out of value chains or entrench a few companies' influence over cultural production. The tension between protecting creators and enabling innovation is playing out in courts, boardrooms and regulatory forums. [3][4][1]
The shape of the next phase is becoming clearer even as battles continue: the era of wholesale, unrestricted training on media content is waning. Lawsuits will likely proceed slowly, licensing markets will expand and be renegotiated, and regulators will refine rules through debate and legislative processes. Industry sources say the objective is not to halt technological progress but to shift media organisations from passive suppliers of data to active participants in the AI economy, defining boundaries around consent, compensation and accountability so that AI develops in ways that preserve editorial integrity and sustainable creative ecosystems. [1]
📌 Reference Map:
- [1] (Marketing Edge) - Paragraph 1, Paragraph 2, Paragraph 5, Paragraph 6, Paragraph 8
- [2] (Axios) - Paragraph 3
- [3] (Reuters) - Paragraph 4, Paragraph 7
- [4] (Axios) - Paragraph 4, Paragraph 7
- [5] (Reuters) - Paragraph 3
- [6] (Reuters) - Paragraph 3
- [7] (Washington Post) - Paragraph 4
Source: Noah Wire Services