The question publishers are circling is no longer whether to use artificial intelligence in their operations, but what their journalism is actually worth to the companies building AI products around it. That is the argument at the heart of Omar Oakes’ latest column, which frames the issue as a commercial one as much as an editorial one: if media content helps power systems valued in the hundreds of billions, what price should attach to it, and what happens when publishers fail to set that price for themselves?
Three very different responses to that dilemma are now emerging. The French newspaper Le Monde says it has struck licensing arrangements with OpenAI, Perplexity and Meta, presenting the deals as a way to protect rights, secure compensation and preserve editorial independence as AI platforms expand. The paper describes its arrangement with Perplexity as non-exclusive and designed to drive readers back to its journalism through direct links, while its later deal with Meta was cast as part of a wider effort to defend content rights and fair revenue distribution. By contrast, Wikipedia has moved in the opposite direction, banning large language models from generating or rewriting articles on its English-language platform, while still allowing limited use for copy-editing and translation under human review. Business Insider, meanwhile, has reportedly taken a more performative approach, announcing a quarterly prize for staff AI use even as its parent company, Axel Springer, has been cutting jobs while publicly declaring itself committed to AI.
Taken together, the three responses underline how differently organisations are valuing the same technological shift. Le Monde appears to see premium journalism as an asset that can be licensed and monetised, but only if the brand remains trusted enough to retain reader traffic. Wikipedia’s editors are treating the site’s curated knowledge base as something to be protected from machine-written contamination. And Insider’s AI prize, as Oakes argues, looks less like a strategy than a sign that some media groups are keen to appear forward-looking before they have worked out what they are defending, or why.
Wikipedia’s move is especially telling because it reflects a deeper anxiety than simple mistrust of AI output. The site’s volunteer editors concluded that machine-generated text frequently conflicts with its standards of accuracy, verifiability and neutrality, and recent reporting says the policy shift followed a community vote. Yet the broader challenge may be that AI threatens Wikipedia more by reading it than by writing for it: the site’s role as the web’s baseline reference layer depends on being used, not merely kept clean. In that sense, the ban is defensive, but not necessarily a long-term answer to the platform’s relevance.
The wider warning for publishers is that delaying the conversation does not avoid the economics. If their content is highly valuable to AI companies, then every licensing deal struck elsewhere risks setting a benchmark they may later struggle to beat. If, on the other hand, their output is largely interchangeable and easily replicated, then AI is exposing a structural weakness that predated the technology. Oakes’ point is that many media businesses are still avoiding that reckoning by counting tools, prizes or adoption rates instead of answering the more uncomfortable question: what is the content worth, and to whom?
Source: Noah Wire Services