Anthropic is pressing ahead with a key fair-use defence in one of the most closely watched copyright disputes over artificial intelligence, arguing that training its Claude chatbot on song lyrics was a transformative use protected by US law. The case, brought by Universal Music Group, Concord and ABKCO, has become a test of how far AI developers can go when they build models using copyrighted material without permission.

The publishers first sued Anthropic in October 2023 in federal court in Tennessee, accusing the company of copying at least 500 songs, including works associated with the Beach Boys and the Rolling Stones, and using them to train Claude. Since then, the litigation has narrowed and moved to California. According to reporting by TechSpot, a federal judge there rejected the publishers' bid in March 2025 to bar Anthropic from using lyrics in training, even as the court put safeguards in place to keep Claude from reproducing song lyrics in its outputs.

Anthropic’s latest filing, as described in a Spanish-language report, argues that copyright law is meant to encourage new creation rather than lock up existing works, and that Claude serves that purpose by enabling new kinds of output. The company warns that a defeat would reach well beyond music, potentially affecting other AI uses in areas such as medical research and environmental work. It also contends that the publishers have shown no real economic harm, noting that lyrics are already widely available online.

A central part of Anthropic’s case is the claim that many of the alleged infringements were not organic user requests but attempts by the publishers or their agents to provoke the chatbot into reproducing protected material. The company says discovery showed that, in a sample of 5 million prompts, more than 83% of lyric reproductions were generated by the plaintiffs or people working for them. Anthropic argues that courts should not reward what it describes as manufactured infringement.

The broader legal backdrop remains unsettled. In another copyright case involving AI training, a judge has indicated that the use of protected works may, in some circumstances, fall within fair use, but courts have not treated that as a blanket rule. The publishers have also dropped their contributory and vicarious infringement claims, leaving fair use and alleged Digital Millennium Copyright Act violations tied to the removal of copyright-management information as the main issues.

The stakes extend beyond Anthropic. In a separate dispute involving music AI company Suno, the gap between legal arguments and commercial practice has become a fresh flashpoint: Suno has argued in court that there is no viable market for training-data licences, yet it struck a licensing agreement with Warner Music Group. Major labels now cite that apparent contradiction as evidence that AI developers can, and should, pay for the material they use.

The next major moment in the Anthropic case is the summary-judgment hearing set for 15 July. For the technology and music industries alike, the outcome could shape whether AI training on copyrighted lyrics remains a defensible form of innovation or becomes a practice that requires permission and payment.


Source: Noah Wire Services