A growing coalition of prominent musicians has voiced strong opposition to the use of their music for training artificial intelligence (AI) large language models (LLMs). Esteemed artists including Elton John, Billie Eilish, Paul McCartney, Radiohead, Sting, and Dua Lipa have joined the ranks of creators concerned about the implications of AI systems ingesting vast amounts of internet content, including music, to develop AI-generated outputs.
LLMs operate by processing extensive datasets through the transformer architecture, learning statistical patterns that enable them to generate new content. In the case of musical material, this involves translating music into symbolic representations the model can process, raising particular concerns among musicians about how their work is utilised. Key issues include the creation of deepfake audio, unauthorised vocal cloning, the replication of musical styles in AI-generated songs, infringements on artists' personality rights, and the potential loss of royalty income due to widespread unlicensed use.
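The symbolic-translation step can be pictured with a toy sketch: notes are mapped to discrete tokens, much as words are tokenised in a text corpus. The (pitch, duration) token scheme below is purely illustrative, not any AI company's actual format.

```python
# Toy illustration of turning musical notes into discrete tokens,
# analogous to tokenising text for a language model.
# The (pitch, duration_in_beats) scheme is hypothetical.

def tokenize_notes(notes):
    """Map (pitch, duration_in_beats) pairs to string tokens."""
    return [f"NOTE_{pitch}_DUR_{duration}" for pitch, duration in notes]

melody = [("C4", 1), ("E4", 1), ("G4", 2)]  # a simple C-major arpeggio
print(tokenize_notes(melody))
# ['NOTE_C4_DUR_1', 'NOTE_E4_DUR_1', 'NOTE_G4_DUR_2']
```

Once music is represented this way, it can be fed into the same sequence-prediction machinery used for text, which is why training datasets scraped from the internet are so central to the dispute.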
These concerns have catalysed united actions within the music community. Over 1,000 British musicians, such as Kate Bush, Cat Stevens, and Annie Lennox, released a silent album titled "Is This What We Want?" The tracklist spelled out a protest message against proposed UK government regulations on AI and copyright, which these artists believe would effectively legalise the appropriation of their music for AI training purposes. This protest aligns with broader industry efforts, including open letters signed by hundreds of musicians, denouncing what are described as the "predatory" and "irresponsible" practices of using copyrighted works to train AI models. The artists stress that such approaches devalue human creativity, violate artists’ rights, and threaten the sustainability of the music ecosystem.
Fundamental questions about copyright law are at the centre of this debate. Legal discussions focus on whether the act of training LLMs on copyrighted music constitutes copyright infringement. Different jurisdictions vary in their approaches; for example, Singapore and the European Union have introduced legal exemptions to encourage AI development, while the UK and Hong Kong are considering opt-out systems allowing rights holders to exclude their works from AI training datasets. However, practical issues arise with these opt-out mechanisms, particularly given the prevalence of already-published works lacking machine-readable opt-out markers.
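One concrete form a machine-readable opt-out marker can take is an HTTP header, as proposed in the W3C community draft TDM Reservation Protocol ("1" signalling that text-and-data-mining rights are reserved). The sketch below shows how a crawler honouring such a marker might check it; the helper name and default behaviour are illustrative assumptions, not a real library API.

```python
# Sketch of a crawler-side opt-out check, assuming a
# "tdm-reservation" header in the style of the draft W3C TDM
# Reservation Protocol ("1" = rights reserved). The function
# name and default policy here are hypothetical.

def may_train_on(headers):
    """Return True only if no machine-readable opt-out marker is set."""
    return headers.get("tdm-reservation", "0") != "1"

print(may_train_on({"tdm-reservation": "1"}))  # False: work is opted out
print(may_train_on({}))                        # True: no marker published
```

The second case illustrates the practical gap the article identifies: under an opt-out regime, already-published works carrying no marker default to being usable for training.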
The concept of fair use, particularly in the United States where it is interpreted more broadly, also plays a critical role in these disputes, though many legal challenges have surfaced regarding its application. Central to international copyright frameworks, including the Berne Convention, is the three-step test, which asks whether an exception is confined to certain special cases, whether it conflicts with the normal exploitation of a work, and whether it unreasonably prejudices the legitimate interests of the author. This standard will likely necessitate judicial interpretation on a case-by-case basis.
Another contentious area involves whether AI-generated music outputs can themselves infringe copyright. Musicians have expressed concerns that AI creations which mimic or replicate existing voices and styles undermine the originality and effort invested in human artistry. An example includes AI-generated covers, such as a piano version imitating Taylor Swift’s style, which have become increasingly common.
Legal actions have escalated, with major record labels such as Universal Music Group, Sony Music Entertainment, and Warner Records filing copyright infringement lawsuits against AI startups like Suno and Udio. These lawsuits claim that the AI companies trained their models on copyrighted recordings without authorisation and seek substantial damages. The labels argue this represents large-scale unlicensed copying incompatible with fair use provisions. Conversely, the AI companies contend their activities fall under fair use, including protections recognised for intermediate copying in U.S. law, and accuse the record labels of attempting to limit competition. Evidence presented by the labels includes AI-generated tracks that closely resemble or nearly duplicate copyrighted songs.
In parallel to legal battles, some musicians are proactively experimenting with technical methods to impede AI training processes. For instance, musician Benn Jordan, known as The Flashbulb, advocates the use of a tool called "Poisonify," which embeds inaudible audio modifications that disrupt AI learning without affecting human listeners. Additionally, artists and industry bodies call for transparency from AI companies about training data and seek the development of licensing frameworks to secure fair remuneration for music usage. The music industry’s sophisticated licensing infrastructure currently encompasses various rights types managed by collecting societies worldwide, suggesting a potential model for AI training licences.
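The general idea behind such adversarial audio tools can be sketched in a few lines: overlay a very low-amplitude, high-frequency component so the waveform changes for a machine while remaining essentially unchanged for a listener. This is a toy illustration of the concept only; the article does not describe Poisonify's actual algorithm.

```python
import math

# Toy illustration of the adversarial-perturbation idea: add a
# faint high-frequency tone to a signal so the samples change
# slightly while staying near-identical to the human ear.
# This is NOT Benn Jordan's actual Poisonify method.

SAMPLE_RATE = 44100

def perturb(samples, amplitude=0.001, freq=18000):
    """Overlay a faint 18 kHz tone on mono float samples in [-1, 1]."""
    return [
        s + amplitude * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
        for i, s in enumerate(samples)
    ]

# A 440 Hz test tone, then its "poisoned" counterpart.
tone = [math.sin(2 * math.pi * 440 * i / SAMPLE_RATE) for i in range(1000)]
poisoned = perturb(tone)
max_diff = max(abs(a - b) for a, b in zip(tone, poisoned))
print(max_diff <= 0.001)  # True: bounded by the perturbation amplitude
```

Real tools of this kind aim for perturbations that are imperceptible to listeners yet degrade a model's ability to learn from the audio; whether that trade-off holds against evolving training pipelines remains an open question.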
Another pertinent legal consideration is AI-generated content's lack of human authorship, which raises questions about whether such outputs qualify for copyright protection at all. Should AI-created content become widespread in commercial and consumer markets due to lower production costs, this may further alter the economic landscape faced by musicians.
Throughout the 21st century, the music industry has confronted numerous challenges, including file-sharing disputes involving platforms like Napster and The Pirate Bay, piracy in the streaming era, and battles over fair compensation from online streaming services. Now, musicians find themselves on a new frontier involving AI, with many expressing significant concern about the implications for the future of human musical creativity.
Mondaq reports that these developments highlight an evolving and complex dialogue between creative professionals and the advancing capabilities of AI technology, underscoring urgent questions about copyright, compensation, and artistic integrity in a digitised world.
Source: Noah Wire Services