The music industry is facing significant challenges over the unauthorised use of content to train generative AI models. The situation has prompted major record labels to act against the proliferation of deepfake media: Sony Music recently reported that it had demanded the removal of 75,000 instances of deepfake content, a stark indication of the scale of the problem.
AI-generated music is becoming increasingly accessible and sophisticated, yet it often carries identifiable anomalies. According to Pindrop, an information security firm specialising in voice analysis, AI-generated music can be distinguished by "subtle irregularities in frequency variation, rhythm and digital patterns that aren't present in human performances." Even when AI-generated songs mimic the style of well-known artists, they retain telltale signs that could help in their detection.
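Pindrop has not published its detection methods, so the following is only an illustrative sketch of the kind of signal-level cue the quote describes: human performances exhibit natural pitch drift and vibrato, while naive synthesis can be "too clean". The sketch below (all names, parameters, and the two synthetic signals are my own assumptions, not Pindrop's technique) compares frame-to-frame pitch stability of a perfectly steady tone against one with random pitch drift, using only NumPy:

```python
import numpy as np

SR = 16000    # sample rate (Hz)
FRAME = 1024  # analysis frame length in samples

def dominant_freq_per_frame(signal, sr=SR, frame=FRAME):
    """Return the dominant frequency (Hz) of each non-overlapping frame."""
    freqs = []
    for start in range(0, len(signal) - frame, frame):
        windowed = signal[start:start + frame] * np.hanning(frame)
        spectrum = np.abs(np.fft.rfft(windowed))
        peak_bin = int(np.argmax(spectrum))
        freqs.append(peak_bin * sr / frame)  # bin index -> Hz
    return np.array(freqs)

t = np.arange(SR * 2) / SR  # two seconds of audio

# A perfectly steady 440 Hz tone: a crude stand-in for overly clean synthesis.
steady = np.sin(2 * np.pi * 440 * t)

# The same tone with slow random pitch drift, mimicking natural jitter/vibrato.
# Phase is the cumulative integral of the instantaneous frequency.
rng = np.random.default_rng(0)
drift = np.cumsum(rng.normal(0, 0.2, len(t)))  # random-walk detuning in Hz
phase = 2 * np.pi * np.cumsum(440 + drift) / SR
jittered = np.sin(phase)

steady_var = dominant_freq_per_frame(steady).std()
jittered_var = dominant_freq_per_frame(jittered).std()
```

Under these assumptions, the drifting signal shows markedly higher frame-to-frame frequency variation than the steady one; real detectors would combine many such features, and in the inverse direction, since the quote suggests synthetic audio shows irregularities humans lack.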
Platforms such as YouTube and Spotify are at the forefront of this battle. Sam Duboff, Spotify's lead on policy organisation, emphasised the importance of improving the platform's detection tools, saying it takes the issue "really seriously." YouTube says it is "refining" its own capabilities to detect AI-generated content and may announce improvements in the coming weeks.
However, the music industry’s battle extends beyond deepfakes. There is increasing alarm over the use of copyrighted material to train generative AI models such as Suno, Udio, and Mubert. Last year, several major labels filed a lawsuit in federal court against Udio's parent company, alleging that it developed its technology using "copyrighted sound recordings for the ultimate purpose of poaching the listeners, fans and potential licensees" from the original artists. As of now, this litigation and a similar case against Suno filed in Massachusetts have yet to move forward substantially.
At the heart of the legal disputes is the doctrine of fair use, which allows limited use of copyrighted material without prior permission. Joseph Fishman, a law professor at Vanderbilt University, pointed out that the area remains fraught with uncertainty, as different courts may interpret the doctrine differently. It is also unclear whether eventual rulings will have any bearing on newer iterations of generative AI models, which continue to be trained on protected material.
Legislative efforts to safeguard artistic rights appear to be making slow progress. Various bills have been introduced in the U.S. Congress, but concrete outcomes remain elusive. Some states, including Tennessee, have enacted legislation to protect against deepfakes, reflecting the concerns rooted in the influential country music community. However, broader regulatory movements face obstacles, notably from political figures like Donald Trump, who advocates for deregulation in AI.
In the UK, potential legislative changes are under consideration, with the Labour government evaluating policies that would permit AI developers to utilise content found online unless rights holders specifically opt out. This has raised significant concerns among artists, leading over a thousand musicians, including prominent figures such as Kate Bush and Annie Lennox, to collaborate on a protest album titled "Is This What We Want?" featuring recorded silence to highlight their discontent.
Amid these developments, analysts like Jeremy Goldman have noted that the fragmented nature of the music industry may hinder cohesive solutions to these issues. He cautioned that as long as the industry remains unorganised, the impacts of AI on music creation are likely to persist, complicating the ongoing struggle between innovation and intellectual property rights in the realm of music.
Source: Noah Wire Services