Music-sharing platform SoundCloud has taken a firm stance on the use of artists' content in the realm of artificial intelligence, announcing that it “has never used artist content to train AI models.” In a bid to address the concerns that have emerged following changes to its terms of service last year, SoundCloud asserts it is making a “formal commitment” that any future application of AI on its platform will prioritise consent, transparency, and artist control.
The clarification comes amidst growing anxieties within the artist community over the implications of SoundCloud’s evolving legal framework. Last February, updates to the platform's terms included clauses implying that artists’ works could be used for training AI technologies unless explicitly stated otherwise. This prompted backlash, with many artists wary that such broad language could permit the deployment of their content without proper authorisation. Elaborating on the company’s previous missteps, CEO Eliah Seton acknowledged in a statement that “the language in the Terms of Use was too broad and wasn’t clear enough. It created confusion, and that’s on us.”
SoundCloud’s current terms will soon be revised to ensure clearer protections for its users. The impending updates will explicitly state that the platform will not employ any artists' content to train generative AI tools—which could potentially mimic their style or likeness—without affirmative consent provided through an opt-in mechanism. This pivot emphasises SoundCloud's intent to place artists at the forefront of new technologies, providing them with agency over how their works are used in conjunction with AI.
Despite these reassurances, some critics find the revisions insufficient. Ed Newton-Rex, a noted tech ethicist who first raised concerns regarding the terms, argues that the amended language could still allow for models trained on artist work that, while not directly replicating their style, might still pose competitive threats in the marketplace. He voiced his scepticism on social media, suggesting that the policy should simply declare, “We will not use Your Content to train generative AI models without your explicit consent.”
The push for clearer boundaries around the use of copyrighted material in AI training is not limited to SoundCloud. A recent U.S. court ruling underscored the legal complexities surrounding this subject, affirming that the unauthorised use of copyrighted works for AI development does not constitute 'fair use.' This precedent highlights the growing need for robust legal frameworks as the evolving tech landscape intersects with intellectual property rights.
Furthermore, awareness is rising across the industry about the importance of consent. Notably, Sony Music Group has proactively contacted numerous AI and music streaming companies, asserting its decision to opt out of any AI training involving its content without prior agreement. As industry giants grapple with the implications of AI, a consensus is forming around the necessity of explicit agreements that safeguard the creative rights of artists while navigating new technological frontiers.
The current landscape signals an urgent call for updated legislation, such as the proposed Generative AI Copyright Disclosure Act in the U.S., which seeks to enhance transparency by requiring companies to disclose the copyrighted works used when training AI. As the music and tech industries evolve, ensuring that artists retain control over their intellectual property remains paramount, particularly in a world increasingly influenced by generative AI.
The path forward for platforms like SoundCloud entails a careful balancing act: harnessing the potential of AI while respecting and protecting the creative rights of artists, ensuring that creators have a decisive voice in how their works are integrated into the rapidly changing digital ecosystem.
Source: Noah Wire Services