AI-generated music is rapidly reshaping the soundscape of streaming services, provoking a mix of fascination, frustration and urgent calls for regulation from musicians and listeners alike. According to The Guardian’s recent round-up of reader responses, tracks produced wholly or partly by generative models are already seeding playlists and radio stations on platforms such as Spotify, and opinions among music fans and creators are sharply divided. [1] [6]
The commercial stakes are now clear. Industry data and investigative reporting show AI-created songs are not just experiments: some have charted at the top of Spotify and Billboard lists, and companies monetising synthetic output have generated substantial streaming revenues. Analysis by SeatPick estimates that leading AI “artists” have earned six-figure sums on Spotify this year, with the top AI act pulling in the equivalent of roughly £123,000 from tens of millions of streams. These figures underline why labels, artists and streaming platforms are confronting the issue with greater urgency. [7]
Legal and rights disputes have followed. The label FAMM has asked for a share of royalties after alleging that a viral TikTok track used an AI-generated approximation of singer Jorja Smith’s voice without permission; the recording was later removed from streaming services. According to reporting, FAMM argues both versions of the song infringe Smith’s rights and exploit her collaborators, illustrating how existing copyright frameworks are being tested by synthetic replication. [2]
Platforms are responding in different ways. Deezer has introduced an “AI-generated content” tag and says it has developed detection tools that label fully synthetic tracks, remove them from editorial playlists and algorithmic recommendations, and exclude many suspected fraudulent plays from royalty calculations. The company told AP that up to 18% of daily uploads were fully AI-generated in a recent period; later internal figures put the share of AI submissions higher still, at times reaching one-third of new tracks. Deezer also reports that a large majority of flagged plays are treated as fraudulent and do not generate payments to creators. [5] [4]
The difficulty of distinguishing synthetic from human-made music is borne out by consumer research. A Deezer–Ipsos survey of 9,000 listeners across eight countries found 97% could not reliably tell AI-composed music from human composition, while 73% said tracks should be labelled and many wanted the option to filter out AI content. Those findings reflect the demands voiced by readers in The Guardian, who urged greater transparency, mandatory labelling and opt-out protections for artists whose catalogues have been used to train models. [6] [1]
Musicians and cultural commentators contributing to The Guardian emphasised nuance: some see AI as an assistive tool that democratises access to professional-sounding production for home recordists, while others warned it risks hollowing out livelihoods and diluting the emotional content listeners seek. “There’s no heart in music generated entirely by AI,” one reader wrote, while another described a practical, positive application (using AI to isolate lost stems and complete a mix) that would have been impossible before. These mixed experiences point to a middle ground in which AI is used as an aid rather than a substitute for human creativity. [1]
Evidence of abusive use is mounting. Cases such as the appearance of a fake AI band mimicking King Gizzard & the Lizard Wizard on Spotify, and the proliferation of thousands of synthetic tracks uploaded daily to platforms, illustrate both the scale of the problem and the ways AI can be misused to impersonate artists or game recommendation systems. Spotify removed the imitation of King Gizzard & the Lizard Wizard after concluding it violated its artist impersonation policy, and said no royalties were paid in that instance. Still, the velocity of uploads and the economic incentives create persistent risks for artists and consumers. [3] [4]
Policy responses proposed by contributors and industry actors converge on a few practical measures: enforceable labelling of AI-generated content, clear opt-out mechanisms for artists whose work has been scraped for training, stronger detection and takedown systems for impersonation and fraud, and a transparent payment regime that credits human creators when their material has been used as training data. As The Guardian readers argued, and as platform and survey data confirm, transparency and choice for listeners, together with legal clarity for rights-holders, will be essential if streaming ecosystems are to remain sustainable for living musicians. [1] [5] [6] [2]
Reference Map:
- [1] (The Guardian) - Paragraph 1, Paragraph 6, Paragraph 8
- [2] (The Guardian reporting) - Paragraph 3, Paragraph 8
- [3] (MusicRadar) - Paragraph 7
- [4] (MusicRadar / Deezer data) - Paragraph 4, Paragraph 7
- [5] (AP News / Deezer) - Paragraph 4, Paragraph 8
- [6] (Reuters / Deezer–Ipsos survey) - Paragraph 5, Paragraph 8
- [7] (SeatPick / Music industry analysis) - Paragraph 2
Source: Noah Wire Services