Spain has asked prosecutors to examine whether major social media companies have committed criminal offences by permitting their artificial intelligence systems to produce and spread sexual images of children, the government announced on Tuesday. The cabinet decision targets X, Meta and TikTok and follows an expert report the administration said identified the growing use of deepfakes and manipulated images to create explicit material that can be disseminated rapidly and opaquely online. According to the government, the action is intended to protect children’s safety and end what it described as the impunity of large platforms.
The move comes amid separate, high-profile probes across Europe into AI-driven image generation. Ireland’s Data Protection Commission has opened an inquiry into X to determine whether the Grok chatbot and related generative features breached EU data-protection rules by handling personal data to create sexualised images, including some that appear to involve minors. The European Commission, meanwhile, has launched its own investigation into Grok and X’s recommender systems after researchers reported that millions of sexualised images were generated in a short period, a subset of which allegedly depicted children.
Madrid said its request to the attorney general would seek criminal scrutiny of platforms that, in relation to such abusive images, “allow their massive dissemination with a speed and opacity that greatly hinders detection and prosecution, while also facilitating the formation of networks that produce, share, and monetise this content”, language drawn from the government’s expert analysis. Officials argued that algorithms can amplify harm by making abusive material harder to trace and by enabling organised networks to exploit and profit from it.
The Spanish package extends beyond criminal referrals. The government is preparing legislation to hold tech firms accountable for hateful and harmful content and intends to prohibit social media use by under-16s, placing Spain among a growing number of states that are tightening rules on children’s access to online platforms. Canberra last year enacted a ban for those under 16, and lawmakers in Britain, France and Greece have also debated stricter controls in response to rising public concern about digital harms to young people.
Tech companies contacted by the Spanish government offered familiar assurances that child sexual abuse material is forbidden on their services and that they deploy systems to detect and remove such content. Meta said it could not comment on the proposed criminal inquiry without further details but reiterated a strong stance against child sexual exploitation and non-consensual intimate imagery. TikTok described such material as “abhorrent” and said it invests in technology to prevent abuse. X was also approached by the authorities, and European regulators are already probing whether its safeguards are sufficient.
The Spanish initiative has provoked robust pushback from some platform owners and founders, who argue that the proposed rules risk curbing online freedoms and amount to regulatory overreach. The government rejected such characterisations, saying that large global platforms should not be allowed to flood citizens’ devices with propaganda or to shelter criminal behaviour under the cover of technological complexity. With parallel EU and national inquiries under way, the episode is likely to sharpen debates over where responsibility for AI-generated content should lie and how far regulation should go to prevent digital sexual violence against children.
Source: Noah Wire Services