European regulators and privacy authorities are pressing for a significant widening of AI oversight after a cascade of incidents in which generative systems produced sexualised, non‑consensual imagery. According to reporting by the Associated Press and coverage from Anadolu Agency, the uproar surrounding an AI image tool has focused attention on shortcomings in platform safeguards and prompted officials in Brussels to demand stronger preventive measures from social media companies.

Lawmakers increasingly argue that using AI to create intimate images without consent should be explicitly outlawed, and that new rules must require greater transparency about how models operate and how decisions are made. Industry and policy briefings indicate that proposals under consideration include rapid takedown obligations for harmful content, special protections for minors, and mechanisms to make automated systems more auditable and explainable to regulators and users.

The controversy has been driven by a high‑profile episode involving an AI chatbot whose image‑generation function was found to be creating sexualised deepfakes, including renderings that raised questions about the presence of minors. The Associated Press chronicled how those outputs triggered bans and warnings in several jurisdictions and led the European Commission to open a formal probe under the bloc's Digital Services Act into whether platform controls were adequate to prevent the dissemination of illegal material.

National authorities have moved in parallel. Spain has launched a criminal investigation into major social networks over alleged facilitation of AI‑generated child sexual abuse material, a step announced by the prime minister and pursued under domestic criminal statutes. The United Kingdom has likewise taken enforcement and legislative steps, with ministers citing potential breaches of online safety rules and signalling that those responsible for supplying tools used to produce illicit content could face criminal liability.

Across the Atlantic, legislators have enacted new requirements to confront non‑consensual intimate imagery. The United States has passed federal legislation that criminalises publishing or threatening to publish such material and imposes explicit removal deadlines for platforms following victim notification, measures that proponents say will close legal gaps exposed by AI‑assisted abuse.

Officials and experts say the episode underscores the need for coordinated international standards so that companies cannot evade stricter national regimes by operating across borders. The European Commission has emphasised systemic obligations for platforms rather than ad‑hoc content removal alone, and calls are growing for harmonised rules on transparency, child protection and swift remediation to prevent AI tools from becoming vectors of exploitation.


Source: Noah Wire Services