European and national authorities are escalating efforts to tighten controls over AI tools producing non-consensual and sexualised images, prompting calls for global standards and stricter platform accountability following recent incidents involving deepfake content and child exploitation fears.
European regulators and privacy authorities are pressing for a significant widening of AI oversight after a cascade of incidents in which generative systems produced sexualised, non‑consensual imagery. According to reporting by the Associated Press and coverage from Anadolu Agency, the uproar surrounding an AI image tool has focused attention on shortcomings in platform safeguards and prompted officials in Brussels to demand stronger preventive measures from social media companies.
Lawmakers are increasingly saying that using AI to create intimate images without consent should be explicitly outlawed, and that new rules must require greater transparency about how models operate and how decisions are made. Industry and policy briefings indicate proposals under consideration include rapid takedown obligations for harmful content, special protections for minors and mechanisms to make automated systems more auditable and explainable to regulators and users.
The controversy has been driven by a high‑profile episode involving an AI chatbot whose image‑generation function was found to be creating sexualised deepfakes, including renderings that raised concerns about apparent depictions of minors. The Associated Press chronicled how those outputs triggered bans and warnings in several jurisdictions and led the European Commission to open a formal probe under the bloc’s Digital Services Act into whether platform controls were adequate to prevent the dissemination of illegal material.
National authorities have moved in parallel. Spain has launched a criminal investigation into major social networks over alleged facilitation of AI‑generated child sexual abuse material, a step announced by the prime minister and pursued under domestic criminal statutes. The United Kingdom has likewise taken enforcement and legislative steps, with ministers citing potential breaches of online safety rules and signalling that those responsible for supplying tools used to produce illicit content could face criminal liability.
Across the Atlantic, legislators have enacted new requirements to confront non‑consensual intimate imagery. The United States has passed federal legislation that criminalises publishing or threatening to publish such material and imposes explicit removal deadlines for platforms following victim notification, measures that proponents say will close legal gaps exposed by AI‑assisted abuse.
Officials and experts say the episode underscores the need for coordinated international standards so that companies cannot evade stricter national regimes by operating across borders. The European Commission has emphasised systemic obligations for platforms rather than ad‑hoc content removal alone, and calls are growing for harmonised rules on transparency, child protection and swift remediation to prevent AI tools from becoming vectors of exploitation.
Source: Noah Wire Services
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 6
Notes:
The article references recent events, including investigations into Grok's AI-generated deepfakes, with sources dated from late January to early February 2026. However, the cited Associated Press article is four weeks old, which may indicate recycled content, and the Anadolu Agency source is not directly accessible, raising doubts about originality. The piece also appears to draw on a press release, which would ordinarily support a higher freshness score, but the older sources and apparent recycling reduce it here. Further verification is needed to confirm the timeliness of the content.
Quotes check
Score: 5
Notes:
The article includes direct quotes attributed to European Commission President Ursula von der Leyen and EU tech commissioner Henna Virkkunen, but neither can be independently verified through the cited sources: the Associated Press article is four weeks old and the Anadolu Agency piece is not directly accessible. Without independent confirmation, the authenticity and accuracy of these quotes remain uncertain.
Source reliability
Score: 4
Notes:
The article cites the Associated Press and Anadolu Agency, both generally reputable outlets. However, the cited AP article is four weeks old, the Anadolu Agency source is not directly accessible, and the piece appears to rest on a press release rather than independent reporting. This reliance on potentially recycled content and inaccessible sources warrants a lower reliability score.
Plausibility check
Score: 7
Notes:
The article's account of recent investigations into Grok's AI-generated deepfakes aligns with known events, making the core claims broadly plausible. However, the older sources, potential recycling of content, and reliance on an unverified press release mean that the timeliness and accuracy of specific details cannot be fully confirmed.
Overall assessment
Verdict (FAIL, OPEN, PASS): FAIL
Confidence (LOW, MEDIUM, HIGH): MEDIUM
Summary:
The article covers recent investigations into Grok's AI-generated deepfakes, but its reliance on a press release and on sources that cannot be independently verified raises significant credibility concerns. The use of older material and apparently recycled content further weakens its reliability. Given these issues, the piece fails to meet the necessary standards for publication.