UNICEF has urged governments worldwide to make the creation, possession and distribution of AI-generated sexual images of children a criminal offence, after research found the scale of the problem to be vast and growing. According to the organisation's Disrupting Harm Phase 2 study, led by UNICEF Innocenti in partnership with ECPAT International and INTERPOL, at least 1.2 million children across 11 surveyed countries had their images manipulated into sexually explicit deepfakes in the past year. The agency stressed the immediacy of the threat, warning that perpetrators can now produce realistic sexual content involving children without the child's participation or knowledge. (Sources: UNICEF press release, Decrypt).
The household survey behind the report canvassed roughly 11,000 children and found that in some nations the prevalence equated to one in 25 children. Many respondents expressed anxiety about the technology: in several countries up to two‑thirds of young people said they worried AI could be used to generate fake sexual images or videos of them, although levels of concern differed markedly between states. According to UNICEF, this spread and the emotional toll it inflicts mean “Deepfake abuse is abuse, and there is nothing fake about the harm it causes.” (Sources: UNICEF press release, Decrypt).
The appeal for new criminal law comes amid a wave of regulatory scrutiny and high‑profile probes into AI tools and platforms. French prosecutors recently raided X’s Paris offices as part of an inquiry into alleged child sexual abuse material linked to Grok, the platform’s AI chatbot, and other jurisdictions have opened investigations or imposed bans. A report by the Center for Countering Digital Hate cited thousands of alleged sexualised images produced by Grok over an 11‑day period, while the European Commission has launched a formal probe into whether X breached EU digital rules by failing to prevent illegal outputs. UNICEF described these developments as a “profound escalation of the risks children face in the digital environment.” (Sources: Decrypt, Los Angeles Times, Decrypt).
Industry watchdogs and law enforcement groups are reporting sharp rises in AI‑made child sexual abuse material online, compounding the picture painted by UNICEF. The UK's Internet Watch Foundation said it verified more than a thousand AI‑produced videos in the first half of 2025, up from virtually none the prior year, while it and other organisations have flagged thousands of suspected items on forums and dark‑web marketplaces, many confirmed as criminal. Government and policing statistics also show that large volumes of traditional CSAM reports remain a major challenge, and experts warn that generative video models have further lowered the technical barrier for abusers. (Sources: The Guardian, The Guardian, Decrypt).
UNICEF has set out policy and technical measures it says are urgently required. The organisation called for national laws to expand definitions of child sexual abuse material to include AI‑generated content and for states to criminalise creation, procurement, possession and distribution of such material. It also urged mandatory child‑rights due diligence by AI developers, including pre‑release safety testing for open‑source models, and said digital platforms must adopt safety‑by‑design approaches and work to prevent circulation of abusive imagery. UNICEF added: “The harm from deepfake abuse is real and urgent. Children cannot wait for the law to catch up.” (Sources: UNICEF press release, Decrypt).
The recommendations arrive as regulators from the Philippines to Indonesia and Malaysia have taken direct action against specific AI services and as UK and Australian authorities examine platform practices. Legal change, industry design standards and stronger moderation enforcement together form the package UNICEF insists is necessary to curb the phenomenon; without coordinated reform, advocates warn, children will continue to be victimised by imagery created without their consent and often without any realistic route to redress or removal. (Sources: Decrypt, Los Angeles Times, UNICEF).
Source: Noah Wire Services