New research and law‑enforcement figures paint a stark picture of how generative artificial intelligence is being turned into a tool for the sexual abuse of children, with advocacy groups and policing bodies warning of a rapid escalation that regulators and platforms have so far struggled to contain. According to UNICEF, coordinated studies and agency analysis show a dramatic rise in AI‑manipulated images and videos that depict children in sexualised scenarios, prompting urgent calls to treat such material as child sexual abuse material regardless of whether the imagery was created from real footage. (Sources: UNICEF; UNICEF UK).
Evidence gathered by charities and watchdogs suggests the scale of the problem has multiplied within months. The Internet Watch Foundation documented a surge in AI‑made child sexual abuse videos in 2025, finding more than a thousand illegal clips in the first half of the year, where previously it had found almost none, and classifying a large share as the most severe category of abuse. Industry and policing sources in several countries report record numbers of online child sexual exploitation cases, with a large proportion now involving AI tools that can produce fabricated but highly realistic imagery at speed. (Sources: IWF; UK policing data).
A joint study conducted by UNICEF with INTERPOL and ECPAT across 11 countries estimates that at least 1.2 million children had their images manipulated into explicit deepfakes in the previous year, a prevalence the agencies summarise as roughly one child in every 25. UNICEF and partners emphasise that the harm extends beyond the images themselves: detection and removal are often slow, and the viral, persistent nature of digital content deepens trauma and the risk of secondary victimisation. (Sources: UNICEF; UNICEF UK).
“When a child’s image or identity is used, that child is directly victimised,” a UNICEF representative said, underlining the agency’s position that AI‑generated sexual content is not a technical novelty but a form of abuse with real victims. Afrooz Kaviani Johnson, Child Protection Specialist at UNICEF Headquarters, added: “Many experience acute distress and fear upon discovering that their image has been manipulated into sexualised content,” stressing the psychological damage and loss of agency that survivors describe. These observations mirror findings by specialist child‑protection organisations, which report widespread shame, stigma and long‑term harm among young people whose likenesses are weaponised online. (Sources: UNICEF; UNICEF UK).
Surveys of public attitudes and offender behaviour suggest that both demand and social permissiveness are fuelling the spread of deepfakes. Research cited by policing bodies found rapidly growing awareness of, and concern about, deepfake abuse, while separate studies indicate that a troubling minority of the public, particularly younger men who consume pornography, view the creation or sharing of intimate deepfakes as morally acceptable or express no objection. Campaigners and senior police officers argue that platforms are not doing enough to prevent the creation and circulation of this material and call for technology firms to deploy proactive detection and automated blocking at scale. (Sources: NPCC reporting; UK policing data; IWF).
Experts point to multiple structural drivers: widely available generative models that can be misused with minimal technical skill, commercial incentives for platforms to roll out attention‑grabbing features without adequate safeguards, and low levels of AI literacy among parents, teachers and children that make early detection and reporting more difficult. The IWF and other groups have urged governments to require AI products to be safe by design, while UNICEF and allied UN bodies call for legal definitions of child sexual abuse material to be updated to explicitly include AI‑generated content and for criminal penalties to cover its creation, possession and distribution. (Sources: IWF; UNICEF; UNICEF UK).
Governments in several jurisdictions are beginning to respond with legal and regulatory measures aimed at curbing the worst abuses, while investigators continue to probe alleged platform failings in high‑profile cases. Yet UN officials stress that lawmaking alone will not be sufficient; enforcement, platform accountability, public education and cultural change are all necessary to reduce demand and protect children online. “Initially, we got the feeling that they were concerned about stifling innovation, but our message is very clear: with responsible deployment of AI, you can still make a profit, you can still do business, you can still get market share,” a senior UN official said, framing the agencies’ argument that child safety can be built into technology without foreclosing its benefits. (Sources: UNICEF; IWF; UNICEF UK).
Source Reference Map
Inspired by headline at: [1]
Sources by paragraph:
- Paragraph 1: [2], [4]
- Paragraph 2: [3], [6]
- Paragraph 3: [2], [4]
- Paragraph 4: [2], [4]
- Paragraph 5: [2], [5]
- Paragraph 6: [6], [2]
- Paragraph 7: [2], [3], [4]
Source: Noah Wire Services