The United Nations children's agency has urged governments to make the creation, possession and distribution of AI-generated sexual images of children a criminal offence, saying the scale of the problem demands immediate legal and technological responses. According to UNICEF, the practice of using artificial intelligence to fabricate sexualised images of minors has surged, prompting calls to broaden legal definitions of child sexual abuse material to cover synthetic content. (Paragraph 1 sources: UNICEF press release).

UNICEF cited research across 11 countries in which at least 1.2 million children reported having their images manipulated into sexually explicit deepfakes over the past year, a figure the agency used to underline the extent of victimisation and the cross-border nature of the harm. The organisation warned that existing statutes in many jurisdictions do not expressly cover AI-generated material, leaving a gap predators can exploit. (Paragraph 2 sources: UNICEF press release).

The agency singled out so-called "nudification" techniques, where software strips or alters clothing in photographs to produce fabricated nude or sexualised images, and issued a stark appeal to policymakers and platform operators. "The harm from deepfake abuse is real and urgent. Children cannot wait for the law to catch up," UNICEF said in a statement. (Paragraph 3 sources: UNICEF press release).

London has moved ahead of other capitals with new legislation that explicitly criminalises the use of AI tools to produce child sexual abuse images, making the United Kingdom the first country to enact such measures. The law covers creating, possessing or distributing AI systems or manuals intended to generate abusive imagery and carries prison terms, a change ministers framed as closing legal loopholes exploited by offenders. (Paragraph 4 sources: UK government announcement, The Guardian).

Regulators and safety organisations are also being given wider powers to scrutinise AI models. The government has authorised designated bodies to test systems for their capacity to produce abusive imagery, a change welcomed by groups such as the Internet Watch Foundation, which said enhanced testing and legal clarity are essential as AI imagery grows more extreme. (Paragraph 5 sources: The Guardian, IWF).

Beyond criminal law, UNICEF urged developers to adopt safety-by-design practices and digital companies to invest in detection technologies and stronger moderation to curb the circulation of abusive material. International co-operation has become part of the response: the UK and US have pledged to work together on capabilities to detect and limit AI-generated child sexual abuse images and have called on other countries to join the effort. Industry, non-governmental bodies and governments are being positioned as complementary actors in a strategy that blends legislation, technical defences and cross-border collaboration. (Paragraph 6 sources: UNICEF press release, UK–US joint pledge, UK government announcement).


Source: Noah Wire Services