In November 2023, the UK Safer Internet Centre (UKSIC) reported an emerging concern: schoolchildren in the United Kingdom were using artificial intelligence (AI) image generators to create indecent images of their peers. The development alarmed educators and child protection organisations, prompting calls for immediate intervention to prevent further misuse of the technology.
The UKSIC, a child protection partnership comprising the Internet Watch Foundation (IWF), SWGfL, and Childnet, received reports from schools indicating that students were generating images that legally constitute child sexual abuse material. The reports suggest that children may be experimenting with AI image generators out of curiosity or as part of sexual exploration, without fully understanding the consequences of their actions. Once created, such images can spread quickly online and be misused for blackmail or further exploitation. (saferinternet.org.uk)
David Wright, Director at UKSIC and CEO at SWGfL, emphasised the urgency of addressing this issue. He stated, "We are now getting reports from schools of children using this technology to make, and attempt to make, indecent images of other children." He further noted that while AI technology has significant potential for good, the misuse observed should not be surprising, given the increasing accessibility of AI generators to the public. Wright called for immediate steps to prevent the problem from escalating, urging schools to review their filtering and monitoring systems and to seek support when dealing with such incidents. (saferinternet.org.uk)
Victoria Green, CEO of the Marie Collins Foundation, a charity supporting children affected by technology-assisted sexual abuse, highlighted the potential long-term impact of such imagery. She remarked, "Whatever the intent, the impact of AI generated imagery on the person depicted can be lifelong." Green cautioned that even if the images were not created with harmful intent, once shared, they could end up in the wrong hands, potentially being used by sex offenders to shame and silence victims. (saferinternet.org.uk)
The UK government has also moved to address the wider problem of AI-generated abuse imagery. In September 2023, the UK and the United States pledged to combat the rise of AI-generated child sexual abuse images. Home Secretary Suella Braverman and US Homeland Security Secretary Alejandro Mayorkas committed to developing and funding new capabilities to stop the spread of such imagery, and called for international collaboration to tackle the alarming rise in AI-generated images of children being sexually exploited. (gov.uk)
Then, in February 2025, the UK became the first country to create new AI sexual abuse offences to protect children from predators generating AI images. The legislation made it illegal to possess, create, or distribute AI tools designed to generate child sexual abuse material, with penalties of up to five years in prison. It also criminalised possession of AI 'paedophile manuals' that instruct individuals in using AI to sexually abuse children, punishable by up to three years in prison. (gov.uk)
These legislative measures aim to address the misuse of AI technology in creating and distributing child sexual abuse material, reflecting a concerted effort by UK authorities to protect children from online exploitation.
Source: Noah Wire Services