The children's commissioner for England, Dame Rachel de Souza, has called for a total ban on artificial intelligence (AI) apps that create sexually explicit images of children. In a report published on Monday, Dame Rachel highlighted growing concern over the unchecked use of these apps, which she said disproportionately target girls and young women, with some "bespoke" applications designed to work only on female bodies.
Dame Rachel expressed alarm over the phenomenon of "nudification," in which AI technology edits photos of real people to make them appear naked, including generating sexually explicit deepfake images of children. She warned that children are already altering their online behaviour in response, with many girls refraining from posting images or engaging online for fear that acquaintances or strangers could manipulate their photos using such technology. Speaking about the impact on children, she said, "They fear that anyone – a stranger, a classmate, or even a friend – could use a smartphone as a way of manipulating them by creating a naked image using these bespoke apps."
The commissioner called on the government to take stronger action, including imposing legal responsibilities on developers of generative AI tools to identify and mitigate risks to children. She also urged the establishment of a systematic process to remove sexually explicit deepfake images of children from the internet and proposed that deepfake sexual abuse should be formally recognised as a form of violence against women and girls.
Paul Whiteman, general secretary of the school leaders' union NAHT, endorsed these concerns, stating, "This is an area that urgently needs to be reviewed as the technology risks outpacing the law and education around it."
In response, a government spokesperson reiterated that creating, possessing, or distributing child sexual abuse material is illegal and described the content as "abhorrent." The spokesperson noted that the government had introduced new offences designed to tackle AI-generated child sexual abuse material, making it illegal to possess, create, or distribute AI tools meant to produce such content. "Under the Online Safety Act platforms of all sizes now have to remove this kind of content, or they could face significant fines," the spokesperson added. They also highlighted that the UK is the first country globally to introduce specific AI child sexual abuse offences.
Figures from the Internet Watch Foundation, a charity partly funded by technology companies, show a sharp rise in reports of AI-generated child sexual abuse material: 245 reports in 2024, up from 51 in 2023, an increase of 380%.
Additionally, media regulator Ofcom recently published the final version of its Children's Code, which imposes legal requirements on platforms hosting pornography or content encouraging self-harm, suicide, or eating disorders. These platforms must now implement more robust age verification systems or face significant fines. Dame Rachel, however, criticised the code, claiming it "prioritises business interests of technology companies over children's safety."
These concerns reflect a wider tension between emerging digital technologies and existing legal frameworks, with growing calls for more comprehensive regulation of AI tools in the online environments children use.
Source: Noah Wire Services