The Children's Commissioner for England, Dame Rachel de Souza, has called for a ban on apps that use artificial intelligence (AI) to create deepfake images depicting the sexual abuse of children. In a recent report, Dame Rachel highlighted growing concern among teenage girls, who are frightened by the availability of apps that can digitally superimpose their faces onto pornographic images without their consent.

Dame Rachel warned that these AI-driven technologies, which are increasingly accessible through mainstream app stores, pose "alarming risks" to young people, who may unwittingly become victims of such crimes. She expressed concern over the ease with which strangers, classmates, or even friends could exploit smartphones to manipulate images and create harmful content.

"Children have told me they are frightened by the very idea of this technology even being available, let alone used," Dame Rachel said. "They fear that anyone – a stranger, a classmate, or even a friend – could use a smartphone as a way of manipulating them by creating a naked image using these bespoke apps."

She also revealed that many girls have resorted to avoiding posting pictures or engaging online altogether to minimise the risk of being targeted by deepfake technology. "We cannot sit back and allow these bespoke AI apps to have such a dangerous hold over children's lives," Dame Rachel added.

While it is already illegal to create or share sexually explicit images of children, the Commissioner pointed out that the AI tools used to generate such images are not themselves against the law. She is therefore urging the government to impose stronger legal responsibilities on developers of generative AI tools that enable the creation of deepfake abuse images, and to formally recognise deepfake abuse in law as a form of sexual violence.

In response, the government stated: "Creating, possessing or distributing child sexual abuse material, including AI-generated images, is abhorrent and illegal. Under the Online Safety Act, platforms of all sizes now have to remove this kind of content, or face significant fines."

The debate raises critical questions about how emerging AI technologies should be regulated to protect the safety and well-being of children in digital spaces.

Source: Noah Wire Services