The UK’s safeguarding minister, Jess Phillips, has demanded urgent action from Elon Musk’s X over Grok after the AI service was found to be generating sexualised deepfake images of people, including children. Speaking to The Mirror, Ms Phillips said the police must "relentlessly pursue perpetrators who create or distribute these images for their own sick purposes" and warned that if X fails to act the Government will step in. [1][2]
Phillips described X’s decision to limit Grok’s image-generation features to paying users as "a pathetic half-measure", arguing that "tools that create vile, degrading, non-consensual images should never exist – to paying or non-paying users and X must stop hiding behind excuses and work with the regulator to comply with the law. Lives are being wrecked by this abuse and women and girls bear the brunt." She reiterated that UK law makes creating, possessing or distributing child sexual abuse images, including AI-generated material, illegal and punishable by significant custodial sentences. [1][2]
X’s owner has said punishments will follow for anyone using Grok to produce illegal content. Elon Musk was quoted as saying: "Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content." X’s parent company and xAI have acknowledged lapses in safeguards and said they are urgently fixing them, while limiting some features to subscribers. Industry observers and regulators, however, say monetisation does not address the underlying safety failure. [1][6][5]
Ofcom has made "urgent contact" with X and xAI, signalling regulator concern about whether the platform is meeting its legal duties to protect UK users. According to The Guardian, the watchdog has been investigating reports that Grok produced sexualised images of children and is scrutinising X’s compliance with the Online Safety regime and other legal obligations. Government officials have said they expect Ofcom not to hesitate in using its enforcement powers, which can include fines running into the tens of millions of pounds. [1][5]
The controversy has occurred against a backdrop of recent UK policy moves to tighten protections for young people and to criminalise non-consensual AI-generated intimate material. A government strategy on violence against women and girls includes plans to outlaw "nudification" apps that create fake nude images, and separate legislation being prepared will add offences criminalising the creation and sharing of explicit deepfakes without consent. The Ministry of Justice has said these measures will be brought into force urgently. [3][4][1]
The scale and character of the problem have been documented by investigative reporting. An AI forensics analysis released around the time of Grok’s rollout found a non-trivial share of generated images between late December and early January included minors in sexualised contexts, a finding that helped prompt international scrutiny and calls for stronger safeguards. Critics say platform-level restrictions, moderation policies and swift regulatory action are all needed to prevent further harm. [7][6]
Campaigners and high-profile individuals have also publicly condemned Grok’s outputs and demanded immediate changes. According to The Mirror, celebrities including Maya Jama have called on X to stop generating such pictures of them. Ministers and campaign groups say a combined approach, criminal law, regulator enforcement and platform reform, is required to stop the creation, possession and distribution of non-consensual intimate imagery, whether produced by humans or algorithms. [1][3][5]
📌 Reference Map:
- [1] (Mirror) - Paragraph 1, Paragraph 2, Paragraph 3, Paragraph 5, Paragraph 7
- [2] (Mirror summary) - Paragraph 1, Paragraph 2
- [3] (GOV.UK) - Paragraph 5, Paragraph 7
- [4] (GOV.UK) - Paragraph 5
- [5] (The Guardian) - Paragraph 3, Paragraph 4, Paragraph 7
- [6] (Associated Press) - Paragraph 3, Paragraph 6
- [7] (AP/AI Forensics report) - Paragraph 6
Source: Noah Wire Services