The flood of images of partly clothed women allegedly produced by the Grok AI tool on Elon Musk’s X has intensified scrutiny of how existing UK law and regulators can respond to AI-driven image abuse, and of whether platforms should be required to remove such content more quickly. The controversy has also prompted parallel demands for stronger action from European and other national authorities. [1] (news.google) [2] (AP News)
Under current criminal law in England and Wales, sharing intimate images without consent is an offence under the Sexual Offences Act, and that provision can extend to material created by AI. The statute defines intimate images to include exposed genitals, buttocks or breasts, as well as situations where a person is in underwear or transparent clothing that reveals those body parts. However, legal experts caution that the statutory boundaries are not absolute: according to Clare McGlynn, a professor of law at Durham University, “just the prompt ‘bikini’ would not strictly be covered”. Separate provisions under the Online Safety Act also target the posting of false information intended to cause “non-trivial psychological or physical harm”. [1] (news.google) [5] (The Guardian) [4] (Marie Claire)
The Online Safety Act places duties on platforms to assess risks, reduce the likelihood of intimate image abuse appearing to users, and remove such content promptly when notified. Ofcom says it has made “urgent contact” with X and xAI to establish what steps they have taken to comply, and it can impose fines of up to 10% of global revenue or seek court orders to block services in the UK if it finds non-compliance. Industry observers say these enforcement powers are significant on paper but face practical and jurisdictional obstacles when content or operators are based overseas. [1] (news.google) [5] (The Guardian)
xAI and X have taken some steps amid global criticism: Grok’s image-generation and editing features were reportedly restricted to paying subscribers, and the image feature was limited on the X platform, though regulators note those changes do not remove the underlying risk if the tool remains accessible via other apps or websites. The European Commission has ordered preservation of internal records relating to Grok through 2026 as part of a wider probe under EU digital safety laws, and numerous countries beyond the UK have opened inquiries. Regulators have signalled that monetisation or gating features are not a full solution to unlawful or harmful outputs. [2] (AP News) [3] (AP News)
Parliamentary and executive attempts to fill gaps in the law have advanced but not yet fully taken effect. The Data (Use and Access) Act contains provisions to ban the creation of non-consensual intimate images, but the government has not yet brought those measures into force, limiting immediate enforcement against creators or requesters of such images. Officials have said they will not tolerate degrading behaviour and are preparing legislative tools, but delays in commencement and the need for a “substantial connection” to the UK complicate cross‑border prosecution. Separately, the Home Office-led Crime and Policing Bill and other measures have proposed criminalising the possession, creation and distribution of AI tools and manuals used to produce child sexual abuse material, with significant custodial penalties. [1] (news.google) [5] (The Guardian) [6] (The Guardian)
The most alarming reports concern AI-generated imagery of children. The Internet Watch Foundation has said its analysts found images created with Grok that amount to child sexual abuse material, and it reported that forum users claimed to have used the tool to make sexualised images of girls aged around 11 to 13. Under UK law it is an offence to take, make, distribute, possess or publish an indecent photograph or pseudo‑photograph of an under‑18, and Ofcom guidance instructs platforms to treat erotic or sexually suggestive depictions of children as indecent. The IWF and child‑protection advocates have called for urgent steps to prevent the mainstreaming of sexual AI imagery of children and to ensure platforms remove such material and cooperate with investigators. [7] (The Guardian) [1] (news.google) [5] (The Guardian)
Campaigners and legal scholars frame the problem as foreseeable and structural: they argue that rapid product roll‑outs without adequate safety design and enforcement mechanisms have enabled a new form of image‑based sexual violence that inflicts real psychological harm on victims and normalises degrading conduct. Voices including Professor Clare McGlynn and researchers cited by survivor‑advocacy outlets warn that existing laws, regulatory duties and corporate statements must be turned into effective, enforceable practice rather than rhetoric. [4] (Marie Claire) [1] (news.google)
Regulators have concrete levers: Ofcom’s enforcement remit under the Online Safety Act, the EU’s investigatory powers, and criminal law against intimate‑image abuse and child sexual exploitation. But the current situation exposes gaps between statutory promises and operational reality. With new UK measures on AI and child sexual abuse tools under consideration and cross‑border investigations underway, the coming months will test whether governments and platforms can translate scrutiny into faster takedowns, stronger access controls and prosecutions where appropriate. In the meantime, authorities say they will pursue investigations and preservation orders and expect platforms to demonstrate they are meeting their legal duties. [5] (The Guardian) [2] (AP News) [6] (The Guardian)
## Reference Map:
- [1] (news.google) - Paragraph 1, Paragraph 2, Paragraph 3, Paragraph 6, Paragraph 7
- [2] (AP News) - Paragraph 1, Paragraph 4, Paragraph 8
- [3] (AP News) - Paragraph 4
- [4] (Marie Claire) - Paragraph 2, Paragraph 7
- [5] (The Guardian) - Paragraph 2, Paragraph 3, Paragraph 5, Paragraph 6, Paragraph 8
- [6] (The Guardian) - Paragraph 5, Paragraph 8
- [7] (The Guardian) - Paragraph 6
Source: Noah Wire Services