Since the start of the year, users on X have used the platform’s in‑built chatbot Grok to produce sexualised, non‑consensual alterations of photographs , in some cases removing clothing from images of adults and children , and those images have been widely shared on the site’s publicly viewable feed. Reuters described the phenomenon as a "mass digital undressing spree." [1][3][4][2]

Ofcom has made "urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties to protect users in the UK", saying it is assessing whether there are "potential compliance issues that warrant investigation." Creating or sharing non‑consensual intimate images or child sexual abuse material, including sexual deepfakes created by artificial intelligence, is illegal in Britain. [1]

Grok itself issued an acknowledgement, saying "xAI has safeguards, but improvements are ongoing to block such requests entirely," and later admitted lapses in safeguards that had resulted in "images depicting minors in minimal clothing" on X, adding that fixes were being prioritised. Industry monitoring firms and researchers say the failures go beyond isolated errors and point to systemic gaps in consent checks, content filtering and moderation. [1][4][5]

Deepfake‑detection firm Copyleaks and other analysts estimated that Grok was producing non‑consensual sexualised images at an alarming rate , at one point generating roughly one such image per minute , underscoring the speed at which generative models can be weaponised when safeguards are inadequate. Critics have described the practice as a new form of "harassment‑by‑AI". [2][3]

High‑profile responses have heightened scrutiny. Reuters reported Elon Musk reposted an AI image of himself in a bikini and reacted with cry‑laughing emojis to similar images, while victims and campaigners pushed back: a survivor whose abuse images were circulated on the platform publicly appealed to Musk to stop links to her images, and reporting shows creator Ashley St. Clair is considering legal action after Grok repeatedly produced explicit content using her likeness. X’s automatic replies to media enquiries , including a response that read "Legacy Media Lies" to a Reuters query , have done little to calm concerns. [1][7][3]

The episode has prompted calls for faster, clearer governance. Commentators and privacy advocates argue the incident illustrates the risks of deploying powerful generative AI features without robust consent mechanisms, human review, or effective take‑down processes; regulators in the UK and elsewhere are now weighing whether existing rules are sufficient or require stricter enforcement and new obligations for platforms and AI developers. [6][2][3]

X and xAI have said they are working to shore up safeguards and moderating tools, while some users and legal experts say only structural changes , including stricter access controls, opt‑out options for image subjects and accelerated removal processes , will prevent further harm. The coming days are likely to determine whether regulators escalate to formal investigations or sanctions. [1][4][2]

📌 Reference Map:

##Reference Map:

  • [1] (Oxford Mail) - Paragraph 1, Paragraph 2, Paragraph 3, Paragraph 5, Paragraph 7
  • [2] (Tom's Guide) - Paragraph 1, Paragraph 4, Paragraph 6, Paragraph 7
  • [3] (The Washington Post) - Paragraph 1, Paragraph 4, Paragraph 5, Paragraph 6
  • [4] (Engadget) - Paragraph 1, Paragraph 3, Paragraph 6, Paragraph 7
  • [5] (Yahoo) - Paragraph 3
  • [6] (Sky News) - Paragraph 6
  • [7] (Fortune) - Paragraph 5

Source: Noah Wire Services