When the X platform introduced Grok’s image tools, the reaction from many users and regulators was swift and severe. Some, including a vice‑chair of the UK Parliament’s Women and Equalities Select Committee, announced they were quitting the site and called on the government to act. According to the Scotsman, the committee suspended its use of Grok after hearing accounts of women and girls traumatised by AI‑generated “naked” images and manipulated intimate photos that circulated widely on X. [1]

The controversy has not been confined to the UK. Malaysia and Indonesia moved to block Grok after authorities concluded its safeguards were insufficient to prevent sexually explicit, non‑consensual images, including content involving minors, from being created and shared. AP reported the bans as among the first national regulatory responses to the chatbot, citing deep concerns about human rights and digital safety. [2]

Under international pressure, xAI and X have restricted Grok’s image generation and editing features, limiting some capabilities to paying subscribers and saying illegal content will face the same consequences as uploaded material. AP and Tom’s Guide note, however, that regulators and rights groups argue monetisation does not cure the core safety failings and that image features reportedly remained available via Grok’s app and website even after restrictions on X. [3][5]

European authorities have escalated scrutiny: the European Commission has demanded the preservation of internal Grok records through 2026, and regulators in the UK and France have opened enquiries under digital safety laws. Axios reported that UK officials specifically raised alarms about images that could amount to child sexual abuse material appearing on Grok’s public feed, while Ofcom and other agencies weigh possible enforcement. [4][5]

The human cost has been starkly illustrated by survivor testimony and high‑profile examples of deepfakes traced to childhood photos. Time and The Week both described how thousands of sexualised, non‑consensual AI images circulated in early 2026, prompting activists, lawmakers and victims to demand faster takedown rules and stronger platform obligations. Industry observers say Grok’s public sharing of AI edits amplified harm by making altered images readily discoverable. [7][6]

Leading AI figures have also voiced alarm. The Scotsman reported that Geoffrey Hinton, in a Newsnight interview, described Musk as “much less careful” with material around hate speech and child sexual abuse than other AI services, and said “it’s a bit sad to see all the misuse” of a tool with significant scientific potential. Those warnings have bolstered calls for regulatory tightening and clearer accountability from platform owners. [1]

Legal and policy responses are converging. US legislators are advancing measures such as the TAKE IT DOWN Act, which would require swift removal of flagged intimate content, and EU and national regulators are exploring fines and access restrictions under online safety frameworks. Axios and AP emphasise that investigations in multiple jurisdictions, including India, France and Brazil, are ongoing and that enforcement could accelerate as laws and standards are applied to generative AI. [4][3]

Platform defenders argue that user reporting and content moderation remain central to addressing abuse, and X’s Safety account has reiterated commitments to remove illegal material and cooperate with law enforcement. Yet multiple outlets caution that reactive reporting, delayed takedowns and partial monetisation measures are unlikely to prevent further harms without systemic changes to product design, oversight and international cooperation. The debate now centres on whether incremental mitigation will suffice or whether stronger regulatory remedies, including bans or stringent access controls, are required. [5][6][3]

📌 Reference Map:

  • [1] (The Scotsman) - Paragraph 1, Paragraph 6
  • [2] (AP) - Paragraph 2
  • [3] (AP) - Paragraph 3, Paragraph 7, Paragraph 8
  • [4] (Axios) - Paragraph 4, Paragraph 7
  • [5] (Tom’s Guide) - Paragraph 3, Paragraph 4, Paragraph 8
  • [6] (The Week) - Paragraph 5, Paragraph 8
  • [7] (Time) - Paragraph 5

Source: Noah Wire Services