X has restricted image generation and editing on its Grok AI to paying subscribers after a wave of criticism over the tool’s use to create sexually explicit deepfakes, particularly images that digitally remove clothing from women and, in some reports, appear to involve minors. According to The Daily Jagran, the change followed mounting public outrage and reported pressure from UK authorities. [1]
The limitation is platform-specific: paid, verified X accounts can still request image edits within X, while non-paying users retain access to Grok’s image features via the standalone Grok app and website. That distinction has prompted questions about how effective the move will be in stemming abuse. Industry reporting noted that the feature’s prior design, including the public display of generated images and a so-called "spicy mode", exacerbated the spread of explicit content. [1][4][2]
Government and regulatory bodies in Europe and beyond have reacted strongly. The European Commission condemned the images as "illegal," "appalling," and "disgusting," and has opened inquiries; it has also demanded that X retain all Grok-related data through 2026 as part of a broader probe under digital safety rules, according to AP and Axios. Investigations are reported to be under way in multiple countries, including France, Malaysia, India and Brazil. [2][3]
In the UK, Ofcom and government officials have warned of fines, regulatory action or restrictions under the Online Safety Act if platforms fail to address intimate-image abuse. The Guardian explained that while the Online Safety Act imposes duties on platforms to act, other measures, such as a separate Data (Use and Access) Act that would criminalise creating or requesting "nudified" images, are not yet in force, complicating enforcement against individual creators. [1][5]
Women who say they were targeted by AI-driven edits have described serious harm. The Daily Jagran and BBC reporting cited affected women who said they felt humiliated and dehumanised, contributing to calls for tighter governance of generative AI tools. Those accounts helped spur political reactions, with UK Prime Minister Keir Starmer saying on radio: "X need to get their act together and get this material down. And we will take action on this because it’s simply not tolerable." [1][6]
X’s decision to restrict image capabilities to subscribers has reduced some explicit outputs, but critics and regulators argue the changes do not go far enough because alternate routes on X’s desktop site and app, and the separate Grok app, still permit creation or sharing of such images. Reporting by AP and Axios characterised the restrictions as a partial response that left legal and safety questions unresolved. [2][3]
X has not published a detailed public justification for the policy shift; company statements cited in coverage framed the measure as a product adjustment to address misuse. Journalistic coverage underscores a wider debate about platform responsibility, the speed of AI product roll-outs, and whether current laws and enforcement mechanisms are adequate to prevent intimate-image abuse facilitated by generative AI. [1][2][5]
## Reference Map
- [1] (The Daily Jagran) - Paragraph 1, Paragraph 2, Paragraph 5, Paragraph 7
- [2] (AP) - Paragraph 2, Paragraph 3, Paragraph 6, Paragraph 7
- [3] (Axios) - Paragraph 3, Paragraph 6
- [4] (Al Jazeera) - Paragraph 2
- [5] (The Guardian) - Paragraph 4, Paragraph 7
- [6] (Dexerto) - Paragraph 5
Source: Noah Wire Services