1. Body Copy

The UK’s communications regulator has opened a formal investigation into Elon Musk’s social media platform X after allegations that its AI chatbot Grok was used to generate and share sexualised, non‑consensual images, including material that may amount to child sexual abuse. Ofcom said it will assess whether X breached the Online Safety Act 2023, with potential sanctions ranging from fines to a ban on the service in the UK if breaches are found. According to The Guardian and the Associated Press, ministers and regulators have described the content as “vile” and potentially illegal, fuelling urgent scrutiny of the platform’s safety controls.

The controversy has spilled into parliament. Oxford East MP Anneliese Dodds raised the issue during questions, citing concerns about an “organised campaign of intimidation against female staff at Ofcom” and urging condemnation of the images’ circulation. Oxford Mail reported Dodds as saying: “I agree with the Secretary of State. The production of these disgusting images amount not to freedom of speech but to freedom to abuse, harass and commit crime.” Ministers have echoed that tone: Technology Secretary Liz Kendall characterised the content as “vile” and insisted no one should live in fear of having their image sexually manipulated by technology.

Government officials and regulators have stressed the gravity of claims that some generated images included sexualised depictions of children. The Guardian and AP report that descriptions aired in parliament referenced alleged criminal imagery of children as young as 11, and that such material would plainly fall within existing criminal offences and the Online Safety Act’s remit. Ofcom’s investigation is explicitly tasked with determining whether X’s systems and moderation meet the statutory duties to protect users from illegal and harmful content.

X has responded with product changes, restricting Grok’s image‑creation features to paying subscribers on the platform, a move that critics say merely monetises abuse rather than preventing it. The Associated Press and TechRadar note that the feature reportedly remains accessible via Grok’s separate app and website for some free users, and rival UK AI firms have publicly argued that no current image generator can be rendered wholly misuse‑proof without far stronger safeguards. Industry figures describe the subscription restriction as insufficient while legal and regulatory processes proceed.

The fallout has been international. Malaysia and Indonesia temporarily blocked Grok amid concerns about its misuse to produce explicit, non‑consensual images; those governments cited violations of privacy and human dignity in their decisions. Domestically, several politicians and public figures have publicly quit X in protest, arguing they will no longer drive traffic to a site “that actively enables sexual exploitation of women and children.” The global response underscores how quickly trust in new generative tools can collapse when safety mechanisms are seen to fail.

The episode has sharpened calls for tougher regulation of AI image tools and clearer enforcement of existing laws. British AI firms and safety advocates are urging radical transparency and stricter access controls; some commentators believe the UK should use the Online Safety Act and forthcoming legislative measures to set a global standard. Reporting in Windows Central and TechRadar indicates that ministers are considering rapid enforcement and legal measures to criminalise non‑consensual intimate image generation where necessary, while also warning platforms they cannot “self‑regulate” their way out of responsibility for harms.


Source: Noah Wire Services