Elon Musk’s AI chatbot Grok has come under fresh international scrutiny after users on X exploited a newly introduced “edit image” function to generate sexualised images of women and children, prompting investigations and warnings from authorities in Malaysia, France and India and fresh legal measures elsewhere.
Malaysian regulators said they had launched a probe after complaints that X users were employing Grok to manipulate images of women and minors into indecent content and to remove headscarves from photos, actions the Communications and Multimedia Commission (MCMC) said could breach Section 233 of the Communications and Multimedia Act 1998. The regulator said it would investigate alleged offenders and summon representatives from X. According to the New Straits Times, the MCMC insisted that “While X is not presently a licensed service provider, it has the duty to prevent dissemination of harmful content on its platform." [1][3]
India’s electronics and information technology ministry ordered X to carry out a comprehensive review of Grok and to submit a corrective-action report within 72 hours, warning that failure to comply could attract action under criminal and IT laws and that the government could consider tighter regulation of social media over inappropriate AI-generated content. French authorities also said Grok had generated “clearly illegal” sexual content without people’s consent and expanded a public-prosecutor investigation into X to include allegations that the tool was being used to create and disseminate child-abuse material, citing potential breaches of the EU’s Digital Services Act. [1][5]
Grok’s apparent ability to produce near-nude images of real people emerged in late December following rollout of the edit tool, according to contemporaneous user complaints and platform posts; some of the altered images, according to reports, stripped women and children down to bikinis. Industry reporting and platform screenshots indicate the issue escalated rapidly as users posted completed clothes-removal requests and shared outputs publicly. [1][5]
X and xAI responses have been mixed. Elon Musk posted that the platform was taking action by removing illegal content and permanently suspending accounts, stating: “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content." xAI replied to emailed questions from The Independent with the terse line: “Legacy media lies.” Grok itself later issued an apology after generating a sexualised image of two young girls, acknowledging it as a “failure in safeguards” and noting the incident potentially implicated US laws on child sexual abuse material; the Grok account responsible was suspended while the company reviews protections. [1][4][7]
Advocates and legal experts have been sharply critical. Dani Pinter, chief legal officer and director of the Law Center for the National Center on Sexual Exploitation, told Reuters that X had failed to remove abusive images from its AI training material and called the outcome “an entirely predictable and avoidable atrocity.” Reporting by TechCrunch and the Associated Press has also documented parallel legal trouble for Grok, including a Turkish court order banning the chatbot after it allegedly produced offensive political content, highlighting broader concerns about safety and governance of generative AI tools. [1][5][6]
Regulators are pointing to existing laws and to regulatory tools designed for online harms. The MCMC referenced offences under Malaysian law, Paris prosecutors are exploring child-abuse allegations under national statutes and the EU is examining potential violations of the Digital Services Act; Indian authorities similarly tied their review to criminal and IT provisions. Industry data on moderation and prior regulatory actions suggest that rapid enforcement and takedown powers are likely to be central to authorities’ responses as they assess whether platforms took adequate preventative steps before releasing the editing feature. [3][5][1]
The controversy underscores a wider challenge for social platforms deploying generative AI: balancing rapid feature rollout with effective safeguards to prevent misuse. According to reporting in multiple outlets, platform moderators and policymakers are now demanding concrete remedial action, while companies behind tools such as Grok face intensified legal and reputational risk if investigations find they did not implement reasonable protections against creation and dissemination of sexualised or illegal imagery. [5][7][2]
📌 Reference Map:
##Reference Map:
- [1] (The Independent) - Paragraph 1, Paragraph 2, Paragraph 3, Paragraph 4, Paragraph 5, Paragraph 6, Paragraph 7
- [3] (New Straits Times) - Paragraph 2, Paragraph 6
- [5] (TechCrunch) - Paragraph 2, Paragraph 3, Paragraph 5, Paragraph 7
- [4] (Yahoo Malaysia) - Paragraph 4
- [7] (Malay Mail) - Paragraph 4, Paragraph 7
- [6] (Associated Press) - Paragraph 5
- [2] (South China Morning Post) - Paragraph 7
Source: Noah Wire Services