Malaysia’s communications regulator has launched legal proceedings against the social media platform X and its AI unit xAI, saying the companies failed to remove AI-generated content that it alleges is obscene and harmful to users. The Malaysian Communications and Multimedia Commission (MCMC) said it had identified the misuse of Grok to generate and disseminate harmful content, including “obscene, sexually explicit, indecent, grossly offensive, and nonconsensual manipulated images.” “Content allegedly involving women and minors is of serious concern... Such conduct contravenes Malaysian law and undermines the entities’ stated safety commitments,” the commission said, according to The Manila Times. [1]

Regulators in Kuala Lumpur said they had issued formal notices to X and xAI demanding the removal of such material and the implementation of technical and moderation safeguards, but received what they regard as inadequate responses and so moved to court. The MCMC has described its action as a preventive and proportionate measure while legal and regulatory processes are ongoing, and has warned that access to Grok will remain restricted until demonstrable compliance with Malaysian law is shown. [6],[2]

The controversy centres on Grok Imagine, the text-to-image function within the Grok chatbot, and a so-called “spicy mode” that critics say has enabled users to create sexualised and non-consensual images, including deepfakes of women and, in some cases, minors. A report cited by news organisations found that a non-trivial share of sampled outputs contained sexually suggestive depictions of minors, sparking alarm and regulatory responses across multiple jurisdictions. According to the Associated Press and investigative reports, Grok’s image tool produced problematic outputs despite recent measures to limit image generation to paying users. [4],[2]

The Malaysian action is part of a wider international backlash. Indonesia also temporarily blocked Grok, and regulators in the United Kingdom, the European Union, France, India and several other countries have opened probes or called for curbs on the tool, with Britain’s technology minister promising legislation to criminalise “nudification apps” and Ofcom investigating potential breaches of child-protection rules. Governments have warned that current user-initiated reporting systems alone are insufficient to prevent the creation and spread of illegal material. [3],[5],[2]

xAI and X have largely declined detailed public comment; media enquiries have reportedly been met with what appears to be an automated dismissive reply. Elon Musk has publicly criticised some government responses, describing them in heated terms, while his firms have said they are taking steps to restrict image-generation features to identifiable paying users as part of a mitigation strategy. Independent experts and campaigners say such measures fall short because identifying users neither removes the underlying capability to produce harmful deepfakes nor reduces the ease with which such images can be shared. [2],[5]

Malaysian law provides broad powers to police online harms and prohibits obscene and pornographic material, with regulators pointing to specific legislation when explaining their actions. The MCMC has urged the public to report harmful content and, where appropriate, to file police reports, while signalling it remains open to engagement with X Corp and xAI provided the companies demonstrate compliance. Observers say the case will test how domestic legislation and emerging international norms for AI safety can be enforced against cross-border technology services. [6],[5]

As investigations proceed, industry data and forensic analyses cited by reporters suggest the Grok episode highlights wider regulatory gaps around generative AI tools that can produce realistic but harmful imagery at scale. Policymakers in multiple jurisdictions are now weighing a mix of enforcement actions, platform obligations and new criminal laws to deter misuse, even as technology firms argue for measured approaches that preserve innovation and free expression. The outcome of Malaysia’s legal action will be watched closely as regulators seek practical remedies that go beyond takedown notices to address systemic design risks. [4],[3],[2]

Source: Noah Wire Services