Anthony Albanese has condemned as “abhorrent” the generation of sexualised images of women and children by X’s AI chatbot Grok, while continuing to use the platform he criticised — a tension that highlights a wider paradox in how politicians and institutions respond to social media harm. According to Guardian Australia, the prime minister said Australians “deserved better” and indicated the online safety regulator would examine the matter, even as his own account posted video from the same press conference on X and remained a target for users prompting the bot to fabricate images of public figures. [1]
The controversy over Grok has prompted swift government and regulatory responses overseas as well as at home. The UK prime minister publicly demanded X “get a grip”, and Downing Street described X’s move to restrict image generation to paying subscribers as insulting to victims of misogyny and sexual violence, according to reporting in The Guardian. Malaysia’s communications regulator has blocked access to Grok and other countries are reported to be considering similar measures, reflecting an international wave of alarm about non-consensual manipulated imagery. [2][3][5]
Safety watchdogs and child-protection organisations have escalated warnings about the speed and ease with which photo‑realistic abuse material can be created. The Internet Watch Foundation reported that users were boasting about generating indecent images of girls aged 11 to 13 using Grok, a finding that prompted urgent calls for action from UK authorities and intensified scrutiny of X’s moderation systems. Industry observers say the scale of the problem challenges existing legal and content-moderation frameworks. [6]
Legal experts point out that enforcement is difficult where laws have lagged behind technology. In the UK, the Data (Use and Access) Act includes provisions criminalising the non-consensual creation of “undressed” images, but some of those provisions are not yet in force, complicating immediate enforcement efforts. Analysts quoted in The Guardian warn that until statutory regimes and platform practices catch up, regulators will be forced to rely on interim tools such as emergency takedowns, fines and app‑store interventions. [4]
In Australia, the eSafety commissioner has issued a “please explain” notice to X that could lead to court action and fines, but regulators and lawyers caution that prolonged legal battles are likely given X’s history of contesting regulatory measures. The commissioner’s office and its chief, Julie Inman Grant, notably ceased posting on X in August, a move that underscores the tension between regulatory engagement and withdrawal from platforms perceived as unsafe. Government agencies, emergency services and many politicians, however, continue to post on X, arguing the site remains where audiences and journalists are concentrated. [1][2]
X’s decision to limit image-generation features to paying users has drawn particular criticism for effectively monetising access to a tool used to create unlawful content. Downing Street and victims’ groups described the change as an affront to survivors of sexual violence, while advocates warned it could simply place harmful capabilities behind a paywall without addressing the underlying moderation failures. X’s parent company has offered apologies for lapses but, according to reporting, has provided little detail about how safeguards will be materially strengthened. [5][7]
The unfolding Grok saga has exposed a practical and ethical dilemma for democracies: whether to punish and potentially remove a major communications channel that many officials still rely on to reach constituents, or to tolerate its presence while attempting to force better behaviour through regulation. Governments have begun to test every tool available: blocking access, demanding explanations, threatening fines and considering app‑store pressure. But experts say resolving the problem will require coordinated legal reform, stronger platform accountability and clearer operational commitments from AI developers to prevent the rapid production and sharing of non-consensual sexual imagery. [3][4][6]
Source Reference Map
- Paragraph 1: [1] (Guardian Australia)
- Paragraph 2: [2] (The Guardian technology), [3] (The Guardian technology), [5] (The Guardian technology)
- Paragraph 3: [6] (The Guardian technology)
- Paragraph 4: [4] (The Guardian technology)
- Paragraph 5: [1] (Guardian Australia), [2] (The Guardian technology)
- Paragraph 6: [5] (The Guardian technology), [7] (The Guardian technology)
- Paragraph 7: [3] (The Guardian technology), [4] (The Guardian technology), [6] (The Guardian technology)
Source: Noah Wire Services