Northern Ireland’s political leaders joined a broader international outcry after reports that X’s AI chatbot Grok was being used to create and circulate sexualised images of children and non-consensual explicit images of adults, prompting an Ofcom probe and fresh calls for government action. According to The Independent, First Minister Michelle O’Neill said X had been "woefully inadequate" in responding to the problem and urged wider government intervention, adding that "it’s absolutely disgraceful and disgusting that any social media platform allows this type of illegal content to be created." Alliance leader and Justice Minister Naomi Long described X as "toxic and vile", while SDLP leader Claire Hanna called the site a "cesspit." DUP leader Gavin Robinson acknowledged a "vulnerability" in the proliferation of explicit and engineered content, but cautioned that Grok is not necessarily "any more of a troublesome platform than other examples of AI." Downing Street said the Government was focused on "protecting children" and was keeping its presence on X "under review." [1]

Grok’s image-generation feature has already been curtailed by its owner, xAI, which limited image creation and editing to paying subscribers after a wave of complaints and regulatory attention. Industry and news reports say the change was intended to reduce anonymity and misuse, but critics and regulators have said monetisation alone does not address the core safety failures. According to AP and technology outlets, the restrictions followed intense backlash but left authorities dissatisfied. [4][6]

The controversy has triggered immediate government action abroad as well as at home. Malaysia and Indonesia moved to block Grok and related services after finding safeguards ineffective at preventing the spread of sexually explicit deepfakes, including images of minors. Those bans reflected a growing pattern of national regulators taking emergency measures where platform responses were judged insufficient. [3]

Independent forensic analysis amplified the alarm. A report cited by AP examined tens of thousands of images generated in a short period and found that a measurable fraction involved minors portrayed in sexually suggestive ways; that finding helped to catalyse political and regulatory responses across Europe and Asia. The forensic data reinforced concerns that the feature set and moderation tools were not keeping pace with malicious uses of generative AI. [5]

Britain’s media regulator has now opened an investigation into X’s handling of the Grok tool under the Online Safety Act, signalling the possibility of significant enforcement action if the platform is found to have failed in its duties to protect users. Under the Act, Ofcom’s powers include imposing fines and, for the most serious breaches, pursuing measures that can restrict access to apps and websites in the UK. Officials and campaigning organisations say those statutory powers are crucial because voluntary changes by platforms have so far proved partial and reactive. [7][6]

The episode has prompted political leaders to weigh continued engagement on the platform against public safety concerns. Several Northern Irish politicians said they would keep their X accounts, and monitor developments, for as long as their roles required broad public communication, but stressed that legislative change is the durable remedy they are pursuing. The growing international clampdown and the regulator-level scrutiny in the UK and EU underscore a broader shift towards treating harmful outcomes from generative AI as matters for statutory oversight rather than platform self-regulation. [1][3][4]

📌 Reference Map:

  • [1] (The Independent) - Paragraph 1, Paragraph 6
  • [3] (AP News) - Paragraph 3, Paragraph 6
  • [4] (AP News) - Paragraph 2, Paragraph 6
  • [5] (AP News) - Paragraph 4
  • [6] (Tom's Guide) - Paragraph 2, Paragraph 5
  • [7] (The National) - Paragraph 5

Source: Noah Wire Services