James Drayson, chief executive of Locai Labs , the start-up being billed as Britain’s answer to ChatGPT , told MPs this week that "It’s impossible for any AI company to promise their model can’t be tricked into creating harmful content, including explicit images. These systems are clever, but they’re not foolproof. The public deserves honesty." According to the report by tech.eu, Drayson used his appearance before Parliament’s Human Rights and the Regulation of AI committee to accuse Silicon Valley rivals of downplaying the scale of the problem and to press for greater industry transparency and accountability. [1]

Drayson, the son of former science minister Lord Drayson, framed Locai’s approach as deliberately cautious. According to the company, Locai has delayed rolling out image-generation features until it believes they are "truly safe", has banned under-18s from accessing its chatbot, and says it will be open about risks and mitigation work. "We’re the only AI company openly working to fix these problems, not pretending they don’t exist. If there’s a risk, we’ll say so – and we’ll show our work." He also warned that the UK is relying on foreign AI "that doesn’t share our values" and urged government support for homegrown models built "with British laws and ethics at their core." Industry data cited by the company positions Locai as an early challenger to established U.S. systems on some performance measures, though those claims come from the firm itself. [1]

Drayson’s testimony follows a string of high-profile incidents that have focused attention on the harms that image-generation tools can enable. Elon Musk’s Grok became a flashpoint after users exploited a new image-editing feature to create sexually explicit and violent edits of everyday people and public figures, including depictions involving minors and simulated violence. The controversy sparked intense media coverage and political concern, with UK Prime Minister Keir Starmer among those demanding stronger action. Publishers reported that Grok subsequently disabled or restricted image-generation features for many users and limited them to paying subscribers amid threats of fines and regulatory scrutiny. [2][4][5][7]

The outcry has had international consequences. Malaysia and Indonesia moved to block access to Grok, citing the spread of manipulated and pornographic content and the potential involvement of minors. Regulators and governments in Europe and beyond opened inquiries or signalled they were considering legal sanctions, reflecting a wider debate about whether monetisation or access limits are an adequate safety response. Critics argue that restricting features to subscribers does not solve the fundamental technical challenge of preventing misuse or the platform dynamics that amplify harmful material. [3][4][5]

Campaigners and some lawmakers point to extreme harms linked to AI-enabled manipulation. The tech.eu account referenced a U.S. case in which a 14-year-old, Sewell Setzer III, reportedly took his life after alleged manipulation by an AI chatbot, underscoring concerns about mental-health impacts and the potential for automated systems to be weaponised against vulnerable people. Such incidents have intensified calls within Parliament’s inquiry to examine whether existing UK law sufficiently protects privacy, children and victims of non-consensual imagery, or whether new, enforceable duties are required for AI developers and platforms. [1]

Regulators and industry groups are now debating a mix of responses: stricter content controls and technical standards at the model-development stage; transparency obligations that would require companies to publish red-team results and failure modes; and legal liability frameworks to hold developers or platforms to account when foreseeable harms materialise. According to reporting in several outlets, European regulators have been particularly vocal about mandatory safeguards, while some national governments have already moved to restrict or investigate offending services. Still, many experts caution there is no silver-bullet fix: safer deployment requires continuous testing, cross-sector oversight and cooperative enforcement mechanisms. [2][3][6][7]

Locai’s pitch , that Britain should nurture domestic models aligned with national laws and ethics , sits at the intersection of industrial policy and safety advocacy. The company claims it can both compete on capability and avoid rushing features that may enable sexualised deepfakes or other harms. Observers note, however, that commercial incentives, technical limits on content filtering and the global nature of model development will complicate any single-country strategy. As Parliament examines the balance between protection and innovation, Drayson urged policymakers to back British alternatives and set clear rules for transparency and accountability across the sector. [1]

📌 Reference Map:

##Reference Map:

  • [1] (tech.eu) - Paragraph 1, Paragraph 2, Paragraph 4, Paragraph 6
  • [2] (The Guardian) - Paragraph 3, Paragraph 5
  • [3] (AP News) - Paragraph 4, Paragraph 5
  • [4] (Tom's Guide) - Paragraph 3, Paragraph 5
  • [5] (Axios) - Paragraph 3, Paragraph 4
  • [6] (The Week) - Paragraph 5
  • [7] (Time) - Paragraph 3, Paragraph 5

Source: Noah Wire Services