Political pressure is mounting in Westminster for binding rules to govern the most powerful artificial intelligence systems, with more than 100 parliamentarians publicly urging the government to act. According to the original report, a cross‑party group, supported by former defence and AI ministers, warned that unregulated superintelligent models could pose risks to national and global security. [1][2][3]

The campaign, coordinated by the non‑profit Control AI and backed by tech figures including Skype co‑founder Jaan Tallinn, asks Prime Minister Keir Starmer to adopt a firmer, more independent stance on regulation rather than following the US approach. Control AI says its aim is to ensure mandatory safeguards, including testing standards and limits on self‑training systems, are put in place for frontier models. [1][3][5]

Frontier AI scientists and academic experts cited by campaigners, such as Yoshua Bengio, have warned that governments are trailing developers and that the world may need to decide by 2030 whether to allow highly advanced systems to self‑train. Engagement records published by Control AI show extensive briefings for parliamentarians last year, with roughly a third of those consulted indicating support for binding measures. [1][3][7]

Campaigners are calling for a package of measures: global agreements to limit development of superintelligence, mandatory independent testing standards, and a watchdog to scrutinise public‑sector AI use. The group argues such steps are necessary because private companies currently set the pace with limited external oversight. [1][3]

Ministers and government officials counter that AI is already subject to existing regulatory frameworks and that a proportionate, innovation‑friendly approach remains appropriate. Critics, however, say that relying on current laws lacks the urgency required by rapid advances in model capabilities and that new, binding rules are needed within the next two years. [1][2]

Legislative momentum is also visible in the House of Lords, where Conservative peer Lord Holmes of Richmond has introduced the Artificial Intelligence (Regulation) Bill to establish an AI Authority tasked with assessing and monitoring risks to the economy and society. According to reports, proponents frame the measure as a human‑centred response to harms including online abuse and other social impacts of technology. [4]

Control AI has published guidance for civic engagement and urged organisations to contact policymakers to press for legislation, stressing that, in its view, no current law adequately protects the British public from the kinds of AI risks now being discussed. The campaign says it will continue outreach to build cross‑party support. [3][6][7]

## Reference Map

  • [1] (DIG. Watch) - Paragraph 1, Paragraph 2, Paragraph 3, Paragraph 4, Paragraph 5
  • [2] (The Guardian) - Paragraph 1, Paragraph 5
  • [3] (Control AI statement) - Paragraph 2, Paragraph 3, Paragraph 4, Paragraph 7
  • [4] (Evening Standard) - Paragraph 6
  • [5] (Geo.tv) - Paragraph 2
  • [6] (Control AI: How to Help) - Paragraph 7
  • [7] (Control AI: Engagement Learnings) - Paragraph 3, Paragraph 7

Source: Noah Wire Services