A broad coalition of former officials, technical experts and public figures has published a detailed framework aimed at limiting the power of advanced artificial intelligence and restoring human oversight to its development and deployment. The Pro-Human AI Declaration, published on the initiative's website, lays out five central principles intended to shape law and practice: keeping humans in control, preventing concentrated corporate power, protecting the human experience, preserving individual liberty and holding AI developers legally responsible. (Sources: humanstatement.org, protectwhatshuman.org)
The declaration recommends concrete constraints on future systems, including a moratorium on the deployment of so-called superintelligent architectures until there is scientific consensus and democratic approval, the requirement that powerful models include reliable off-switches, and an outright ban on self-replicating or self-improving AI designs. Speaking for the campaign, MIT physicist Max Tegmark framed the approach with a medical analogy: "AI should not be released into the world until it is proven safe, just as drugs are rigorously tested before approval." (Sources: humanstatement.org, protectwhatshuman.org)
Backers say the effort is deliberately non-partisan and grassroots in tone, drawing on a campaign brand that urges public participation to "protect what’s human" and to ensure AI serves rather than replaces people in households, workplaces and communities. The movement presents itself as a middle road between blanket bans and unfettered commercial development, pressing for commonsense regulation that foregrounds dignity and family life. (Sources: protectwhatshuman.org, secureainow.org)
The declaration's legal focus aligns with concurrent U.S. legislative activity seeking to create liability pathways and federal standards. Senators have introduced proposals that would allow victims to sue AI companies for harms caused by their systems, while separate bipartisan bills would authorise a federal institute to set technical standards intended to spur innovation and enhance safety. The combined push from activists and lawmakers signals growing momentum for enforceable rules rather than voluntary industry norms. (Sources: durbin.senate.gov, hickenlooper.senate.gov)
Organisations advocating for robust oversight have also urged complementary measures such as greater transparency at frontier AI firms, export controls on advanced AI chips and resistance to any federal preemption that would block stronger state-level safeguards. Advocates argue that patchwork regulation without accountability will leave gaps in areas from national security to children's safety; on the latter, the declaration calls for mandatory pre-deployment testing of systems designed for minors. (Sources: secureainow.org, protectwhatshuman.org)
The Pro-Human Declaration arrives amid a growing global conversation about governance: international summits and national proposals have sought cooperative solutions while signatories stress that cross-partisan agreement on guardrails is essential if AI is to expand human capabilities rather than undermine them. Organisers say the initiative is intended to shape both domestic policy debates and wider discussions about export controls, research standards and democratic oversight. (Sources: elysee.fr, humanstatement.org)
Source: Noah Wire Services