The White House has ordered all federal agencies to stop using Anthropic’s artificial intelligence services after the company refused Pentagon demands to remove safety restrictions from its Claude models, a move the administration says creates a supply-chain risk to national security. According to AP reporting, Defense Secretary Pete Hegseth and President Donald Trump framed the company’s refusal as endangering troops and compromising the military’s operational needs. [6],[2]
The dispute hardened after Pentagon negotiators sought contract language that would permit broader military use of Anthropic’s systems, including applications the company has said could enable mass domestic surveillance or fully autonomous weapons. Anthropic’s leadership declined to waive those limits on ethical grounds, prompting the Pentagon to designate the firm a security risk and to cancel a contract worth roughly $200m, Axios reported. [2],[7]
President Trump amplified the confrontation on social media, posting on Truth Social: "The Leftwing extremists at Anthropic have made a DISASTROUS MISTAKE by trying to STRONG-ARM the Department of War and forcing them to follow their Terms of Service instead of our Constitution," and adding, "WE will determine our Country’s future – NOT some out-of-control, Radical Left AI firm led by people who don’t understand what the real world is like." The administration used that rhetoric to justify its directive to halt federal use of Anthropic tools. [6],[2]
Anthropic has pushed back, announcing plans to sue the Pentagon and insisting it will not abandon the safety guardrails it says are essential to prevent misuse. The company argues the new contractual language, while presented as a compromise, would in practice undermine the stated protections by leaving them vulnerable to broad waiver or reinterpretation. Axios and AP both quoted Anthropic executives saying the firm cannot in conscience accede to demands that would permit its models to be repurposed for indiscriminate surveillance or lethal autonomous systems. [4],[6]
With Anthropic sidelined, the Pentagon has moved to secure alternatives. OpenAI said it reached an agreement to provide models to the Defense Department under explicit "red lines" that prohibit domestic mass surveillance and reaffirm human responsibility for any use of force, a framework OpenAI’s CEO described as consistent with the safety boundaries advanced by other firms. Industry sources told Axios the department is exploring OpenAI as a replacement partner. [3],[2]
The clash has reignited broader tensions over how commercial AI companies balance ethical constraints against national security demands. Critics of the administration’s action warned it risks chilling investment and could force major cloud and chip partners to reconsider their ties with Anthropic, while former officials described the blacklisting as heavy-handed. Supporters of the ban argue that military planners need adaptable tools free of legal or contractual fetters that could impede operations. The episode underscores the thorny policy choices governments face as advanced generative models become central to both civilian and defence capabilities. [5],[2]
Source Reference Map
Inspired by headline at: [1]
Sources by paragraph:
- Paragraph 1: [6], [2]
- Paragraph 2: [2], [7]
- Paragraph 3: [6], [2]
- Paragraph 4: [4], [6]
- Paragraph 5: [3], [2]
- Paragraph 6: [5], [2]
Source: Noah Wire Services