Defense Secretary Pete Hegseth told Anthropic's chief executive this week that the firm must lift usage restrictions on its Claude artificial intelligence models for the U.S. military by Friday or face the loss of a Pentagon contract, according to reporting by the Associated Press and Axios. The dispute centres on whether the privately developed model should operate inside defence systems without the safety constraints the company has insisted upon.
Officials warned Anthropic that the department could sever ties, label the company a supply chain risk or invoke the Defense Production Act to compel broader access to the technology if necessary, Axios and The Washington Post reported. Pentagon sources described the interaction with Anthropic as high-stakes; a senior official told Axios the meeting was a "s–t-or-get-off-the-pot" moment for the company.
Anthropic CEO Dario Amodei has repeatedly said he will not permit the deployment of the company’s models for fully autonomous targeting or for large-scale domestic surveillance, arguing such uses cross ethical lines. In a recent essay he warned that "A powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow," positions documented by the Associated Press and The Washington Post.
The disagreement has particular urgency because Claude is currently the only advanced model approved for some of the military’s classified networks, a situation outlined by multiple outlets. The Pentagon, according to Axios and NDTV, has already contracted with other AI developers and wants tools available for "all lawful use" in operational contexts; Anthropic has been reluctant to accept that open-ended permission.
Analysts and legal experts cited in reporting say the clash highlights broader governance questions as defence adoption of AI accelerates. Georgetown University specialists note the firm's limited leverage compared with peers that have accepted the department's terms, while civil liberties lawyers urge stronger oversight if tools capable of surveilling Americans are deployed, as noted by The Washington Post and the Associated Press.
If Anthropic refuses to alter its guardrails, the Pentagon could terminate the contract, worth up to $200 million, or pursue extraordinary measures to secure access, according to reporting in national outlets. The outcome will test whether commercial developers can impose lasting ethical limits on military applications of generative AI, or whether strategic and security pressures will push those boundaries aside, as outlined by Yahoo Finance and other reports.
Source Reference Map
Inspired by headline at: [1]
Sources by paragraph:
- Paragraph 1: [2], [3]
- Paragraph 2: [3], [4]
- Paragraph 3: [2], [4]
- Paragraph 4: [3], [5]
- Paragraph 5: [4], [2]
- Paragraph 6: [6], [7]
Source: Noah Wire Services