Anthropic has balked at the Pentagon’s latest attempt to change the terms of a roughly $200 million contract for its Claude artificial intelligence, saying the revised language would erode protections against the military using the system for mass surveillance or for fully autonomous weapons. According to AP reporting, the company declined the department’s proposed edits and criticised the text as failing to safeguard civilian privacy and human oversight.
The standoff hardened after Defence Secretary Pete Hegseth told Anthropic’s chief executive that the department expected Claude to be available “for all lawful purposes,” and warned that refusal could lead to contract termination, a designation as a “supply chain risk,” or even invocation of extraordinary powers to compel cooperation. AP and other outlets report that officials insist they do not intend illegal surveillance or autonomous weaponisation, but argue the military needs operational flexibility.
Anthropic’s CEO, Dario Amodei, said negotiations had seen “virtually no progress” on the company’s red lines, particularly around using its models for broad domestic monitoring or removing human control from weapons systems. Axios and AP describe a tight deadline set by the Pentagon, after which the firm could face severe consequences if it does not accept broader classified use. Despite the pressure, Amodei signalled the company remains willing to continue talks while defending its ethical limits.
The dispute has exposed broader legal and political questions about where ethical boundaries should sit when private AI firms supply tools to national security agencies. Legal experts warn that using the Defence Production Act to force changes to safety features or ethical terms would be historically novel and legally fraught, while advocates and some lawmakers are calling for congressional scrutiny of any push toward unfettered military use. Reporting shows a coalition of groups has urged Congress to investigate, and senators from both parties have voiced concern about surveillance and lethal‑force applications.
Some observers see the clash as emblematic of a larger tug‑of‑war between commercial AI developers’ public safety commitments and the Pentagon’s demand for adaptable tools. Industry and civil‑society critics argue that voluntary corporate safeguards may not be sufficient and that statutory rules are needed to set clear limits on military and domestic surveillance uses of advanced models. Those calls for formal regulation have been amplified by the prospect of one major developer removing or softening internal constraints.
The Pentagon stresses the clause allowing use for “all lawful purposes” is a standard requirement for classified contracts and not aimed at endorsing unlawful activity, according to AP coverage. Defence officials say flexibility is necessary for a range of operations conducted under established law; Anthropic retorts that caveats and legal exceptions in the proposed text could be interpreted to sidestep the company’s intended safeguards.
With both sides publicly signalling a willingness to keep negotiating even as deadlines loom, the outcome will shape how far private AI suppliers can constrain the use of their technology when they engage with national security customers. The dispute is likely to prompt closer legislative and public scrutiny of the legal tools the government might deploy to secure AI capabilities, and of whether those tools should override corporate ethical constraints.
Source Reference Map
Inspired by headline at: [1]
Sources by paragraph:
- Paragraph 1: [2]
- Paragraph 2: [2], [4]
- Paragraph 3: [3], [2]
- Paragraph 4: [4], [5]
- Paragraph 5: [5], [6]
- Paragraph 6: [2], [6]
- Paragraph 7: [3], [5]
Source: Noah Wire Services