Anthropic said on Thursday it would not abandon the ethical limits it places on its artificial intelligence systems, rejecting a Pentagon demand for unfettered military use of its models. "These threats do not change our position: we cannot in good conscience accede to their request," chief executive Dario Amodei said in a statement. The company reiterated that it will not accept terms it sees as permitting mass domestic surveillance or fully autonomous weapons. According to the Associated Press, Amodei stressed that the firm remains open to talks but criticised recent contract language for lacking necessary safeguards.

The Defence Department had set Anthropic a stark deadline, warning that failure to agree by the specified hour could prompt the government to invoke the Cold War-era Defense Production Act or to designate the company a supply chain risk. Legal experts and commentators cited in news coverage note that using the DPA to compel changes to a product's safety features or ethical guardrails would be highly unusual and legally contested. The Pentagon has also applied competitive pressure by indicating that rival models have been cleared for classified use.

The Pentagon has sought to reassure critics that it does not intend to use commercial AI for illegal domestic surveillance or to field fully autonomous lethal systems, and officials emphasise the department must retain the ability to employ tools for "all lawful purposes." Anthropic, however, has said those assurances are insufficient without contractual limits that would explicitly bar certain applications. Axios reports the two sides remain at odds over language and safeguards even as negotiations continue.

Anthropic, founded in 2021 by former employees of a rival AI firm, has built its reputation on a "safety-first" approach to model development. The company notes it has already provided models to the Pentagon and US intelligence agencies for defensive purposes but insists on a bright line against uses it judges to be incompatible with democratic norms, such as sweeping domestic surveillance and removing human oversight from weapons systems. Those positions underscore why Anthropic remains the sole major supplier resisting full integration into a classified military AI network.

The company's stance has drawn public support from tech workers and civil society groups calling for limits on military and surveillance applications of advanced AI. More than 200 employees at Google and OpenAI signed an open letter backing Anthropic's refusal to allow its tools to be used for domestic surveillance or fully autonomous warfare, while advocacy organisations have urged congressional scrutiny of the dispute. Lawmakers from both parties have expressed concern about any push to require unrestricted military access to commercial systems.

If the impasse persists, analysts predict a likely legal confrontation that could force courts to weigh the reach of emergency procurement powers against private companies' assurances about product safety and ethics. Observers cited in coverage say the episode highlights broader tensions as the Pentagon accelerates AI adoption: balancing operational flexibility with constitutional, legal and ethical limits will fall increasingly to Congress and the judiciary if permanent agreements cannot be reached. Meanwhile Anthropic says it will not "knowingly provide a product that puts America's warfighters and civilians at risk."


Source: Noah Wire Services