Pages from the Anthropic website and the company's logos are displayed on a computer screen in New York on Thursday, Feb. 26, 2026.
Patrick Sison/Associated Press
The Pentagon is headed for a showdown with Anthropic, one of the world's most powerful AI companies, over the military use of its AI model after Anthropic's CEO rejected the Defense Department's ultimatum that it loosen safety restrictions or be blacklisted from lucrative military work.
At stake are hundreds of millions of dollars in contracts and access to some of the most advanced AI tools in the world. Here's what to know about the battle and what the consequences could be.
The Pentagon and Anthropic don't see eye to eye on how AI should be used in warfare
For months, Anthropic CEO Dario Amodei has insisted that Anthropic's AI model, Claude, should not be used for mass surveillance in the U.S. or to power fully autonomous weapons, such as a drone that uses AI to kill targets without human approval. He has described these uses as "completely illegitimate" and says they are "bright red lines" for the company.
The Pentagon says that it doesn't intend to use Anthropic's tools for surveillance or autonomous weapons. But it says that it is not up to a contractor like Anthropic to make decisions about how its technology is used, and says AI companies including Anthropic need to allow the U.S. government to use their tools "for all lawful purposes."
"Legality is the Pentagon's responsibility as the end user," a senior Pentagon official who declined to give their name told NPR this week.
Dario Amodei, CEO and co-founder of Anthropic, at the World Economic Forum in Davos, Switzerland, Jan. 23, 2025.
Markus Schreiber/Associated Press
On Thursday, Amodei said Anthropic could not accept the Pentagon's latest changes to the terms of its contract.
"I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries," the CEO wrote in a lengthy statement about the impasse. "Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor tried to limit use of our technology in an ad hoc manner," he said.
"However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values," Amodei continued. He described domestic mass surveillance and fully autonomous weapons as uses that are "simply outside the bounds of what today's technology can safely and reliably do." These uses "have never been included in our contracts with the Department of War, and we believe they should not be included now," he added.
Amodei's rejection comes as Anthropic's relationship with the Pentagon has grown increasingly acrimonious. At a meeting on Tuesday between Defense Secretary Pete Hegseth and Amodei, Hegseth threatened to punish the company if it doesn't bend to the administration's demands, according to two people with direct knowledge of the meeting who were not authorized to speak publicly.
Defense Secretary Pete Hegseth stands outside the Pentagon in a file photo from January 2026.
Kevin Wolf/Associated Press
One person close to the discussion said Hegseth dangled the possibility of canceling Anthropic's $200 million contract with the Defense Department, while a Pentagon official said repercussions could include forcing Anthropic to allow the federal government to use its AI model against its will and effectively blacklisting Anthropic from working with the U.S. military.
"These threats don't change our position: we cannot in good conscience accede to their request," Amodei wrote on Thursday. "But given the substantial value that Anthropic's technology provides to our armed forces, we hope they reconsider."
The Pentagon has given Anthropic a hard deadline
In a post on X on Thursday, Pentagon spokesman Sean Parnell warned that Anthropic had until Friday afternoon before the Pentagon would take action.
"They have until 5:01 PM ET on Friday to decide. Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk for DOW," Parnell wrote, using the Pentagon's rebranded "Department of War" acronym.
The Department of War has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement. This narrative is fake and being peddled by leftists in the media.… https://t.co/3pjWZ66aXz
— Sean Parnell (@SeanParnellASW) February 26, 2026
Anthropic said on Thursday the Pentagon had sent the company new contract language overnight that, in the company's view, "made almost no progress on preventing Claude's use for mass surveillance of Americans or in fully autonomous weapons."
The statement continued: "New language framed as compromise was paired with legalese that would allow these safeguards to be disregarded at will. Despite DOW's recent public statements, these narrow safeguards have been the crux of our negotiations for months."
Anthropic said it is ready to continue negotiations and is "committed to operational continuity for the Department and America's warfighters."
What is a "supply chain risk"?
Deeming Anthropic a supply chain risk would be unusual, according to Geoffrey Gertz, a senior fellow at the Center for a New American Security. The designation has "traditionally been used for foreign adversary technology," he said, such as Chinese telecommunications company Huawei.
It's unclear exactly how far-reaching the Pentagon designation would be. It could mean that other Pentagon contractors would be prohibited from using Anthropic's tools in their work for the Pentagon, or it could bar them from using Anthropic's tools at all. That second case would be particularly damaging to the company, Gertz said.
At the same time, the Pentagon has threatened to invoke the Defense Production Act to force Anthropic to remove its guardrails. That too would be an extraordinary step, Gertz said. The Defense Production Act is designed to give the government control over certain industrial sectors in extraordinary circumstances. It is "traditionally invoked very rarely in true emergency crisis situations," he said. The goal in this case, presumably, would be to use the act to compel Anthropic to loosen restrictions on the use of its AI tools.
Gertz noted that these two threats against Anthropic appear to be somewhat contradictory: "It's this funny mix where they both are such a risk that they need to be kicked out of all systems, and so essential that they need to be forced to be part of the system no matter what," he said.
Whatever happens at the end of today, this fight is likely far from over
The Pentagon's contract with Anthropic is worth as much as $200 million, a relatively small portion of the company's $14 billion in revenue. While the Pentagon has similar contracts with other AI companies including Google, OpenAI and xAI, Anthropic was the first to be cleared for classified use after defense officials deemed its model the most advanced and secure for sensitive military applications.
If the contract were simply canceled, that might be the end of it, Gertz said. But if the Pentagon either tries to compel Anthropic to remove its guardrails or hits it with a wider supply-chain-risk designation, then the company will almost certainly have to fight back, he predicts.
"Certainly if the Pentagon seeks to escalate it," Gertz said, "I think we'll see more legal fights."
NPR’s Bobby Allyn contributed to this report.
