Secretary of War Pete Hegseth often seems more interested in the optics of playing the part of a military leader than he is in actually being a military leader.
Maybe that's why he has chosen a Hollywood-esque high noon (or at least late afternoon) showdown for his deepening dispute with the AI company Anthropic. Hegseth has given Anthropic until 5:01 pm on Friday to comply with his demands that the company give the US military full and unfettered access to its AI, or face consequences that could threaten its survival. Anthropic has so far refused, and on Thursday night CEO Dario Amodei said in a statement that the company "can't in good conscience accede to their request."
What's unfolding this week is the biggest confrontation between the US government and a tech company over AI ethics since Google employees rebelled against working with the Pentagon in 2018. But with AI far more advanced and far more essential to both the American economy and American defense than it was eight years ago, the stakes now are much greater: certainly for Anthropic itself, but also for the question of just who has final control over an existential technology. (Disclosure: Future Perfect is funded in part by the BEMC Foundation, whose major funder was also an early investor in Anthropic. They have no editorial input into our content.)
All of this has raised plenty of questions, starting with:
What does the Pentagon actually want?
Anthropic is already a supplier for the Pentagon, having signed a $200 million contract in July to provide advanced AI for national security challenges, and its chatbot Claude was the first AI model that could be deployed on the government's classified networks. But the department now insists that Anthropic sign a contract allowing its Claude AI to be used for "all lawful purposes."
That might sound fine (it has "lawful" right there in the words, after all), but what it means in practice is that Anthropic would have no say over individual use cases, no ability to review how Claude is being used in classified settings, and no right to restrict specific applications. It would be the military that decides how to deploy Anthropic's AI technology.
Okay, but if Anthropic is already supplying its AI to the military, why should the company get to decide how that AI is used? It's not like the Pentagon has to call up Boeing before it uses one of its jets in a military strike.
Hmm, do you currently work in the Pentagon press office? As it happens, that's precisely the analogy that Hegseth reportedly presented to Anthropic's Amodei in a tense meeting on Tuesday.
So, why won't Anthropic play ball?
It's not being totally recalcitrant. Even beyond the $200 million Pentagon contract, Anthropic has already been deeply involved in government work, including in more direct military uses like missile defense. Anthropic has been one of the most outspoken proponents of the idea that the US is in a civilizational race with China over AI supremacy. While Anthropic has a (mostly if not entirely) deserved reputation as the most safety-minded of the major AI labs, they're not a bunch of bleeding-heart softies.
Anthropic's policies allow its models to be used as part of targeted military strikes, foreign surveillance, and even drone strikes when a human approves the final call. But it has maintained two specific "red lines" it won't cross: fully autonomous weapons, meaning AI systems that select and engage targets with no human involved, and mass domestic surveillance of Americans. Amodei said in his statement that "AI-driven mass surveillance presents serious, novel risks to our fundamental liberties," while frontier AI systems were "simply not reliable enough to power fully autonomous weapons."
It's not that Anthropic would never be involved in building lethal autonomous weapons. Just look at Ukraine: the realities of modern warfare have made it all but inevitable that such weapons and systems will be built. But Anthropic doesn't believe today's models are capable of carrying this out effectively.
So, what's happening is that the Pentagon is demanding Anthropic allow it to use Claude for something Anthropic says Claude can't even do yet?
How did this all happen?
Things started going sideways after the operation in early January that resulted in the capture of Venezuelan President Nicolas Maduro. Claude, according to reporting by Axios, was deployed during the operation through a platform operated by the very military-friendly AI company Palantir. Soon after the operation, an Anthropic employee reportedly asked a Palantir counterpart how Claude might have been used in the operation, apparently in a way that suggested Anthropic might have a problem with it. Palantir then allegedly flagged the exchange for the Pentagon.
The Pentagon was already reportedly unhappy with Anthropic's insistence on its red lines, and the company has so far not been included on the GenAI.mil platform the department built out in late 2025. In a January speech, Hegseth pointedly said that "we will not use AI models that won't let you fight wars."
That brings us to the Friday 5:01 pm showdown.
If Anthropic sticks to its guns, what can the Pentagon do?
It could simply cancel the $200 million contract, which it would be within its rights to do. Hegseth isn't wrong to say that suppliers don't, as a rule, dictate government policy. That would be a minor financial bummer for Anthropic, but the company is currently valued at $380 billion, so I think it would be okay. Other AI companies like xAI seem more than happy to take Anthropic's place.
But Hegseth doesn't seem prepared to take this comparatively rational course of action. Instead, he's talking as if he wants to make an example out of Anthropic and prove that it's the Trump administration that gets to tell US AI companies how to act.
The Pentagon has threatened to use the Defense Production Act, a Cold War-era law that allows the president to compel companies to accept defense contracts. In the past, that has meant things like bolstering domestic manufacturing of critical supplies, as during the Covid pandemic, when President Trump invoked it to force additional ventilator production. But deliberately using it to target a domestic company over a policy dispute about AI safety rules, and essentially force Anthropic to train what some are calling a "War Claude," would be unprecedented and would certainly lead to drawn-out legal wrangling.
So, that's not good for Anthropic, AI safety, and maybe even the rule of law. But even worse, for Anthropic at least, would be the last option: designating Anthropic a "supply chain risk." This label, usually reserved for companies from adversary nations like China's Huawei, would prohibit every defense contractor from using Anthropic's products. Since so many of America's largest businesses hold military contracts, this could effectively poison nearly all of Anthropic's enterprise business and potentially torpedo a planned IPO. Axios has reported that the Pentagon has already started down this path by asking Boeing and Lockheed Martin to assess their reliance on Claude.
Wait, I'm confused. So, essentially, the Pentagon is saying both that Anthropic might be a serious supply chain risk and that it would like to compel the company to let it use Claude in almost any way it sees fit?
Yes. As Vox contributing editor and Argument staff writer Kelsey Piper put it: "It's patently ridiculous to both claim that Claude poses a national security threat and also that it's so important for wartime production you have to nationalize the company."
Amodei has refused to back down, and much of the AI world is on his side. That includes rivals like Jeff Dean of Google and voices like Dean Ball, a former Trump AI adviser, who wrote on X that what the Pentagon is considering would represent "the strictest regulation of AI being considered by any government on Earth, and it all comes from an administration that bills itself as (and legitimately has been) deeply anti-AI-regulation." What seems clear is that if the Pentagon successfully compels compliance, whether through the DPA, supply chain blacklisting, or commercial pressure, it would establish that no American AI company can maintain independent safety restrictions against government demands. Unless Congress does what it should do and passes laws constraining how the Pentagon uses lethal AI, we could be headed for a very dark future indeed, and one out of our control.
Update, February 26, 2026, 6:45 pm: This piece has been updated to include Anthropic CEO Dario Amodei's statement.