Headlines
On February 13, the Wall Street Journal reported something that hadn’t been public before: the Pentagon used Anthropic’s Claude AI during the January raid that captured Venezuelan leader Nicolás Maduro.
It said Claude’s deployment came through Anthropic’s partnership with Palantir Technologies, whose platforms are widely used by the Defense Department.
Reuters tried to independently verify the report – it could not. Anthropic declined to comment on specific operations. The Department of Defense declined to comment. Palantir said nothing.
But the WSJ report revealed one more detail.
Sometime after the January raid, an Anthropic employee reached out to someone at Palantir and asked a direct question: how was Claude actually used in that operation?
The company that built the model and signed the $200 million contract had to ask someone else what its own software did during a military assault on a capital city.
This one detail tells you everything about where we actually are with AI governance. It also tells you why “human in the loop” stopped being a safety guarantee somewhere between the contract signing and Caracas.
How big was the operation
Calling this a covert extraction misses what actually happened.
Delta Force raided multiple targets across Caracas. More than 150 aircraft were involved. Air defense systems were suppressed before the first boots hit the ground. Airstrikes hit military targets and air defenses, and electronic warfare assets were moved into the region, per Reuters.
Cuba later confirmed 32 of its soldiers and intelligence personnel were killed and declared two days of national mourning. Venezuela’s government cited a death toll of roughly 100.
Two sources told Axios that Claude was used during the active operation itself, though Axios noted it couldn’t confirm the precise role Claude played.
What Claude could actually have done
To understand what might have been happening, you need to know one technical thing about how Claude works.
Anthropic’s API is stateless. Each call is independent: you send text in, you get text back, and that interaction is over. There is no persistent memory, no Claude running continuously in the background.
It’s less like a brain and more like an extremely fast consultant you can call every thirty seconds: you describe the situation, they give you their best assessment, you hang up, you call again with new information.
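In code, that statelessness looks like this – a minimal sketch using the public Python SDK (the model name and prompts are illustrative, not anything from the reporting):

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def ask(prompt: str) -> str:
    """One self-contained call: no memory of any previous call."""
    response = client.messages.create(
        model="claude-sonnet-4-5",  # illustrative model name
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

# Two independent calls. The second knows nothing about the first
# unless the caller deliberately pastes that history into the prompt.
first = ask("Summarize this report: ...")
second = ask("What changed since the last summary?")  # the model has no idea
```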
That’s the API. But that says nothing about the systems Palantir built on top of it.
You can engineer an agent loop that feeds real-time intelligence into Claude continuously. You can build workflows where Claude’s outputs trigger the next action with minimal latency between recommendation and execution.
Testing These Scenarios Myself
To understand what this actually looks like in practice, I tested some of these scenarios.

every 30 seconds. indefinitely.
The API is stateless. A sophisticated military system built on the API doesn’t have to be.
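Here’s the shape of that wrapper – a minimal sketch where the state lives in my code, not in the API (fetch_new_intel is a stand-in I invented; nothing here reflects what Palantir actually built):

```python
import time
import anthropic

client = anthropic.Anthropic()
history: list[dict] = []  # the state lives here, outside the stateless API

def fetch_new_intel() -> str:
    """Hypothetical stand-in for whatever feed a real system would poll."""
    return "latest signals intercepts, imagery notes, field reports ..."

while True:  # every 30 seconds. indefinitely.
    history.append({"role": "user", "content": fetch_new_intel()})
    response = client.messages.create(
        model="claude-sonnet-4-5",  # illustrative
        max_tokens=1024,
        messages=history,  # the full running context, replayed on every call
    )
    history.append({"role": "assistant", "content": response.content[0].text})
    time.sleep(30)
```

Each individual call is still stateless. The loop around it is not.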
What that might look like when deployed:
Intercepted communications in Spanish fed to Claude for instant translation and pattern analysis across hundreds of messages simultaneously. Satellite imagery processed to identify vehicle movements, troop positions, or infrastructure changes, with updates every couple of minutes as new images arrived.
Or real-time synthesis of intelligence from multiple sources – signals intercepts, human intelligence reports, electronic warfare data – compressed into actionable briefings that would take analysts hours to produce manually.
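The first scenario is trivial to approximate, because stateless calls parallelize for free. A sketch, with the intercepts and prompt made up by me:

```python
from concurrent.futures import ThreadPoolExecutor

import anthropic

client = anthropic.Anthropic()

def translate_and_flag(message: str) -> str:
    """Translate one Spanish intercept and flag anything operationally notable."""
    response = client.messages.create(
        model="claude-sonnet-4-5",  # illustrative
        max_tokens=512,
        messages=[{
            "role": "user",
            "content": f"Translate to English and flag anything notable:\n\n{message}",
        }],
    )
    return response.content[0].text

intercepts = ["...hundreds of captured messages..."]  # placeholder input
# Fan out across worker threads; each call is independent of the others.
with ThreadPoolExecutor(max_workers=32) as pool:
    results = list(pool.map(translate_and_flag, intercepts))
```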

trained on scenarios. deployed in Caracas.
None of that requires Claude to “decide” anything. It’s all analysis and synthesis.
But when you’re compressing a four-hour intelligence cycle into minutes, and that analysis is feeding directly into operational decisions made at that same compressed timescale, the distinction between “analysis” and “decision-making” starts to collapse.
And because this is a classified network, nobody outside that system knows what was actually built.
So when someone says “Claude can’t run an autonomous operation” – they’re probably right about the API layer. Whether they’re right about the deployment layer is an entirely different question. And one nobody can currently answer.
The gap between autonomous and meaningful
Anthropic’s hard limit is autonomous weapons – systems that decide to kill without a human signing off. That’s a real line.
But there’s a vast amount of territory between “autonomous weapons” and “meaningful human oversight.” Think about what it means in practice for a commander in an active operation. Claude is synthesizing intelligence across data volumes no analyst could hold in their head. It’s compressing what was a four-hour briefing cycle into minutes.

this took 3 seconds.
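That claim is easy to test from a laptop. A sketch of the timing check (the input is a placeholder):

```python
import time

import anthropic

client = anthropic.Anthropic()
raw_reports = "...several pages of raw signals, imagery notes, and field reports..."

start = time.perf_counter()
response = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": f"Compress these reports into a one-page operational briefing:\n\n{raw_reports}",
    }],
)
print(f"Briefing produced in {time.perf_counter() - start:.1f}s")  # seconds, not hours
```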
It’s surfacing patterns and recommendations faster than any human team could produce them.
Technically, a human approves everything before any action is taken. The human is in the process. But the process is now moving so fast that it becomes impossible to evaluate what’s in it during fast-moving situations like a military assault. When Claude generates an intelligence summary, that summary becomes the input for the next decision. And because Claude can produce these summaries much faster than humans can process them, the tempo of the entire operation accelerates.
You can’t slow down to think carefully about a recommendation when the situation it describes is already three minutes old. The information has moved on. The next update is already arriving. The loop keeps getting faster.

90 seconds to decide. this is what the loop looks like from inside.
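The arithmetic behind that feeling is simple enough to simulate. A toy model with numbers I made up – summaries arriving every 90 seconds, a careful human review taking four minutes:

```python
# Toy model: the approval queue when updates arrive faster than a human
# can evaluate them. All numbers are illustrative.
UPDATE_INTERVAL_S = 90    # a new AI-generated summary every 90 seconds
EVALUATION_TIME_S = 240   # a careful human review takes 4 minutes

for minute in (10, 20, 30, 40, 50, 60):
    produced = minute * 60 / UPDATE_INTERVAL_S
    reviewed = minute * 60 / EVALUATION_TIME_S
    print(f"t={minute:2d}min  produced={produced:4.1f}  "
          f"reviewed={reviewed:4.1f}  unread={produced - reviewed:4.1f}")

# After one hour: 40 summaries produced, 15 carefully reviewed, 25 unread.
# The human is approving from an ever-staler picture.
```

Any realistic numbers give the same shape: whenever production outpaces evaluation, “approval” degrades into triage.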
The requirement for human approval is there, but the ability to meaningfully evaluate what you’re approving is not.
And it gets structurally worse the better the AI gets, because better AI means faster synthesis, shorter decision windows, less time to think before acting.
The Pentagon’s and Anthropic’s arguments
The Pentagon wants access to AI models for any use case that complies with U.S. law. Its position is essentially: usage policy is our problem, not yours.
But Anthropic wants to maintain specific prohibitions – no fully autonomous weapons, and no mass domestic surveillance of Americans.
After the WSJ broke the story, a senior administration official told Axios the partnership agreement was under review, which is why the Pentagon stated:
“Any company that would jeopardize the operational success of our warfighters in the field is one we need to reevaluate.”
Ironically, Anthropic is currently the only commercial AI model approved for certain classified DoD networks. Meanwhile, OpenAI, Google, and xAI are all actively in discussions to get onto those systems with fewer restrictions.
The real fight beyond the arguments
In hindsight, Anthropic and the Pentagon might both be missing the point by assuming policy language can solve this problem.
Contracts can mandate human approval at every step. But that doesn’t mean the human has enough time, context, or cognitive bandwidth to actually evaluate what they’re approving. That gap – between a human technically in the loop and a human actually able to think clearly about what’s in it – is where the real risk lives.
Rogue AI and autonomous weapons are probably arguments for later.
Today’s debate should be: can you still call a system “supervised” when it processes information orders of magnitude faster than the humans in its command chain?
Final thoughts
In Caracas, in January, with 150 aircraft and real-time feeds and decisions being made at operational speed, we don’t know the answer to that.
And neither does Anthropic.
But soon, with fewer restrictions in place and more models on those classified networks, we’re all going to find out.
All claims in this piece are sourced to public reporting and documented specifications. We have no private information about this operation. Sources: WSJ (Feb 13), Axios (Feb 13, Feb 15), Reuters (Jan 3, Feb 13). Casualty figures from Cuba’s official government statement and Venezuela’s defense ministry. API architecture from platform.claude.com/docs. Contract details from Anthropic’s August 2025 press release. “Visibility into usage” quote from Axios (Feb 13).