Anthropic Said No to Killer Robots. The Pentagon Replaced Them in a Week.
Defense contractors are purging Claude from their systems. xAI and OpenAI are moving in with no ethical restrictions. The AI safety experiment just got its verdict.
The Pentagon ordered every contractor, supplier, and partner in the U.S. military's supply chain to stop doing business with Anthropic. Within days, at least 10 defense tech companies dropped Claude, and Lockheed Martin is expected to follow. The reason? Anthropic refused to let its AI be used in autonomous weapons or mass surveillance of Americans.
Now xAI and OpenAI are filling that gap — with zero ethical restrictions attached.
The Deal That Fell Apart
Here's what actually happened in the negotiations, according to The Atlantic's reporting from sources inside the talks.
Anthropic and the Pentagon were close to a deal. The company had a $200 million contract and was the only AI provider operating on classified military networks. Then Defense Secretary Pete Hegseth's team pushed for renegotiated terms.
Anthropic had two red lines: no fully autonomous weapons (machines that select and kill targets without a human deciding), and no mass domestic surveillance of Americans.
The Pentagon almost budged. On the final Friday morning, Hegseth's team said they'd remove the weasel words — phrases like "as appropriate" that would let the government reinterpret its promises later. Anthropic's team was relieved.
Then came the afternoon. The Pentagon still wanted to use Claude to analyze bulk data collected from Americans — your search history, GPS movements, credit card transactions, chatbot conversations, all cross-referenced together. Anthropic said that was a bridge too far.
The deal collapsed. Hours later, Hegseth posted on X that anyone doing business with the U.S. military was barred from commercial activity with Anthropic.
The Replacement Race
The defense AI market reorganized in about a week.
xAI — Elon Musk's company, maker of Grok — signed a deal to put its model into classified military systems. According to TechRepublic, xAI "embraced the Pentagon's terms without reservation," agreeing to an "all lawful use" standard. No restrictions on autonomous weapons. No carve-outs for surveillance.
OpenAI moved in too. Sam Altman's company secured classified network access, positioning itself as the other replacement. MIT Technology Review noted that the Pentagon granted itself six months to phase out Claude and phase in OpenAI and xAI models — all while actively conducting military operations against Iran.
That last detail matters. Claude is currently being used to support U.S. military operations in Iran, even after the ban was announced. CNBC confirmed the model is still running because there's nothing ready to replace it yet.
The $13.4 Billion Question
The U.S. military has budgeted $13.4 billion for autonomous weapons systems in fiscal year 2026 alone. That covers everything from individual drones to swarms operating in air and at sea.
Anthropic didn't argue these weapons shouldn't exist. The company offered to help the Pentagon improve their reliability, much as self-driving systems have become safer than human drivers in some conditions. Its position was narrower: the AI models aren't reliable enough yet. The company worried Claude could cause machines to fire indiscriminately, miss targets, endanger civilians, or even hit American troops.
One attempted compromise would have kept Claude in the cloud and out of the weapons themselves — a separation between AI in the planning room and AI in the trigger mechanism. That proposal also failed.
Why Contractors Are Complying Anyway
Here's the part nobody's saying out loud: most legal experts think the Pentagon's move won't survive a court challenge.
Anthropic itself cited a federal statute to argue that Hegseth lacks the authority to restrict companies this way. The BBC quoted a former official calling the legal basis "extremely flimsy." Lawfare, the national security law publication, published an analysis titled "Pentagon's Anthropic Designation Won't Survive First Contact with Legal System."
None of that matters right now. Defense contractors aren't waiting for courts.
Alexander Harstrick, managing partner at J2 Ventures, told CNBC that 10 of his firm's portfolio companies working with the Department of Defense have already backed off Claude. "Most of our companies are actively involved in large defense contracts and so are very strict in their interpretation of the requirements," he said.
The math is simple. Anthropic gets about 80% of its revenue from enterprise customers. But for defense contractors, government contracts are their entire business. When the Pentagon says jump, you don't wait for a judge to rule on whether they can make you.
What This Actually Means
Three things are now true that weren't true two weeks ago.
First, the AI safety experiment has a data point. A major AI company drew an ethical line, refused to cross it under enormous pressure, and got replaced within days by competitors who imposed no restrictions at all. That's the market speaking.
Second, the U.S. military's AI infrastructure is about to get less safe, not more. Claude was the only frontier AI model operating in classified networks. It had restrictions. Its replacements have none. The Pentagon is swapping a model with guardrails for models that agreed to anything.
Third, Anthropic just became the most interesting company in tech. Not because it's winning — it might be losing its most important customer — but because it's the first frontier AI company to face a genuine "build what they want or walk away" moment and choose to walk. Whether that decision is principled or financially suicidal (or both) won't be clear for years.
The autonomous weapons budget is $13.4 billion. The surveillance infrastructure exists. The only question was always which AI company would power it, and whether any of them would say no.
One did. It lasted about a week.