300 Google Employees Just Drew a Line the Pentagon Can't Cross
AI workers at Google, OpenAI, and Anthropic are refusing to build weapons and surveillance tools. The deadline is tonight.
Eight years ago, Google employees rose up in protest over Project Maven, a Pentagon drone program that used Google's AI to identify targets. Google backed down. It dropped the contract and published ethical guidelines promising never to build weapons or surveillance systems.
That was 2018. A different era.
On Thursday, more than 300 Google employees signed a letter urging the company not to abandon that stance, or worse, to cross lines it never crossed before. More than 60 OpenAI employees signed too. Their message was aimed at their own bosses, but it was really about Anthropic.
And a clock ticking toward 5:01 p.m.
The Deadline
Here's what's happening. The Pentagon wants unrestricted access to Anthropic's Claude AI. Not for missile defense. Not for cybersecurity. Unrestricted. That means mass domestic surveillance. That means autonomous weapons.
Anthropic said no. CEO Dario Amodei published a statement Thursday: "We cannot in good conscience accede to their request."
Defense Secretary Pete Hegseth gave Anthropic until 5:01 p.m. ET Friday to change its mind. If Anthropic refuses, the Pentagon will either label it a "supply chain risk" — effectively blacklisting it from government work — or invoke the Defense Production Act to force compliance.
Amodei pointed out the contradiction. "One labels us a security risk; the other labels Claude as essential to national security."
He's right. The same company can't be both a threat to national security and essential to it.
The Letter
The employee letter, published at notdivided.org, doesn't mince words.
"They're trying to divide each company with fear that the other will give in," it reads. "That strategy only works if none of us know where the others stand."
The signatories want their companies to hold two lines: no mass surveillance, no fully autonomous weapons. They're asking Google and OpenAI leadership to publicly back Anthropic's position.
"We hope our leaders will put aside their differences and stand together," the letter says.
These aren't junior engineers. The 300 Google signatories include researchers from DeepMind, the division that builds the company's most advanced AI. Google's chief scientist Jeff Dean, who received the letter, posted on X that "mass surveillance violates the Fourth Amendment and has a chilling effect on freedom of expression."
He was speaking as an individual. But when a chief scientist tweets constitutional objections, it's hard to pretend he's off the clock.
The 2018 Parallel (And Why This Is Different)
In 2018, about 4,000 Google employees signed a petition against Project Maven. Google let the contract expire and published AI principles: no weapons, no surveillance, no tech that causes "overall harm."
The 2026 version is worse for three reasons.
First, the stakes are higher. Project Maven was a drone imagery program. Today's AI can write code, analyze intelligence, conduct cyber operations, and — as we reported this week — hack governments. The gap between "analysis tool" and "autonomous weapon" has narrowed to almost nothing.
Second, the pressure is direct. In 2018, the Pentagon didn't threaten Google with the Defense Production Act. It didn't summon the CEO to the Pentagon for a meeting and hand him a Friday deadline. Hegseth's approach isn't persuasion. It's coercion.
Third, it's not just one company anymore. The Pentagon has working arrangements with Google's Gemini, OpenAI's ChatGPT, and X's Grok for unclassified tasks. It's negotiating classified access with Google and OpenAI. The play is clear: pick off one company at a time. If Anthropic folds, Google and OpenAI face the same demand next week with weaker ground to stand on.
The employee letter names this strategy directly: divide and conquer doesn't work when everyone sees it.
Where the Companies Actually Stand
OpenAI CEO Sam Altman told CNBC Friday morning he doesn't "personally think the Pentagon should be threatening DPA against these companies." An OpenAI spokesperson confirmed the company shares Anthropic's red lines on autonomous weapons and mass surveillance.
Google DeepMind hasn't made a formal statement. But Jeff Dean's tweet wasn't ambiguous.
The informal consensus is striking. Three companies that compete fiercely on products, talent, and market share are quietly aligning on a principle: some things shouldn't be built, even if the government demands it.
Grok, Elon Musk's AI, doesn't share that position. According to Axios, the military already uses Grok for unclassified tasks with no restrictions.
The Bigger Question
Anthropic was founded by former OpenAI employees who left over safety concerns. Its entire brand is "the safety company." Dropping that would be corporate suicide — it'd lose researchers, customers, and the trust that justifies its $380 billion valuation.
But the Pentagon's leverage is real. A "supply chain risk" designation could cut Anthropic off from government contracts worth hundreds of millions. The Defense Production Act, if legally viable, could force the company to hand over its technology regardless.
Some legal experts question whether the DPA applies here. It was designed for wartime production — steel, ammunition, medical supplies. Using it to compel an AI company to remove safety guardrails would be a first. Courts would likely get involved.
Meanwhile, Undersecretary Emil Michael called Amodei a "liar with a God-complex" on X. Pentagon spokesman Sean Parnell warned publicly that the clock was running.
The rhetoric isn't what you'd expect from a customer negotiation. It's what you'd expect from a government accustomed to getting what it wants.
What the Workers Are Really Saying
Strip away the policy language and the letter says something simple: we didn't build this to kill people.
The researchers who created these models — who spent years on alignment, safety testing, red-teaming — are watching their work get requisitioned for purposes they explicitly designed against. That's not an abstract ethical concern. It's personal.
In 2018, Google's walkout worked because the company needed AI talent more than it needed one military contract. The same calculus applies today, magnified. AI researchers are the scarcest resource in the industry. If Google, OpenAI, or Anthropic caves, the best people leave. They've said so.
The letter ends: "We will not be divided."
What Happens at 5:01
By the time you read this, the deadline may have passed. Anthropic has been clear: it won't comply. The question is what the Pentagon does next.
A supply chain risk designation would be a shot across the bow of every AI company. It would say: cooperate fully or get cut off. For companies that depend on government contracts — and all of them do, increasingly — that's a threat with teeth.
Invoking the DPA would be a legal and political firestorm. Congressional hearings. Court challenges. The spectacle of the U.S. government forcing a private company to remove safety features from its AI.
Or the Pentagon blinks. Finds a compromise. Accepts Anthropic's existing offer of Claude for missile defense and cybersecurity, with guardrails intact. Declares victory and moves on.
The employees are betting their careers that the third option is possible. That red lines work. That saying no — loudly, together, across competing companies — still means something.
We'll know soon.