The Pentagon Thinks Anthropic Might Sabotage Its Own AI. Anthropic Thinks That's Absurd.
When Your Best Client Calls You a Security Threat
In what might be the most dramatic falling-out between a tech company and the US government since, well, ever, the Department of Defense has formally labelled Anthropic a 'supply chain risk' to national security. The allegation? That Anthropic staff might 'sabotage, maliciously introduce unwanted function, or otherwise subvert' military AI systems. Anthropic, for its part, has called the designation retaliatory nonsense.
If you are struggling to follow the plot, you are not alone. Let us rewind.
How a $200 Million Deal Went South
Back in July 2025, the Pentagon awarded contracts worth up to $200 million each to Anthropic, OpenAI, Google, and xAI to bring frontier AI into military operations. Claude, Anthropic's flagship model, ended up on classified networks via AWS and was reportedly used in US military operations involving Iran and the arrest of Venezuelan leader Nicolas Maduro.
Then came the sticking point. Anthropic had two non-negotiable restrictions baked into its contract: no mass domestic surveillance and no fully autonomous weapons systems. Defence Secretary Pete Hegseth gave Anthropic until 5:01 PM on Friday 27 February 2026 to drop those restrictions and allow unrestricted military use of Claude. Anthropic declined.
What followed was swift. The Pentagon slapped Anthropic with a supply chain risk designation, a tool typically reserved for foreign adversary contractors, not homegrown Silicon Valley firms. Senator Kirsten Gillibrand called it 'a dangerous misuse of a tool meant to address adversary-controlled technology.' Then President Trump ordered all US federal agencies to stop using Anthropic technology entirely, with a six-month phase-out window.
The Sabotage Claim
The most eyebrow-raising element came from Department of Justice attorneys, who argued in filings that Anthropic employees could theoretically tamper with Claude to undermine national security systems. Pentagon CTO Emil Michael put it bluntly: 'The model itself learns what you're trying to do and it stops working. That is a risk I cannot take.'
Anthropic has firmly denied this is even possible, calling the sabotage framing baseless. On 9 March 2026, the company filed two lawsuits against the federal government, one in San Francisco and another in a federal appeals court in Washington, D.C. A court hearing is scheduled for 24 March before Judge Rita F. Lin.
The Fallout
OpenAI moved quickly to fill the gap, securing a Pentagon deal to replace Claude on classified systems after agreeing to allow its models to be used 'for any lawful purpose.' Defence sources estimate it could take three to twelve months to fully replace Claude on classified military networks, given it was one of only two frontier AI models operating in that environment.
Lockheed Martin announced it would stop using Claude and seek alternatives. Meanwhile, in an internal memo that promptly leaked, Anthropic CEO Dario Amodei called OpenAI's public messaging around the replacement deal 'straight up lies,' before apologising for the leak itself.
In a twist that would make any PR team smile, Claude actually outpaced ChatGPT in US app downloads during the dispute, and NPR reported signups running at over one million a day during one week of the row. Nothing drives consumer interest quite like a principled stand against the world's largest military.
Why This Matters Beyond the US
For those of us in the UK, this is not just transatlantic theatre. The precedent of a government blacklisting a domestic AI company for maintaining safety commitments is being closely watched by policymakers here and across Europe. If the US can strong-arm its own AI developers into removing ethical guardrails, it raises uncomfortable questions about what happens when the UK negotiates its own military AI procurement deals.
Twenty-two former military officials and more than 30 AI researchers from rival labs have publicly opposed the designation, and amicus briefs have been filed by Microsoft, Google, and a group of OpenAI workers. As Sam Altman himself put it: 'We are not elected. I really do not want us to decide what to do if a nuke is coming towards the US.'
Fair point. But someone has to draw the line somewhere, and right now Anthropic is paying the price for trying.