Anthropic Wins First Round Against Pentagon in $200M AI Contract Dispute
Anthropic refused to let the Pentagon use Claude for mass surveillance or autonomous weapons. A federal judge has sided with the company.
In March 2026, Anthropic and the U.S. Department of Defense entered an unprecedented confrontation over a $200 million AI contract. Anthropic drew two red lines: Claude would neither be used for mass domestic surveillance nor integrated into fully autonomous weapons systems. The Pentagon demanded "any lawful use" terms.
Anthropic refused and was designated a "supply chain risk." In the dispute's first ruling, a federal judge sided with Anthropic, affirming for the first time that AI companies have the right to set ethical use terms in government contracts.
On the same day the Pentagon formally designated Anthropic a national security threat, the department's Under Secretary privately emailed Anthropic CEO Dario Amodei to say the two sides were "very close" on the disputed issues.
ChatGPT uninstalls surged 295% in a single day, and Claude hit #1 on the App Store. Anthropic's annualized revenue more than doubled, from $9B to $20B. The share of U.S. companies paying for Anthropic tools jumped from 4% to 40%, while OpenAI's enterprise share fell from 50% to 27%.
The ruling gives AI companies a legal basis for maintaining ethical restrictions in military contracts. More than 30 employees of OpenAI and Google signed a legal brief backing Anthropic. The case shows that AI ethics commitments can be tested and enforced within existing legal frameworks.