Microsoft Rushes to Court in Defense of Anthropic After Pentagon Issues Unprecedented AI Penalty

Acting swiftly in response to an unprecedented government action, Microsoft has filed a legal brief in a San Francisco federal court supporting Anthropic's challenge to a Pentagon supply-chain risk designation. Microsoft argued that immediate judicial intervention was necessary to prevent disruption to the many companies and defense programs that depend on Anthropic's AI tools. The filing was accompanied by a joint brief from Amazon, Google, Apple, and OpenAI, demonstrating the breadth of industry concern over the Pentagon's decision.
The Pentagon's designation followed the collapse of a $200 million contract negotiation, during which Anthropic refused to permit its AI to be used for mass surveillance of US citizens or to power weapons capable of operating without human oversight. Defense Secretary Pete Hegseth labeled the company a supply-chain risk, setting off a cascade of contract cancellations and effectively barring Anthropic from federal work. The designation is the first of its kind to be applied to an American technology firm.
Microsoft’s brief carries special significance because the company directly incorporates Anthropic’s AI into systems provided to the US military. As a partner in the Pentagon’s $9 billion cloud computing contract and holder of numerous other government agreements, Microsoft is both a user of Anthropic’s technology and a key stakeholder in the outcome of this dispute. Microsoft said the nation needed a path that allowed access to cutting-edge AI while preventing its use for surveillance or unauthorized military action.
Anthropic filed suits in both California and Washington, DC on the same day, arguing that the Pentagon's action constituted unconstitutional retaliation for the company's public advocacy of responsible AI development. Court documents revealed that Anthropic itself is not confident Claude can function safely in lethal autonomous warfare scenarios, a concern the company cited as the core reason for its contract demands. The company accused the Pentagon of using the supply-chain risk label as a political instrument rather than a legitimate national security tool.
Separate congressional inquiries are now underway into the use of AI in recent US military strikes in Iran that reportedly resulted in the deaths of over 175 civilians at a school. Lawmakers have formally asked whether AI targeting tools were involved and what level of human review was applied. These developments, combined with Anthropic’s legal fight, are forcing a long-overdue public conversation about accountability and oversight in AI-assisted warfare.