Pentagon responds to Anthropic’s lawsuit, says refusing to accept the government’s contract terms is not protected by…
The U.S. Justice Department filed a 40-page response to Anthropic’s lawsuit on Tuesday, arguing that the AI startup’s refusal to sign a contract allowing “any lawful use” of its Claude models by the military is a commercial dispute, not a free speech issue, and that the Pentagon was within its rights to cut the company off. The government called Anthropic an “unacceptable” and “substantial” national security risk, and asked the federal judge overseeing the case to deny the company’s bid for a preliminary injunction, which would block the supply chain risk designation during litigation.
Pentagon says Anthropic could harm its own AI during active combat operations
The government’s argument goes beyond standard procurement logic. In the filing, DOJ lawyers said that because AI systems are “extremely sensitive to manipulation,” giving Anthropic continued access to the War Department’s warfare infrastructure poses serious risks, particularly since company employees could “sabotage, maliciously introduce unwanted functions, or otherwise destroy” its models during active operations if Anthropic felt its internal red lines were being crossed. The government also argued that it cannot “just flip a switch,” noting that Claude is currently the only AI model approved for use on the department’s classified systems and in ongoing high-intensity combat operations.

That framing is a significant escalation. It casts Anthropic not merely as an inflexible vendor but as a potential threat: a company whose ongoing ability to update and tune its own models makes it inherently unreliable in a warfare context.
Anthropic’s lawsuit argues Pentagon used ‘supply chain risk’ label as ideological punishment
PREVIOUS STORY: Anthropic’s $200 million contract with the Pentagon collapsed after the two sides failed to agree on terms of use. Anthropic wanted clear contractual guarantees that Claude would not be used for large-scale domestic surveillance or fully autonomous lethal weapons. The Pentagon countered that it is not a private company’s place to decide how the military uses its tools, and instead demanded “all lawful uses” access.

When negotiations broke down, Defense Secretary Pete Hegseth designated Anthropic a supply chain risk, a label previously reserved for foreign adversaries, effectively barring the company from federal contracts. Anthropic filed suit in both the Northern District of California and the D.C. Circuit Court of Appeals on March 9, arguing that the designation was unconstitutional retaliation for its safety policies and warning that more than 100 enterprise customers could walk away because of it, potentially costing the company billions.
Preliminary injunction hearing set for March 24
The DOJ also addressed those financial concerns, calling Anthropic’s claimed damages “speculative” and arguing that they could be handled through standard contract remedies rather than emergency court relief.

The Pentagon also revealed in the filing that it is actively working to deploy models from Google, OpenAI, and xAI as alternatives to Claude. According to the department’s chief digital and AI officer, the engineering work has already begun.

Anthropic has until Friday to file its reply. A preliminary injunction hearing is scheduled for March 24 in federal court in San Francisco.
