Microsoft Backs Anthropic’s AI Safety Principles With Legal Force as Pentagon Doubles Down on Blacklist

As the Pentagon doubles down on its controversial decision to blacklist Anthropic, Microsoft has backed the AI company’s safety principles with legal force, filing an amicus brief in a San Francisco federal court that calls for a temporary restraining order against the supply-chain risk designation. The brief argues that the designation threatens the technology infrastructure supporting both national defense and commercial AI applications. Amazon, Google, Apple, and OpenAI have also filed in support of Anthropic, creating a formidable legal coalition.

Anthropic’s safety principles became a flashpoint when the company refused, during negotiations over a $200 million Pentagon contract, to allow its Claude AI to be used for mass surveillance of US citizens or for autonomous lethal weapons. Defense Secretary Pete Hegseth responded by labeling the company a supply-chain risk, and the Pentagon’s technology chief later publicly ruled out any renegotiation. Anthropic responded with two simultaneous lawsuits, one in California and one in Washington DC.

Microsoft’s legal support for Anthropic’s safety principles is grounded in its direct integration of Anthropic’s AI into military systems and its participation in the Pentagon’s $9 billion cloud computing contract. The company also holds additional federal agreements spanning defense, intelligence, and civilian agencies. Microsoft publicly argued that responsible AI governance and national security were complementary goals requiring collaboration between government and the technology sector.

Anthropic’s court filings argued that the supply-chain risk designation was an unconstitutional act of retaliation for the company’s public advocacy of responsible AI development. The company disclosed that it does not currently consider Claude safe or reliable enough for lethal autonomous operations, which it said was the genuine technical basis for its contract demands. Anthropic argued that the designation, never before applied to a US company, violated its First Amendment rights.

Congressional Democrats have separately asked the Pentagon whether AI was involved in a strike in Iran that reportedly killed over 175 civilians at a school, demanding information about AI targeting systems and human oversight. Their inquiries are adding legislative pressure to an already extraordinary legal confrontation. Together, Microsoft’s legal force, the industry coalition, and congressional scrutiny are creating a powerful challenge to the Pentagon’s doubling down on the Anthropic blacklist.
