
Microsoft Declares Support for Anthropic as Pentagon’s AI Power Grab Faces Legal Reckoning


Microsoft has formally declared its support for Anthropic in the AI company’s landmark legal battle against the Department of Defense, filing an amicus brief in a San Francisco federal court that calls for a temporary restraining order. The filing argues that the Pentagon’s controversial decision to label Anthropic a supply-chain risk would cause immediate and serious harm to the many businesses and government systems that rely on Anthropic’s AI technology. Alongside Microsoft, Amazon, Google, Apple, and OpenAI have also filed in support of Anthropic, presenting a united front from the top tier of the American technology industry.
The conflict began when Anthropic refused to sign a $200 million contract to deploy its AI on classified military systems without guarantees that the technology would not be used for mass surveillance or to power autonomous weapons. Defense Secretary Pete Hegseth responded to the collapse of negotiations by branding Anthropic a supply-chain risk, a label historically reserved for companies with ties to foreign adversaries such as China. The Pentagon’s technology chief has since publicly ruled out any prospect of renewed negotiations.
Microsoft’s involvement is grounded in practical reality: the company integrates Anthropic’s AI tools into systems it supplies to the US military and is a major partner in the Pentagon’s $9 billion cloud computing contract. The company also signed additional multibillion-dollar agreements with the federal government under the Trump administration. In a public statement, Microsoft called for a collaborative path forward in which the government and tech sector work together to harness the best available AI while safeguarding against its misuse.
Anthropic’s legal challenges, filed simultaneously in California and Washington DC, argue that the supply-chain risk designation is unconstitutional retaliation for the company’s publicly stated AI safety positions. In court filings, the company disclosed that it does not currently have confidence that Claude can operate safely in lethal autonomous warfare environments, and said this concern, not commercial leverage, was the genuine basis for the safeguards it sought. Anthropic also noted that no US company had ever previously received this designation.
Congressional Democrats have separately raised concerns about the use of AI in US military operations in Iran, where a strike reportedly killed more than 175 civilians at an elementary school. Lawmakers are asking whether AI targeting tools were used and whether sufficient human oversight was maintained. These parallel investigations are intensifying public pressure on the Pentagon to explain how and under what constraints artificial intelligence is being used in active military operations.
